dataset
stringclasses 9
values | input
stringlengths 351
18.6k
| output
stringlengths 8
7.48k
| id
stringlengths 20
47
| messages
listlengths 2
2
|
---|---|---|---|---|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
TarMAC: Targeted Multi-Agent Communication
Main Content:
We explore a collaborative multi-agent reinforcement learning setting where a team of agents attempts to solve cooperative tasks in partially-observable environments . In this scenario , learning an effective communication protocol is key . We propose a communication architecture that allows for targeted communication , where agents learn both what messages to send and who to send them to , solely from downstream task-specific reward without any communication supervision . Additionally , we introduce a multi-stage communication approach where the agents co-ordinate via multiple rounds of communication before taking actions in the environment . We evaluate our approach on a diverse set of cooperative multi-agent tasks , of varying difficulties , with varying number of agents , in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments . We demonstrate the benefits of targeted as well as multi-stage communication . Moreover , we show that the targeted communication strategies learned by agents are both interpretable and intuitive . 1 INTRODUCTION . Effective communication is a key ability for collaborative multi-agents systems . Indeed , intelligent agents ( humans or artificial ) in real-world scenarios can significantly benefit from exchanging information that enables them to coordinate , strategize , and utilize their combined sensory experiences to act in the physical world . The ability to communicate has wide-ranging applications for artificial agents – from multi-player gameplay in simulated games ( e.g . DoTA , Quake , StarCraft ) or physical worlds ( e.g . robot soccer ) , to networks of self-driving cars communicating with each other to achieve safe and swift transport , to teams of robots on search-and-rescue missions deployed in hostile and fast-evolving environments . A salient property of human communication is the ability to hold targeted interactions . Rather than the ‘ one-size-fits-all ’ approach of broadcasting messages to all participating agents , as has been previously explored ( Sukhbaatar et al. , 2016 ; Foerster et al. , 2016 ) , it can be useful to direct certain messages to specific recipients . This enables a more flexible collaboration strategy in complex environments . For example , within a team of search-and-rescue robots with a diverse set of roles and goals , a message for a fire-fighter ( “ smoke is coming from the kitchen ” ) is largely meaningless for a bomb-defuser . In this work we develop a collaborative multi-agent deep reinforcement learning approach that supports targeted communication . Crucially , each individual agent actively selects which other agents to send messages to . This targeted communication behavior is operationalized via a simple signaturebased soft attention mechanism : along with the message , the sender broadcasts a key which encodes properties of agents the message is intended for , and is used by receivers to gauge the relevance of the message . This communication mechanism is learned implicitly , without any attention supervision , as a result of end-to-end training using a downstream task-specific team reward . The inductive bias provided by soft attention in the communication architecture is sufficient to enable agents to 1 ) communicate agent-goal-specific messages ( e.g . guide fire-fighter towards fire , bomb-defuser towards bomb , etc . ) , 2 ) be adaptive to variable team sizes ( e.g . the size of the local neighborhood a self-driving car can communicate with changes as it moves ) , and 3 ) be interpretable through predicted attention probabilities that allow for inspection of which agent is communicating what message and to whom . Our results however show that just using targeted communication is not enough . Complex realworld tasks might require large populations of agents to go through multiple stages of collaborative communication and reasoning , involving large amounts of information to be persistent in memory and exchanged via high-bandwidth communication channels . To this end , our actor-critic framework combines centralized training with decentralized execution ( Lowe et al. , 2017 ) , thus enabling scaling to a large number of agents . In this context , our inter-agent communication architecture supports multiple stages of targeted interactions at every time-step , and the agents ’ recurrent policies support persistent relevant information in internal states . While natural language , i.e . a finite set of discrete tokens with pre-specified human-conventionalized meanings , may seem like an intuitive protocol for inter-agent communication – one that enables human-interpretability of interactions – forcing machines to communicate among themselves in discrete tokens presents additional training challenges . Since our work focuses on machine-only multi-agent teams , we allow agents to communicate via continuous vectors ( rather than discrete symbols ) , and via the learning process , agents have the flexibility to discover and optimize their communication protocol as per task requirements . We provide extensive empirical demonstration of the efficacy of our approach across a range of tasks , environments , and team sizes . We begin by benchmarking multi-agent communication with and without attention on a cooperative navigation task derived from the SHAPES environment ( Andreas et al. , 2016 ) . We show that agents learn intuitive attention behavior across a spectrum of task difficulties . Next , we evaluate the same targeted multi-agent communication architecture on the traffic junction environment ( Sukhbaatar et al. , 2016 ) , and show that agents are able to adaptively focus on ‘ active ’ agents in the case of varying team sizes . Finally , we demonstrate effective multi-agent communication in 3D environments on a cooperative first-person point-goal navigation task in the rich House3D environment ( Wu et al. , 2018 ) . 2 RELATED WORK . Multi-agent systems fall at the intersection of game theory , distributed systems , and Artificial Intelligence in general ( Shoham & Leyton-Brown , 2008 ) , and thus have a rich and diverse literature . Our work builds on and is related to prior work in deep multi-agent reinforcement learning , the centralized training and decentralized execution paradigm , and emergent communication protocols . Multi-Agent Reinforcement Learning ( MARL ) . Within MARL ( see Busoniu et al . ( 2008 ) for a survey ) , our work is related to recent efforts on using recurrent neural networks to approximate agent policies ( Hausknecht & Stone , 2015 ) , algorithms stabilizing multi-agent training ( Lowe et al. , 2017 ; Foerster et al. , 2018 ) , and tasks in novel application domains such as coordination and navigation in 3D simulated environments ( Peng et al. , 2017 ; OpenAI , 2018 ; Jaderberg et al. , 2018 ) . Centralized Training & Decentralized Execution . Both Sukhbaatar et al . ( 2016 ) and Hoshen ( 2017 ) adopt a fully centralized framework at both training and test time – a central controller processes local observations from all agents and outputs a probability distribution over joint actions . In this setting , any controller ( e.g . a fully-connected network ) can be viewed as implicitly encoding communication . Sukhbaatar et al . ( 2016 ) present an efficient architecture to learn a centralized controller invariant to agent permutations – by sharing weights and averaging as in Zaheer et al . ( 2017 ) . Meanwhile Hoshen ( 2017 ) proposes to replace averaging by an attentional mechanism to allow targeted interactions between agents . While closely related to our communication architecture , his work only considers fully supervised one-next-step prediction tasks , while we tackle the full reinforcement learning problem with tasks requiring planning over long time horizons . Moreover , a centralized controller quickly becomes intractable in real-world tasks with many agents and high-dimensional observation spaces ( e.g . navigation in House3D ( Wu et al. , 2018 ) ) . To address these weaknesses , we adopt the framework of centralized learning but decentralized execution ( following Foerster et al . ( 2016 ) ; Lowe et al . ( 2017 ) ) and further relax it by allowing agents to communicate . While agents can use extra information during training , at test time , they pick actions solely based on local observations and communication messages received from other agents . Finally , we note that fully decentralized execution at test time without communication is very restrictive . It means 1 ) each agent must act myopically based solely on its local observation and 2 ) agents can not coordinate their actions . In our setting , communication between agents offers a rea- sonable trade-off between allowing agents to globally coordinate while retaining tractability ( since the communicated messages are much lower-dimensional than the observation space ) . Emergent Communication Protocols . Our work is also related to recent work on learning communication protocols in a completely end-to-end manner with reinforcement learning – from perceptual input ( e.g . pixels ) to communication symbols ( discrete or continuous ) to actions ( e.g . navigating in an environment ) . While ( Foerster et al. , 2016 ; Jorge et al. , 2016 ; Das et al. , 2017 ; Kottur et al. , 2017 ; Mordatch & Abbeel , 2017 ; Lazaridou et al. , 2017 ) constrain agents to communicate with discrete symbols with the explicit goal to study emergence of language , our work operates in the paradigm of learning a continuous communication protocol in order to solve a downstream task ( Sukhbaatar et al. , 2016 ; Hoshen , 2017 ; Jiang & Lu , 2018 ) . While ( Jiang & Lu , 2018 ) also operate in a decentralized execution setting and use an attentional communication mechanism , their setup is significantly different from ours as they use attention to decide when to communicate , not who to communicate with ( ‘ who ’ depends on a hand-tuned neighborhood parameter in their work ) . Table 1 summarizes the main axes of comparison between our work and previous efforts in this exciting space . 3 TECHNICAL BACKGROUND . Decentralized Partially Observable Markov Decision Processes ( Dec-POMDPs ) . A DecPOMDP is a cooperative multi-agent extension of a partially observable Markov decision process ( Oliehoek ( 2012 ) ) . For N agents , it is defined by a set of states S describing possible configurations of all agents , a global reward function R , a transition probability function T , and for each agent i P 1 , ... , N a set of allowed actions Ai , a set of possible observations Ωi and an observation functionOi . Operationally , at each time step every agent picks an action ai based on its local observation ωi following its own stochastic policy πθipai|ωiq . The system randomly transitions to the next state s1 given the current state and joint action T ps1|s , a1 , ... , aN q . The agent team receives a global reward r “ Rps , a1 , ... , aN q while each agent receives a local observation of the new state Oipωi|s1q . Agents aim to maximize the total expected return J “ řT t “ 0 γ trt where γ is a discount factor and T is the episode time horizon . Actor-Critic Algorithms . Policy gradient methods directly adjust the parameters θ of the policy in order to maximize the objective Jpθq “ Es „ pπ , a „ πθpsq rRps , aqs by taking steps in the direction of ∇Jpθq . We can write the gradient with respect to the policy parameters as ∇θJpθq “ Es „ pπ , a „ πθpsq r∇θ log πθpa|sqQπps , aqs , where Qπps , aq is called the action-value , it is the expected remaining discounted reward if we take action a in state s and follow policy π thereafter . Actor-Critic algorithms learn an approximation of the unknown true action-value function Q̂ps , aq by e.g . temporal-difference learning ( Sutton & Barto , 1998 ) . This Q̂ps , aq is called the Critic while the policy πθ is called the Actor . Multi-Agent Actor-Critic . Lowe et al . ( 2017 ) propose a multi-agent Actor-Critic algorithm adapted to centralized learning and decentralized execution . Each agent learns its own individual policy πθipai|ωiq conditioned on local observation ωi , using a centralized Critic which estimates the joint action-value Q̂ps , a1 , ... , aN q . | The authors propose a new architecture for learning communication protocols. In this architecture each message consists of a key and a value. When receiving the message the listener produces an attention key that is used to selectively attend to some messages more than other using soft attention. This differs from the typical 'broadcasting' protocols learned in literature. | science.mup_single_document_summarization.9771 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTarMAC: Targeted Multi-Agent Communication\n\nMain Content:\nWe explore a collaborative multi-agent reinforcement learning setting where a team of agents attempts to solve cooperative tasks in partially-observable environments . In this scenario , learning an effective communication protocol is key . We propose a communication architecture that allows for targeted communication , where agents learn both what messages to send and who to send them to , solely from downstream task-specific reward without any communication supervision . Additionally , we introduce a multi-stage communication approach where the agents co-ordinate via multiple rounds of communication before taking actions in the environment . We evaluate our approach on a diverse set of cooperative multi-agent tasks , of varying difficulties , with varying number of agents , in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments . We demonstrate the benefits of targeted as well as multi-stage communication . Moreover , we show that the targeted communication strategies learned by agents are both interpretable and intuitive . 1 INTRODUCTION . Effective communication is a key ability for collaborative multi-agents systems . Indeed , intelligent agents ( humans or artificial ) in real-world scenarios can significantly benefit from exchanging information that enables them to coordinate , strategize , and utilize their combined sensory experiences to act in the physical world . The ability to communicate has wide-ranging applications for artificial agents – from multi-player gameplay in simulated games ( e.g . DoTA , Quake , StarCraft ) or physical worlds ( e.g . robot soccer ) , to networks of self-driving cars communicating with each other to achieve safe and swift transport , to teams of robots on search-and-rescue missions deployed in hostile and fast-evolving environments . A salient property of human communication is the ability to hold targeted interactions . Rather than the ‘ one-size-fits-all ’ approach of broadcasting messages to all participating agents , as has been previously explored ( Sukhbaatar et al. , 2016 ; Foerster et al. , 2016 ) , it can be useful to direct certain messages to specific recipients . This enables a more flexible collaboration strategy in complex environments . For example , within a team of search-and-rescue robots with a diverse set of roles and goals , a message for a fire-fighter ( “ smoke is coming from the kitchen ” ) is largely meaningless for a bomb-defuser . In this work we develop a collaborative multi-agent deep reinforcement learning approach that supports targeted communication . Crucially , each individual agent actively selects which other agents to send messages to . This targeted communication behavior is operationalized via a simple signaturebased soft attention mechanism : along with the message , the sender broadcasts a key which encodes properties of agents the message is intended for , and is used by receivers to gauge the relevance of the message . This communication mechanism is learned implicitly , without any attention supervision , as a result of end-to-end training using a downstream task-specific team reward . The inductive bias provided by soft attention in the communication architecture is sufficient to enable agents to 1 ) communicate agent-goal-specific messages ( e.g . guide fire-fighter towards fire , bomb-defuser towards bomb , etc . ) , 2 ) be adaptive to variable team sizes ( e.g . the size of the local neighborhood a self-driving car can communicate with changes as it moves ) , and 3 ) be interpretable through predicted attention probabilities that allow for inspection of which agent is communicating what message and to whom . Our results however show that just using targeted communication is not enough . Complex realworld tasks might require large populations of agents to go through multiple stages of collaborative communication and reasoning , involving large amounts of information to be persistent in memory and exchanged via high-bandwidth communication channels . To this end , our actor-critic framework combines centralized training with decentralized execution ( Lowe et al. , 2017 ) , thus enabling scaling to a large number of agents . In this context , our inter-agent communication architecture supports multiple stages of targeted interactions at every time-step , and the agents ’ recurrent policies support persistent relevant information in internal states . While natural language , i.e . a finite set of discrete tokens with pre-specified human-conventionalized meanings , may seem like an intuitive protocol for inter-agent communication – one that enables human-interpretability of interactions – forcing machines to communicate among themselves in discrete tokens presents additional training challenges . Since our work focuses on machine-only multi-agent teams , we allow agents to communicate via continuous vectors ( rather than discrete symbols ) , and via the learning process , agents have the flexibility to discover and optimize their communication protocol as per task requirements . We provide extensive empirical demonstration of the efficacy of our approach across a range of tasks , environments , and team sizes . We begin by benchmarking multi-agent communication with and without attention on a cooperative navigation task derived from the SHAPES environment ( Andreas et al. , 2016 ) . We show that agents learn intuitive attention behavior across a spectrum of task difficulties . Next , we evaluate the same targeted multi-agent communication architecture on the traffic junction environment ( Sukhbaatar et al. , 2016 ) , and show that agents are able to adaptively focus on ‘ active ’ agents in the case of varying team sizes . Finally , we demonstrate effective multi-agent communication in 3D environments on a cooperative first-person point-goal navigation task in the rich House3D environment ( Wu et al. , 2018 ) . 2 RELATED WORK . Multi-agent systems fall at the intersection of game theory , distributed systems , and Artificial Intelligence in general ( Shoham & Leyton-Brown , 2008 ) , and thus have a rich and diverse literature . Our work builds on and is related to prior work in deep multi-agent reinforcement learning , the centralized training and decentralized execution paradigm , and emergent communication protocols . Multi-Agent Reinforcement Learning ( MARL ) . Within MARL ( see Busoniu et al . ( 2008 ) for a survey ) , our work is related to recent efforts on using recurrent neural networks to approximate agent policies ( Hausknecht & Stone , 2015 ) , algorithms stabilizing multi-agent training ( Lowe et al. , 2017 ; Foerster et al. , 2018 ) , and tasks in novel application domains such as coordination and navigation in 3D simulated environments ( Peng et al. , 2017 ; OpenAI , 2018 ; Jaderberg et al. , 2018 ) . Centralized Training & Decentralized Execution . Both Sukhbaatar et al . ( 2016 ) and Hoshen ( 2017 ) adopt a fully centralized framework at both training and test time – a central controller processes local observations from all agents and outputs a probability distribution over joint actions . In this setting , any controller ( e.g . a fully-connected network ) can be viewed as implicitly encoding communication . Sukhbaatar et al . ( 2016 ) present an efficient architecture to learn a centralized controller invariant to agent permutations – by sharing weights and averaging as in Zaheer et al . ( 2017 ) . Meanwhile Hoshen ( 2017 ) proposes to replace averaging by an attentional mechanism to allow targeted interactions between agents . While closely related to our communication architecture , his work only considers fully supervised one-next-step prediction tasks , while we tackle the full reinforcement learning problem with tasks requiring planning over long time horizons . Moreover , a centralized controller quickly becomes intractable in real-world tasks with many agents and high-dimensional observation spaces ( e.g . navigation in House3D ( Wu et al. , 2018 ) ) . To address these weaknesses , we adopt the framework of centralized learning but decentralized execution ( following Foerster et al . ( 2016 ) ; Lowe et al . ( 2017 ) ) and further relax it by allowing agents to communicate . While agents can use extra information during training , at test time , they pick actions solely based on local observations and communication messages received from other agents . Finally , we note that fully decentralized execution at test time without communication is very restrictive . It means 1 ) each agent must act myopically based solely on its local observation and 2 ) agents can not coordinate their actions . In our setting , communication between agents offers a rea- sonable trade-off between allowing agents to globally coordinate while retaining tractability ( since the communicated messages are much lower-dimensional than the observation space ) . Emergent Communication Protocols . Our work is also related to recent work on learning communication protocols in a completely end-to-end manner with reinforcement learning – from perceptual input ( e.g . pixels ) to communication symbols ( discrete or continuous ) to actions ( e.g . navigating in an environment ) . While ( Foerster et al. , 2016 ; Jorge et al. , 2016 ; Das et al. , 2017 ; Kottur et al. , 2017 ; Mordatch & Abbeel , 2017 ; Lazaridou et al. , 2017 ) constrain agents to communicate with discrete symbols with the explicit goal to study emergence of language , our work operates in the paradigm of learning a continuous communication protocol in order to solve a downstream task ( Sukhbaatar et al. , 2016 ; Hoshen , 2017 ; Jiang & Lu , 2018 ) . While ( Jiang & Lu , 2018 ) also operate in a decentralized execution setting and use an attentional communication mechanism , their setup is significantly different from ours as they use attention to decide when to communicate , not who to communicate with ( ‘ who ’ depends on a hand-tuned neighborhood parameter in their work ) . Table 1 summarizes the main axes of comparison between our work and previous efforts in this exciting space . 3 TECHNICAL BACKGROUND . Decentralized Partially Observable Markov Decision Processes ( Dec-POMDPs ) . A DecPOMDP is a cooperative multi-agent extension of a partially observable Markov decision process ( Oliehoek ( 2012 ) ) . For N agents , it is defined by a set of states S describing possible configurations of all agents , a global reward function R , a transition probability function T , and for each agent i P 1 , ... , N a set of allowed actions Ai , a set of possible observations Ωi and an observation functionOi . Operationally , at each time step every agent picks an action ai based on its local observation ωi following its own stochastic policy πθipai|ωiq . The system randomly transitions to the next state s1 given the current state and joint action T ps1|s , a1 , ... , aN q . The agent team receives a global reward r “ Rps , a1 , ... , aN q while each agent receives a local observation of the new state Oipωi|s1q . Agents aim to maximize the total expected return J “ řT t “ 0 γ trt where γ is a discount factor and T is the episode time horizon . Actor-Critic Algorithms . Policy gradient methods directly adjust the parameters θ of the policy in order to maximize the objective Jpθq “ Es „ pπ , a „ πθpsq rRps , aqs by taking steps in the direction of ∇Jpθq . We can write the gradient with respect to the policy parameters as ∇θJpθq “ Es „ pπ , a „ πθpsq r∇θ log πθpa|sqQπps , aqs , where Qπps , aq is called the action-value , it is the expected remaining discounted reward if we take action a in state s and follow policy π thereafter . Actor-Critic algorithms learn an approximation of the unknown true action-value function Q̂ps , aq by e.g . temporal-difference learning ( Sutton & Barto , 1998 ) . This Q̂ps , aq is called the Critic while the policy πθ is called the Actor . Multi-Agent Actor-Critic . Lowe et al . ( 2017 ) propose a multi-agent Actor-Critic algorithm adapted to centralized learning and decentralized execution . Each agent learns its own individual policy πθipai|ωiq conditioned on local observation ωi , using a centralized Critic which estimates the joint action-value Q̂ps , a1 , ... , aN q .",
"role": "user"
},
{
"content": "The authors propose a new architecture for learning communication protocols. In this architecture each message consists of a key and a value. When receiving the message the listener produces an attention key that is used to selectively attend to some messages more than other using soft attention. This differs from the typical 'broadcasting' protocols learned in literature.",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks
Main Content:
1 Introduction . Deep learning ( DL ) systems are prone to errors due to many factors , such as the biased training/validation dataset , the limitations of the model architecture , and the constraints on training cost . It is essential to conduct high-quality testing before DL models are deployed in the field ; otherwise , the behaviors of DL models can be unpredictable and result in severe accidents after deployment . However , the cost of building test oracles ( i.e. , the ground-truth output ) by manually labeling a massive set of test instances is prohibitive , especially for tasks requiring experts for accurate labeling , such as medical images and malware executables . To tackle the above problem , various test input prioritization techniques [ Feng et al. , 2020 , Byun et al. , 2019 , Shen et al. , 2020 ] are proposed to identify ‘ high-quality ’ test instances from a large amount of unlabeled data , which facilitates revealing more failures ( e.g. , misclassification ) of the DL model with reasonable labeling effort . These methods try to derive the failure-revealing capability of a test instance with its intrinsic attributes extracted from the responses of the model under test ( e.g. , the softmax-based probabilities given by the target DL model to this specific input ) . DeepGini [ Feng et al. , 2020 ] feeds the unlabeled data to the target DL model and calculates confidence-related scores based on the model ’ s output probabilities to rank the unlabeled test cases . Test cases with nearly equal probabilities on all output classes are regarded as less confident ones and are likely to reveal 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . model failures . Similarly , [ Byun et al. , 2019 ] use the uncertainty score obtained from MC-Dropout for test input prioritization . Multiple-boundary clustering and prioritization ( MCP ) [ Shen et al. , 2020 ] considers both the output probabilities and the balance among each classification boundary when selecting test cases . All existing works try to identify instances near the decision boundary and prioritize them . However , we argue that near-boundary instances are not necessarily failures , especially for well-trained classifiers with high accuracy . Also , as failures can be far from the decision boundary , existing methods could fail to reveal these remote failures . To estimate a test instance ’ s capability in revealing failures , in addition to the intrinsic attributes mentioned above , there is another type of information : the known classification correctness of labeled samples ( i.e. , training samples and previously tested samples ) and their relationship to the unlabeled instance . This information provides extra insight into the target model ’ s behavior . Such data is already known , and it provides contextual information that reflects the corresponding inference behaviors of the target model for a set of similar instances . This work presents a novel test input prioritization technique , namely TestRank , for DL classifiers . TestRank exploits both intrinsic and contextual attributes of test instances to evaluate their failurerevealing capabilities . Based on the intuition that similar inputs are usually associated with the same classification results , we propose to use graph neural networks ( GNNs ) [ Kipf and Welling , 2017 ] to summarize the neighboring classification correctness for each unlabeled instance into contextual attributes . GNNs have been well-studied and valued for their relational inductive bias for extracting graph information . Our method , TestRank , constructs a similarity graph on both unlabeled and labeled instances and apply the semi-supervised GNN learning to extract the contextual attributes . After that , we aggregate intrinsic ( such attributes are extracted from the input samples without considering their neighbors ) and contextual attributes with a neural-network-based binary classifier for test input prioritization . The contributions of our work are as follows : • To the best of our knowledge , TestRank is the first work that takes the contextual information from the target DL model into consideration for test input prioritization . • We propose constructing a similarity graph on both labeled and unlabeled samples and training a graph neural network to extract useful contextual attributes from the contextual information for these unlabeled instances . We also present approximation techniques to reduce its computational complexity with minor impact on the performance of TestRank . • We propose a simple yet effective neural network that combines the intrinsic attributes and contextual attributes of unlabeled test instances for their failure-revealing capability estimation . We empirically evaluate TestRank on three popular image classification benchmarks : CIFAR-10 , SVHN , and STL10 . The results show that our method outperforms the state-of-the-art methods by a considerable margin . 2 Test Input Prioritization . Let us use f : X → Y to represent the given target DL model , where X and Y are the input and output space , respectively . For effective testing1 , the debugging center needs to perform test input prioritization , i.e. , select a certain number of test instances from the large unlabeled test instance pool that can reveal as many failures as possible . Later , these failures are fed back to the training center for failure analysis and model repair . We define the model failures as follows : Definition 1 . DL Model Failure . A failure of the DL model can be uncovered by the test instance x if the predicted label f ( x ) is inconsistent with its ground truth label yx , namely f ( x ) , yx . Formally , the debugging center selects and labels b test cases XS ( |XS | = b ) from the unlabeled test instance pool XU . The objective of test input prioritization is to maximize the detected failures : max | { x| f ( x ) , yx } | , where x ∈ XS and |XS | = b . ( 1 ) 1Please note that , we focus on testing the functional correctness of the DL model , and we assume the collected testing data are clean samples instead of maliciously generated ones . Different solutions are proposed to quantify the failure-revealing capability of unlabeled instances . DeepGini [ Feng et al. , 2020 ] proposes to evaluate a single test instance via the DL model ’ s final statistical output : f ( t ) = 1 − ΣNi=1 p2t , i , where pt , i is the predicted probability that the test case t belongs to the class i . Given the sum of pt , i is 1 , impurity function f ( t ) is maximal when all pt , i values are equal . DeepGini also adopts the neuron coverage criteria proposed in DeepXplore [ Pei et al. , 2017 ] and DeepGauge [ Ma et al. , 2018 ] in test input prioritization , and the result shows that the impurity-based selection is much better than coverage-based selection . Instead of evaluating the overall likelihood of failure for all classes , Multiple-Boundary Clustering and Prioritization ( MCP ) proposes to evaluate it for each pair of classes individually [ Shen et al. , 2020 ] . In this way , test instances can be evenly selected for each class pair and the failure cases are investigated at the finer granularity . Besides these metrics , [ Byun et al. , 2019 ] also propose to measure the likelihood of incorrect prediction by the uncertainty of the model ’ s output , which reflects the degree to which a model is uncertain about its prediction . In practice , evaluating uncertainty requires the task DL model to be a Bayesian Neural Network [ Richard and Lippmann , 1991 , Neal , 2012 ] or containing a dropout layer for approximation [ Gal and Ghahramani , 2016 ] . Besides examining the DL model ’ s final outputs , [ Kim et al. , 2019 ] proposes two surprise adequacy ( SA ) criteria that make use of the target DL ’ s internal outputs ( e.g. , the activation traces ) . They are Likelihood-based Surprise Adequacy Coverage ( LSA ) and Distance-based Surprise Adequacy Coverage ( DSA ) . LSA and DSA measure the likelihood or distance of an unlabeled instance to the training instances , respectively . Test samples with higher SA values are preferred in testing . To sum up , all existing methods use the target model ’ s outputs to one input , i.e. , its intrinsic attributes , for its failure-revealing capability estimation . In contrast , we make use of both intrinsic and contextual attributes of an instance for better estimation ( see later sections for details ) . 3 TestRank 3.1 Motivation The failure-revealing capability of an unlabeled test input is closely related to its attributes for the DL model under test . In this work , we distinguish two kinds of attributes for an unlabeled instance : the intrinsic attributes and the contextual attributes . We define the intrinsic attributes of an input as the output responses assigned by the target DL model to this specific input . It could be , for example , the predictive output distribution of the input from the target DL model , reflecting the sentiment derived from the computation performed by the target model [ Byun et al. , 2019 ] . This kind of attributes is adopted by existing test input prioritization approaches [ Feng et al. , 2020 , Shen et al. , 2020 , Byun et al. , 2019 ] . Note that we define such attributes as ‘ intrinsic ’ because they are extracted from inputs without considering their context , i.e. , the classification correctness of its similar instances . In contrast with the intrinsic attributes , the contextual attributes provide a deeper insight into the target model for the unlabeled samples : the contextual attributes for an unlabeled sample summarize the classification correctness of similar and labeled samples . For a particular test instance , such contextual attributes are useful and complementary to the intrinsic attributes . An illustrative example is shown in Figure 1 , wherein we visualize the behavior of a two-class classifier on the unlabeled test data and historically labeled data distribution . The blue region includes the instances that are near the decision boundary . Intuitively , the classifier is uncertain about the data when data is near the decision boundary and is likely to misclassify it . Existing works [ Shen et al. , 2020 , Feng et al. , 2020 , Byun et al. , 2019 ] propose various indicators ( e.g. , confidence/uncertainty/surprise scores ) to help identify the near-boundary instances . However , the near-boundary instances are not necessarily failures , and some of them can be correctly classified by a well-trained classifier . What is worse , such testing approaches fail to capture the failures lying far from the decision boundary ( i.e. , remote failures , shown in the red region in Figure 1 ) , because DL models usually output high confidence ( or low uncertainty ) for these inputs . These failures may be caused by limited model capacity , insufficient training data , etc . Our key insight is that we can use the contextual information ( e.g . the classification correctness of similar labeled samples ) to help locate both near-boundary and remote failures . The usefulness of the contextual information is due to the local continuity property [ Bishop , 2006 ] , which means that inputs close in the feature space share similar prediction behavior , e.g. , classification results from the target model . As shown in Figure 1 , some already labeled data , whose classification correctness is already known , surround the unlabeled data . If an unlabeled instance is close to already falsely classified data , under the local continuity property , it is likely that this instance is also a model failure . This property motivates us to extract the contextual attributes for an unlabeled instance from its neighboring labeled data . By combining the extracted contextual attributes with the intrinsic attributes , we expect to achieve better failure-revealing capability estimation . | This paper proposes a new test sample prioritization strategy to test deep learning systems. The objective is to maximize the number of failure-inducing samples selected from unlabeled samples. The authors propose to leverage both the samples' intrinsic properties (from the model's prediction behavior on them) and contextual properties (from other labeled samples close to them) as the criteria to select samples. By learning another binary classifier that takes the two properties as input, TestRank produces probabilities to rank the test samples. Empirical results demonstrate that TestRank outperforms existing test prioritization techniques. | science.mup_single_document_summarization.13954 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks\n\nMain Content:\n1 Introduction . Deep learning ( DL ) systems are prone to errors due to many factors , such as the biased training/validation dataset , the limitations of the model architecture , and the constraints on training cost . It is essential to conduct high-quality testing before DL models are deployed in the field ; otherwise , the behaviors of DL models can be unpredictable and result in severe accidents after deployment . However , the cost of building test oracles ( i.e. , the ground-truth output ) by manually labeling a massive set of test instances is prohibitive , especially for tasks requiring experts for accurate labeling , such as medical images and malware executables . To tackle the above problem , various test input prioritization techniques [ Feng et al. , 2020 , Byun et al. , 2019 , Shen et al. , 2020 ] are proposed to identify ‘ high-quality ’ test instances from a large amount of unlabeled data , which facilitates revealing more failures ( e.g. , misclassification ) of the DL model with reasonable labeling effort . These methods try to derive the failure-revealing capability of a test instance with its intrinsic attributes extracted from the responses of the model under test ( e.g. , the softmax-based probabilities given by the target DL model to this specific input ) . DeepGini [ Feng et al. , 2020 ] feeds the unlabeled data to the target DL model and calculates confidence-related scores based on the model ’ s output probabilities to rank the unlabeled test cases . Test cases with nearly equal probabilities on all output classes are regarded as less confident ones and are likely to reveal 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . model failures . Similarly , [ Byun et al. , 2019 ] use the uncertainty score obtained from MC-Dropout for test input prioritization . Multiple-boundary clustering and prioritization ( MCP ) [ Shen et al. , 2020 ] considers both the output probabilities and the balance among each classification boundary when selecting test cases . All existing works try to identify instances near the decision boundary and prioritize them . However , we argue that near-boundary instances are not necessarily failures , especially for well-trained classifiers with high accuracy . Also , as failures can be far from the decision boundary , existing methods could fail to reveal these remote failures . To estimate a test instance ’ s capability in revealing failures , in addition to the intrinsic attributes mentioned above , there is another type of information : the known classification correctness of labeled samples ( i.e. , training samples and previously tested samples ) and their relationship to the unlabeled instance . This information provides extra insight into the target model ’ s behavior . Such data is already known , and it provides contextual information that reflects the corresponding inference behaviors of the target model for a set of similar instances . This work presents a novel test input prioritization technique , namely TestRank , for DL classifiers . TestRank exploits both intrinsic and contextual attributes of test instances to evaluate their failurerevealing capabilities . Based on the intuition that similar inputs are usually associated with the same classification results , we propose to use graph neural networks ( GNNs ) [ Kipf and Welling , 2017 ] to summarize the neighboring classification correctness for each unlabeled instance into contextual attributes . GNNs have been well-studied and valued for their relational inductive bias for extracting graph information . Our method , TestRank , constructs a similarity graph on both unlabeled and labeled instances and apply the semi-supervised GNN learning to extract the contextual attributes . After that , we aggregate intrinsic ( such attributes are extracted from the input samples without considering their neighbors ) and contextual attributes with a neural-network-based binary classifier for test input prioritization . The contributions of our work are as follows : • To the best of our knowledge , TestRank is the first work that takes the contextual information from the target DL model into consideration for test input prioritization . • We propose constructing a similarity graph on both labeled and unlabeled samples and training a graph neural network to extract useful contextual attributes from the contextual information for these unlabeled instances . We also present approximation techniques to reduce its computational complexity with minor impact on the performance of TestRank . • We propose a simple yet effective neural network that combines the intrinsic attributes and contextual attributes of unlabeled test instances for their failure-revealing capability estimation . We empirically evaluate TestRank on three popular image classification benchmarks : CIFAR-10 , SVHN , and STL10 . The results show that our method outperforms the state-of-the-art methods by a considerable margin . 2 Test Input Prioritization . Let us use f : X → Y to represent the given target DL model , where X and Y are the input and output space , respectively . For effective testing1 , the debugging center needs to perform test input prioritization , i.e. , select a certain number of test instances from the large unlabeled test instance pool that can reveal as many failures as possible . Later , these failures are fed back to the training center for failure analysis and model repair . We define the model failures as follows : Definition 1 . DL Model Failure . A failure of the DL model can be uncovered by the test instance x if the predicted label f ( x ) is inconsistent with its ground truth label yx , namely f ( x ) , yx . Formally , the debugging center selects and labels b test cases XS ( |XS | = b ) from the unlabeled test instance pool XU . The objective of test input prioritization is to maximize the detected failures : max | { x| f ( x ) , yx } | , where x ∈ XS and |XS | = b . ( 1 ) 1Please note that , we focus on testing the functional correctness of the DL model , and we assume the collected testing data are clean samples instead of maliciously generated ones . Different solutions are proposed to quantify the failure-revealing capability of unlabeled instances . DeepGini [ Feng et al. , 2020 ] proposes to evaluate a single test instance via the DL model ’ s final statistical output : f ( t ) = 1 − ΣNi=1 p2t , i , where pt , i is the predicted probability that the test case t belongs to the class i . Given the sum of pt , i is 1 , impurity function f ( t ) is maximal when all pt , i values are equal . DeepGini also adopts the neuron coverage criteria proposed in DeepXplore [ Pei et al. , 2017 ] and DeepGauge [ Ma et al. , 2018 ] in test input prioritization , and the result shows that the impurity-based selection is much better than coverage-based selection . Instead of evaluating the overall likelihood of failure for all classes , Multiple-Boundary Clustering and Prioritization ( MCP ) proposes to evaluate it for each pair of classes individually [ Shen et al. , 2020 ] . In this way , test instances can be evenly selected for each class pair and the failure cases are investigated at the finer granularity . Besides these metrics , [ Byun et al. , 2019 ] also propose to measure the likelihood of incorrect prediction by the uncertainty of the model ’ s output , which reflects the degree to which a model is uncertain about its prediction . In practice , evaluating uncertainty requires the task DL model to be a Bayesian Neural Network [ Richard and Lippmann , 1991 , Neal , 2012 ] or containing a dropout layer for approximation [ Gal and Ghahramani , 2016 ] . Besides examining the DL model ’ s final outputs , [ Kim et al. , 2019 ] proposes two surprise adequacy ( SA ) criteria that make use of the target DL ’ s internal outputs ( e.g. , the activation traces ) . They are Likelihood-based Surprise Adequacy Coverage ( LSA ) and Distance-based Surprise Adequacy Coverage ( DSA ) . LSA and DSA measure the likelihood or distance of an unlabeled instance to the training instances , respectively . Test samples with higher SA values are preferred in testing . To sum up , all existing methods use the target model ’ s outputs to one input , i.e. , its intrinsic attributes , for its failure-revealing capability estimation . In contrast , we make use of both intrinsic and contextual attributes of an instance for better estimation ( see later sections for details ) . 3 TestRank 3.1 Motivation The failure-revealing capability of an unlabeled test input is closely related to its attributes for the DL model under test . In this work , we distinguish two kinds of attributes for an unlabeled instance : the intrinsic attributes and the contextual attributes . We define the intrinsic attributes of an input as the output responses assigned by the target DL model to this specific input . It could be , for example , the predictive output distribution of the input from the target DL model , reflecting the sentiment derived from the computation performed by the target model [ Byun et al. , 2019 ] . This kind of attributes is adopted by existing test input prioritization approaches [ Feng et al. , 2020 , Shen et al. , 2020 , Byun et al. , 2019 ] . Note that we define such attributes as ‘ intrinsic ’ because they are extracted from inputs without considering their context , i.e. , the classification correctness of its similar instances . In contrast with the intrinsic attributes , the contextual attributes provide a deeper insight into the target model for the unlabeled samples : the contextual attributes for an unlabeled sample summarize the classification correctness of similar and labeled samples . For a particular test instance , such contextual attributes are useful and complementary to the intrinsic attributes . An illustrative example is shown in Figure 1 , wherein we visualize the behavior of a two-class classifier on the unlabeled test data and historically labeled data distribution . The blue region includes the instances that are near the decision boundary . Intuitively , the classifier is uncertain about the data when data is near the decision boundary and is likely to misclassify it . Existing works [ Shen et al. , 2020 , Feng et al. , 2020 , Byun et al. , 2019 ] propose various indicators ( e.g. , confidence/uncertainty/surprise scores ) to help identify the near-boundary instances . However , the near-boundary instances are not necessarily failures , and some of them can be correctly classified by a well-trained classifier . What is worse , such testing approaches fail to capture the failures lying far from the decision boundary ( i.e. , remote failures , shown in the red region in Figure 1 ) , because DL models usually output high confidence ( or low uncertainty ) for these inputs . These failures may be caused by limited model capacity , insufficient training data , etc . Our key insight is that we can use the contextual information ( e.g . the classification correctness of similar labeled samples ) to help locate both near-boundary and remote failures . The usefulness of the contextual information is due to the local continuity property [ Bishop , 2006 ] , which means that inputs close in the feature space share similar prediction behavior , e.g. , classification results from the target model . As shown in Figure 1 , some already labeled data , whose classification correctness is already known , surround the unlabeled data . If an unlabeled instance is close to already falsely classified data , under the local continuity property , it is likely that this instance is also a model failure . This property motivates us to extract the contextual attributes for an unlabeled instance from its neighboring labeled data . By combining the extracted contextual attributes with the intrinsic attributes , we expect to achieve better failure-revealing capability estimation .",
"role": "user"
},
{
"content": "This paper proposes a new test sample prioritization strategy to test deep learning systems. The objective is to maximize the number of failure-inducing samples selected from unlabeled samples. The authors propose to leverage both the samples' intrinsic properties (from the model's prediction behavior on them) and contextual properties (from other labeled samples close to them) as the criteria to select samples. By learning another binary classifier that takes the two properties as input, TestRank produces probabilities to rank the test samples. Empirical results demonstrate that TestRank outperforms existing test prioritization techniques.",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Using General Adversarial Networks for Marketing: A Case Study of Airbnb
In this paper, we examine the use case of general adversarial networks (GANs) in the field of marketing. In particular, we analyze how GAN models can replicate text patterns from successful product listings on Airbnb, a peer-to-peer online market for short-term apartment rentals. To do so, we define the Diehl-Martinez-Kamalu (DMK) loss function as a new class of functions that forces the model's generated output to include a set of user-defined keywords. This allows the general adversarial network to recommend a way of rewording the phrasing of a listing description to increase the likelihood that it is booked. Although we tailor our analysis to Airbnb data, we believe this framework establishes a more general model for how generative algorithms can be used to produce text samples for the purposes of marketing.
Introduction
The development of online peer-to-peer markets in the 1990s, galvanized by the launch of sites like eBay, fundamentally shifted the way buyers and sellers could connect [4]. These new markets not only leveraged technology to allow for faster transaction speeds, but in the process also exposed a variety of unprecedented market-designs [4].
Today, many of the most well-known peer-to-peer markets like Uber and Instacart use a centralized system that matches workers with assigned tasks via a series of complex algorithms [4]. Still, a number of other websites like Airbnb and eBay rely on sellers and buyers to organically find one-another in a decentralized fashion. In the case of these decentralized systems, sellers are asked to price and market their products in order to attract potential buyers. Without a large marketing team at their disposal, however, sellers most often rely on their intuitions for how to present their articles or listings in the most appealing manner. Naturally, this leads to market inefficiencies, where willing sellers and buyers often fail to connect due to an inadequate presentation of the product or service offered.
Background
Fortunately, we believe that the introduction of unsupervised generative language models presents a way in which to tackle this particular shortcoming of peer-to-peer markets. In 2014, Ian Goodfellow et. al proposed the general adversarial network (GAN) [5]. The group showcased how this generative model could learn to artificially replicate data patterns to an unprecedented realistic degree [5]. Since then, these models have shown tremendous potential in their ability to generate photo-realistic images and coherent text samples [5].
The framework that GANs use for generating new data points employs an end-to-end neural network comprised of two models: a generator and a discriminator [5]. The generator is tasked with replicating the data that is fed into the model, without ever being directly exposed to the real samples. Instead, this model learns to reproduce the general patterns of the input via its interaction with the discriminator.
The role of the discriminator, in turn, is to tell apart which data points are ‘real’ and which have been created by the generator. On each run through the model, the generator then adapts its constructed output so as to more effectively ‘trick’ the discriminator into not being able to distinguish the real from the generated data. The end-to-end nature of the model then forces both the generator and discriminator to learn in parallel [7]. While GAN models have shown great potential in their ability to generate realistic data samples, they are notoriously difficult to train. This difficulty arises from two-parts: 1) First, it is difficult to tune the hyper-parameters correctly for the adversarial model to continue learning throughout all of the training epochs [5]. Since both the discriminator and generator are updated via the same gradient, it is very common for the model to fall into a local minima before completing all of the defined training cycles. 2) GANs are computationally expensive to train, given that both models are updated on each cycle in parallel [5]. This compounds the difficulty of tuning the model’s parameters.
Nonetheless, GANs have continued to show their value particularly in the domain of text-generation. Of particular interest for our purposes, Radford et al. propose synthesizing images from text descriptions [3]. The group demonstrates how GANs can produce images that correspond to a user-defined text description. It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.
We were similarly influenced by the work of Radford et. al, who argue for the importance of layer normalization and data-specific trained word embeddings for text generation [9] and sentiment analysis categorization. These findings lead us to question whether it is possible to employ recurrent neural networks with long short-term memory gates, as defined by Mikolov et al., to categorize product descriptions into categories based on the product's popularity [6].
Data
The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate, as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings.
Approach
Prior to building our generative model, we sought to gain a better understanding of how less and more popular listing descriptions differed in their writing style. We defined a home’s popularity via its occupancy rate metric, which we describe in the Data section. Using this popularity heuristic, we first stratified our dataset into groupings of listings at similar price points (i.e. $0-$30, $30-$60, ...). Importantly, rather than using the home’s quoted price, we relied on the price per bedroom as a better metric for the cost of the listing. Having clustered our listings into these groupings, we then selected the top third of listings by occupancy rate, as part of the ‘high popularity’ group. Listings in the middle and lowest thirds by occupancy rate were labeled ‘medium popularity’ and ‘low popularity’ respectively. We then combined all of the listings with high/medium/low popularity together for our final data set.
Generative Adversarial Network
Having observed an unideal performance on this task (see Experiments below), we turned our attention to building a model that can replicate the writing style of high popularity listing descriptions. To solve this task, we designed a framework for a general adversarial network. This model employs the standard set up of a generator and a discriminator, but extends the framework with the adoption of the Diehl-Martinez-Kamalu loss.
The generator is designed as a feed-forward neural network with three layers of depth. The input to the generator is simply a vector of random noise. This input is then fed directly to the first hidden layer via a linear transformation. Between the first and second layer we apply an exponential linear unit (ELU) as a non-linear activation function. Our reasoning for doing so is based on findings by Dash et al. that the experimental accuracy of ELUs over rectified linear units (RLU) tends to be somewhat higher for generative tasks [3]. Then, to scale the generator’s output to be in the range 0-1, we apply a sigmoid non-linearity between the second and the third layer of the model.
The discriminator similarly used a feed-forward structure with three layers of depth. The input to the discriminator comes from two sources: real data fed directly into the discriminator and the data generated by the generator. This input is then piped into the first hidden layer. As before, an ELU transformation is then applied between the first and second layer, as well as between the second and third hidden layers. Finally, a sigmoid activation is used on the output of the last hidden layer. This sigmoid activation is important since the output of our discriminator is a binary boolean that indicates whether the discriminator believes the input to have been real data or data produced by the generator. This discriminator is thus trained to minimize the binary cross-entropy loss of its prediction (whether the data was real or fake) and the real ground-truth of each data point.
The general framework defined above was inspired largely by the open-source code of Nag Dev, and was built using Pytorch [7]. One key extension to the basic GAN model, however, is the loss function that we apply to the generator, namely the Diehl-Martinez-Kamalu (DMK) Loss which we define below.
The Diehl-Martinez-Kamalu Loss is a weighted combination of a binary cross entropy loss with a dot-product attention metric of each user-defined keyword with the model’s generated output. Formally, the binary cross entropy (BCE) loss for one example is defined as: $ BCE(x,y) = y \cdot logx + (1-y) \cdot log(1-x), $
where x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\lbrace subway, manhattan\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(‘apple’)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator’s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $‘subway’$ or $‘parking’$ . The dot-product term of the DMK loss then calculates $e(‘apple’)$0 . Weighing this term by some hyper-parameter, $e(‘apple’)$1 , then gives us the entire definition of the DMK loss: $e(‘apple’)$2 $e(‘apple’)$3
Experiments
In seeking to answer the question of whether the occupancy rate of a listing could be extracted from the listing’s summary, we ran a number of experiments on our first model. Two parameterizations which we present here are (1) whether the word vectors used in the embedding layer are trained on our corpus or come pretrained from Wikipedia and Gigaword and (2) whether ensembling or the final hidden state in isolation are used to make a prediction for the sequence. Common to all experiments was our decision to use an Adam optimizer, 16 LSTM units, 50-dimensional GloVe vectors, and a 70-30 split in train and test data.
Over ten epochs, the model parameterization which performs the best uses GloVe vectors trained on a corpus consisting of all listing descriptions and ensembling to make its class prediction. As a result, our findings are well in-line with those presented by Radford et. al who underscore the importance of training word embeddings on a data-specific corpus for best results on generative tasks [9].
That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing – our control variable – but the relationship may not have been strong enough.
However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc.
Seeking to create a model which could generate and discriminate a “high-occupancy listing description”, we wanted to evaluate the capabilities of a generative adversarial network trained on either the standard binary cross-entropy loss or the DMK loss proposed above. Common to both models was the decision to alternate between training the generator for 50 steps and the discriminator for 2000 steps. We leave further tuning of the models to future research as each occasionally falls into unideal local optima within 20 iterations. One potential culprit is the step imbalance between generator and discriminator – should either learn at a much faster rate than the other, one component is liable to be “defeated" and cease to learn the training data.
Qualitatively the network trained on the DMK loss shows great promise. With respect to the two experiments presented here, we have shown that it is possible to introduce a measure of suggestion in the text produced by the generator. While this model is also subject to a rapid deadlock between generator and discriminator, it is interesting to see how the introduction of keywords is gradual and affects the proximal tokens included in the output. This behavior was made possible by paying close attention to the hyperparameter $\gamma $ , the weight given to the dot product attention term of the DMK loss. After manual tuning, we settle on $\gamma =0.00045$ for this weight. Below, we illustrate model outputs using different values of Gamma. As is apparent, for a hyper-parameter value less than roughly $\gamma = 0.0004$ , the model tends to ignore the importance of the keyword weights. Conversely, with a $\gamma $ value higher than $0.0005$ , the model tends towards overweighting the representation of the keywords in the model output.
Figure 1: RNN/LSTM Framework
Figure 2: GAN Framework
Figure 3: RNN/LSTM Accuracy over Number of Epochs
Table 1: Results of RNN/LSTM
Query: What are the user-defined keywords?
JSON Response: | {"answer": "Words that a user wants them to appear in the generated output.", "evidence": ["Nonetheless, GANs have continued to show their value particularly in the domain of text-generation. Of particular interest for our purposes, Radford et al. propose synthesizing images from text descriptions [3]. The group demonstrates how GANs can produce images that correspond to a user-defined text description. It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.", "where x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(\u2018apple\u2019)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator\u2019s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $\u2018subway\u2019$ or $\u2018parking\u2019$ . The dot-product term of the DMK loss then calculates $e(\u2018apple\u2019)$0 . Weighing this term by some hyper-parameter, $e(\u2018apple\u2019)$1 , then gives us the entire definition of the DMK loss: $e(\u2018apple\u2019)$2 $e(\u2018apple\u2019)$3"]} | science.qasper_abstractive_qa.155.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nUsing General Adversarial Networks for Marketing: A Case Study of Airbnb\n\nIn this paper, we examine the use case of general adversarial networks (GANs) in the field of marketing. In particular, we analyze how GAN models can replicate text patterns from successful product listings on Airbnb, a peer-to-peer online market for short-term apartment rentals. To do so, we define the Diehl-Martinez-Kamalu (DMK) loss function as a new class of functions that forces the model's generated output to include a set of user-defined keywords. This allows the general adversarial network to recommend a way of rewording the phrasing of a listing description to increase the likelihood that it is booked. Although we tailor our analysis to Airbnb data, we believe this framework establishes a more general model for how generative algorithms can be used to produce text samples for the purposes of marketing.\n\nIntroduction\nThe development of online peer-to-peer markets in the 1990s, galvanized by the launch of sites like eBay, fundamentally shifted the way buyers and sellers could connect [4]. These new markets not only leveraged technology to allow for faster transaction speeds, but in the process also exposed a variety of unprecedented market-designs [4].\nToday, many of the most well-known peer-to-peer markets like Uber and Instacart use a centralized system that matches workers with assigned tasks via a series of complex algorithms [4]. Still, a number of other websites like Airbnb and eBay rely on sellers and buyers to organically find one-another in a decentralized fashion. In the case of these decentralized systems, sellers are asked to price and market their products in order to attract potential buyers. Without a large marketing team at their disposal, however, sellers most often rely on their intuitions for how to present their articles or listings in the most appealing manner. Naturally, this leads to market inefficiencies, where willing sellers and buyers often fail to connect due to an inadequate presentation of the product or service offered.\n\nBackground\nFortunately, we believe that the introduction of unsupervised generative language models presents a way in which to tackle this particular shortcoming of peer-to-peer markets. In 2014, Ian Goodfellow et. al proposed the general adversarial network (GAN) [5]. The group showcased how this generative model could learn to artificially replicate data patterns to an unprecedented realistic degree [5]. Since then, these models have shown tremendous potential in their ability to generate photo-realistic images and coherent text samples [5].\nThe framework that GANs use for generating new data points employs an end-to-end neural network comprised of two models: a generator and a discriminator [5]. The generator is tasked with replicating the data that is fed into the model, without ever being directly exposed to the real samples. Instead, this model learns to reproduce the general patterns of the input via its interaction with the discriminator.\nThe role of the discriminator, in turn, is to tell apart which data points are ‘real’ and which have been created by the generator. On each run through the model, the generator then adapts its constructed output so as to more effectively ‘trick’ the discriminator into not being able to distinguish the real from the generated data. The end-to-end nature of the model then forces both the generator and discriminator to learn in parallel [7]. While GAN models have shown great potential in their ability to generate realistic data samples, they are notoriously difficult to train. This difficulty arises from two-parts: 1) First, it is difficult to tune the hyper-parameters correctly for the adversarial model to continue learning throughout all of the training epochs [5]. Since both the discriminator and generator are updated via the same gradient, it is very common for the model to fall into a local minima before completing all of the defined training cycles. 2) GANs are computationally expensive to train, given that both models are updated on each cycle in parallel [5]. This compounds the difficulty of tuning the model’s parameters.\nNonetheless, GANs have continued to show their value particularly in the domain of text-generation. Of particular interest for our purposes, Radford et al. propose synthesizing images from text descriptions [3]. The group demonstrates how GANs can produce images that correspond to a user-defined text description. It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.\nWe were similarly influenced by the work of Radford et. al, who argue for the importance of layer normalization and data-specific trained word embeddings for text generation [9] and sentiment analysis categorization. These findings lead us to question whether it is possible to employ recurrent neural networks with long short-term memory gates, as defined by Mikolov et al., to categorize product descriptions into categories based on the product's popularity [6].\n\nData\nThe data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate, as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings.\n\nApproach\nPrior to building our generative model, we sought to gain a better understanding of how less and more popular listing descriptions differed in their writing style. We defined a home’s popularity via its occupancy rate metric, which we describe in the Data section. Using this popularity heuristic, we first stratified our dataset into groupings of listings at similar price points (i.e. $0-$30, $30-$60, ...). Importantly, rather than using the home’s quoted price, we relied on the price per bedroom as a better metric for the cost of the listing. Having clustered our listings into these groupings, we then selected the top third of listings by occupancy rate, as part of the ‘high popularity’ group. Listings in the middle and lowest thirds by occupancy rate were labeled ‘medium popularity’ and ‘low popularity’ respectively. We then combined all of the listings with high/medium/low popularity together for our final data set.\n\nGenerative Adversarial Network\nHaving observed an unideal performance on this task (see Experiments below), we turned our attention to building a model that can replicate the writing style of high popularity listing descriptions. To solve this task, we designed a framework for a general adversarial network. This model employs the standard set up of a generator and a discriminator, but extends the framework with the adoption of the Diehl-Martinez-Kamalu loss.\nThe generator is designed as a feed-forward neural network with three layers of depth. The input to the generator is simply a vector of random noise. This input is then fed directly to the first hidden layer via a linear transformation. Between the first and second layer we apply an exponential linear unit (ELU) as a non-linear activation function. Our reasoning for doing so is based on findings by Dash et al. that the experimental accuracy of ELUs over rectified linear units (RLU) tends to be somewhat higher for generative tasks [3]. Then, to scale the generator’s output to be in the range 0-1, we apply a sigmoid non-linearity between the second and the third layer of the model.\nThe discriminator similarly used a feed-forward structure with three layers of depth. The input to the discriminator comes from two sources: real data fed directly into the discriminator and the data generated by the generator. This input is then piped into the first hidden layer. As before, an ELU transformation is then applied between the first and second layer, as well as between the second and third hidden layers. Finally, a sigmoid activation is used on the output of the last hidden layer. This sigmoid activation is important since the output of our discriminator is a binary boolean that indicates whether the discriminator believes the input to have been real data or data produced by the generator. This discriminator is thus trained to minimize the binary cross-entropy loss of its prediction (whether the data was real or fake) and the real ground-truth of each data point.\nThe general framework defined above was inspired largely by the open-source code of Nag Dev, and was built using Pytorch [7]. One key extension to the basic GAN model, however, is the loss function that we apply to the generator, namely the Diehl-Martinez-Kamalu (DMK) Loss which we define below.\nThe Diehl-Martinez-Kamalu Loss is a weighted combination of a binary cross entropy loss with a dot-product attention metric of each user-defined keyword with the model’s generated output. Formally, the binary cross entropy (BCE) loss for one example is defined as: $ BCE(x,y) = y \\cdot logx + (1-y) \\cdot log(1-x), $ \nwhere x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(‘apple’)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator’s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $‘subway’$ or $‘parking’$ . The dot-product term of the DMK loss then calculates $e(‘apple’)$0 . Weighing this term by some hyper-parameter, $e(‘apple’)$1 , then gives us the entire definition of the DMK loss: $e(‘apple’)$2 $e(‘apple’)$3 \n\nExperiments\nIn seeking to answer the question of whether the occupancy rate of a listing could be extracted from the listing’s summary, we ran a number of experiments on our first model. Two parameterizations which we present here are (1) whether the word vectors used in the embedding layer are trained on our corpus or come pretrained from Wikipedia and Gigaword and (2) whether ensembling or the final hidden state in isolation are used to make a prediction for the sequence. Common to all experiments was our decision to use an Adam optimizer, 16 LSTM units, 50-dimensional GloVe vectors, and a 70-30 split in train and test data.\nOver ten epochs, the model parameterization which performs the best uses GloVe vectors trained on a corpus consisting of all listing descriptions and ensembling to make its class prediction. As a result, our findings are well in-line with those presented by Radford et. al who underscore the importance of training word embeddings on a data-specific corpus for best results on generative tasks [9].\nThat said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing – our control variable – but the relationship may not have been strong enough.\nHowever, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc.\nSeeking to create a model which could generate and discriminate a “high-occupancy listing description”, we wanted to evaluate the capabilities of a generative adversarial network trained on either the standard binary cross-entropy loss or the DMK loss proposed above. Common to both models was the decision to alternate between training the generator for 50 steps and the discriminator for 2000 steps. We leave further tuning of the models to future research as each occasionally falls into unideal local optima within 20 iterations. One potential culprit is the step imbalance between generator and discriminator – should either learn at a much faster rate than the other, one component is liable to be “defeated\" and cease to learn the training data.\nQualitatively the network trained on the DMK loss shows great promise. With respect to the two experiments presented here, we have shown that it is possible to introduce a measure of suggestion in the text produced by the generator. While this model is also subject to a rapid deadlock between generator and discriminator, it is interesting to see how the introduction of keywords is gradual and affects the proximal tokens included in the output. This behavior was made possible by paying close attention to the hyperparameter $\\gamma $ , the weight given to the dot product attention term of the DMK loss. After manual tuning, we settle on $\\gamma =0.00045$ for this weight. Below, we illustrate model outputs using different values of Gamma. As is apparent, for a hyper-parameter value less than roughly $\\gamma = 0.0004$ , the model tends to ignore the importance of the keyword weights. Conversely, with a $\\gamma $ value higher than $0.0005$ , the model tends towards overweighting the representation of the keywords in the model output.\n\nFigure 1: RNN/LSTM Framework\n\nFigure 2: GAN Framework\n\nFigure 3: RNN/LSTM Accuracy over Number of Epochs\n\nTable 1: Results of RNN/LSTM\n\nQuery: What are the user-defined keywords?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Words that a user wants them to appear in the generated output.\", \"evidence\": [\"Nonetheless, GANs have continued to show their value particularly in the domain of text-generation. Of particular interest for our purposes, Radford et al. propose synthesizing images from text descriptions [3]. The group demonstrates how GANs can produce images that correspond to a user-defined text description. It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.\", \"where x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\\\\lbrace subway, manhattan\\\\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(\\u2018apple\\u2019)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator\\u2019s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $\\u2018subway\\u2019$ or $\\u2018parking\\u2019$ . The dot-product term of the DMK loss then calculates $e(\\u2018apple\\u2019)$0 . Weighing this term by some hyper-parameter, $e(\\u2018apple\\u2019)$1 , then gives us the entire definition of the DMK loss: $e(\\u2018apple\\u2019)$2 $e(\\u2018apple\\u2019)$3\"]}",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Genetic variants of the T-cell immunoglobulin mucin 1 but not the T-cell immunoglobulin mucin 3 gene are associated with asthma in an African American population. BACKGROUND: The T-cell immunoglobulin mucin ( TIM ) proteins and their genetic variants have been suggested to play a role in regulating allergic diseases. OBJECTIVE: Genetic association of the sequence variants for TIM-1 and TIM-3 genes with asthma in an African American population was investigated. METHODS: Both case-control and family-based association analyses were performed for a total of 7 polymorphisms, including 3 single nucleotide polymorphism (SNPs) and 1 insertion/deletion polymorphism in the TIM-1 and 3 SNPs in the TIM-3 genes. The exposure to hepatitis A virus as judged by seropositivity was also examined. RESULTS: In the case-control design, the frequencies of the TT genotype for SNP rs2277025 and the homozygous deletion variant (157delMTTTVP) in the fourth exon of the TIM-1 gene were higher among patients with patients with asthma compared with the controls (odds ratio [OR], 2.779, P = .016; and OR, 3.09, P = .022, respectively). This association was substantiated by haplotype analysis of these and 2 additional SNPs (OR, 2.48; P = .004), and also by family-based tests for the allele and haplotype carrying 157delMTTTVP (P = .009 and P = .048, respectively). Furthermore, this association seems to exist even in the hepatitis A virus-seronegative subjects in our data. None of the 3 variants in TIM-3 genes yielded significant association with either asthma or asthma-related phenotypes. CONCLUSION: Our findings suggest that the genetic variants of the TIM-1 but not the TIM-3 gene contribute to asthma susceptibility in this African-American population. | {"CellLine": [], "Chemical": [], "Disease": ["allergic diseases", "asthma"], "Gene": ["T-cell immunoglobulin mucin", "TIM", "T-cell immunoglobulin mucin 3", "TIM-3", "TIM-1", "TIM-1 and 3", "T-cell immunoglobulin mucin 1"], "Species": ["hepatitis A virus", "patients"], "Variant": ["rs2277025", "157delMTTTVP"]} | science.biored_ner.23 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nGenetic variants of the T-cell immunoglobulin mucin 1 but not the T-cell immunoglobulin mucin 3 gene are associated with asthma in an African American population. BACKGROUND: The T-cell immunoglobulin mucin ( TIM ) proteins and their genetic variants have been suggested to play a role in regulating allergic diseases. OBJECTIVE: Genetic association of the sequence variants for TIM-1 and TIM-3 genes with asthma in an African American population was investigated. METHODS: Both case-control and family-based association analyses were performed for a total of 7 polymorphisms, including 3 single nucleotide polymorphism (SNPs) and 1 insertion/deletion polymorphism in the TIM-1 and 3 SNPs in the TIM-3 genes. The exposure to hepatitis A virus as judged by seropositivity was also examined. RESULTS: In the case-control design, the frequencies of the TT genotype for SNP rs2277025 and the homozygous deletion variant (157delMTTTVP) in the fourth exon of the TIM-1 gene were higher among patients with patients with asthma compared with the controls (odds ratio [OR], 2.779, P = .016; and OR, 3.09, P = .022, respectively). This association was substantiated by haplotype analysis of these and 2 additional SNPs (OR, 2.48; P = .004), and also by family-based tests for the allele and haplotype carrying 157delMTTTVP (P = .009 and P = .048, respectively). Furthermore, this association seems to exist even in the hepatitis A virus-seronegative subjects in our data. None of the 3 variants in TIM-3 genes yielded significant association with either asthma or asthma-related phenotypes. CONCLUSION: Our findings suggest that the genetic variants of the TIM-1 but not the TIM-3 gene contribute to asthma susceptibility in this African-American population.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"allergic diseases\", \"asthma\"], \"Gene\": [\"T-cell immunoglobulin mucin\", \"TIM\", \"T-cell immunoglobulin mucin 3\", \"TIM-3\", \"TIM-1\", \"TIM-1 and 3\", \"T-cell immunoglobulin mucin 1\"], \"Species\": [\"hepatitis A virus\", \"patients\"], \"Variant\": [\"rs2277025\", \"157delMTTTVP\"]}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
A Surprising Density of Illusionable Natural Speech
Recent work on adversarial examples has demonstrated that most natural inputs can be perturbed to fool even state-of-the-art machine learning systems. But does this happen for humans as well? In this work, we investigate: what fraction of natural instances of speech can be turned into"illusions"which either alter humans' perception or result in different people having significantly different perceptions? We first consider the McGurk effect, the phenomenon by which adding a carefully chosen video clip to the audio channel affects the viewer's perception of what is said (McGurk and MacDonald, 1976). We obtain empirical estimates that a significant fraction of both words and sentences occurring in natural speech have some susceptibility to this effect. We also learn models for predicting McGurk illusionability. Finally we demonstrate that the Yanny or Laurel auditory illusion (Pressnitzer et al., 2018) is not an isolated occurrence by generating several very different new instances. We believe that the surprising density of illusionable natural speech warrants further investigation, from the perspectives of both security and cognitive science. Supplementary videos are available at: https://www.youtube.com/playlist?list=PLaX7t1K-e_fF2iaenoKznCatm0RC37B_k.
Introduction
A growing body of work on adversarial examples has identified that for machine-learning (ML) systems that operate on high-dimensional data, for nearly every natural input there exists a small perturbation of the point that will be misclassified by the system, posing a threat to its deployment in certain critical settings BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . More broadly, the susceptibility of ML systems to adversarial examples has prompted a re-examination of whether current ML systems are truly learning or if they are assemblages of tricks that are effective yet brittle and easily fooled BIBREF9 . Implicit in this line of reasoning is the assumption that instances of ”real" learning, such as human cognition, yield extremely robust systems. Indeed, at least in computer vision, human perception is regarded as the gold-standard for robustness to adversarial examples.
Evidently, humans can be fooled by a variety of illusions, whether they be optical, auditory, or other; and there is a long line of research from the cognitive science and psychology communities investigating these BIBREF10 . In general, however, these illusions are viewed as isolated examples that do not arise frequently, and which are far from the instances encountered in everyday life.
In this work, we attempt to understand how susceptible humans' perceptual systems for natural speech are to carefully designed “adversarial attacks.” We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .
A significant density of illusionable instances for humans might present similar types of security risks as adversarial examples do for ML systems. Auditory signals such as public service announcements, instructions sent to first responders, etc., could be targeted by a malicious agent. Given only access to a screen within eyesight of the intended victims, the agent might be able to significantly obfuscate or alter the message perceived by those who see the screen (even peripherally).
Related work
Illusionable instances for humans are similar to adversarial examples for ML systems. Strictly speaking, however, our investigation of the density of natural language for which McGurk illusions can be created, is not the human analog of adversarial examples. The adversarial examples for ML systems are datapoints that are misclassified, despite being extremely similar to a typical datapoint (that is correctly classified). Our illusions of misdubbed audio are not extremely close to any typically encountered input, since our McGurk samples have auditory signals corresponding to one phoneme/word and visual signals corresponding to another. Also, there is a compelling argument for why the McGurk confusion occurs, namely that human speech perception is bimodal (audio-visual) in nature when lip reading is available BIBREF20 , BIBREF21 . To the best of our knowledge, prior to our work, there has been little systematic investigation of the extent to which the McGurk effect, or other types of illusions, can be made dense in the set of instances encountered in everyday life. The closest work is BIBREF22 , where the authors demonstrate that some adversarial examples for computer vision systems also fool humans when humans were given less than a tenth of second to view the image. However, some of these examples seem less satisfying as the perturbation acts as a pixel-space interpolation between the original image and the “incorrect” class. This results in images that are visually borderline between two classes, and as such, do not provide a sense of illusion to the viewer. In general, researchers have not probed the robustness of human perception with the same tools, intent, or perspective, with which the security community is currently interrogating the robustness of ML systems.
Problem setup
For the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\prime }\ne x$ . We stress that the audio portion of the illusion is not modified and corresponds to a person saying $x$ . The illusion $f(x^{\prime },x)$ affects a listener if they perceive what is being said to be $y\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. We call a token illusionable if an illusion can be made for the token that affects the perception of a significant fraction of people.
In Section "Phoneme-level experiments" , we analyze the extent to which the McGurk effect can be used to create illusions for phonemes, words, and sentences, and analyze the fraction of natural language that is susceptible to such illusionability. We thereby obtain a lower bound on the density of illusionable natural speech.
We find that 1) a significant fraction of words that occur in everyday speech can be turned into McGurk-style illusions, 2) such illusions persist when embedded within the context of natural sentences, and in fact affect a significant fraction of natural sentences, and 3) the illusionability of words and sentences can be predicted using features from natural language modeling.
Phoneme-level experiments
We began by determining which phoneme sounds can be paired with video dubs of other phonemes to effect a perceived phoneme that is different from the actual sound. We created McGurk videos for all vowel pairs preceded with the consonant // as well as for all consonant pairs followed by the vowel // spoken by a speaker. There are 20 vowel phonemes and 24 consonant phonemes in American English although /ʤ/ and /ʒ/ are redundant for our purposes. Based on labels provided by 10 individuals we found that although vowels were not easily confused, there are a number of illusionable consonants. We note that the illusionable phoneme pairs depend both on the speaker and listener identities. Given Table 1 of illusionable phonemes, the goal was then to understand whether these could be leveraged within words or sentences; and if so, the fraction of natural speech that is susceptible.
Future Directions
This work is an initial step towards exploring the density of illusionable phenomena for humans. There are many natural directions for future work. In the vein of further understanding McGurk-style illusions, it seems worth building more accurate predictive models for sentence-level effects, and further investigating the security risks posed by McGurk illusions. For example, one concrete next step in understanding McGurk-style illusions would be to actually implement a system which takes an audio input, and outputs a video dub resulting in significant misunderstanding. Such a system would need to combine a high-quality speech-to-video-synthesis system BIBREF25 , BIBREF26 , with a fleshed-out language model and McGurk prediction model. There is also the question of how to guard against “attacks” on human perception. For example, in the case of the McGurk effect, how can one rephrase a passage of text in such a way that the meaning is unchanged, but the rephrased text is significantly more robust to McGurk style manipulations? The central question in this direction is what fraction of natural language can be made robust without significantly changing the semantics.
A better understanding of when and why certain human perception systems are nonrobust can also be applied to make ML systems more robust. In particular, neural networks have been found to be susceptible to adversarial examples in automatic speech recognition BIBREF27 , BIBREF28 and to the McGurk effect BIBREF29 , and a rudimentary approach to making language robust to the latter problem would be to use a reduced vocabulary that avoids words that score highly in our word-level illusionability prediction model. Relatedly, at the interface of cognitive science and adversarial examples, there has been work suggesting that humans can anticipate when or how machines will misclassify, including for adversarial examples BIBREF30 , BIBREF31 , BIBREF32 .
More broadly, as the tools for probing the weaknesses of ML systems develop further, it seems like a natural time to reexamine the supposed robustness of human perception. We anticipate unexpected findings. To provide one example, we summarize some preliminary results on audio-only illusions.
Conclusion
Our work suggests that for a significant fraction of natural speech, human perception can be altered by using subtle, learnable perturbations. This is an initial step towards exploring the density of illusionable phenomenon for humans, and examining the extent to which human perception may be vulnerable to security risks like those that adversarial examples present for ML systems.
We hope our work inspires future investigations into the discovery, generation, and quantification of multimodal and unimodal audiovisual and auditory illusions for humans. There exist many open research questions on when and why humans are susceptible to various types of illusions, how to model the illusionability of natural language, and how natural language can be made more robust to illusory perturbations. Additionally, we hope such investigations inform our interpretations of the strengths and weaknesses of current ML systems. Finally, there is the possibility that some vulnerability to carefully crafted adversarial examples may be inherent to all complex learning systems that interact with high-dimensional inputs in an environment with limited data; any thorough investigation of this question must also probe the human cognitive system.
Figure 1: Illustration of the McGurk effect. For some phoneme pairs, when the speaker visibly mouths phoneme A but the auditory stimulus is actually the phoneme B, listeners tend to perceive a phoneme C 6= B.
Table 1: Illusionable phonemes and effects based on preliminary phoneme-pair testing. Where a number of lip movements were available to affect a phoneme, the most effective one is listed.
Table 2: The 200 unique words sampled from the Project Gutenberg novel corpus. The 147 of those for which an illusory video was created are listed on top. Ordering is otherwise alphabetical.
Table 3: Test results for word-level McGurk illusions among the 147 words predicted to be illusionable. Shown are average error rates for watching the illusory video vs listening to the audio only, as well as the percentage of words that are correctly identified but sound ambiguous to the listener.
Figure 2: Predicted word illusionability closely matches observed word illusionability, with out-ofsample correlation of 57%. The words are sorted by increasing predicted word illusionability (and the observed illusionability of each word was not used in calculating the prediction of that word).
Query: What is the McGurk effect?
JSON Response: | {"answer": "a perceptual illusion, where listening to a speech sound while watching a mouth pronounce a different sound changes how the audio is heard", "evidence": ["For the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\\prime }\\ne x$ . We stress that the audio portion of the illusion is not modified and corresponds to a person saying $x$ . The illusion $f(x^{\\prime },x)$ affects a listener if they perceive what is being said to be $y\\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. We call a token illusionable if an illusion can be made for the token that affects the perception of a significant fraction of people.", "In this work, we attempt to understand how susceptible humans\u0027 perceptual systems for natural speech are to carefully designed \u201cadversarial attacks.\u201d We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme \u201cbaa,\u201d accompanied by a video of someone mouthing \u201cvaa\u201d, can be perceived as \u201cvaa\u201d or \u201cgaa\u201d (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 ."]} | science.qasper_abstractive_qa.229.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nA Surprising Density of Illusionable Natural Speech\n\nRecent work on adversarial examples has demonstrated that most natural inputs can be perturbed to fool even state-of-the-art machine learning systems. But does this happen for humans as well? In this work, we investigate: what fraction of natural instances of speech can be turned into\"illusions\"which either alter humans' perception or result in different people having significantly different perceptions? We first consider the McGurk effect, the phenomenon by which adding a carefully chosen video clip to the audio channel affects the viewer's perception of what is said (McGurk and MacDonald, 1976). We obtain empirical estimates that a significant fraction of both words and sentences occurring in natural speech have some susceptibility to this effect. We also learn models for predicting McGurk illusionability. Finally we demonstrate that the Yanny or Laurel auditory illusion (Pressnitzer et al., 2018) is not an isolated occurrence by generating several very different new instances. We believe that the surprising density of illusionable natural speech warrants further investigation, from the perspectives of both security and cognitive science. Supplementary videos are available at: https://www.youtube.com/playlist?list=PLaX7t1K-e_fF2iaenoKznCatm0RC37B_k.\n\nIntroduction\nA growing body of work on adversarial examples has identified that for machine-learning (ML) systems that operate on high-dimensional data, for nearly every natural input there exists a small perturbation of the point that will be misclassified by the system, posing a threat to its deployment in certain critical settings BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . More broadly, the susceptibility of ML systems to adversarial examples has prompted a re-examination of whether current ML systems are truly learning or if they are assemblages of tricks that are effective yet brittle and easily fooled BIBREF9 . Implicit in this line of reasoning is the assumption that instances of ”real\" learning, such as human cognition, yield extremely robust systems. Indeed, at least in computer vision, human perception is regarded as the gold-standard for robustness to adversarial examples.\nEvidently, humans can be fooled by a variety of illusions, whether they be optical, auditory, or other; and there is a long line of research from the cognitive science and psychology communities investigating these BIBREF10 . In general, however, these illusions are viewed as isolated examples that do not arise frequently, and which are far from the instances encountered in everyday life.\nIn this work, we attempt to understand how susceptible humans' perceptual systems for natural speech are to carefully designed “adversarial attacks.” We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .\nA significant density of illusionable instances for humans might present similar types of security risks as adversarial examples do for ML systems. Auditory signals such as public service announcements, instructions sent to first responders, etc., could be targeted by a malicious agent. Given only access to a screen within eyesight of the intended victims, the agent might be able to significantly obfuscate or alter the message perceived by those who see the screen (even peripherally).\n\nRelated work\nIllusionable instances for humans are similar to adversarial examples for ML systems. Strictly speaking, however, our investigation of the density of natural language for which McGurk illusions can be created, is not the human analog of adversarial examples. The adversarial examples for ML systems are datapoints that are misclassified, despite being extremely similar to a typical datapoint (that is correctly classified). Our illusions of misdubbed audio are not extremely close to any typically encountered input, since our McGurk samples have auditory signals corresponding to one phoneme/word and visual signals corresponding to another. Also, there is a compelling argument for why the McGurk confusion occurs, namely that human speech perception is bimodal (audio-visual) in nature when lip reading is available BIBREF20 , BIBREF21 . To the best of our knowledge, prior to our work, there has been little systematic investigation of the extent to which the McGurk effect, or other types of illusions, can be made dense in the set of instances encountered in everyday life. The closest work is BIBREF22 , where the authors demonstrate that some adversarial examples for computer vision systems also fool humans when humans were given less than a tenth of second to view the image. However, some of these examples seem less satisfying as the perturbation acts as a pixel-space interpolation between the original image and the “incorrect” class. This results in images that are visually borderline between two classes, and as such, do not provide a sense of illusion to the viewer. In general, researchers have not probed the robustness of human perception with the same tools, intent, or perspective, with which the security community is currently interrogating the robustness of ML systems.\n\nProblem setup\nFor the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\\prime }\\ne x$ . We stress that the audio portion of the illusion is not modified and corresponds to a person saying $x$ . The illusion $f(x^{\\prime },x)$ affects a listener if they perceive what is being said to be $y\\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. We call a token illusionable if an illusion can be made for the token that affects the perception of a significant fraction of people.\nIn Section \"Phoneme-level experiments\" , we analyze the extent to which the McGurk effect can be used to create illusions for phonemes, words, and sentences, and analyze the fraction of natural language that is susceptible to such illusionability. We thereby obtain a lower bound on the density of illusionable natural speech.\nWe find that 1) a significant fraction of words that occur in everyday speech can be turned into McGurk-style illusions, 2) such illusions persist when embedded within the context of natural sentences, and in fact affect a significant fraction of natural sentences, and 3) the illusionability of words and sentences can be predicted using features from natural language modeling.\n\nPhoneme-level experiments\nWe began by determining which phoneme sounds can be paired with video dubs of other phonemes to effect a perceived phoneme that is different from the actual sound. We created McGurk videos for all vowel pairs preceded with the consonant // as well as for all consonant pairs followed by the vowel // spoken by a speaker. There are 20 vowel phonemes and 24 consonant phonemes in American English although /ʤ/ and /ʒ/ are redundant for our purposes. Based on labels provided by 10 individuals we found that although vowels were not easily confused, there are a number of illusionable consonants. We note that the illusionable phoneme pairs depend both on the speaker and listener identities. Given Table 1 of illusionable phonemes, the goal was then to understand whether these could be leveraged within words or sentences; and if so, the fraction of natural speech that is susceptible.\n\nFuture Directions\nThis work is an initial step towards exploring the density of illusionable phenomena for humans. There are many natural directions for future work. In the vein of further understanding McGurk-style illusions, it seems worth building more accurate predictive models for sentence-level effects, and further investigating the security risks posed by McGurk illusions. For example, one concrete next step in understanding McGurk-style illusions would be to actually implement a system which takes an audio input, and outputs a video dub resulting in significant misunderstanding. Such a system would need to combine a high-quality speech-to-video-synthesis system BIBREF25 , BIBREF26 , with a fleshed-out language model and McGurk prediction model. There is also the question of how to guard against “attacks” on human perception. For example, in the case of the McGurk effect, how can one rephrase a passage of text in such a way that the meaning is unchanged, but the rephrased text is significantly more robust to McGurk style manipulations? The central question in this direction is what fraction of natural language can be made robust without significantly changing the semantics.\nA better understanding of when and why certain human perception systems are nonrobust can also be applied to make ML systems more robust. In particular, neural networks have been found to be susceptible to adversarial examples in automatic speech recognition BIBREF27 , BIBREF28 and to the McGurk effect BIBREF29 , and a rudimentary approach to making language robust to the latter problem would be to use a reduced vocabulary that avoids words that score highly in our word-level illusionability prediction model. Relatedly, at the interface of cognitive science and adversarial examples, there has been work suggesting that humans can anticipate when or how machines will misclassify, including for adversarial examples BIBREF30 , BIBREF31 , BIBREF32 .\nMore broadly, as the tools for probing the weaknesses of ML systems develop further, it seems like a natural time to reexamine the supposed robustness of human perception. We anticipate unexpected findings. To provide one example, we summarize some preliminary results on audio-only illusions.\n\nConclusion\nOur work suggests that for a significant fraction of natural speech, human perception can be altered by using subtle, learnable perturbations. This is an initial step towards exploring the density of illusionable phenomenon for humans, and examining the extent to which human perception may be vulnerable to security risks like those that adversarial examples present for ML systems.\nWe hope our work inspires future investigations into the discovery, generation, and quantification of multimodal and unimodal audiovisual and auditory illusions for humans. There exist many open research questions on when and why humans are susceptible to various types of illusions, how to model the illusionability of natural language, and how natural language can be made more robust to illusory perturbations. Additionally, we hope such investigations inform our interpretations of the strengths and weaknesses of current ML systems. Finally, there is the possibility that some vulnerability to carefully crafted adversarial examples may be inherent to all complex learning systems that interact with high-dimensional inputs in an environment with limited data; any thorough investigation of this question must also probe the human cognitive system.\n\nFigure 1: Illustration of the McGurk effect. For some phoneme pairs, when the speaker visibly mouths phoneme A but the auditory stimulus is actually the phoneme B, listeners tend to perceive a phoneme C 6= B.\n\nTable 1: Illusionable phonemes and effects based on preliminary phoneme-pair testing. Where a number of lip movements were available to affect a phoneme, the most effective one is listed.\n\nTable 2: The 200 unique words sampled from the Project Gutenberg novel corpus. The 147 of those for which an illusory video was created are listed on top. Ordering is otherwise alphabetical.\n\nTable 3: Test results for word-level McGurk illusions among the 147 words predicted to be illusionable. Shown are average error rates for watching the illusory video vs listening to the audio only, as well as the percentage of words that are correctly identified but sound ambiguous to the listener.\n\nFigure 2: Predicted word illusionability closely matches observed word illusionability, with out-ofsample correlation of 57%. The words are sorted by increasing predicted word illusionability (and the observed illusionability of each word was not used in calculating the prediction of that word).\n\nQuery: What is the McGurk effect?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"a perceptual illusion, where listening to a speech sound while watching a mouth pronounce a different sound changes how the audio is heard\", \"evidence\": [\"For the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\\\\prime }\\\\ne x$ . We stress that the audio portion of the illusion is not modified and corresponds to a person saying $x$ . The illusion $f(x^{\\\\prime },x)$ affects a listener if they perceive what is being said to be $y\\\\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. We call a token illusionable if an illusion can be made for the token that affects the perception of a significant fraction of people.\", \"In this work, we attempt to understand how susceptible humans\\u0027 perceptual systems for natural speech are to carefully designed \\u201cadversarial attacks.\\u201d We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme \\u201cbaa,\\u201d accompanied by a video of someone mouthing \\u201cvaa\\u201d, can be perceived as \\u201cvaa\\u201d or \\u201cgaa\\u201d (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .\"]}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.PURPOSE: After completing treatment, cancer survivors may suffer from a multitude of physical and mental health impairments, resulting in compromised quality of life. This exploratory study investigated whether two mind–body interventions, i.e., Mind–Body Bridging (MBB) and Mindfulness Meditation (MM), could improve posttreatment cancer survivors' self-reported sleep disturbance and comorbid symptoms, as compared to sleep hygiene education (SHE) as an active control.
ABSTRACT.METHODS: This randomized controlled trial examined 57 cancer survivors with clinically significant self-reported sleep disturbance, randomly assigned to receive MBB, MM, or SHE. All interventions were conducted in three sessions, once per week. Patient-reported outcomes were assessed via the Medical Outcomes Study Sleep Scale and other indicators of psychosocial functioning relevant to quality of life, stress, depression, mindfulness, self-compassion, and well-being.
ABSTRACT.RESULTS: Mixed effects model analysis revealed that mean sleep disturbance symptoms in the MBB (p = .0029) and MM (p = .0499) groups were lower than in the SHE group, indicating that both mind–body interventions improved sleep. In addition, compared with the SHE group, the MBB group showed reductions in self-reported depression symptoms (p = .040) and improvements in overall levels of mindfulness (p = .018), self-compassion (p = .028), and well-being (p = .019) at postintervention.
ABSTRACT.CONCLUSIONS: This study provides preliminary evidence that brief sleep-focused MBB and MM are promising interventions for sleep disturbance in cancer survivors. Integrating MBB or MM into posttreatment supportive plans should enhance care of cancer survivors with sleep disturbance. Because MBB produced additional secondary benefits, MBB may serve as a promising multipurpose intervention for posttreatment cancer survivors suffering from sleep disturbance and other comorbid symptoms.
ABSTRACT.IMPLICATIONS FOR CANCER SURVIVORS: Two brief sleep-focused mind–body interventions investigated in the study were effective in reducing sleep disturbance and one of them further improved other psychosocial aspects of the cancer survivors' life. Management of sleep problems in survivors is a high priority issue that demands more attention in cancer survivorship.
Extracted Data: | [["Mind\u2013Body Bridging (MBB) and Mindfulness Meditation (MM) ", "sleep hygiene education (SHE)", "sleep disturbance symptoms", "significantly decreased", "Mixed effects model analysis revealed that mean sleep disturbance symptoms in the MBB (p\u2009=\u2009.0029) and MM (p\u2009=\u2009.0499) groups were lower than in the SHE group, indicating that both mind\u2013body interventions improved sleep."], ["Mind\u2013Body Bridging (MBB) ", "sleep hygiene education (SHE)", "overall levels of mindfulness, self-compassion, well-being", "significantly increased", "In addition, compared with the SHE group, the MBB group showed reductions in self-reported depression symptoms (p\u2009=\u2009.040) and improvements in overall levels of mindfulness (p\u2009=\u2009.018), self-compassion (p\u2009=\u2009.028), and well-being (p\u2009=\u2009.019) at postintervention."], ["Mind\u2013Body Bridging (MBB) ", "sleep hygiene education (SHE)", "self-reported depression symptoms", "significantly decreased", "In addition, compared with the SHE group, the MBB group showed reductions in self-reported depression symptoms (p\u2009=\u2009.040)"]] | science.evidence_inference.236 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.PURPOSE: After completing treatment, cancer survivors may suffer from a multitude of physical and mental health impairments, resulting in compromised quality of life. This exploratory study investigated whether two mind–body interventions, i.e., Mind–Body Bridging (MBB) and Mindfulness Meditation (MM), could improve posttreatment cancer survivors' self-reported sleep disturbance and comorbid symptoms, as compared to sleep hygiene education (SHE) as an active control.\n\nABSTRACT.METHODS: This randomized controlled trial examined 57 cancer survivors with clinically significant self-reported sleep disturbance, randomly assigned to receive MBB, MM, or SHE. All interventions were conducted in three sessions, once per week. Patient-reported outcomes were assessed via the Medical Outcomes Study Sleep Scale and other indicators of psychosocial functioning relevant to quality of life, stress, depression, mindfulness, self-compassion, and well-being.\n\nABSTRACT.RESULTS: Mixed effects model analysis revealed that mean sleep disturbance symptoms in the MBB (p = .0029) and MM (p = .0499) groups were lower than in the SHE group, indicating that both mind–body interventions improved sleep. In addition, compared with the SHE group, the MBB group showed reductions in self-reported depression symptoms (p = .040) and improvements in overall levels of mindfulness (p = .018), self-compassion (p = .028), and well-being (p = .019) at postintervention.\n\nABSTRACT.CONCLUSIONS: This study provides preliminary evidence that brief sleep-focused MBB and MM are promising interventions for sleep disturbance in cancer survivors. Integrating MBB or MM into posttreatment supportive plans should enhance care of cancer survivors with sleep disturbance. Because MBB produced additional secondary benefits, MBB may serve as a promising multipurpose intervention for posttreatment cancer survivors suffering from sleep disturbance and other comorbid symptoms.\n\nABSTRACT.IMPLICATIONS FOR CANCER SURVIVORS: Two brief sleep-focused mind–body interventions investigated in the study were effective in reducing sleep disturbance and one of them further improved other psychosocial aspects of the cancer survivors' life. Management of sleep problems in survivors is a high priority issue that demands more attention in cancer survivorship.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Mind\\u2013Body Bridging (MBB) and Mindfulness Meditation (MM) \", \"sleep hygiene education (SHE)\", \"sleep disturbance symptoms\", \"significantly decreased\", \"Mixed effects model analysis revealed that mean sleep disturbance symptoms in the MBB (p\\u2009=\\u2009.0029) and MM (p\\u2009=\\u2009.0499) groups were lower than in the SHE group, indicating that both mind\\u2013body interventions improved sleep.\"], [\"Mind\\u2013Body Bridging (MBB) \", \"sleep hygiene education (SHE)\", \"overall levels of mindfulness, self-compassion, well-being\", \"significantly increased\", \"In addition, compared with the SHE group, the MBB group showed reductions in self-reported depression symptoms (p\\u2009=\\u2009.040) and improvements in overall levels of mindfulness (p\\u2009=\\u2009.018), self-compassion (p\\u2009=\\u2009.028), and well-being (p\\u2009=\\u2009.019) at postintervention.\"], [\"Mind\\u2013Body Bridging (MBB) \", \"sleep hygiene education (SHE)\", \"self-reported depression symptoms\", \"significantly decreased\", \"In addition, compared with the SHE group, the MBB group showed reductions in self-reported depression symptoms (p\\u2009=\\u2009.040)\"]]",
"role": "assistant"
}
]
|
science.discomat_te | | Sample | SiO2 (wt%) | Na2CO3 (wt%) | Ag2O (wt%) |
| M3 | 65.63 | 31.25 | 3.12 |
| M4 | 64.95 | 30.93 | 4.12 |
| M5 | 64.28 | 30.61 | 5.11 |
| M6 | 63.64 | 30.30 | 6.06 |
Caption: Content of the oxides in the mixtures used to form the set of glasses in weight percentage
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "65.63", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "M3"}
{"value": "31.25", "type": "Glass_Compound_Amount", "constituent": "Na2CO3", "unit": "wt", "material": "M3"}
{"value": "3.12", "type": "Glass_Compound_Amount", "constituent": "Ag2O", "unit": "wt", "material": "M3"}
{"value": "64.95", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "M4"}
{"value": "30.93", "type": "Glass_Compound_Amount", "constituent": "Na2CO3", "unit": "wt", "material": "M4"}
{"value": "4.12", "type": "Glass_Compound_Amount", "constituent": "Ag2O", "unit": "wt", "material": "M4"}
{"value": "64.28", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "M5"}
{"value": "30.61", "type": "Glass_Compound_Amount", "constituent": "Na2CO3", "unit": "wt", "material": "M5"}
{"value": "5.11", "type": "Glass_Compound_Amount", "constituent": "Ag2O", "unit": "wt", "material": "M5"}
{"value": "63.64", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "M6"}
{"value": "30.30", "type": "Glass_Compound_Amount", "constituent": "Na2CO3", "unit": "wt", "material": "M6"}
{"value": "6.06", "type": "Glass_Compound_Amount", "constituent": "Ag2O", "unit": "wt", "material": "M6"} | science.discomat_te.7 | [
{
"content": "| Sample | SiO2 (wt%) | Na2CO3 (wt%) | Ag2O (wt%) |\n| M3 | 65.63 | 31.25 | 3.12 |\n| M4 | 64.95 | 30.93 | 4.12 |\n| M5 | 64.28 | 30.61 | 5.11 |\n| M6 | 63.64 | 30.30 | 6.06 |\n\nCaption: Content of the oxides in the mixtures used to form the set of glasses in weight percentage\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"65.63\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"M3\"}\n{\"value\": \"31.25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2CO3\", \"unit\": \"wt\", \"material\": \"M3\"}\n{\"value\": \"3.12\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ag2O\", \"unit\": \"wt\", \"material\": \"M3\"}\n{\"value\": \"64.95\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"M4\"}\n{\"value\": \"30.93\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2CO3\", \"unit\": \"wt\", \"material\": \"M4\"}\n{\"value\": \"4.12\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ag2O\", \"unit\": \"wt\", \"material\": \"M4\"}\n{\"value\": \"64.28\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"M5\"}\n{\"value\": \"30.61\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2CO3\", \"unit\": \"wt\", \"material\": \"M5\"}\n{\"value\": \"5.11\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ag2O\", \"unit\": \"wt\", \"material\": \"M5\"}\n{\"value\": \"63.64\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"M6\"}\n{\"value\": \"30.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2CO3\", \"unit\": \"wt\", \"material\": \"M6\"}\n{\"value\": \"6.06\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Ag2O\", \"unit\": \"wt\", \"material\": \"M6\"}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
A Resource for Computational Experiments on Mapudungun
We present a resource for computational experiments on Mapudungun, a polysynthetic indigenous language spoken in Chile with upwards of 200 thousand speakers. We provide 142 hours of culturally significant conversations in the domain of medical treatment. The conversations are fully transcribed and translated into Spanish. The transcriptions also include annotations for code-switching and non-standard pronunciations. We also provide baseline results on three core NLP tasks: speech recognition, speech synthesis, and machine translation between Spanish and Mapudungun. We further explore other applications for which the corpus will be suitable, including the study of code-switching, historical orthography change, linguistic structure, and sociological and anthropological studies.
Introduction
Recent years have seen unprecedented progress for Natural Language Processing (NLP) on almost every NLP subtask. Even though low-resource settings have also been explored, this progress has overwhelmingly been observed in languages with significant data resources that can be leveraged to train deep neural networks. Low-resource languages still lag behind.
Endangered languages pose an additional challenge. The process of documenting an endangered language typically includes the creation of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the Transcription Bottleneck: the linguistic fieldworker and the language community may not have time to transcribe all of the recordings and may only transcribe segments that are linguistically salient for publication or culturally significant for the creation of community resources.
With this work we make publicly available a large corpus in Mapudungun, a language of the indigenous Mapuche people of southern Chile and western Argentina. We hope to ameliorate the resource gap and the transcription bottleneck in two ways. First, we are providing a larger data set than has previously been available, and second, we are providing baselines for NLP tasks (speech recognition, speech synthesis, and machine translation). In providing baselines and datasets splits, we hope to further facilitate research on low-resource NLP for this language through our data set. Research on low-resource speech recognition is particularly important in relieving the transcription bottleneck, while tackling the research challenges that speech synthesis and machine translation pose for such languages could lead to such systems being deployed to serve more under-represented communities.
The Mapudungun Language
Mapudungun (iso 639-3: arn) is an indigenous language of the Americas spoken natively in Chile and Argentina, with an estimated 100 to 200 thousand speakers in Chile and 27 to 60 thousand speakers in Argentina BIBREF0. It is an isolate language and is classified as threatened by Ethnologue, hence the critical importance of all documentary efforts. Although the morphology of nouns is relatively simple, Mapudungun verb morphology is highly agglutinative and complex. Some analyses provide as many as 36 verb suffix slots BIBREF1. A typical complex verb form occurring in our corpus of spoken Mapudungun consists of five or six morphemes.
Mapudungun has several interesting grammatical properties. It is a polysynthetic language in the sense of BIBREF2; see BIBREF3 for explicit argumentation. As with other polysynthetic languages, Mapudungun has Noun Incorporation; however, it is unique insofar as the Noun appears to the right of the Verb, instead of to the left, as in most polysynthetic languages BIBREF4. One further distinction of Mapudungun is that, whereas other polysynthetic languages are characterized by a lack of infinitives, Mapudungun has infinitival verb forms; that is, while subordinate clauses in Mapudungun closely resemble possessed nominals and may occur with an analytic marker resembling possessor agreement, there is no agreement inflection on the verb itself. One further remarkable property of Mapudungun is its inverse voice system of agreement, whereby the highest agreement is with the argument highest in an animacy hierarchy regardless of thematic role BIBREF5.
The Resource
The resource is comprised of 142 hours of spoken Mapudungun that was recorded during the AVENUE project BIBREF6 in 2001 to 2005. The data was recorded under a partnership between the AVENUE project, funded by the US National Science Foundation at Carnegie Mellon University, the Chilean Ministry of Education (Mineduc), and the Instituto de Estudios Indígenas at Universidad de La Frontera, originally spanning 170 hours of audio. We have recently cleaned the data and are releasing it publicly for the first time (although it has been shared with individual researchers in the past) along with NLP baselines.
The recordings were transcribed and translated into Spanish at the Instituto de Estudios Indígenas at Universidad de La Frontera. The corpus covers three dialects of Mapudungun: about 110 hours of Nguluche, 20 hours of Lafkenche and 10 hours of Pewenche. The three dialects are quite similar, with some minor semantic and phonetic differences. The fourth traditionally distinguished dialect, Huilliche, has several grammatical differences from the other three and is classified by Ethnologue as a separate language, iso 639-3: huh, and as nearly extinct.
The recordings are restricted to a single domain: primary, preventive, and treatment health care, including both Western and Mapuche traditional medicine. The recording sessions were conducted as interactive conversations so as to be natural in Mapuche culture, and they were open-ended, following an ethnographic approach. The interviewer was trained in these methods along with the use of the digital recording systems that were available at the time. We also followed human subject protocol. Each person signed a consent form to release the recordings for research purposes and the data have been accordingly anonymized. Because Machi (traditional Mapuche healers) were interviewed, we asked the transcribers to delete any culturally proprietary knowledge that a Machi may have revealed during the conversation. Similarly, we deleted any names or any information that may identify the participants.
The corpus is culturally relevant because it was created by Mapuche people, using traditional ways of relating to each other in conversations. They discussed personal experiences with primary health care in the traditional Mapuche system and the Chilean health care system, talking about illnesses and the way they were cured. The participants ranged from 16 years old to 100 years old, almost in equal numbers of men and women, and they were all native speakers of Mapudungun.
The Resource ::: Additional Annotations
In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.
The Resource ::: Cleaning
The dialogues were originally recorded using a Sony DAT recorder (48kHz), model TCD-D8, and Sony digital stereo microphone, model ECM-DS70P. Transcription was performed with the TransEdit transcription tool v.1.1 beta 10, which synchronizes the transcribed text and the wave files.
However, we found that a non-trivial number of the utterance boundaries and speaker annotations were flawed. Also some recording sessions did not have a complete set of matching audio, transcription, and translation files. Hence, in an effort to provide a relatively “clean" corpus for modern computational experiments, we converted the encoding of the textual transcription from Latin-1 to Unicode, DOS to UNIX line endings, a now more standard text encoding format than what was used when the data was first collected. Additionally, we renamed a small portion of files which had been misnamed and removed several duplicate files.
Although all of the data was recorded with similar equipment in relatively quiet environments, the acoustics are not as uniform as we would like for building speech synthesizers. Thus we applied standardized power normalization. We also moved the boundaries of the turns to standardize the amount of leading and trailing silence in each turn. This is a standard procedure for speech recognition and synthesis datasets. Finally we used the techniques in BIBREF7 for found data to re-align the text to the audio and find out which turns are best (or worst) aligned so that we can select segments that give the most accurate alignments. Some of the misalignments may in part be due to varied orthography, and we intend, but have not yet, to investigate normalization of orthography (i.e. spelling correction) to mitigate this.
Applications
Our resource has the potential to be the basis of computational research in Mapudungun across several areas. Since the collected audio has been transcribed, our resource is appropriate for the study of automatic speech recognition and speech synthesis. The Spanish translations enable the creation of machine translation systems between Mapudungun and Spanish, as well as end-to-end (or direct) speech translation. We in fact built such speech synthesis, speech recognition, and machine translation systems as a showcase of the usefulness of our corpus in that research direction.
Furthermore, our annotations of the Spanish words interspersed in Mapudungun speech could allow for a study of code-switching patterns within the Mapuche community. In addition, our annotations of non-standardized orthographic transcriptions could be extremely useful in the study of historical language and orthography change as a language moves from predominantly oral to being written in a standardized orthography, as well as in building spelling normalization and correction systems. The relatively large amount of data that we collected will also allow for the training of large language models, which in turn could be used as the basis for predictive keyboards tailored to Mapudungun. Last, since all data are dialogues annotated for the different speaker turns, they could be useful for building Mapudungun dialogue systems and chatbot-like applications.
The potential applications of our resource, however, are not exhausted in language technologies. The resource as a whole could be invaluable for ethnographic and sociological research, as the conversations contrast traditional and Western medicine practices, and they could reveal interesting aspects of the Mapuche culture.
In addition, the corpus is a goldmine of data for studying the morphostyntax of Mapudungun BIBREF8. As an isolate polysynthetic language, the study of Mapudungun can provide insights into the range of possibilities within human languages can work.
Baseline Results
Using the aforementioned higher quality portions of the corpus, we trained baseline systems for Mapudungun speech recognition and speech synthesis, as well as Machine Translation systems between Mapudungun and Spanish.
Baseline Results ::: Speech Recognition
For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.
Under the first setting, we obtained a 60% character error rate, while the generated lexicon significantly boosts performance, as our systems achieve a notably reduced 30% phone error rate. Naturally, these results are relatively far from the quality of ASR systems trained on large amounts of clean data such as those available in English. Given the quality of the recordings, and the lack of additional resources, we consider our results fairly reasonable and they would still be usable for simple dialog-like tasks. We anticipate, though, that one could significantly improve ASR quality over our dataset, by using in-domain language models, or by training end-to-end neural recognizers leveraging languages with similar phonetic inventories BIBREF12 or by using the available Spanish translations in a multi-source scenario BIBREF13.
Baseline Results ::: Mapudungun–Spanish Machine Translation
We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs.
The baseline results using different portions of the training set (10k, 50k, 100k, and all (220k) parallel sentences) on both translation directions are presented in Table , using detokenized BLEU BIBREF19 (a standard MT metric) and chrF BIBREF20 (a metric that we consider to be more appropriate for polysynthetic languages, as it does not rely on word n-grams) computed with the sacreBLEU toolkit BIBREF21. It it worth noting the difference in quality between the two directions, with translation into Spanish reaching 20.4 (almost 21) BLEU points in the development set, while the opposite direction (translating into Mapudungun) shows about a 7 BLEU points worse performance. This is most likely due to Mapudungun being a polysynthetic language, with its complicated morphology posing a challenge for proper generation.
Related Work
Mapudungun grammar has been studied since the arrival of European missionaries and colonizers hundreds of years ago. More recent descriptions of Mapudungun grammar BIBREF1 and BIBREF0 informed the collection of the resource that we are presenting in this paper.
Portions of our resource have been used in early efforts to build language systems for Mapudungun. In particular, BIBREF22 focused on Mapudungun morphology in order to create spelling correction systems, while BIBREF23, BIBREF6, BIBREF24, and BIBREF25 developed hybrid rule- and phrase-based Statistical Machine Translation systems.
Naturally, similar works in collecting corpora in Indigenous languages of Latin America are abundant, but very few, if any, have the scale and potential of our resource to be useful in many downstream language-specific and inter-disciplinary applications. A general overview of the state of NLP for the under-represented languages of the Americas can be found at BIBREF26. To name a few of the many notable works, BIBREF27 created a parallel Mixtec-Spanish corpus for Machine Translation and BIBREF28 created lexical resources for Arapaho, while BIBREF29 and BIBREF30 focused on building speech corpora for Southern Quechua and Chatino respectively.
Acknowledgements
The data collection described in this paper was supported by NSF grants IIS-0121631 (AVENUE) and IIS-0534217 (LETRAS), with supplemental funding from NSF's Office of International Science and Education. Preliminary funding for work on Mapudungun was also provided by DARPA The experimental material is based upon work generously supported by the National Science Foundation under grant 1761548.
Table 1: Basic Statistics of our corpus.
Table 3: Machine Translation Results
Query: How is non-standard pronunciation identified?
JSON Response: | {"answer": "Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation.", "evidence": ["In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such."]} | science.qasper_abstractive_qa.61.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nA Resource for Computational Experiments on Mapudungun\n\nWe present a resource for computational experiments on Mapudungun, a polysynthetic indigenous language spoken in Chile with upwards of 200 thousand speakers. We provide 142 hours of culturally significant conversations in the domain of medical treatment. The conversations are fully transcribed and translated into Spanish. The transcriptions also include annotations for code-switching and non-standard pronunciations. We also provide baseline results on three core NLP tasks: speech recognition, speech synthesis, and machine translation between Spanish and Mapudungun. We further explore other applications for which the corpus will be suitable, including the study of code-switching, historical orthography change, linguistic structure, and sociological and anthropological studies.\n\nIntroduction\nRecent years have seen unprecedented progress for Natural Language Processing (NLP) on almost every NLP subtask. Even though low-resource settings have also been explored, this progress has overwhelmingly been observed in languages with significant data resources that can be leveraged to train deep neural networks. Low-resource languages still lag behind.\nEndangered languages pose an additional challenge. The process of documenting an endangered language typically includes the creation of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the Transcription Bottleneck: the linguistic fieldworker and the language community may not have time to transcribe all of the recordings and may only transcribe segments that are linguistically salient for publication or culturally significant for the creation of community resources.\nWith this work we make publicly available a large corpus in Mapudungun, a language of the indigenous Mapuche people of southern Chile and western Argentina. We hope to ameliorate the resource gap and the transcription bottleneck in two ways. First, we are providing a larger data set than has previously been available, and second, we are providing baselines for NLP tasks (speech recognition, speech synthesis, and machine translation). In providing baselines and datasets splits, we hope to further facilitate research on low-resource NLP for this language through our data set. Research on low-resource speech recognition is particularly important in relieving the transcription bottleneck, while tackling the research challenges that speech synthesis and machine translation pose for such languages could lead to such systems being deployed to serve more under-represented communities.\n\nThe Mapudungun Language\nMapudungun (iso 639-3: arn) is an indigenous language of the Americas spoken natively in Chile and Argentina, with an estimated 100 to 200 thousand speakers in Chile and 27 to 60 thousand speakers in Argentina BIBREF0. It is an isolate language and is classified as threatened by Ethnologue, hence the critical importance of all documentary efforts. Although the morphology of nouns is relatively simple, Mapudungun verb morphology is highly agglutinative and complex. Some analyses provide as many as 36 verb suffix slots BIBREF1. A typical complex verb form occurring in our corpus of spoken Mapudungun consists of five or six morphemes.\nMapudungun has several interesting grammatical properties. It is a polysynthetic language in the sense of BIBREF2; see BIBREF3 for explicit argumentation. As with other polysynthetic languages, Mapudungun has Noun Incorporation; however, it is unique insofar as the Noun appears to the right of the Verb, instead of to the left, as in most polysynthetic languages BIBREF4. One further distinction of Mapudungun is that, whereas other polysynthetic languages are characterized by a lack of infinitives, Mapudungun has infinitival verb forms; that is, while subordinate clauses in Mapudungun closely resemble possessed nominals and may occur with an analytic marker resembling possessor agreement, there is no agreement inflection on the verb itself. One further remarkable property of Mapudungun is its inverse voice system of agreement, whereby the highest agreement is with the argument highest in an animacy hierarchy regardless of thematic role BIBREF5.\n\nThe Resource\nThe resource is comprised of 142 hours of spoken Mapudungun that was recorded during the AVENUE project BIBREF6 in 2001 to 2005. The data was recorded under a partnership between the AVENUE project, funded by the US National Science Foundation at Carnegie Mellon University, the Chilean Ministry of Education (Mineduc), and the Instituto de Estudios Indígenas at Universidad de La Frontera, originally spanning 170 hours of audio. We have recently cleaned the data and are releasing it publicly for the first time (although it has been shared with individual researchers in the past) along with NLP baselines.\nThe recordings were transcribed and translated into Spanish at the Instituto de Estudios Indígenas at Universidad de La Frontera. The corpus covers three dialects of Mapudungun: about 110 hours of Nguluche, 20 hours of Lafkenche and 10 hours of Pewenche. The three dialects are quite similar, with some minor semantic and phonetic differences. The fourth traditionally distinguished dialect, Huilliche, has several grammatical differences from the other three and is classified by Ethnologue as a separate language, iso 639-3: huh, and as nearly extinct.\nThe recordings are restricted to a single domain: primary, preventive, and treatment health care, including both Western and Mapuche traditional medicine. The recording sessions were conducted as interactive conversations so as to be natural in Mapuche culture, and they were open-ended, following an ethnographic approach. The interviewer was trained in these methods along with the use of the digital recording systems that were available at the time. We also followed human subject protocol. Each person signed a consent form to release the recordings for research purposes and the data have been accordingly anonymized. Because Machi (traditional Mapuche healers) were interviewed, we asked the transcribers to delete any culturally proprietary knowledge that a Machi may have revealed during the conversation. Similarly, we deleted any names or any information that may identify the participants.\nThe corpus is culturally relevant because it was created by Mapuche people, using traditional ways of relating to each other in conversations. They discussed personal experiences with primary health care in the traditional Mapuche system and the Chilean health care system, talking about illnesses and the way they were cured. The participants ranged from 16 years old to 100 years old, almost in equal numbers of men and women, and they were all native speakers of Mapudungun.\n\nThe Resource ::: Additional Annotations\nIn addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.\n\nThe Resource ::: Cleaning\nThe dialogues were originally recorded using a Sony DAT recorder (48kHz), model TCD-D8, and Sony digital stereo microphone, model ECM-DS70P. Transcription was performed with the TransEdit transcription tool v.1.1 beta 10, which synchronizes the transcribed text and the wave files.\nHowever, we found that a non-trivial number of the utterance boundaries and speaker annotations were flawed. Also some recording sessions did not have a complete set of matching audio, transcription, and translation files. Hence, in an effort to provide a relatively “clean\" corpus for modern computational experiments, we converted the encoding of the textual transcription from Latin-1 to Unicode, DOS to UNIX line endings, a now more standard text encoding format than what was used when the data was first collected. Additionally, we renamed a small portion of files which had been misnamed and removed several duplicate files.\nAlthough all of the data was recorded with similar equipment in relatively quiet environments, the acoustics are not as uniform as we would like for building speech synthesizers. Thus we applied standardized power normalization. We also moved the boundaries of the turns to standardize the amount of leading and trailing silence in each turn. This is a standard procedure for speech recognition and synthesis datasets. Finally we used the techniques in BIBREF7 for found data to re-align the text to the audio and find out which turns are best (or worst) aligned so that we can select segments that give the most accurate alignments. Some of the misalignments may in part be due to varied orthography, and we intend, but have not yet, to investigate normalization of orthography (i.e. spelling correction) to mitigate this.\n\nApplications\nOur resource has the potential to be the basis of computational research in Mapudungun across several areas. Since the collected audio has been transcribed, our resource is appropriate for the study of automatic speech recognition and speech synthesis. The Spanish translations enable the creation of machine translation systems between Mapudungun and Spanish, as well as end-to-end (or direct) speech translation. We in fact built such speech synthesis, speech recognition, and machine translation systems as a showcase of the usefulness of our corpus in that research direction.\nFurthermore, our annotations of the Spanish words interspersed in Mapudungun speech could allow for a study of code-switching patterns within the Mapuche community. In addition, our annotations of non-standardized orthographic transcriptions could be extremely useful in the study of historical language and orthography change as a language moves from predominantly oral to being written in a standardized orthography, as well as in building spelling normalization and correction systems. The relatively large amount of data that we collected will also allow for the training of large language models, which in turn could be used as the basis for predictive keyboards tailored to Mapudungun. Last, since all data are dialogues annotated for the different speaker turns, they could be useful for building Mapudungun dialogue systems and chatbot-like applications.\nThe potential applications of our resource, however, are not exhausted in language technologies. The resource as a whole could be invaluable for ethnographic and sociological research, as the conversations contrast traditional and Western medicine practices, and they could reveal interesting aspects of the Mapuche culture.\nIn addition, the corpus is a goldmine of data for studying the morphostyntax of Mapudungun BIBREF8. As an isolate polysynthetic language, the study of Mapudungun can provide insights into the range of possibilities within human languages can work.\n\nBaseline Results\nUsing the aforementioned higher quality portions of the corpus, we trained baseline systems for Mapudungun speech recognition and speech synthesis, as well as Machine Translation systems between Mapudungun and Spanish.\n\nBaseline Results ::: Speech Recognition\nFor speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.\nUnder the first setting, we obtained a 60% character error rate, while the generated lexicon significantly boosts performance, as our systems achieve a notably reduced 30% phone error rate. Naturally, these results are relatively far from the quality of ASR systems trained on large amounts of clean data such as those available in English. Given the quality of the recordings, and the lack of additional resources, we consider our results fairly reasonable and they would still be usable for simple dialog-like tasks. We anticipate, though, that one could significantly improve ASR quality over our dataset, by using in-domain language models, or by training end-to-end neural recognizers leveraging languages with similar phonetic inventories BIBREF12 or by using the available Spanish translations in a multi-source scenario BIBREF13.\n\nBaseline Results ::: Mapudungun–Spanish Machine Translation\nWe built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs.\nThe baseline results using different portions of the training set (10k, 50k, 100k, and all (220k) parallel sentences) on both translation directions are presented in Table , using detokenized BLEU BIBREF19 (a standard MT metric) and chrF BIBREF20 (a metric that we consider to be more appropriate for polysynthetic languages, as it does not rely on word n-grams) computed with the sacreBLEU toolkit BIBREF21. It it worth noting the difference in quality between the two directions, with translation into Spanish reaching 20.4 (almost 21) BLEU points in the development set, while the opposite direction (translating into Mapudungun) shows about a 7 BLEU points worse performance. This is most likely due to Mapudungun being a polysynthetic language, with its complicated morphology posing a challenge for proper generation.\n\nRelated Work\nMapudungun grammar has been studied since the arrival of European missionaries and colonizers hundreds of years ago. More recent descriptions of Mapudungun grammar BIBREF1 and BIBREF0 informed the collection of the resource that we are presenting in this paper.\nPortions of our resource have been used in early efforts to build language systems for Mapudungun. In particular, BIBREF22 focused on Mapudungun morphology in order to create spelling correction systems, while BIBREF23, BIBREF6, BIBREF24, and BIBREF25 developed hybrid rule- and phrase-based Statistical Machine Translation systems.\nNaturally, similar works in collecting corpora in Indigenous languages of Latin America are abundant, but very few, if any, have the scale and potential of our resource to be useful in many downstream language-specific and inter-disciplinary applications. A general overview of the state of NLP for the under-represented languages of the Americas can be found at BIBREF26. To name a few of the many notable works, BIBREF27 created a parallel Mixtec-Spanish corpus for Machine Translation and BIBREF28 created lexical resources for Arapaho, while BIBREF29 and BIBREF30 focused on building speech corpora for Southern Quechua and Chatino respectively.\n\nAcknowledgements\nThe data collection described in this paper was supported by NSF grants IIS-0121631 (AVENUE) and IIS-0534217 (LETRAS), with supplemental funding from NSF's Office of International Science and Education. Preliminary funding for work on Mapudungun was also provided by DARPA The experimental material is based upon work generously supported by the National Science Foundation under grant 1761548.\n\nTable 1: Basic Statistics of our corpus.\n\nTable 3: Machine Translation Results\n\nQuery: How is non-standard pronunciation identified?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation.\", \"evidence\": [\"In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.\"]}",
"role": "assistant"
}
]
|
science.discomat_te | | Wave number (cm-1) | Wave number (cm-1) | Compositions | Compositions | Compositions | Compositions | Assignment |
| Diopside [44] | Di-Ca-Ts GCs | BaCMAS1 | BaCMAS2 | BaCMAS3 | BaCMAS4 | Assignment |
| 335 | 335 | - | - | 850degC | 850degC | O(nbr)-Si-O (nbr) wagging |
| 335 | 337 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | O(nbr)-Si-O (nbr) wagging |
| 366 | - | - | - | - | - | Not assigned |
| 395 | 390-392 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | Chain deformation [O (nbr)-Ca-O (nbr)] bending |
| 395 | 400 | 850degC | 850degC | 850degC | 850degC | Chain deformation [O (nbr)-Ca-O (nbr)] bending |
| - | 418 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 950degC, 1000degC | Not assigned |
| 470 | 480 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Chain deformation [O (nbr)-Mg-O (nbr)] bending |
| 510 | 520 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Chain deformation [O (nbr)-Mg-O (nbr)] bending |
| 630 | 638-640 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | O(nbr)-Si-O (nbr) bending |
| 670 | 660 | - | - | 850degC | - | Not assigned |
| 670 | 675 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 850-1000degC | Not assigned |
| 670 | 680 | 850degC | 850degC | - | - | Not assigned |
| 865 | 875 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Si-O (nbr) stretching |
| 920 | 925 | 900-1000degC | 900-1000degC | 850-1000degC | 850-1000degC | Si-O (nbr) stretching |
| 965 | 975 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Si-O (br) stretching |
| 1070 | 1075 | 850degC, 900degC | 850degC, 900degC | - | - | Si-O (br) stretching |
| 1070 | 1080 | - | - | 950degC, 1000degC | 950degC, 1000degC | Si-O (br) stretching |
Caption: Wave numbers and assignments of the transmittance bands corresponding to diopside in the investigated GCs at different temperatures.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "335", "type": "Other"}
{"value": "335", "type": "Other"}
{"value": "335", "type": "Other"}
{"value": "337", "type": "Other"}
{"value": "366", "type": "Other"}
{"value": "395", "type": "Other"}
{"value": "390", "type": "Other"}
{"value": "392", "type": "Other"}
{"value": "395", "type": "Other"}
{"value": "400", "type": "Other"}
{"value": "418", "type": "Other"}
{"value": "470", "type": "Other"}
{"value": "480", "type": "Other"}
{"value": "510", "type": "Other"}
{"value": "520", "type": "Other"}
{"value": "630", "type": "Other"}
{"value": "638", "type": "Other"}
{"value": "640", "type": "Other"}
{"value": "670", "type": "Other"}
{"value": "660", "type": "Other"}
{"value": "670", "type": "Other"}
{"value": "675", "type": "Other"}
{"value": "670", "type": "Other"}
{"value": "680", "type": "Other"}
{"value": "865", "type": "Other"}
{"value": "875", "type": "Other"}
{"value": "920", "type": "Other"}
{"value": "925", "type": "Other"}
{"value": "965", "type": "Other"}
{"value": "975", "type": "Other"}
{"value": "1070", "type": "Other"}
{"value": "1075", "type": "Other"}
{"value": "1070", "type": "Other"}
{"value": "1080", "type": "Other"} | science.discomat_te.53 | [
{
"content": "| Wave number (cm-1) | Wave number (cm-1) | Compositions | Compositions | Compositions | Compositions | Assignment |\n| Diopside [44] | Di-Ca-Ts GCs | BaCMAS1 | BaCMAS2 | BaCMAS3 | BaCMAS4 | Assignment |\n| 335 | 335 | - | - | 850degC | 850degC | O(nbr)-Si-O (nbr) wagging |\n| 335 | 337 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | O(nbr)-Si-O (nbr) wagging |\n| 366 | - | - | - | - | - | Not assigned |\n| 395 | 390-392 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | Chain deformation [O (nbr)-Ca-O (nbr)] bending |\n| 395 | 400 | 850degC | 850degC | 850degC | 850degC | Chain deformation [O (nbr)-Ca-O (nbr)] bending |\n| - | 418 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 950degC, 1000degC | Not assigned |\n| 470 | 480 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Chain deformation [O (nbr)-Mg-O (nbr)] bending |\n| 510 | 520 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Chain deformation [O (nbr)-Mg-O (nbr)] bending |\n| 630 | 638-640 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | O(nbr)-Si-O (nbr) bending |\n| 670 | 660 | - | - | 850degC | - | Not assigned |\n| 670 | 675 | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 900degC, 950degC, 1000degC | 850-1000degC | Not assigned |\n| 670 | 680 | 850degC | 850degC | - | - | Not assigned |\n| 865 | 875 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Si-O (nbr) stretching |\n| 920 | 925 | 900-1000degC | 900-1000degC | 850-1000degC | 850-1000degC | Si-O (nbr) stretching |\n| 965 | 975 | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | 850degC, 900degC, 950degC, 1000degC | Si-O (br) stretching |\n| 1070 | 1075 | 850degC, 900degC | 850degC, 900degC | - | - | Si-O (br) stretching |\n| 1070 | 1080 | - | - | 950degC, 1000degC | 950degC, 1000degC | Si-O (br) stretching |\n\nCaption: Wave numbers and assignments of the transmittance bands corresponding to diopside in the investigated GCs at different temperatures.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"335\", \"type\": \"Other\"}\n{\"value\": \"335\", \"type\": \"Other\"}\n{\"value\": \"335\", \"type\": \"Other\"}\n{\"value\": \"337\", \"type\": \"Other\"}\n{\"value\": \"366\", \"type\": \"Other\"}\n{\"value\": \"395\", \"type\": \"Other\"}\n{\"value\": \"390\", \"type\": \"Other\"}\n{\"value\": \"392\", \"type\": \"Other\"}\n{\"value\": \"395\", \"type\": \"Other\"}\n{\"value\": \"400\", \"type\": \"Other\"}\n{\"value\": \"418\", \"type\": \"Other\"}\n{\"value\": \"470\", \"type\": \"Other\"}\n{\"value\": \"480\", \"type\": \"Other\"}\n{\"value\": \"510\", \"type\": \"Other\"}\n{\"value\": \"520\", \"type\": \"Other\"}\n{\"value\": \"630\", \"type\": \"Other\"}\n{\"value\": \"638\", \"type\": \"Other\"}\n{\"value\": \"640\", \"type\": \"Other\"}\n{\"value\": \"670\", \"type\": \"Other\"}\n{\"value\": \"660\", \"type\": \"Other\"}\n{\"value\": \"670\", \"type\": \"Other\"}\n{\"value\": \"675\", \"type\": \"Other\"}\n{\"value\": \"670\", \"type\": \"Other\"}\n{\"value\": \"680\", \"type\": \"Other\"}\n{\"value\": \"865\", \"type\": \"Other\"}\n{\"value\": \"875\", \"type\": \"Other\"}\n{\"value\": \"920\", \"type\": \"Other\"}\n{\"value\": \"925\", \"type\": \"Other\"}\n{\"value\": \"965\", \"type\": \"Other\"}\n{\"value\": \"975\", \"type\": \"Other\"}\n{\"value\": \"1070\", \"type\": \"Other\"}\n{\"value\": \"1075\", \"type\": \"Other\"}\n{\"value\": \"1070\", \"type\": \"Other\"}\n{\"value\": \"1080\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Knowledge Base Population (KBP, e.g.: Riedel et al., 2013; Sterckx et al., 2016) attempts to identify facts within raw text and convert them into triples consisting of a subject, object and the relation between them. One common form of this task is slot filling (Surdeanu and Heng, 2014) , in which a knowledge base (KB) query, such as place of birth(Obama, ?) is applied to a set of documents and a set of slot fillers is returned. By converting such KB queries to natural language questions, <cite>Levy et al. (2017)</cite> showed that a question answering (QA) system could be effectively applied to this task. | ["Background"] | science.multicite_intent_classification.714 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nKnowledge Base Population (KBP, e.g.: Riedel et al., 2013; Sterckx et al., 2016) attempts to identify facts within raw text and convert them into triples consisting of a subject, object and the relation between them. One common form of this task is slot filling (Surdeanu and Heng, 2014) , in which a knowledge base (KB) query, such as place of birth(Obama, ?) is applied to a set of documents and a set of slot fillers is returned. By converting such KB queries to natural language questions, <cite>Levy et al. (2017)</cite> showed that a question answering (QA) system could be effectively applied to this task.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.discomat_te | | Heating ratedegC/min | 20C | 20C | 20C | 40C | 40C | 40C |
| Heating ratedegC/min | Tg (degC) | Tc (degC) | Tp (degC) | Tg (degC) | Tc (degC) | Tp (degC) |
| 5 | - | 498+-5 | 513+-5 | - | 475+-5 | 498+-5 |
| 10 | 332+-3 | 503+-5 | 518+-5 | 304+-3 | 476+-5 | 502+-5 |
| 15 | 335+-3 | 507+-5 | 520+-5 | 307+-3 | 480+-5 | 506+-5 |
| 20 | 337+-3 | 510+-5 | 523+-5 | 309+-3 | 485+-5 | 511+-5 |
| 25 | 340+-3 | 514+-5 | 528+-5 | 310+-3 | 501+-5 | 518+-5 |
Caption: Glass transition temperature (Tg) and onset (Tc) and peak (Tp) crystallization temperatures of the two studied glasses.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "5", "type": "Other"}
{"value": "498", "type": "Other"}
{"value": "513", "type": "Other"}
{"value": "475", "type": "Other"}
{"value": "498", "type": "Other"}
{"value": "10", "type": "Other"}
{"value": "332", "type": "Other"}
{"value": "503", "type": "Other"}
{"value": "518", "type": "Other"}
{"value": "304", "type": "Other"}
{"value": "476", "type": "Other"}
{"value": "502", "type": "Other"}
{"value": "15", "type": "Other"}
{"value": "335", "type": "Other"}
{"value": "507", "type": "Other"}
{"value": "520", "type": "Other"}
{"value": "307", "type": "Other"}
{"value": "480", "type": "Other"}
{"value": "506", "type": "Other"}
{"value": "20", "type": "Other"}
{"value": "337", "type": "Other"}
{"value": "510", "type": "Other"}
{"value": "523", "type": "Other"}
{"value": "309", "type": "Other"}
{"value": "485", "type": "Other"}
{"value": "511", "type": "Other"}
{"value": "25", "type": "Other"}
{"value": "340", "type": "Other"}
{"value": "514", "type": "Other"}
{"value": "528", "type": "Other"}
{"value": "310", "type": "Other"}
{"value": "501", "type": "Other"}
{"value": "518", "type": "Other"} | science.discomat_te.367 | [
{
"content": "| Heating ratedegC/min | 20C | 20C | 20C | 40C | 40C | 40C |\n| Heating ratedegC/min | Tg (degC) | Tc (degC) | Tp (degC) | Tg (degC) | Tc (degC) | Tp (degC) |\n| 5 | - | 498+-5 | 513+-5 | - | 475+-5 | 498+-5 |\n| 10 | 332+-3 | 503+-5 | 518+-5 | 304+-3 | 476+-5 | 502+-5 |\n| 15 | 335+-3 | 507+-5 | 520+-5 | 307+-3 | 480+-5 | 506+-5 |\n| 20 | 337+-3 | 510+-5 | 523+-5 | 309+-3 | 485+-5 | 511+-5 |\n| 25 | 340+-3 | 514+-5 | 528+-5 | 310+-3 | 501+-5 | 518+-5 |\n\nCaption: Glass transition temperature (Tg) and onset (Tc) and peak (Tp) crystallization temperatures of the two studied glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"498\", \"type\": \"Other\"}\n{\"value\": \"513\", \"type\": \"Other\"}\n{\"value\": \"475\", \"type\": \"Other\"}\n{\"value\": \"498\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"332\", \"type\": \"Other\"}\n{\"value\": \"503\", \"type\": \"Other\"}\n{\"value\": \"518\", \"type\": \"Other\"}\n{\"value\": \"304\", \"type\": \"Other\"}\n{\"value\": \"476\", \"type\": \"Other\"}\n{\"value\": \"502\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"335\", \"type\": \"Other\"}\n{\"value\": \"507\", \"type\": \"Other\"}\n{\"value\": \"520\", \"type\": \"Other\"}\n{\"value\": \"307\", \"type\": \"Other\"}\n{\"value\": \"480\", \"type\": \"Other\"}\n{\"value\": \"506\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"337\", \"type\": \"Other\"}\n{\"value\": \"510\", \"type\": \"Other\"}\n{\"value\": \"523\", \"type\": \"Other\"}\n{\"value\": \"309\", \"type\": \"Other\"}\n{\"value\": \"485\", \"type\": \"Other\"}\n{\"value\": \"511\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Other\"}\n{\"value\": \"340\", \"type\": \"Other\"}\n{\"value\": \"514\", \"type\": \"Other\"}\n{\"value\": \"528\", \"type\": \"Other\"}\n{\"value\": \"310\", \"type\": \"Other\"}\n{\"value\": \"501\", \"type\": \"Other\"}\n{\"value\": \"518\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
To address these issues, we first present two new data sets for German readability assessment in Section 3: a set of German news broadcast subtitles based on the primary German TV news outlet Tagesschau and the children's counterpart Logo!, and a GEO/GEOlino corpus crawled from the educational GEO magazine's web site, a source first identified by <cite>Hancke et al. (2012)</cite> , but double in size. | ["Extends"] | science.multicite_intent_classification.375 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nTo address these issues, we first present two new data sets for German readability assessment in Section 3: a set of German news broadcast subtitles based on the primary German TV news outlet Tagesschau and the children's counterpart Logo!, and a GEO/GEOlino corpus crawled from the educational GEO magazine's web site, a source first identified by <cite>Hancke et al. (2012)</cite> , but double in size.",
"role": "user"
},
{
"content": "[\"Extends\"]",
"role": "assistant"
}
]
|
science.discomat_te | | Isothermal temperature (K) | Reaction ratek (x10-4 s-1) | Avrami indexn |
| 635 | 7.5 | 4.0 |
| 637 | 10.4 | 4.0 |
| 639 | 14.4 | 4.2 |
| 641 | 19.9 | 4.1 |
| 643 | 27.6 | 4.2 |
Caption: Reaction rate (k) and Avrami index (n) for 66.7Li2O-33.3P2O5 mol% glass.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "635", "type": "Other"}
{"value": "7.5", "type": "Other"}
{"value": "4.0", "type": "Other"}
{"value": "637", "type": "Other"}
{"value": "10.4", "type": "Other"}
{"value": "4.0", "type": "Other"}
{"value": "639", "type": "Other"}
{"value": "14.4", "type": "Other"}
{"value": "4.2", "type": "Other"}
{"value": "641", "type": "Other"}
{"value": "19.9", "type": "Other"}
{"value": "4.1", "type": "Other"}
{"value": "643", "type": "Other"}
{"value": "27.6", "type": "Other"}
{"value": "4.2", "type": "Other"} | science.discomat_te.198 | [
{
"content": "| Isothermal temperature (K) | Reaction ratek (x10-4 s-1) | Avrami indexn |\n| 635 | 7.5 | 4.0 |\n| 637 | 10.4 | 4.0 |\n| 639 | 14.4 | 4.2 |\n| 641 | 19.9 | 4.1 |\n| 643 | 27.6 | 4.2 |\n\nCaption: Reaction rate (k) and Avrami index (n) for 66.7Li2O-33.3P2O5 mol% glass.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"635\", \"type\": \"Other\"}\n{\"value\": \"7.5\", \"type\": \"Other\"}\n{\"value\": \"4.0\", \"type\": \"Other\"}\n{\"value\": \"637\", \"type\": \"Other\"}\n{\"value\": \"10.4\", \"type\": \"Other\"}\n{\"value\": \"4.0\", \"type\": \"Other\"}\n{\"value\": \"639\", \"type\": \"Other\"}\n{\"value\": \"14.4\", \"type\": \"Other\"}\n{\"value\": \"4.2\", \"type\": \"Other\"}\n{\"value\": \"641\", \"type\": \"Other\"}\n{\"value\": \"19.9\", \"type\": \"Other\"}\n{\"value\": \"4.1\", \"type\": \"Other\"}\n{\"value\": \"643\", \"type\": \"Other\"}\n{\"value\": \"27.6\", \"type\": \"Other\"}\n{\"value\": \"4.2\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Cytosolic proteins bind to iron-responsive elements on mRNAs coding for TFRC1.
Research Abstract:
Chronic obstructive pulmonary disease (COPD) is linked to both cigarette smoking and genetic determinants.
We have previously identified iron-responsive element-binding protein 2 (IRP2) as an important COPD susceptibility gene and have shown that IRP2 protein is increased in the lungs of individuals with COPD.
Here we demonstrate that mice deficient in Irp2 were protected from cigarette smoke (CS)-induced experimental COPD.
By integrating RNA immunoprecipitation followed by sequencing (RIP-seq), RNA sequencing (RNA-seq), and gene expression and functional enrichment clustering analysis, we identified Irp2 as a regulator of mitochondrial function in the lungs of mice.
Irp2 increased mitochondrial iron loading and levels of cytochrome c oxidase (COX), which led to mitochondrial dysfunction and subsequent experimental COPD.
Frataxin-deficient mice, which had higher mitochondrial iron loading, showed impaired airway mucociliary clearance (MCC) and higher pulmonary inflammation at baseline, whereas mice deficient in the synthesis of cytochrome c oxidase, which have reduced COX, were protected from CS-induced pulmonary inflammation and impairment of MCC.
Mice treated with a mitochondrial iron chelator or mice fed a low-iron diet were protected from CS-induced COPD.
Mitochondrial iron chelation also alleviated CS-induced impairment of MCC, CS-induced pulmonary inflammation and CS-associated lung injury in mice with established COPD, suggesting a critical functional role and potential therapeutic intervention for the mitochondrial-iron axis in COPD. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.197 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nCytosolic proteins bind to iron-responsive elements on mRNAs coding for TFRC1.\n\nResearch Abstract:\nChronic obstructive pulmonary disease (COPD) is linked to both cigarette smoking and genetic determinants.\nWe have previously identified iron-responsive element-binding protein 2 (IRP2) as an important COPD susceptibility gene and have shown that IRP2 protein is increased in the lungs of individuals with COPD.\nHere we demonstrate that mice deficient in Irp2 were protected from cigarette smoke (CS)-induced experimental COPD.\nBy integrating RNA immunoprecipitation followed by sequencing (RIP-seq), RNA sequencing (RNA-seq), and gene expression and functional enrichment clustering analysis, we identified Irp2 as a regulator of mitochondrial function in the lungs of mice.\nIrp2 increased mitochondrial iron loading and levels of cytochrome c oxidase (COX), which led to mitochondrial dysfunction and subsequent experimental COPD.\nFrataxin-deficient mice, which had higher mitochondrial iron loading, showed impaired airway mucociliary clearance (MCC) and higher pulmonary inflammation at baseline, whereas mice deficient in the synthesis of cytochrome c oxidase, which have reduced COX, were protected from CS-induced pulmonary inflammation and impairment of MCC.\nMice treated with a mitochondrial iron chelator or mice fed a low-iron diet were protected from CS-induced COPD.\nMitochondrial iron chelation also alleviated CS-induced impairment of MCC, CS-induced pulmonary inflammation and CS-associated lung injury in mice with established COPD, suggesting a critical functional role and potential therapeutic intervention for the mitochondrial-iron axis in COPD.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.discomat_te | | %As/(As+Ge) | l 0 (nm) | T g (degC) | T s (degC) | a (ppm/degC) | r (g/cm3) | V (cm3) |
| 28.6 | 551 | 368 | 490 | 15.8 | 2.923 | 16.24 |
| 50 | 562 | 312 | 412 | 21.8 | 3.000 | 16.25 |
| 66.7 | 596 | | 379 | 39.1 | 3.152 | 16.33 |
| 73.0 | 606 | 246 | 346 | 50.8 | 3.207 | 16.31 |
| 94.6 | 611 | 39 | 68 | 95.8 | 3.416 | 16.27 |
| 97.8 | | 19 | | | 3.432 | 16.36 |
Caption: Properties of glasses on the GeS2-As4S3 join
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "28.6", "type": "Other"}
{"value": "551", "type": "Other"}
{"value": "368", "type": "Other"}
{"value": "490", "type": "Other"}
{"value": "15.8", "type": "Other"}
{"value": "2.923", "type": "Other"}
{"value": "16.24", "type": "Other"}
{"value": "50", "type": "Other"}
{"value": "562", "type": "Other"}
{"value": "312", "type": "Other"}
{"value": "412", "type": "Other"}
{"value": "21.8", "type": "Other"}
{"value": "3.000", "type": "Other"}
{"value": "16.25", "type": "Other"}
{"value": "66.7", "type": "Other"}
{"value": "596", "type": "Other"}
{"value": "379", "type": "Other"}
{"value": "39.1", "type": "Other"}
{"value": "3.152", "type": "Other"}
{"value": "16.33", "type": "Other"}
{"value": "73.0", "type": "Other"}
{"value": "606", "type": "Other"}
{"value": "246", "type": "Other"}
{"value": "346", "type": "Other"}
{"value": "50.8", "type": "Other"}
{"value": "3.207", "type": "Other"}
{"value": "16.31", "type": "Other"}
{"value": "94.6", "type": "Other"}
{"value": "611", "type": "Other"}
{"value": "39", "type": "Other"}
{"value": "68", "type": "Other"}
{"value": "95.8", "type": "Other"}
{"value": "3.416", "type": "Other"}
{"value": "16.27", "type": "Other"}
{"value": "97.8", "type": "Other"}
{"value": "19", "type": "Other"}
{"value": "3.432", "type": "Other"}
{"value": "16.36", "type": "Other"} | science.discomat_te.278 | [
{
"content": "| %As/(As+Ge) | l 0 (nm) | T g (degC) | T s (degC) | a (ppm/degC) | r (g/cm3) | V (cm3) |\n| 28.6 | 551 | 368 | 490 | 15.8 | 2.923 | 16.24 |\n| 50 | 562 | 312 | 412 | 21.8 | 3.000 | 16.25 |\n| 66.7 | 596 | | 379 | 39.1 | 3.152 | 16.33 |\n| 73.0 | 606 | 246 | 346 | 50.8 | 3.207 | 16.31 |\n| 94.6 | 611 | 39 | 68 | 95.8 | 3.416 | 16.27 |\n| 97.8 | | 19 | | | 3.432 | 16.36 |\n\nCaption: Properties of glasses on the GeS2-As4S3 join\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"28.6\", \"type\": \"Other\"}\n{\"value\": \"551\", \"type\": \"Other\"}\n{\"value\": \"368\", \"type\": \"Other\"}\n{\"value\": \"490\", \"type\": \"Other\"}\n{\"value\": \"15.8\", \"type\": \"Other\"}\n{\"value\": \"2.923\", \"type\": \"Other\"}\n{\"value\": \"16.24\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Other\"}\n{\"value\": \"562\", \"type\": \"Other\"}\n{\"value\": \"312\", \"type\": \"Other\"}\n{\"value\": \"412\", \"type\": \"Other\"}\n{\"value\": \"21.8\", \"type\": \"Other\"}\n{\"value\": \"3.000\", \"type\": \"Other\"}\n{\"value\": \"16.25\", \"type\": \"Other\"}\n{\"value\": \"66.7\", \"type\": \"Other\"}\n{\"value\": \"596\", \"type\": \"Other\"}\n{\"value\": \"379\", \"type\": \"Other\"}\n{\"value\": \"39.1\", \"type\": \"Other\"}\n{\"value\": \"3.152\", \"type\": \"Other\"}\n{\"value\": \"16.33\", \"type\": \"Other\"}\n{\"value\": \"73.0\", \"type\": \"Other\"}\n{\"value\": \"606\", \"type\": \"Other\"}\n{\"value\": \"246\", \"type\": \"Other\"}\n{\"value\": \"346\", \"type\": \"Other\"}\n{\"value\": \"50.8\", \"type\": \"Other\"}\n{\"value\": \"3.207\", \"type\": \"Other\"}\n{\"value\": \"16.31\", \"type\": \"Other\"}\n{\"value\": \"94.6\", \"type\": \"Other\"}\n{\"value\": \"611\", \"type\": \"Other\"}\n{\"value\": \"39\", \"type\": \"Other\"}\n{\"value\": \"68\", \"type\": \"Other\"}\n{\"value\": \"95.8\", \"type\": \"Other\"}\n{\"value\": \"3.416\", \"type\": \"Other\"}\n{\"value\": \"16.27\", \"type\": \"Other\"}\n{\"value\": \"97.8\", \"type\": \"Other\"}\n{\"value\": \"19\", \"type\": \"Other\"}\n{\"value\": \"3.432\", \"type\": \"Other\"}\n{\"value\": \"16.36\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The first six papers describe linguistic annotation in four languages: Spanish (Alcántara and Moreno, 2004) , English <cite>(Miltsakaki et al., 2004</cite>; Babko-Malaya et al., 2004; Meyers et al., 2004) , Czech (Sgall et al., 2004) and German (Baumann et al., 2004) . | ["Background"] | science.multicite_intent_classification.2062 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe first six papers describe linguistic annotation in four languages: Spanish (Alcántara and Moreno, 2004) , English <cite>(Miltsakaki et al., 2004</cite>; Babko-Malaya et al., 2004; Meyers et al., 2004) , Czech (Sgall et al., 2004) and German (Baumann et al., 2004) .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The remainder of this paper is organized as follows: Section 3 presents the shared task description and the sub-tasks included in OffensEval and Section 4 includes a brief description of OLID based on <cite>Zampieri et al. (2019)</cite> . | ["Uses", "Similarities"] | science.multicite_intent_classification.265 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe remainder of this paper is organized as follows: Section 3 presents the shared task description and the sub-tasks included in OffensEval and Section 4 includes a brief description of OLID based on <cite>Zampieri et al. (2019)</cite> .",
"role": "user"
},
{
"content": "[\"Uses\", \"Similarities\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The correct sentence space to use is less obvious; previous approaches have either mapped sentence meaning to the same topic-based noun space [6,<cite> 7]</cite> or defined a new space for sentence meaning, particularly plausibility space [11, 14] . | ["Background"] | science.multicite_intent_classification.1826 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe correct sentence space to use is less obvious; previous approaches have either mapped sentence meaning to the same topic-based noun space [6,<cite> 7]</cite> or defined a new space for sentence meaning, particularly plausibility space [11, 14] .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.discomat_te | | Samples | T g/degC | T p 1/degC | T p 2/degC | DT = T p - T g | |
| Samples | T g/degC | T p 1/degC | T p 2/degC | DT = T p 1 - T g | DT = T p 2 - T g |
| B0 | 549+-3 | 637+-3 | 678+-3 | 88+-6 | 129+-6 |
| B4 | 521+-3 | 617+-3 | 715+-3 | 96+-6 | 194+-6 |
| B8 | 492+-3 | 595+-3 | 745+-3 | 103+-6 | 253+-6 |
| B12 | 463+-3 | 567+-3 | 782+-3 | 104+-6 | 319+-6 |
Caption: Characteristic temperatures for the samples in the study (Tg: glass transition temperature; T p 1 and T p 2: maximum exothermic peak temperatures).
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "549", "type": "Other"}
{"value": "637", "type": "Other"}
{"value": "678", "type": "Other"}
{"value": "88", "type": "Other"}
{"value": "129", "type": "Other"}
{"value": "521", "type": "Other"}
{"value": "617", "type": "Other"}
{"value": "715", "type": "Other"}
{"value": "96", "type": "Other"}
{"value": "194", "type": "Other"}
{"value": "492", "type": "Other"}
{"value": "595", "type": "Other"}
{"value": "745", "type": "Other"}
{"value": "103", "type": "Other"}
{"value": "253", "type": "Other"}
{"value": "463", "type": "Other"}
{"value": "567", "type": "Other"}
{"value": "782", "type": "Other"}
{"value": "104", "type": "Other"}
{"value": "319", "type": "Other"} | science.discomat_te.209 | [
{
"content": "| Samples | T g/degC | T p 1/degC | T p 2/degC | DT = T p - T g | |\n| Samples | T g/degC | T p 1/degC | T p 2/degC | DT = T p 1 - T g | DT = T p 2 - T g |\n| B0 | 549+-3 | 637+-3 | 678+-3 | 88+-6 | 129+-6 |\n| B4 | 521+-3 | 617+-3 | 715+-3 | 96+-6 | 194+-6 |\n| B8 | 492+-3 | 595+-3 | 745+-3 | 103+-6 | 253+-6 |\n| B12 | 463+-3 | 567+-3 | 782+-3 | 104+-6 | 319+-6 |\n\nCaption: Characteristic temperatures for the samples in the study (Tg: glass transition temperature; T p 1 and T p 2: maximum exothermic peak temperatures).\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"549\", \"type\": \"Other\"}\n{\"value\": \"637\", \"type\": \"Other\"}\n{\"value\": \"678\", \"type\": \"Other\"}\n{\"value\": \"88\", \"type\": \"Other\"}\n{\"value\": \"129\", \"type\": \"Other\"}\n{\"value\": \"521\", \"type\": \"Other\"}\n{\"value\": \"617\", \"type\": \"Other\"}\n{\"value\": \"715\", \"type\": \"Other\"}\n{\"value\": \"96\", \"type\": \"Other\"}\n{\"value\": \"194\", \"type\": \"Other\"}\n{\"value\": \"492\", \"type\": \"Other\"}\n{\"value\": \"595\", \"type\": \"Other\"}\n{\"value\": \"745\", \"type\": \"Other\"}\n{\"value\": \"103\", \"type\": \"Other\"}\n{\"value\": \"253\", \"type\": \"Other\"}\n{\"value\": \"463\", \"type\": \"Other\"}\n{\"value\": \"567\", \"type\": \"Other\"}\n{\"value\": \"782\", \"type\": \"Other\"}\n{\"value\": \"104\", \"type\": \"Other\"}\n{\"value\": \"319\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT:
In-vitro studies suggest that vitamin D reduces inflammation by inhibiting nuclear factor kappa-B (NFκB) activity. Yet, no trials have examined the effects of vitamin D supplementation on NFκB activity in-vivo in humans. We conducted a double-blind randomized trial (RCT) examining effects of vitamin D supplementation on inflammatory markers and NFκB activity in peripheral blood mononuclear cells (PBMCs). Sixty-five overweight/obese, vitamin D-deficient (25-hydroxyvitamin D [25(OH)D] ≤ 50 nmol/L) adults were randomized to a single 100,000 IU bolus followed by 4,000 IU daily cholecalciferol or matching placebo for 16 weeks. We measured BMI, % body fat, serum 25(OH)D, high-sensitivity C-reactive protein (hsCRP), tumour necrosis factor (TNF), monocyte chemoattractant protein-1 (MCP-1), interferon-gamma (IFN-γ), several interleukins, and NFκB activity in PBMCs. Fifty-four participants completed the study. Serum 25(OH)D concentrations increased with vitamin D supplementation compared to placebo (p < 0.001). Vitamin D and placebo groups did not differ in any inflammatory markers or NFκB activity (all p > 0.05). Results remained non-significant after adjustment for age, sex, and % body fat, and after further adjustment for sun exposure, physical activity, and dietary vitamin D intake. Although in-vitro studies report anti-inflammatory effects of vitamin D, our RCT data show no effect of vitamin D supplementation on inflammatory markers or NFκB activity in-vivo in humans.
Extracted Data: | [["vitamin D group receiving an initial bolus dose of 100,000 IU (in 2 capsules) followed by 4,000 IU (in 4 capsules) of cholecalciferol daily", "the placebo group receiving an equivalent number of identical placebo capsules, which were continued daily for a period of 16 weeks", "Serum 25(OH)D concentrations", "significantly increased", "Serum 25(OH)D concentrations increased with vitamin D supplementation compared to placebo (p \u003c 0.001)."], ["vitamin D group receiving an initial bolus dose of 100,000 IU (in 2 capsules) followed by 4,000 IU (in 4 capsules) of cholecalciferol daily", "the placebo group receiving an equivalent number of identical placebo capsules, which were continued daily for a period of 16 weeks", "NF?B activity", "no significant difference", "NF\u03baB activity also did not differ between vitamin D and placebo groups in this subgroup (p = 0.3; "], ["vitamin D group receiving an initial bolus dose of 100,000 IU (in 2 capsules) followed by 4,000 IU (in 4 capsules) of cholecalciferol daily", "the placebo group receiving an equivalent number of identical placebo capsules, which were continued daily for a period of 16 weeks", "NF?B activity", "no significant difference", "Vitamin D and placebo groups did not differ in any inflammatory markers or NF\u03baB activity (all p \u003e 0.05)."]] | science.evidence_inference.10 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: \nIn-vitro studies suggest that vitamin D reduces inflammation by inhibiting nuclear factor kappa-B (NFκB) activity. Yet, no trials have examined the effects of vitamin D supplementation on NFκB activity in-vivo in humans. We conducted a double-blind randomized trial (RCT) examining effects of vitamin D supplementation on inflammatory markers and NFκB activity in peripheral blood mononuclear cells (PBMCs). Sixty-five overweight/obese, vitamin D-deficient (25-hydroxyvitamin D [25(OH)D] ≤ 50 nmol/L) adults were randomized to a single 100,000 IU bolus followed by 4,000 IU daily cholecalciferol or matching placebo for 16 weeks. We measured BMI, % body fat, serum 25(OH)D, high-sensitivity C-reactive protein (hsCRP), tumour necrosis factor (TNF), monocyte chemoattractant protein-1 (MCP-1), interferon-gamma (IFN-γ), several interleukins, and NFκB activity in PBMCs. Fifty-four participants completed the study. Serum 25(OH)D concentrations increased with vitamin D supplementation compared to placebo (p < 0.001). Vitamin D and placebo groups did not differ in any inflammatory markers or NFκB activity (all p > 0.05). Results remained non-significant after adjustment for age, sex, and % body fat, and after further adjustment for sun exposure, physical activity, and dietary vitamin D intake. Although in-vitro studies report anti-inflammatory effects of vitamin D, our RCT data show no effect of vitamin D supplementation on inflammatory markers or NFκB activity in-vivo in humans.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"vitamin D group receiving an initial bolus dose of 100,000 IU (in 2 capsules) followed by 4,000 IU (in 4 capsules) of cholecalciferol daily\", \"the placebo group receiving an equivalent number of identical placebo capsules, which were continued daily for a period of 16 weeks\", \"Serum 25(OH)D concentrations\", \"significantly increased\", \"Serum 25(OH)D concentrations increased with vitamin D supplementation compared to placebo (p \\u003c 0.001).\"], [\"vitamin D group receiving an initial bolus dose of 100,000 IU (in 2 capsules) followed by 4,000 IU (in 4 capsules) of cholecalciferol daily\", \"the placebo group receiving an equivalent number of identical placebo capsules, which were continued daily for a period of 16 weeks\", \"NF?B activity\", \"no significant difference\", \"NF\\u03baB activity also did not differ between vitamin D and placebo groups in this subgroup (p = 0.3; \"], [\"vitamin D group receiving an initial bolus dose of 100,000 IU (in 2 capsules) followed by 4,000 IU (in 4 capsules) of cholecalciferol daily\", \"the placebo group receiving an equivalent number of identical placebo capsules, which were continued daily for a period of 16 weeks\", \"NF?B activity\", \"no significant difference\", \"Vitamin D and placebo groups did not differ in any inflammatory markers or NF\\u03baB activity (all p \\u003e 0.05).\"]]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Rather than use a hierarchical attention neural network<cite> (Xing et al., 2017)</cite> to obtain the contextual representation of a conversation, we propose two utterance-level attentions for weighting the importance of each utterance in the context, which is more simple in structure and has less number of parameters than the hierarchical attention approach. | ["Extends", "Differences"] | science.multicite_intent_classification.1833 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nRather than use a hierarchical attention neural network<cite> (Xing et al., 2017)</cite> to obtain the contextual representation of a conversation, we propose two utterance-level attentions for weighting the importance of each utterance in the context, which is more simple in structure and has less number of parameters than the hierarchical attention approach.",
"role": "user"
},
{
"content": "[\"Extends\", \"Differences\"]",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Myocardial Fas ligand expression increases susceptibility to AZT-induced cardiomyopathy. BACKGROUND: Dilated cardiomyopathy (DCM) and myocarditis occur in many HIV-infected individuals, resulting in symptomatic heart failure in up to 5% of patients. Highly active antiretroviral therapy (HAART) has significantly reduced morbidity and mortality of acquired immunodeficiency syndrome (AIDS), but has resulted in an increase in cardiac and skeletal myopathies. METHODS AND RESULTS: In order to investigate whether the HAART component zidovudine (3'-azido-2',3'-deoxythymidine; AZT) triggers the Fas-dependent cell-death pathway and cause cytoskeletal disruption in a murine model of DCM, 8-week-old transgenic (expressing Fas ligand in the myocardium: FasL Tg) and non-transgenic (NTg) mice received water ad libitum containing different concentrations of AZT (0, 0.07, 0.2, and 0.7 mg/ml). After 6 weeks, cardiac function was assessed by echocardiography and morphology was assessed by histopathologic and immunohistochemical methods. NTg and untreated FasL Tg mice showed little or no change in cardiac structure or function. In contrast, AZT-treated FasL Tg mice developed cardiac dilation and depressed cardiac function in a dose-dependent manner, with concomitant inflammatory infiltration of both ventricles. These changes were associated with an increased sarcolemmal expression of Fas and FasL, as well as increased activation of caspase 3, translocation of calpain 1 to the sarcolemma and sarcomere, and increased numbers of cells undergoing apoptosis. These were associated with changes in dystrophin and cardiac troponin I localization, as well as loss of sarcolemmal integrity. CONCLUSIONS: The expression of Fas ligand in the myocardium, as identified in HIV-positive patients, might increase the susceptibility to HAART-induced cardiomyopathy due to activation of apoptotic pathways, resulting in cardiac dilation and dysfunction. | {"CellLine": [], "Chemical": ["AZT", "3\u0027-azido-2\u0027,3\u0027-deoxythymidine", "zidovudine"], "Disease": ["cardiac dilation", "cardiac and skeletal myopathies", "Dilated cardiomyopathy", "DCM", "myocarditis", "HIV-infected", "inflammatory", "heart failure", "AIDS", "cardiac dilation and dysfunction", "cardiomyopathy", "depressed cardiac function", "acquired immunodeficiency syndrome"], "Gene": ["cardiac troponin I", "FasL", "caspase 3", "calpain 1", "Fas ligand", "Fas", "dystrophin"], "Species": ["HIV", "mice", "murine", "patients"], "Variant": []} | science.biored_ner.39 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nMyocardial Fas ligand expression increases susceptibility to AZT-induced cardiomyopathy. BACKGROUND: Dilated cardiomyopathy (DCM) and myocarditis occur in many HIV-infected individuals, resulting in symptomatic heart failure in up to 5% of patients. Highly active antiretroviral therapy (HAART) has significantly reduced morbidity and mortality of acquired immunodeficiency syndrome (AIDS), but has resulted in an increase in cardiac and skeletal myopathies. METHODS AND RESULTS: In order to investigate whether the HAART component zidovudine (3'-azido-2',3'-deoxythymidine; AZT) triggers the Fas-dependent cell-death pathway and cause cytoskeletal disruption in a murine model of DCM, 8-week-old transgenic (expressing Fas ligand in the myocardium: FasL Tg) and non-transgenic (NTg) mice received water ad libitum containing different concentrations of AZT (0, 0.07, 0.2, and 0.7 mg/ml). After 6 weeks, cardiac function was assessed by echocardiography and morphology was assessed by histopathologic and immunohistochemical methods. NTg and untreated FasL Tg mice showed little or no change in cardiac structure or function. In contrast, AZT-treated FasL Tg mice developed cardiac dilation and depressed cardiac function in a dose-dependent manner, with concomitant inflammatory infiltration of both ventricles. These changes were associated with an increased sarcolemmal expression of Fas and FasL, as well as increased activation of caspase 3, translocation of calpain 1 to the sarcolemma and sarcomere, and increased numbers of cells undergoing apoptosis. These were associated with changes in dystrophin and cardiac troponin I localization, as well as loss of sarcolemmal integrity. CONCLUSIONS: The expression of Fas ligand in the myocardium, as identified in HIV-positive patients, might increase the susceptibility to HAART-induced cardiomyopathy due to activation of apoptotic pathways, resulting in cardiac dilation and dysfunction.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"AZT\", \"3\\u0027-azido-2\\u0027,3\\u0027-deoxythymidine\", \"zidovudine\"], \"Disease\": [\"cardiac dilation\", \"cardiac and skeletal myopathies\", \"Dilated cardiomyopathy\", \"DCM\", \"myocarditis\", \"HIV-infected\", \"inflammatory\", \"heart failure\", \"AIDS\", \"cardiac dilation and dysfunction\", \"cardiomyopathy\", \"depressed cardiac function\", \"acquired immunodeficiency syndrome\"], \"Gene\": [\"cardiac troponin I\", \"FasL\", \"caspase 3\", \"calpain 1\", \"Fas ligand\", \"Fas\", \"dystrophin\"], \"Species\": [\"HIV\", \"mice\", \"murine\", \"patients\"], \"Variant\": []}",
"role": "assistant"
}
]
|
science.discomat_te | | Heating rate | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) |
| (degC/min) | Nd | Sm | Eu | Gd | Dy | Y | Er | Yb |
| 10 | 696 | 720 | 697 | 720 | 717 | 733 | 731 | 694 |
| 20 | 720 | 734 | 717 | 743 | 738 | 751 | 746 | 708 |
| 30 | 735 | 769 | 729 | 752 | 751 | 762 | 758 | 710 |
| 40 | 767 | 780 | 758 | 765 | 754 | 770 | 765 | 723 |
| Ea (kJ/mol) | 232 | 247 | 267 | 380 | 422 | 469 | 494 | 564 |
| Ionic radius (nm) | 0.098 | 0.096 | 0.095 | 0.094 | 0.091 | 0.090 | 0.089 | 0.087 |
Caption: DTA peak temperatures and activation energies (Ea) of crystal growth of NaY(Nd, Sm, Eu, Gd, Dy, Er, Yb)PSi and NaYPSi glasses.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "10", "type": "Other"}
{"value": "696", "type": "Other"}
{"value": "720", "type": "Other"}
{"value": "697", "type": "Other"}
{"value": "720", "type": "Other"}
{"value": "717", "type": "Other"}
{"value": "733", "type": "Other"}
{"value": "731", "type": "Other"}
{"value": "694", "type": "Other"}
{"value": "20", "type": "Other"}
{"value": "720", "type": "Other"}
{"value": "734", "type": "Other"}
{"value": "717", "type": "Other"}
{"value": "743", "type": "Other"}
{"value": "738", "type": "Other"}
{"value": "751", "type": "Other"}
{"value": "746", "type": "Other"}
{"value": "708", "type": "Other"}
{"value": "30", "type": "Other"}
{"value": "735", "type": "Other"}
{"value": "769", "type": "Other"}
{"value": "729", "type": "Other"}
{"value": "752", "type": "Other"}
{"value": "751", "type": "Other"}
{"value": "762", "type": "Other"}
{"value": "758", "type": "Other"}
{"value": "710", "type": "Other"}
{"value": "40", "type": "Other"}
{"value": "767", "type": "Other"}
{"value": "780", "type": "Other"}
{"value": "758", "type": "Other"}
{"value": "765", "type": "Other"}
{"value": "754", "type": "Other"}
{"value": "770", "type": "Other"}
{"value": "765", "type": "Other"}
{"value": "723", "type": "Other"}
{"value": "232", "type": "Other"}
{"value": "247", "type": "Other"}
{"value": "267", "type": "Other"}
{"value": "380", "type": "Other"}
{"value": "422", "type": "Other"}
{"value": "469", "type": "Other"}
{"value": "494", "type": "Other"}
{"value": "564", "type": "Other"}
{"value": "0.098", "type": "Other"}
{"value": "0.096", "type": "Other"}
{"value": "0.095", "type": "Other"}
{"value": "0.094", "type": "Other"}
{"value": "0.091", "type": "Other"}
{"value": "0.090", "type": "Other"}
{"value": "0.089", "type": "Other"}
{"value": "0.087", "type": "Other"} | science.discomat_te.325 | [
{
"content": "| Heating rate | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) | DTA peak temperature (degC) |\n| (degC/min) | Nd | Sm | Eu | Gd | Dy | Y | Er | Yb |\n| 10 | 696 | 720 | 697 | 720 | 717 | 733 | 731 | 694 |\n| 20 | 720 | 734 | 717 | 743 | 738 | 751 | 746 | 708 |\n| 30 | 735 | 769 | 729 | 752 | 751 | 762 | 758 | 710 |\n| 40 | 767 | 780 | 758 | 765 | 754 | 770 | 765 | 723 |\n| Ea (kJ/mol) | 232 | 247 | 267 | 380 | 422 | 469 | 494 | 564 |\n| Ionic radius (nm) | 0.098 | 0.096 | 0.095 | 0.094 | 0.091 | 0.090 | 0.089 | 0.087 |\n\nCaption: DTA peak temperatures and activation energies (Ea) of crystal growth of NaY(Nd, Sm, Eu, Gd, Dy, Er, Yb)PSi and NaYPSi glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"696\", \"type\": \"Other\"}\n{\"value\": \"720\", \"type\": \"Other\"}\n{\"value\": \"697\", \"type\": \"Other\"}\n{\"value\": \"720\", \"type\": \"Other\"}\n{\"value\": \"717\", \"type\": \"Other\"}\n{\"value\": \"733\", \"type\": \"Other\"}\n{\"value\": \"731\", \"type\": \"Other\"}\n{\"value\": \"694\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Other\"}\n{\"value\": \"720\", \"type\": \"Other\"}\n{\"value\": \"734\", \"type\": \"Other\"}\n{\"value\": \"717\", \"type\": \"Other\"}\n{\"value\": \"743\", \"type\": \"Other\"}\n{\"value\": \"738\", \"type\": \"Other\"}\n{\"value\": \"751\", \"type\": \"Other\"}\n{\"value\": \"746\", \"type\": \"Other\"}\n{\"value\": \"708\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Other\"}\n{\"value\": \"735\", \"type\": \"Other\"}\n{\"value\": \"769\", \"type\": \"Other\"}\n{\"value\": \"729\", \"type\": \"Other\"}\n{\"value\": \"752\", \"type\": \"Other\"}\n{\"value\": \"751\", \"type\": \"Other\"}\n{\"value\": \"762\", \"type\": \"Other\"}\n{\"value\": \"758\", \"type\": \"Other\"}\n{\"value\": \"710\", \"type\": \"Other\"}\n{\"value\": \"40\", \"type\": \"Other\"}\n{\"value\": \"767\", \"type\": \"Other\"}\n{\"value\": \"780\", \"type\": \"Other\"}\n{\"value\": \"758\", \"type\": \"Other\"}\n{\"value\": \"765\", \"type\": \"Other\"}\n{\"value\": \"754\", \"type\": \"Other\"}\n{\"value\": \"770\", \"type\": \"Other\"}\n{\"value\": \"765\", \"type\": \"Other\"}\n{\"value\": \"723\", \"type\": \"Other\"}\n{\"value\": \"232\", \"type\": \"Other\"}\n{\"value\": \"247\", \"type\": \"Other\"}\n{\"value\": \"267\", \"type\": \"Other\"}\n{\"value\": \"380\", \"type\": \"Other\"}\n{\"value\": \"422\", \"type\": \"Other\"}\n{\"value\": \"469\", \"type\": \"Other\"}\n{\"value\": \"494\", \"type\": \"Other\"}\n{\"value\": \"564\", \"type\": \"Other\"}\n{\"value\": \"0.098\", \"type\": \"Other\"}\n{\"value\": \"0.096\", \"type\": \"Other\"}\n{\"value\": \"0.095\", \"type\": \"Other\"}\n{\"value\": \"0.094\", \"type\": \"Other\"}\n{\"value\": \"0.091\", \"type\": \"Other\"}\n{\"value\": \"0.090\", \"type\": \"Other\"}\n{\"value\": \"0.089\", \"type\": \"Other\"}\n{\"value\": \"0.087\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.discomat_te | | Radiative transitions | Tm000 | Tm000 | Tm100 | Tm100 | Tm1.0 | Tm1.0 |
| Radiative transitions | b | n | b | n | b | n |
| 527nmEr3+:(2H11/2 - 4I15/2) | - | - | 0.605 | 2.717+-0.058 | 0.103 | 2.783+-0.999 |
| 547nmEr3+:(4S3/2 - 4I15/2) | 9.712 | 2.765+-0.076 | 5.827 | 2.519+-0.106 | 0.779 | 2.651+-0.104 |
| 657nmEr3+:(4F9/2 - 4I15/2)+Tm3+:(3F2,3 - 3H6) | 1.728 | 2.540+-0.086 | 3.975 | 2.543+-0.091 | 4.228 | 2.536+-0.125 |
| 794nmEr3+:(4I9/2 - 4I15/2)+Tm3+:(3H4 - 3H6) | 0.012 | 2.501+-0.116 | 0.176 | 2.691+-0.095 | 0.085 | 2.472+-0.067 |
Caption: Parameters b and n extracted from the curves in Fig. 4(c) and their respective radiative transitions of Er3+ and Tm3+ ions.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.605", "type": "Other"}
{"value": "2.717", "type": "Other"}
{"value": "0.103", "type": "Other"}
{"value": "2.783", "type": "Other"}
{"value": "9.712", "type": "Other"}
{"value": "2.765", "type": "Other"}
{"value": "5.827", "type": "Other"}
{"value": "2.519", "type": "Other"}
{"value": "0.779", "type": "Other"}
{"value": "2.651", "type": "Other"}
{"value": "1.728", "type": "Other"}
{"value": "2.540", "type": "Other"}
{"value": "3.975", "type": "Other"}
{"value": "2.543", "type": "Other"}
{"value": "4.228", "type": "Other"}
{"value": "2.536", "type": "Other"}
{"value": "0.012", "type": "Other"}
{"value": "2.501", "type": "Other"}
{"value": "0.176", "type": "Other"}
{"value": "2.691", "type": "Other"}
{"value": "0.085", "type": "Other"}
{"value": "2.472", "type": "Other"} | science.discomat_te.384 | [
{
"content": "| Radiative transitions | Tm000 | Tm000 | Tm100 | Tm100 | Tm1.0 | Tm1.0 |\n| Radiative transitions | b | n | b | n | b | n |\n| 527nmEr3+:(2H11/2 - 4I15/2) | - | - | 0.605 | 2.717+-0.058 | 0.103 | 2.783+-0.999 |\n| 547nmEr3+:(4S3/2 - 4I15/2) | 9.712 | 2.765+-0.076 | 5.827 | 2.519+-0.106 | 0.779 | 2.651+-0.104 |\n| 657nmEr3+:(4F9/2 - 4I15/2)+Tm3+:(3F2,3 - 3H6) | 1.728 | 2.540+-0.086 | 3.975 | 2.543+-0.091 | 4.228 | 2.536+-0.125 |\n| 794nmEr3+:(4I9/2 - 4I15/2)+Tm3+:(3H4 - 3H6) | 0.012 | 2.501+-0.116 | 0.176 | 2.691+-0.095 | 0.085 | 2.472+-0.067 |\n\nCaption: Parameters b and n extracted from the curves in Fig. 4(c) and their respective radiative transitions of Er3+ and Tm3+ ions.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.605\", \"type\": \"Other\"}\n{\"value\": \"2.717\", \"type\": \"Other\"}\n{\"value\": \"0.103\", \"type\": \"Other\"}\n{\"value\": \"2.783\", \"type\": \"Other\"}\n{\"value\": \"9.712\", \"type\": \"Other\"}\n{\"value\": \"2.765\", \"type\": \"Other\"}\n{\"value\": \"5.827\", \"type\": \"Other\"}\n{\"value\": \"2.519\", \"type\": \"Other\"}\n{\"value\": \"0.779\", \"type\": \"Other\"}\n{\"value\": \"2.651\", \"type\": \"Other\"}\n{\"value\": \"1.728\", \"type\": \"Other\"}\n{\"value\": \"2.540\", \"type\": \"Other\"}\n{\"value\": \"3.975\", \"type\": \"Other\"}\n{\"value\": \"2.543\", \"type\": \"Other\"}\n{\"value\": \"4.228\", \"type\": \"Other\"}\n{\"value\": \"2.536\", \"type\": \"Other\"}\n{\"value\": \"0.012\", \"type\": \"Other\"}\n{\"value\": \"2.501\", \"type\": \"Other\"}\n{\"value\": \"0.176\", \"type\": \"Other\"}\n{\"value\": \"2.691\", \"type\": \"Other\"}\n{\"value\": \"0.085\", \"type\": \"Other\"}\n{\"value\": \"2.472\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: The importance of creating and sustaining a strong culture of patient safety has been recognized as a critical component of safe medication use. This study aims to assess changes in attitudes toward patient safety culture and frequency of quality-related event (QRE) reporting after guided implementation of a continuous quality improvement (CQI) program in a panel of community pharmacies in the United States (U.S.).
ABSTRACT.METHODS: Twenty-one community pharmacies volunteered to participate in the project and were randomly assigned to intervention or control groups. Pharmacy staff in the intervention group received guided training to ensure full implementation of a CQI program while those in the control group partially implemented the program. Pharmacy staff in both groups completed retrospective pre-post safety culture questionnaires and reported medication errors and near misses that occurred in their practices. Rasch analysis was applied to assess questionnaire validity and reliability and to confirm if the ordinal level data approximated interval level measures. Paired t-tests and repeated measure analysis of covariance tests were subsequently used to compare observed changes in the attitudes of subjects and frequency of QREs reporting in intervention and control groups.
ABSTRACT.RESULTS: Sixty-nine employees completed the questionnaire, a 43.9% response rate. Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p < 0.001); response to mistakes (p < 0.001); organizational learning – continuous improvement (p < 0.001); and overall patient safety perceptions (p = 0.033). No significant differences were observed in QRE reporting rates between intervention and control groups. However, differences were observed in the types of QREs reported (e.g., incorrect safety cap) and the point in the prescription processing workflow where a QRE was detected (e.g., partner check station, and drug utilization review station) in the intervention group (p < 0.001).
ABSTRACT.CONCLUSION: Guided CQI program implementation increased the self-reported patient safety culture attitudes among staff.
ABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s40545-017-0114-x) contains supplementary material, which is available to authorized users.
Extracted Data: | [["Continuous quality improvement program", "Control", "Communication about mistakes", "significantly increased", "Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \u003c 0.001); response to mistakes (p \u003c 0.001); organizational learning \u2013 continuous improvement (p \u003c 0.001); and overall patient safety perceptions (p = 0.033)."], ["Continuous quality improvement program", "Control", "Patient counseling", "significantly increased", ". Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043);"], ["Continuous quality improvement program", "Control", "Response to mistakes", "significantly increased", "Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \u003c 0.001); response to mistakes (p \u003c 0.001);"], ["Continuous quality improvement program", "Control", "Organizational learning and continuous improvement ", "significantly increased", "Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \u003c 0.001); response to mistakes (p \u003c 0.001); organizational learning \u2013 continuous improvement (p \u003c 0.001);"], ["Continuous quality improvement program", "Control", "Patient safety perceptions ", "significantly increased", ". Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \u003c 0.001); response to mistakes (p \u003c 0.001); organizational learning \u2013 continuous improvement (p \u003c 0.001); and overall patient safety perceptions (p = 0.033)."]] | science.evidence_inference.89 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: The importance of creating and sustaining a strong culture of patient safety has been recognized as a critical component of safe medication use. This study aims to assess changes in attitudes toward patient safety culture and frequency of quality-related event (QRE) reporting after guided implementation of a continuous quality improvement (CQI) program in a panel of community pharmacies in the United States (U.S.).\n\nABSTRACT.METHODS: Twenty-one community pharmacies volunteered to participate in the project and were randomly assigned to intervention or control groups. Pharmacy staff in the intervention group received guided training to ensure full implementation of a CQI program while those in the control group partially implemented the program. Pharmacy staff in both groups completed retrospective pre-post safety culture questionnaires and reported medication errors and near misses that occurred in their practices. Rasch analysis was applied to assess questionnaire validity and reliability and to confirm if the ordinal level data approximated interval level measures. Paired t-tests and repeated measure analysis of covariance tests were subsequently used to compare observed changes in the attitudes of subjects and frequency of QREs reporting in intervention and control groups.\n\nABSTRACT.RESULTS: Sixty-nine employees completed the questionnaire, a 43.9% response rate. Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p < 0.001); response to mistakes (p < 0.001); organizational learning – continuous improvement (p < 0.001); and overall patient safety perceptions (p = 0.033). No significant differences were observed in QRE reporting rates between intervention and control groups. However, differences were observed in the types of QREs reported (e.g., incorrect safety cap) and the point in the prescription processing workflow where a QRE was detected (e.g., partner check station, and drug utilization review station) in the intervention group (p < 0.001).\n\nABSTRACT.CONCLUSION: Guided CQI program implementation increased the self-reported patient safety culture attitudes among staff.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s40545-017-0114-x) contains supplementary material, which is available to authorized users.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Continuous quality improvement program\", \"Control\", \"Communication about mistakes\", \"significantly increased\", \"Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \\u003c 0.001); response to mistakes (p \\u003c 0.001); organizational learning \\u2013 continuous improvement (p \\u003c 0.001); and overall patient safety perceptions (p = 0.033).\"], [\"Continuous quality improvement program\", \"Control\", \"Patient counseling\", \"significantly increased\", \". Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043);\"], [\"Continuous quality improvement program\", \"Control\", \"Response to mistakes\", \"significantly increased\", \"Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \\u003c 0.001); response to mistakes (p \\u003c 0.001);\"], [\"Continuous quality improvement program\", \"Control\", \"Organizational learning and continuous improvement \", \"significantly increased\", \"Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \\u003c 0.001); response to mistakes (p \\u003c 0.001); organizational learning \\u2013 continuous improvement (p \\u003c 0.001);\"], [\"Continuous quality improvement program\", \"Control\", \"Patient safety perceptions \", \"significantly increased\", \". Improvement in attitudes toward patient safety was statistically significant in the intervention group in six domains: staff, training, and skill (p = 0.017); patient counseling (p = 0.043); communication about mistakes (p \\u003c 0.001); response to mistakes (p \\u003c 0.001); organizational learning \\u2013 continuous improvement (p \\u003c 0.001); and overall patient safety perceptions (p = 0.033).\"]]",
"role": "assistant"
}
]
|
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
We describe a dialogue system that works with its interlocutor to identify objects. Our contributions include a concise, modular architecture with reversible processes of understanding and generation, an information-state model of reference, and flexible links between semantics and collaborative problem solving. | Output Entities:
{"Generic": [], "Material": [], "Method": ["concise , modular architecture", "dialogue system", "information-state model of reference"], "Metric": [], "OtherScientificTerm": ["semantics"], "Task": ["collaborative problem solving", "generation", "understanding"]} | science.scierc_ner.49 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nWe describe a dialogue system that works with its interlocutor to identify objects. Our contributions include a concise, modular architecture with reversible processes of understanding and generation, an information-state model of reference, and flexible links between semantics and collaborative problem solving.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [], \"Method\": [\"concise , modular architecture\", \"dialogue system\", \"information-state model of reference\"], \"Metric\": [], \"OtherScientificTerm\": [\"semantics\"], \"Task\": [\"collaborative problem solving\", \"generation\", \"understanding\"]}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Disentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA
Main Content:
1 Introduction . A central tenet of unsupervised deep learning is that noisy and high dimensional real world data is generated by a nonlinear transformation of lower dimensional latent factors . Learning such lower dimensional features is valuable as they may allow us to understand complex scientific observations in terms of much simpler , semantically meaningful , representations ( Morioka et al. , 2020 ; Zhou and Wei , 2020 ) . Access to a ground truth generative model and its latent features would also greatly enhance several other downstream tasks such as classification ( Klindt et al. , 2021 ; Banville et al. , 2021 ) , transfer learning ( Khemakhem et al. , 2020b ) , as well as causal inference ( Monti et al. , 2019 ; Wu and Fukumizu , 2020 ) . A recently popular approach to deep representation learning has been to learn disentangled features . Whilst not rigorously defined , the general methodology has been to use deep generative models such as VAEs ( Kingma and Welling , 2014 ; Higgins et al. , 2017 ) to estimate semantically distinct factors of variation that generate and encode the data . A substantial problem with the vast majority of work on disentanglement learning is that the models used are not identifiable – that is , they do not learn the true generative features , even in the limit of infinite data – in fact , this task has been proven ∗hermanni.halva @ helsinki.fi †Equal senior authorship 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . impossible without inductive biases on the generative model ( Hyvärinen and Pajunen , 1999 ; Locatello et al. , 2019 ) . Lack of identifiability plagues deep learning models broadly and has been implicated as one of the reasons for unexpectedly poor behaviour when these models are deployed in real world applications ( D ’ Amour et al. , 2020 ) . Fortunately , in many applications the data have dependency structures , such as temporal dependencies which introduce inductive biases . Recent advances in both identifiability theory and practical algorithms for nonlinear ICA ( Hyvärinen and Morioka , 2016 , 2017 ; Hälvä and Hyvärinen , 2020 ; Morioka et al. , 2021 ; Klindt et al. , 2021 ; Oberhauser and Schell , 2021 ) exploit this and offer a principled approach to disentanglement for such data . Learning statistically independent nonlinear features in such models is well-defined , i.e . those models are identifiable . However , the existing nonlinear ICA models suffer from numerous limitations . First , they only exploit specific types of temporal structures , such as either temporal dependencies or nonstationarity . Second , they often work under the assumption that some ’ auxiliary ’ data about a latent process is observed , such as knowledge of the switching points of a nonstationary process as in Hyvärinen and Morioka ( 2016 ) ; Khemakhem et al . ( 2020a ) . Furthermore , all the nonlinear ICA models cited above , with the exception of Khemakhem et al . ( 2020a ) , assume that the data are fully observed and noise-free , even though observation noise is very common in practice , and even Khemakhem et al . ( 2020a ) assumes the noise distribution to be exactly known . This approach of modelling observation noise explicitly is in stark contrast to the approach taken in papers , such as Locatello et al . ( 2020 ) , who instead consider general stochasticity of their model to be captured by latent variables – this approach would be ill-suited to the type of denoising one would often need in practice . Lastly , the identifiability theorems in previous nonlinear ICA works usually restrict the latent components to a specific class of models such as exponential families ( but see Hyvärinen and Morioka ( 2017 ) ) . In this paper we introduce a new framework for identifiable disentanglement , Structured Nonlinear ICA ( SNICA ) , which removes each of the aforementioned limitations in a single unifying framework . Furthermore , the framework guarantees identifiability of a rich class of nonlinear ICA models that is able to exploit dependency structures of any arbitrary order and thus , for instance , extends to spatially structured data . This is the first major theoretical contribution of our paper . The second important theoretical contribution of our paper proves that models within the SNICA framework are identifiable even in the presence of additive output noise of arbitrary , unknown distribution . We achieve this by extending the theorems by Gassiat et al . ( 2020b , a ) . The subsequent practical implication is that SNICA models can perform dimensionality reduction to identifiable latent components and de-noise observed data . We note that noisy-observation part of the identifiability theory is not even limited to nonlinear ICA but applies to any system observed under noise . Third , we give mild sufficient conditions , relating to the strength and the non-Gaussian nature of the temporal or spatial dependencies , enabling identifiability of nonlinear independent components in this general framework . An important implication is that our theorems can be used , for example , to develop models for disentangling identifiable features from spatial or spatio-temporal data . As an example of the flexibility of the SNICA framework , we present a new nonlinear ICA model called ∆-SNICA . It achieves the following very practical properties which have previously been unattainable in the context of nonlinear ICA : the ability to account for both nonstationarity and autocorrelation in a fully unsupervised setting ; ability perform dimensionality reduction ; model latent states ; and to enable principled estimation and inference by variational maximum-likelihood methods . We demonstrate the practical utility of the model in an application to noisy neuroimaging data that is hypothesized to contain meaningful lower dimensional latent components and complex temporal dynamics . 2 Background . We start by giving some brief background on Nonlinear ICA and identifiability . Consider a model where the distribution of observed data x is given by pX ( x ; θ ) for some parameter vector θ . This model is called identifiable if the following condition is fulfilled : ∀ ( θ , θ′ ) pX ( x ; θ ) = pX ( x ; θ′ ) ⇒ θ = θ′ . ( 1 ) In other words , based on the observed data distribution alone , we can uniquely infer the parameters that generated the data . For models parameterized with some nonparametric function estimator f , such as a deep neural network , we can replace θ with f in the equation above . In practice , identifiability might hold for some parameters , not all ; and parameters might be identifiable up to some more or less trivial indeterminacies , such as scaling . In a typical nonlinear ICA setting we observe some x ∈ RN which has been generated by an invertible nonlinear mixing function f from latent independent components s ∈ RN , with p ( s ) = ∏N i=1 p ( s ( i ) ) , as per : x = f ( s ) , ( 2 ) Identifiability of f would then mean that we can in theory find the true f , and subsequently the true data generating components . Unfortunately , without some additional structure this model is unidentifiable , as shown by Hyvärinen and Pajunen ( 1999 ) : there is an infinite number of possible solutions and these have no trivial relation with each other . To solve this problem , previous work ( Sprekeler et al. , 2014 ; Hyvärinen and Morioka , 2016 , 2017 ) developed models with temporal structure . Such time series models were generalized and expressed in a succinct way by Hyvärinen et al . ( 2019 ) ; Khemakhem et al . ( 2020a ) by assuming the independent components are conditionally independent upon some observed auxiliary variable ut : p ( st|ut ) = ∏N i=1 p ( s ( i ) t |ut ) . In a time series context , the auxiliary variable might be history , e.g . ut = xt−1 , or the index of a time segment to model nonstationarity ( or piece-wise stationarity ) . ( It could also be data from another modality , such as audio data used to condition video data ( Arandjelovic and Zisserman , 2017 ) . ) Notice that the mixing function f in ( 2 ) is assumed bijective and thus identifiable dimension reduction is not possible in most of the models discussed above . The only exceptions , we are aware of , are Khemakhem et al . ( 2020a ) ; Klindt et al . ( 2021 ) who choose f as injective rather than bijective . Further , Khemakhem et al . ( 2020a ) assume additive noise on the observations x = f ( s ) + ε , which allows to estimate posterior of s by an identifiable VAE ( iVAE ) . We will take a similar strategy in what follows . 3 Definition of Structured Nonlinear ICA . In this section , we first present the new framework of Structured Nonlinear ICA ( SNICA ) – a broad class of models for identifiable disentanglement and learning of independent components when data has structural dependencies . Next , we give an example of a particularly useful specific model that fits within our framework , called ∆-SNICA , by using switching linear dynamical latent processes . 3.1 Structured Nonlinear ICA framework . Consider observations ( xt ) t∈T = ( ( x ( 1 ) t , . . . , x ( M ) t ) ) t∈T where T is a discrete indexing set of arbitrary dimension . For discrete time-series models , like previous works , T would be a subset of N. Crucially , however , we allow it to be any arbitrary indexing variable that describes a desired structure . For instance , T could be a subset of N2 for spatial data . We assume the data is generated according the following nonlinear ICA model . First , there exist latent components s ( i ) = ( s ( i ) t ) t∈T for i ∈ { 1 , . . . , N } where for any t , t′ ∈ T , the distributions of ( s ( i ) t ) 16i6N and ( s ( i ) t′ ) 16i6N are the same , which is a weak form of stationarity . Second , we assume that for any m ∈ N∗ and ( t1 , . . . , tm ) ∈ Tm , p ( st1 , . . . , stm ) = ∏N i=1 p ( s ( i ) t1 , . . . , s ( i ) tm ) : that is , the components are unconditionally independent . We further assume that the nonlinear mixing function f : RN → RM with M > N is injective , so there may be more observed variables than components . Finally , denote observational noise by εt ∈ RM and assume that they are i.i.d . for all t ∈ T and independent of the signals s ( i ) . Putting these together , we assume the mixing model where for each t ∈ T , xt = f ( st ) + εt , ( 3 ) where st = ( s ( 1 ) t , . . . , s ( N ) t ) . Importantly , εt can have any arbitrary unknown distribution , even with dependent entries ; in fact , it may even not have finite moments . The main appeal of this framework is that , under the conditions given in next section , we can now guarantee identifiability for a very broad and rich class of models . First , notice that all previous Nonlinear ICA time-series models can be reformulated and often improved upon when viewed through this new unifying framework . In other words , we can create x1 x2 x3 s ( i ) 1 s ( i ) 2 s ( i ) 3 u ( i ) 1 u ( i ) 2 u ( i ) 3 1 6 i 6 N ( a ) HMM modulated components c.f . ( Hälvä and Hyvärinen , 2020 ) models that are very much like those previous works , and capture their dependency profiles , but with the changes that by assuming unconditional independence and output noise we now allow them to perform dimension reduction ( this does also require some additional assumptions needed in our identifiability theorems below ) . To see this , consider the model in Hälvä and Hyvärinen ( 2020 ) which captures nonstationarity in the independent components through a global hidden Markov chain . We can transform this model into the SNICA framework if we instead model each independent component as its own HMM ( Figure 1a ) , with the added benefit that we now have marginally independent components and are able to perform dimensionality reduction into low dimensional latent components . Nonlinear ICA with time-dependencies , such as in an autoregressive model , proposed by Hyvärinen and Morioka ( 2017 ) is also a special case of our framework ( Figure 1b ) , but again with the extension of dimensionality reduction . Furthermore , this framework allows for a plethora of new Nonlinear ICA models to be developed . As described above , these do not have to be limited to time-series but could for instance be a process on a two-dimensional graph with appropriate ( in ) dependencies ( see Figure 1c ) . However , we now proceed to introduce a particularly useful time-series model using our framework . | The authors provide an identifiability result for non-linear ICA for a class of time series models. There is a clear contribution with respect to previous work that was typically restricted to condionnally independent sources, conditionned on a hidden markov chain in finite stare spaces. This new result allows identifying independent sources with a specific form of time dependency. | science.mup_single_document_summarization.12888 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDisentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA\n\nMain Content:\n1 Introduction . A central tenet of unsupervised deep learning is that noisy and high dimensional real world data is generated by a nonlinear transformation of lower dimensional latent factors . Learning such lower dimensional features is valuable as they may allow us to understand complex scientific observations in terms of much simpler , semantically meaningful , representations ( Morioka et al. , 2020 ; Zhou and Wei , 2020 ) . Access to a ground truth generative model and its latent features would also greatly enhance several other downstream tasks such as classification ( Klindt et al. , 2021 ; Banville et al. , 2021 ) , transfer learning ( Khemakhem et al. , 2020b ) , as well as causal inference ( Monti et al. , 2019 ; Wu and Fukumizu , 2020 ) . A recently popular approach to deep representation learning has been to learn disentangled features . Whilst not rigorously defined , the general methodology has been to use deep generative models such as VAEs ( Kingma and Welling , 2014 ; Higgins et al. , 2017 ) to estimate semantically distinct factors of variation that generate and encode the data . A substantial problem with the vast majority of work on disentanglement learning is that the models used are not identifiable – that is , they do not learn the true generative features , even in the limit of infinite data – in fact , this task has been proven ∗hermanni.halva @ helsinki.fi †Equal senior authorship 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . impossible without inductive biases on the generative model ( Hyvärinen and Pajunen , 1999 ; Locatello et al. , 2019 ) . Lack of identifiability plagues deep learning models broadly and has been implicated as one of the reasons for unexpectedly poor behaviour when these models are deployed in real world applications ( D ’ Amour et al. , 2020 ) . Fortunately , in many applications the data have dependency structures , such as temporal dependencies which introduce inductive biases . Recent advances in both identifiability theory and practical algorithms for nonlinear ICA ( Hyvärinen and Morioka , 2016 , 2017 ; Hälvä and Hyvärinen , 2020 ; Morioka et al. , 2021 ; Klindt et al. , 2021 ; Oberhauser and Schell , 2021 ) exploit this and offer a principled approach to disentanglement for such data . Learning statistically independent nonlinear features in such models is well-defined , i.e . those models are identifiable . However , the existing nonlinear ICA models suffer from numerous limitations . First , they only exploit specific types of temporal structures , such as either temporal dependencies or nonstationarity . Second , they often work under the assumption that some ’ auxiliary ’ data about a latent process is observed , such as knowledge of the switching points of a nonstationary process as in Hyvärinen and Morioka ( 2016 ) ; Khemakhem et al . ( 2020a ) . Furthermore , all the nonlinear ICA models cited above , with the exception of Khemakhem et al . ( 2020a ) , assume that the data are fully observed and noise-free , even though observation noise is very common in practice , and even Khemakhem et al . ( 2020a ) assumes the noise distribution to be exactly known . This approach of modelling observation noise explicitly is in stark contrast to the approach taken in papers , such as Locatello et al . ( 2020 ) , who instead consider general stochasticity of their model to be captured by latent variables – this approach would be ill-suited to the type of denoising one would often need in practice . Lastly , the identifiability theorems in previous nonlinear ICA works usually restrict the latent components to a specific class of models such as exponential families ( but see Hyvärinen and Morioka ( 2017 ) ) . In this paper we introduce a new framework for identifiable disentanglement , Structured Nonlinear ICA ( SNICA ) , which removes each of the aforementioned limitations in a single unifying framework . Furthermore , the framework guarantees identifiability of a rich class of nonlinear ICA models that is able to exploit dependency structures of any arbitrary order and thus , for instance , extends to spatially structured data . This is the first major theoretical contribution of our paper . The second important theoretical contribution of our paper proves that models within the SNICA framework are identifiable even in the presence of additive output noise of arbitrary , unknown distribution . We achieve this by extending the theorems by Gassiat et al . ( 2020b , a ) . The subsequent practical implication is that SNICA models can perform dimensionality reduction to identifiable latent components and de-noise observed data . We note that noisy-observation part of the identifiability theory is not even limited to nonlinear ICA but applies to any system observed under noise . Third , we give mild sufficient conditions , relating to the strength and the non-Gaussian nature of the temporal or spatial dependencies , enabling identifiability of nonlinear independent components in this general framework . An important implication is that our theorems can be used , for example , to develop models for disentangling identifiable features from spatial or spatio-temporal data . As an example of the flexibility of the SNICA framework , we present a new nonlinear ICA model called ∆-SNICA . It achieves the following very practical properties which have previously been unattainable in the context of nonlinear ICA : the ability to account for both nonstationarity and autocorrelation in a fully unsupervised setting ; ability perform dimensionality reduction ; model latent states ; and to enable principled estimation and inference by variational maximum-likelihood methods . We demonstrate the practical utility of the model in an application to noisy neuroimaging data that is hypothesized to contain meaningful lower dimensional latent components and complex temporal dynamics . 2 Background . We start by giving some brief background on Nonlinear ICA and identifiability . Consider a model where the distribution of observed data x is given by pX ( x ; θ ) for some parameter vector θ . This model is called identifiable if the following condition is fulfilled : ∀ ( θ , θ′ ) pX ( x ; θ ) = pX ( x ; θ′ ) ⇒ θ = θ′ . ( 1 ) In other words , based on the observed data distribution alone , we can uniquely infer the parameters that generated the data . For models parameterized with some nonparametric function estimator f , such as a deep neural network , we can replace θ with f in the equation above . In practice , identifiability might hold for some parameters , not all ; and parameters might be identifiable up to some more or less trivial indeterminacies , such as scaling . In a typical nonlinear ICA setting we observe some x ∈ RN which has been generated by an invertible nonlinear mixing function f from latent independent components s ∈ RN , with p ( s ) = ∏N i=1 p ( s ( i ) ) , as per : x = f ( s ) , ( 2 ) Identifiability of f would then mean that we can in theory find the true f , and subsequently the true data generating components . Unfortunately , without some additional structure this model is unidentifiable , as shown by Hyvärinen and Pajunen ( 1999 ) : there is an infinite number of possible solutions and these have no trivial relation with each other . To solve this problem , previous work ( Sprekeler et al. , 2014 ; Hyvärinen and Morioka , 2016 , 2017 ) developed models with temporal structure . Such time series models were generalized and expressed in a succinct way by Hyvärinen et al . ( 2019 ) ; Khemakhem et al . ( 2020a ) by assuming the independent components are conditionally independent upon some observed auxiliary variable ut : p ( st|ut ) = ∏N i=1 p ( s ( i ) t |ut ) . In a time series context , the auxiliary variable might be history , e.g . ut = xt−1 , or the index of a time segment to model nonstationarity ( or piece-wise stationarity ) . ( It could also be data from another modality , such as audio data used to condition video data ( Arandjelovic and Zisserman , 2017 ) . ) Notice that the mixing function f in ( 2 ) is assumed bijective and thus identifiable dimension reduction is not possible in most of the models discussed above . The only exceptions , we are aware of , are Khemakhem et al . ( 2020a ) ; Klindt et al . ( 2021 ) who choose f as injective rather than bijective . Further , Khemakhem et al . ( 2020a ) assume additive noise on the observations x = f ( s ) + ε , which allows to estimate posterior of s by an identifiable VAE ( iVAE ) . We will take a similar strategy in what follows . 3 Definition of Structured Nonlinear ICA . In this section , we first present the new framework of Structured Nonlinear ICA ( SNICA ) – a broad class of models for identifiable disentanglement and learning of independent components when data has structural dependencies . Next , we give an example of a particularly useful specific model that fits within our framework , called ∆-SNICA , by using switching linear dynamical latent processes . 3.1 Structured Nonlinear ICA framework . Consider observations ( xt ) t∈T = ( ( x ( 1 ) t , . . . , x ( M ) t ) ) t∈T where T is a discrete indexing set of arbitrary dimension . For discrete time-series models , like previous works , T would be a subset of N. Crucially , however , we allow it to be any arbitrary indexing variable that describes a desired structure . For instance , T could be a subset of N2 for spatial data . We assume the data is generated according the following nonlinear ICA model . First , there exist latent components s ( i ) = ( s ( i ) t ) t∈T for i ∈ { 1 , . . . , N } where for any t , t′ ∈ T , the distributions of ( s ( i ) t ) 16i6N and ( s ( i ) t′ ) 16i6N are the same , which is a weak form of stationarity . Second , we assume that for any m ∈ N∗ and ( t1 , . . . , tm ) ∈ Tm , p ( st1 , . . . , stm ) = ∏N i=1 p ( s ( i ) t1 , . . . , s ( i ) tm ) : that is , the components are unconditionally independent . We further assume that the nonlinear mixing function f : RN → RM with M > N is injective , so there may be more observed variables than components . Finally , denote observational noise by εt ∈ RM and assume that they are i.i.d . for all t ∈ T and independent of the signals s ( i ) . Putting these together , we assume the mixing model where for each t ∈ T , xt = f ( st ) + εt , ( 3 ) where st = ( s ( 1 ) t , . . . , s ( N ) t ) . Importantly , εt can have any arbitrary unknown distribution , even with dependent entries ; in fact , it may even not have finite moments . The main appeal of this framework is that , under the conditions given in next section , we can now guarantee identifiability for a very broad and rich class of models . First , notice that all previous Nonlinear ICA time-series models can be reformulated and often improved upon when viewed through this new unifying framework . In other words , we can create x1 x2 x3 s ( i ) 1 s ( i ) 2 s ( i ) 3 u ( i ) 1 u ( i ) 2 u ( i ) 3 1 6 i 6 N ( a ) HMM modulated components c.f . ( Hälvä and Hyvärinen , 2020 ) models that are very much like those previous works , and capture their dependency profiles , but with the changes that by assuming unconditional independence and output noise we now allow them to perform dimension reduction ( this does also require some additional assumptions needed in our identifiability theorems below ) . To see this , consider the model in Hälvä and Hyvärinen ( 2020 ) which captures nonstationarity in the independent components through a global hidden Markov chain . We can transform this model into the SNICA framework if we instead model each independent component as its own HMM ( Figure 1a ) , with the added benefit that we now have marginally independent components and are able to perform dimensionality reduction into low dimensional latent components . Nonlinear ICA with time-dependencies , such as in an autoregressive model , proposed by Hyvärinen and Morioka ( 2017 ) is also a special case of our framework ( Figure 1b ) , but again with the extension of dimensionality reduction . Furthermore , this framework allows for a plethora of new Nonlinear ICA models to be developed . As described above , these do not have to be limited to time-series but could for instance be a process on a two-dimensional graph with appropriate ( in ) dependencies ( see Figure 1c ) . However , we now proceed to introduce a particularly useful time-series model using our framework .",
"role": "user"
},
{
"content": "The authors provide an identifiability result for non-linear ICA for a class of time series models. There is a clear contribution with respect to previous work that was typically restricted to condionnally independent sources, conditionned on a hidden markov chain in finite stare spaces. This new result allows identifying independent sources with a specific form of time dependency.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: Osteoporosis is a common and debilitating condition associated with significant morbidity and mortality. The efficacy and safety of oral bisphosphonates for the treatment of osteoporosis are well established. However, patient adherence and persistence on treatment are suboptimal. This randomised open-label multi-centre study of 6-months' duration compared persistence on treatment in postmenopausal women with osteoporosis receiving either once-monthly ibandronate plus a patient support programme (PSP), or once-weekly alendronate. To avoid falsely elevated persistence rates often associated with clinical trials, the study was designed to reflect everyday clinical practice in the UK and follow-up visits were limited to be consistent with the primary care setting. Analysis of the primary endpoint showed that persistence was significantly higher in the ibandronate/PSP group compared with the alendronate group (p < 0.0001). The estimated proportion of patients persisting with treatment at 6 months was 56.6% (306/541) and 38.6% (198/513) in the ibandronate/PSP and alendronate groups, respectively. Therefore, compared with alendronate, there was a 47% relative improvement in the proportion of patients persisting with treatment in the ibandronate/PSP group. Secondary endpoint measurements of adherence (e.g. proportion of patients remaining on treatment at study end; proportion of patients discontinuing from the study) were also significantly different in favour of ibandronate plus patient support. In summary, the PERSIST study demonstrated that persistence on treatment was increased in patients receiving once-monthly ibandronate plus patient support compared with once-weekly alendronate. Increased persistence on bisphosphonate treatment is expected to improve patient outcomes and decrease the social and economic burden of osteoporosis.
Extracted Data: | [[" ibandronate plus a patient support programme (PSP)", "alendronate", "persistence on treatment", "significantly increased", "Analysis of the primary endpoint showed that persistence was significantly higher in the ibandronate/PSP group compared with the alendronate group (p \u003c 0.0001)."]] | science.evidence_inference.190 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Osteoporosis is a common and debilitating condition associated with significant morbidity and mortality. The efficacy and safety of oral bisphosphonates for the treatment of osteoporosis are well established. However, patient adherence and persistence on treatment are suboptimal. This randomised open-label multi-centre study of 6-months' duration compared persistence on treatment in postmenopausal women with osteoporosis receiving either once-monthly ibandronate plus a patient support programme (PSP), or once-weekly alendronate. To avoid falsely elevated persistence rates often associated with clinical trials, the study was designed to reflect everyday clinical practice in the UK and follow-up visits were limited to be consistent with the primary care setting. Analysis of the primary endpoint showed that persistence was significantly higher in the ibandronate/PSP group compared with the alendronate group (p < 0.0001). The estimated proportion of patients persisting with treatment at 6 months was 56.6% (306/541) and 38.6% (198/513) in the ibandronate/PSP and alendronate groups, respectively. Therefore, compared with alendronate, there was a 47% relative improvement in the proportion of patients persisting with treatment in the ibandronate/PSP group. Secondary endpoint measurements of adherence (e.g. proportion of patients remaining on treatment at study end; proportion of patients discontinuing from the study) were also significantly different in favour of ibandronate plus patient support. In summary, the PERSIST study demonstrated that persistence on treatment was increased in patients receiving once-monthly ibandronate plus patient support compared with once-weekly alendronate. Increased persistence on bisphosphonate treatment is expected to improve patient outcomes and decrease the social and economic burden of osteoporosis.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\" ibandronate plus a patient support programme (PSP)\", \"alendronate\", \"persistence on treatment\", \"significantly increased\", \"Analysis of the primary endpoint showed that persistence was significantly higher in the ibandronate/PSP group compared with the alendronate group (p \\u003c 0.0001).\"]]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Increased mobility of retrotransposons is assosciated with mutation and higher tumorigenesis rates.
Research Abstract:
DNA methylation plays an important role in biological processes in human health and disease.
Recent technological advances allow unbiased whole-genome DNA methylation (methylome) analysis to be carried out on human cells.
Using whole-genome bisulfite sequencing at 24.7-fold coverage (12.3-fold per strand), we report a comprehensive (92.62%) methylome and analysis of the unique sequences in human peripheral blood mononuclear cells (PBMC) from the same Asian individual whose genome was deciphered in the YH project.
PBMC constitute an important source for clinical blood tests world-wide.
We found that 68.4% of CpG sites and <0.2% of non-CpG sites were methylated, demonstrating that non-CpG cytosine methylation is minor in human PBMC.
Analysis of the PBMC methylome revealed a rich epigenomic landscape for 20 distinct genomic features, including regulatory, protein-coding, non-coding, RNA-coding, and repeat sequences.
Integration of our methylome data with the YH genome sequence enabled a first comprehensive assessment of allele-specific methylation (ASM) between the two haploid methylomes of any individual and allowed the identification of 599 haploid differentially methylated regions (hDMRs) covering 287 genes.
Of these, 76 genes had hDMRs within 2 kb of their transcriptional start sites of which >80% displayed allele-specific expression (ASE).
These data demonstrate that ASM is a recurrent phenomenon and is highly correlated with ASE in human PBMCs.
Together with recently reported similar studies, our study provides a comprehensive resource for future epigenomic research and confirms new sequencing technology as a paradigm for large-scale epigenomics studies. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.397 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nIncreased mobility of retrotransposons is assosciated with mutation and higher tumorigenesis rates.\n\nResearch Abstract:\nDNA methylation plays an important role in biological processes in human health and disease.\nRecent technological advances allow unbiased whole-genome DNA methylation (methylome) analysis to be carried out on human cells.\nUsing whole-genome bisulfite sequencing at 24.7-fold coverage (12.3-fold per strand), we report a comprehensive (92.62%) methylome and analysis of the unique sequences in human peripheral blood mononuclear cells (PBMC) from the same Asian individual whose genome was deciphered in the YH project.\nPBMC constitute an important source for clinical blood tests world-wide.\nWe found that 68.4% of CpG sites and <0.2% of non-CpG sites were methylated, demonstrating that non-CpG cytosine methylation is minor in human PBMC.\nAnalysis of the PBMC methylome revealed a rich epigenomic landscape for 20 distinct genomic features, including regulatory, protein-coding, non-coding, RNA-coding, and repeat sequences.\nIntegration of our methylome data with the YH genome sequence enabled a first comprehensive assessment of allele-specific methylation (ASM) between the two haploid methylomes of any individual and allowed the identification of 599 haploid differentially methylated regions (hDMRs) covering 287 genes.\nOf these, 76 genes had hDMRs within 2 kb of their transcriptional start sites of which >80% displayed allele-specific expression (ASE).\nThese data demonstrate that ASM is a recurrent phenomenon and is highly correlated with ASE in human PBMCs.\nTogether with recently reported similar studies, our study provides a comprehensive resource for future epigenomic research and confirms new sequencing technology as a paradigm for large-scale epigenomics studies.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Invertible DenseNets with Concatenated LipSwish
Main Content:
We introduce Invertible Dense Networks ( i-DenseNets ) , a more parameter efficient extension of Residual Flows . The method relies on an analysis of the Lipschitz continuity of the concatenation in DenseNets , where we enforce invertibility of the network by satisfying the Lipschitz constant . Furthermore , we propose a learnable weighted concatenation , which not only improves the model performance but also indicates the importance of the concatenated weighted representation . Additionally , we introduce the Concatenated LipSwish as activation function , for which we show how to enforce the Lipschitz condition and which boosts performance . The new architecture , i-DenseNet , out-performs Residual Flow and other flow-based models on density estimation evaluated in bits per dimension , where we utilize an equal parameter budget . Moreover , we show that the proposed model outperforms Residual Flows when trained as a hybrid model where the model is both a generative and a discriminative model . 1 Introduction . Neural networks are widely used to parameterize non-linear models in supervised learning tasks such as classification . In addition , they are also utilized to build flexible density estimators of the true distribution of the observed data [ 25 , 33 ] . The resulting deep density estimators , also called deep generative models , can be further used to generate realistic-looking images that are hard to separate from real ones , detection of adversarial attacks [ 9 , 17 ] , and for hybrid modeling [ 27 ] which have the property to both predict a label ( classify ) and generate . Many deep generative models are trained by maximizing the ( log- ) likelihood function and their architectures come in different designs . For instance , causal convolutional neural networks are used to parameterize autoregressive models [ 28 , 29 ] or various neural networks can be utilized in Variational Auto-Encoders [ 19 , 32 ] . The other group of likelihood-based deep density estimators , flow-based models ( or flows ) , consist of invertible neural networks since they are used to compute the likelihood through the change of variable formula [ 31 , 37 , 36 ] . The main difference that determines an exact computation or approximation of the likelihood function for a flow-based model lies in the design of the transformation layer and tractability of the Jacobian-determinant . Many flow-based models formulate the transformation that is invertible and its Jacobian is tractable [ 3 , 6–8 , 21 , 30 , 31 , 38 ] . Recently , Behrmann et al . [ 2 ] proposed a different approach , namely , deep-residual blocks as a transformation layer . The deep-residual networks ( ResNets ) of [ 12 ] are known for their successes in supervised learning approaches . In a ResNet block , each input of the block is added to the output , which forms the input for the next block . Since ResNets are not necessarily invertible , Behrmann et al . [ 2 ] enforce the Lipschitz constant of the transformation to be smaller than 1 ( i.e. , it becomes a contraction ) that allows applying an iterative procedure to invert the network . Furthermore , Chen et al . [ 4 ] proposed Residual Flows , an improvement of i-ResNets , that uses an unbiased estimator for the logarithm of the Jacobian-determinant . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) In supervised learning , an architecture that uses fewer parameters and is even more powerful than the deep-residual network is the Densely Connected Convolution Network ( DenseNet ) , which was first presented in [ 15 ] . Contrary to a ResNet block , a DenseNet layer consists of a concatenation of the input with the output . The network showed to improve significantly in recognition tasks on benchmark datasets such as CIFAR10 , SVHN , and ImageNet , by using fewer computations and having fewer parameters than ResNets while performing at a similar level . In this work , we extend Residual Flows [ 2 , 4 ] , and use densely connected blocks ( DenseBlocks ) as a residual layer . First , we introduce invertible Dense Networks ( i-DenseNets ) , and we show that we can derive a bound on the Lipschitz constant to create an invertible flow-based model . Furthermore , we propose the Concatenated LipSwish ( CLipSwish ) as an activation function , and derive a stronger Lipschitz bound . The CLipSwish function preserves more signal than LipSwish activation functions . Finally , we demonstrate how i-DenseNets can be efficiently trained as a generative model , outperforming Residual Flows and other flow-based models under an equal parameter budget . 2 Background . Flow-based models Let us consider a vector of observable variables x 2 Rd and a vector of latent variables z 2 Rd . We define a bijective function f : Rd ! Rd that maps a latent variable to a datapoint x = f ( z ) . Since f is invertible , we define its inverse as F = f 1 . We use the change of variables formula to compute the likelihood of a datapoint x after taking the logarithm , that is : ln pX ( x ) = ln pZ ( z ) + ln | det JF ( x ) | , ( 1 ) where pZ ( z ) is a base distribution ( e.g. , the standard Gaussian ) and JF ( x ) is the Jacobian of F at x . The bijective transformation is typically constructed as a sequence of K invertible transformations , x = fK · · · f1 ( z ) , and a single transformation fk is referred to as a flow [ 31 ] . The change of variables formula allows evaluating the data in a tractable manner . Moreover , the flows are trained using the log-likelihood objective where the Jacobian-determinant compensates the change of volume of the invertible transformations . Residual flows Behrmann et al . [ 2 ] construct an invertible ResNet layer which is only constrained in Lipschitz continuity . A ResNet is defined as : F ( x ) = x + g ( x ) , where g is modeled by a ( convolutional ) neural network and F represents a ResNet layer ( see Figure 1 ( a ) ) which is in general not invertible . However , g is constructed in such a way that it satisfies the Lipschitz constant being strictly lower than 1 , Lip ( g ) < 1 , by using spectral normalization of [ 10 , 26 ] : Lip ( g ) < 1 , if ||Wi||2 < 1 , ( 2 ) where || · ||2 is the ` 2 matrix norm . Then Lip ( g ) = K < 1 and Lip ( F ) < 1+K . Only in this specific case the Banach fixed-point theorem holds and ResNet layer F has a unique inverse . As a result , the inverse can be approximated by fixed-point iterations . To estimate the log-determinant is , especially for high-dimensional spaces , computationally intractable due to expensive computations . Since ResNet blocks have a constrained Lipschitz constant , the log-likelihood estimation of Equation ( 1 ) can be transformed to a version where the logarithm of the Jacobian-determinant is cheaper to compute , tractable , and approximated with guaranteed convergence [ 2 ] : ln p ( x ) = ln p ( f ( x ) ) + tr 1X k=1 ( 1 ) k+1 k [ Jg ( x ) ] k ! , ( 3 ) where Jg ( x ) is the Jacobian of g at x that satisfies ||Jg||2 < 1 . The Skilling-Hutchinson trace estimator [ 35 , 16 ] is used to compute the trace at a lower cost than to fully compute the trace of the Jacobian . Residual Flows [ 4 ] use an improved method to estimate the power series at an even lower cost with an unbiased estimator based on `` Russian roulette '' of [ 18 ] . Intuitively , the method estimates the infinite sum of the power series by evaluating a finite amount of terms . In return , this leads to less computation of terms compared to invertible residual networks . To avoid derivative saturation , which occurs when the second derivative is zero in large regions , the LipSwish activation is proposed . 3 Invertible Dense Networks . In this section , we propose Invertible Dense Networks by using a DenseBlock as a residual layer . We show how the network can be parameterized as a flow-based model and refer to the resulting model as i-DenseNets . The code can be retrieved from : https : //github.com/yperugachidiaz/ invertible_densenets . 3.1 Dense blocks . The main component of the proposed flow-based model is a DenseBlock that is defined as a function F : Rd ! Rd with F ( x ) = x + g ( x ) , where g consists of dense layers { hi } ni=1 . Note that an important modification to make the model invertible is to output x + g ( x ) whereas a standard DenseBlock would only output g ( x ) . The function g is expressed as follows : g ( x ) = Wn+1 hn · · · h1 ( x ) , ( 4 ) where Wn+1 represents a 1⇥ 1 convolution to match the output size of Rd . A layer hi consists of two parts concatenated to each other . The upper part is a copy of the input signal . The lower part consists of the transformed input , where the transformation is a multiplication of ( convolutional ) weights Wi with the input signal , followed by a non-linearity having Lip ( ) 1 , such as ReLU , ELU , LipSwish , or tanh . As an example , a dense layer h2 can be composed as follows : h1 ( x ) = x ( W1x ) , h2 ( h1 ( x ) ) = h1 ( x ) ( W2h1 ( x ) ) . ( 5 ) In Figure 1 , we schematically outline a residual block ( Figure 1 ( a ) ) and a dense block ( Figure 1 ( b ) ) . We refer to concatenation depth as the number of dense layers in a DenseBlock and growth as the channel growth size of the transformation in the lower part . 3.2 Constraining the Lipschitz constant . If we enforce function g to satisfy Lip ( g ) < 1 , then DenseBlock F is invertible since the Banach fixed point theorem holds . As a result , the inverse can be approximated in the same manner as in [ 2 ] . To satisfy Lip ( g ) < 1 , we need to enforce Lip ( hi ) < 1 for all n layers , since Lip ( g ) Lip ( hn+1 ) · . . . · Lip ( h1 ) . Therefore , we first need to determine the Lipschitz constant for a dense layer hi . For the full derivation , see Appendix A . We know that a function f is K-Lipschitz if for all points v and w the following holds : dY ( f ( v ) , f ( w ) ) KdX ( v , w ) , ( 6 ) where we assume that the distance metrics dX = dY = d are chosen to be the ` 2-norm . Further , let two functions f1 and f2 be concatenated in h : hv = f1 ( v ) f2 ( v ) , hw = f1 ( w ) f2 ( w ) , ( 7 ) where function f1 is the upper part and f2 is the lower part . We can now find an analytical form to express a limit on K for the dense layer in the form of Equation ( 6 ) : d ( hv , hw ) 2 = d ( f1 ( v ) , f1 ( w ) ) 2 + d ( f2 ( v ) , f2 ( w ) ) 2 , d ( hv , hw ) 2 ( K21 +K22 ) d ( v , w ) 2 , ( 8 ) where we know that the Lipschitz constant of h consist of two parts , namely , Lip ( f1 ) = K1 and Lip ( f2 ) = K2 . Therefore , the Lipschitz constant of layer h can be expressed as : Lip ( h ) = q ( K21 +K 2 2 ) . ( 9 ) With spectral normalization of Equation ( 2 ) , we know that we can enforce ( convolutional ) weights Wi to be at most 1-Lipschitz . Hence , for all n dense layers we apply the spectral normalization on the lower part which locally enforces Lip ( f2 ) = K2 < 1 . Further , since we enforce each layer hi to be at most 1-Lipschitz and we start with h1 , where f1 ( x ) = x , we know that Lip ( f1 ) = 1 . Therefore , the Lipschitz constant of an entire layer can be at most Lip ( h ) < p 12 + 12 = p 2 , thus dividing by this limit enforces each layer to be at most 1-Lipschitz . 3.3 Learnable weighted concatenation We have shown that we can enforce an entire dense layer to have Lip ( hi ) < 1 by applying a spectral norm on the ( convolutional ) weights Wi and then divide the layer hi by p 2 . Although learning a weighting between the upper and lower part would barely affect a standard dense layer , it matters in this case because the layers are regularized to be 1-Lipschitz . To optimize and learn the importance of the concatenated representations , we introduce learnable parameters ⌘1 and ⌘2 for , respectively , the upper and lower part of each layer hi . Since the upper and lower part of the layer can be at most 1-Lipschitz , multiplication by these factors results in functions that are at most ⌘1-Lipschitz and ⌘2-Lipschitz . As indicated by Equation ( 9 ) , the layer is then at mostp ⌘21 + ⌘ 2 2 Lipschitz . Dividing by this factor results in a bound that is at most 1-Lipschitz . In practice , we initialize ⌘1 and ⌘2 at value 1 and during training use a softplus function to avoid them being negative . The range of the normalized parameters is between ⌘̂1 , ⌘̂2 2 [ 0 , 1 ] and can be expressed on the unit circle as shown in Figure 2 . In the special case where ⌘1 = ⌘2 , the normalized parameters are ⌘̂1 = ⌘̂2 = 12 p 2 . This case corresponds to the situation in Section 3.2 where the concatenation is not learned . An additional advantage is that the normalized ⌘̂1 and ⌘̂2 express the importance of the upper and lower signal . For example , when ⌘̂1 > ⌘̂2 , the input signal is of more importance than the transformed signal . | The paper proposes a parameter-efficient DenseNet block and a CLipSwish (concatenated LipSwish) activation function for use with the DenseNet blocks in the residual flows setting. The paper outperforms prior art (with uniform dequantization) in density estimation. Results are reported on CIFAR10 and ImageNet32 with rich ablation studies. | science.mup_single_document_summarization.11634 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nInvertible DenseNets with Concatenated LipSwish\n\nMain Content:\nWe introduce Invertible Dense Networks ( i-DenseNets ) , a more parameter efficient extension of Residual Flows . The method relies on an analysis of the Lipschitz continuity of the concatenation in DenseNets , where we enforce invertibility of the network by satisfying the Lipschitz constant . Furthermore , we propose a learnable weighted concatenation , which not only improves the model performance but also indicates the importance of the concatenated weighted representation . Additionally , we introduce the Concatenated LipSwish as activation function , for which we show how to enforce the Lipschitz condition and which boosts performance . The new architecture , i-DenseNet , out-performs Residual Flow and other flow-based models on density estimation evaluated in bits per dimension , where we utilize an equal parameter budget . Moreover , we show that the proposed model outperforms Residual Flows when trained as a hybrid model where the model is both a generative and a discriminative model . 1 Introduction . Neural networks are widely used to parameterize non-linear models in supervised learning tasks such as classification . In addition , they are also utilized to build flexible density estimators of the true distribution of the observed data [ 25 , 33 ] . The resulting deep density estimators , also called deep generative models , can be further used to generate realistic-looking images that are hard to separate from real ones , detection of adversarial attacks [ 9 , 17 ] , and for hybrid modeling [ 27 ] which have the property to both predict a label ( classify ) and generate . Many deep generative models are trained by maximizing the ( log- ) likelihood function and their architectures come in different designs . For instance , causal convolutional neural networks are used to parameterize autoregressive models [ 28 , 29 ] or various neural networks can be utilized in Variational Auto-Encoders [ 19 , 32 ] . The other group of likelihood-based deep density estimators , flow-based models ( or flows ) , consist of invertible neural networks since they are used to compute the likelihood through the change of variable formula [ 31 , 37 , 36 ] . The main difference that determines an exact computation or approximation of the likelihood function for a flow-based model lies in the design of the transformation layer and tractability of the Jacobian-determinant . Many flow-based models formulate the transformation that is invertible and its Jacobian is tractable [ 3 , 6–8 , 21 , 30 , 31 , 38 ] . Recently , Behrmann et al . [ 2 ] proposed a different approach , namely , deep-residual blocks as a transformation layer . The deep-residual networks ( ResNets ) of [ 12 ] are known for their successes in supervised learning approaches . In a ResNet block , each input of the block is added to the output , which forms the input for the next block . Since ResNets are not necessarily invertible , Behrmann et al . [ 2 ] enforce the Lipschitz constant of the transformation to be smaller than 1 ( i.e. , it becomes a contraction ) that allows applying an iterative procedure to invert the network . Furthermore , Chen et al . [ 4 ] proposed Residual Flows , an improvement of i-ResNets , that uses an unbiased estimator for the logarithm of the Jacobian-determinant . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) In supervised learning , an architecture that uses fewer parameters and is even more powerful than the deep-residual network is the Densely Connected Convolution Network ( DenseNet ) , which was first presented in [ 15 ] . Contrary to a ResNet block , a DenseNet layer consists of a concatenation of the input with the output . The network showed to improve significantly in recognition tasks on benchmark datasets such as CIFAR10 , SVHN , and ImageNet , by using fewer computations and having fewer parameters than ResNets while performing at a similar level . In this work , we extend Residual Flows [ 2 , 4 ] , and use densely connected blocks ( DenseBlocks ) as a residual layer . First , we introduce invertible Dense Networks ( i-DenseNets ) , and we show that we can derive a bound on the Lipschitz constant to create an invertible flow-based model . Furthermore , we propose the Concatenated LipSwish ( CLipSwish ) as an activation function , and derive a stronger Lipschitz bound . The CLipSwish function preserves more signal than LipSwish activation functions . Finally , we demonstrate how i-DenseNets can be efficiently trained as a generative model , outperforming Residual Flows and other flow-based models under an equal parameter budget . 2 Background . Flow-based models Let us consider a vector of observable variables x 2 Rd and a vector of latent variables z 2 Rd . We define a bijective function f : Rd ! Rd that maps a latent variable to a datapoint x = f ( z ) . Since f is invertible , we define its inverse as F = f 1 . We use the change of variables formula to compute the likelihood of a datapoint x after taking the logarithm , that is : ln pX ( x ) = ln pZ ( z ) + ln | det JF ( x ) | , ( 1 ) where pZ ( z ) is a base distribution ( e.g. , the standard Gaussian ) and JF ( x ) is the Jacobian of F at x . The bijective transformation is typically constructed as a sequence of K invertible transformations , x = fK · · · f1 ( z ) , and a single transformation fk is referred to as a flow [ 31 ] . The change of variables formula allows evaluating the data in a tractable manner . Moreover , the flows are trained using the log-likelihood objective where the Jacobian-determinant compensates the change of volume of the invertible transformations . Residual flows Behrmann et al . [ 2 ] construct an invertible ResNet layer which is only constrained in Lipschitz continuity . A ResNet is defined as : F ( x ) = x + g ( x ) , where g is modeled by a ( convolutional ) neural network and F represents a ResNet layer ( see Figure 1 ( a ) ) which is in general not invertible . However , g is constructed in such a way that it satisfies the Lipschitz constant being strictly lower than 1 , Lip ( g ) < 1 , by using spectral normalization of [ 10 , 26 ] : Lip ( g ) < 1 , if ||Wi||2 < 1 , ( 2 ) where || · ||2 is the ` 2 matrix norm . Then Lip ( g ) = K < 1 and Lip ( F ) < 1+K . Only in this specific case the Banach fixed-point theorem holds and ResNet layer F has a unique inverse . As a result , the inverse can be approximated by fixed-point iterations . To estimate the log-determinant is , especially for high-dimensional spaces , computationally intractable due to expensive computations . Since ResNet blocks have a constrained Lipschitz constant , the log-likelihood estimation of Equation ( 1 ) can be transformed to a version where the logarithm of the Jacobian-determinant is cheaper to compute , tractable , and approximated with guaranteed convergence [ 2 ] : ln p ( x ) = ln p ( f ( x ) ) + tr 1X k=1 ( 1 ) k+1 k [ Jg ( x ) ] k ! , ( 3 ) where Jg ( x ) is the Jacobian of g at x that satisfies ||Jg||2 < 1 . The Skilling-Hutchinson trace estimator [ 35 , 16 ] is used to compute the trace at a lower cost than to fully compute the trace of the Jacobian . Residual Flows [ 4 ] use an improved method to estimate the power series at an even lower cost with an unbiased estimator based on `` Russian roulette '' of [ 18 ] . Intuitively , the method estimates the infinite sum of the power series by evaluating a finite amount of terms . In return , this leads to less computation of terms compared to invertible residual networks . To avoid derivative saturation , which occurs when the second derivative is zero in large regions , the LipSwish activation is proposed . 3 Invertible Dense Networks . In this section , we propose Invertible Dense Networks by using a DenseBlock as a residual layer . We show how the network can be parameterized as a flow-based model and refer to the resulting model as i-DenseNets . The code can be retrieved from : https : //github.com/yperugachidiaz/ invertible_densenets . 3.1 Dense blocks . The main component of the proposed flow-based model is a DenseBlock that is defined as a function F : Rd ! Rd with F ( x ) = x + g ( x ) , where g consists of dense layers { hi } ni=1 . Note that an important modification to make the model invertible is to output x + g ( x ) whereas a standard DenseBlock would only output g ( x ) . The function g is expressed as follows : g ( x ) = Wn+1 hn · · · h1 ( x ) , ( 4 ) where Wn+1 represents a 1⇥ 1 convolution to match the output size of Rd . A layer hi consists of two parts concatenated to each other . The upper part is a copy of the input signal . The lower part consists of the transformed input , where the transformation is a multiplication of ( convolutional ) weights Wi with the input signal , followed by a non-linearity having Lip ( ) 1 , such as ReLU , ELU , LipSwish , or tanh . As an example , a dense layer h2 can be composed as follows : h1 ( x ) = x ( W1x ) , h2 ( h1 ( x ) ) = h1 ( x ) ( W2h1 ( x ) ) . ( 5 ) In Figure 1 , we schematically outline a residual block ( Figure 1 ( a ) ) and a dense block ( Figure 1 ( b ) ) . We refer to concatenation depth as the number of dense layers in a DenseBlock and growth as the channel growth size of the transformation in the lower part . 3.2 Constraining the Lipschitz constant . If we enforce function g to satisfy Lip ( g ) < 1 , then DenseBlock F is invertible since the Banach fixed point theorem holds . As a result , the inverse can be approximated in the same manner as in [ 2 ] . To satisfy Lip ( g ) < 1 , we need to enforce Lip ( hi ) < 1 for all n layers , since Lip ( g ) Lip ( hn+1 ) · . . . · Lip ( h1 ) . Therefore , we first need to determine the Lipschitz constant for a dense layer hi . For the full derivation , see Appendix A . We know that a function f is K-Lipschitz if for all points v and w the following holds : dY ( f ( v ) , f ( w ) ) KdX ( v , w ) , ( 6 ) where we assume that the distance metrics dX = dY = d are chosen to be the ` 2-norm . Further , let two functions f1 and f2 be concatenated in h : hv = f1 ( v ) f2 ( v ) , hw = f1 ( w ) f2 ( w ) , ( 7 ) where function f1 is the upper part and f2 is the lower part . We can now find an analytical form to express a limit on K for the dense layer in the form of Equation ( 6 ) : d ( hv , hw ) 2 = d ( f1 ( v ) , f1 ( w ) ) 2 + d ( f2 ( v ) , f2 ( w ) ) 2 , d ( hv , hw ) 2 ( K21 +K22 ) d ( v , w ) 2 , ( 8 ) where we know that the Lipschitz constant of h consist of two parts , namely , Lip ( f1 ) = K1 and Lip ( f2 ) = K2 . Therefore , the Lipschitz constant of layer h can be expressed as : Lip ( h ) = q ( K21 +K 2 2 ) . ( 9 ) With spectral normalization of Equation ( 2 ) , we know that we can enforce ( convolutional ) weights Wi to be at most 1-Lipschitz . Hence , for all n dense layers we apply the spectral normalization on the lower part which locally enforces Lip ( f2 ) = K2 < 1 . Further , since we enforce each layer hi to be at most 1-Lipschitz and we start with h1 , where f1 ( x ) = x , we know that Lip ( f1 ) = 1 . Therefore , the Lipschitz constant of an entire layer can be at most Lip ( h ) < p 12 + 12 = p 2 , thus dividing by this limit enforces each layer to be at most 1-Lipschitz . 3.3 Learnable weighted concatenation We have shown that we can enforce an entire dense layer to have Lip ( hi ) < 1 by applying a spectral norm on the ( convolutional ) weights Wi and then divide the layer hi by p 2 . Although learning a weighting between the upper and lower part would barely affect a standard dense layer , it matters in this case because the layers are regularized to be 1-Lipschitz . To optimize and learn the importance of the concatenated representations , we introduce learnable parameters ⌘1 and ⌘2 for , respectively , the upper and lower part of each layer hi . Since the upper and lower part of the layer can be at most 1-Lipschitz , multiplication by these factors results in functions that are at most ⌘1-Lipschitz and ⌘2-Lipschitz . As indicated by Equation ( 9 ) , the layer is then at mostp ⌘21 + ⌘ 2 2 Lipschitz . Dividing by this factor results in a bound that is at most 1-Lipschitz . In practice , we initialize ⌘1 and ⌘2 at value 1 and during training use a softplus function to avoid them being negative . The range of the normalized parameters is between ⌘̂1 , ⌘̂2 2 [ 0 , 1 ] and can be expressed on the unit circle as shown in Figure 2 . In the special case where ⌘1 = ⌘2 , the normalized parameters are ⌘̂1 = ⌘̂2 = 12 p 2 . This case corresponds to the situation in Section 3.2 where the concatenation is not learned . An additional advantage is that the normalized ⌘̂1 and ⌘̂2 express the importance of the upper and lower signal . For example , when ⌘̂1 > ⌘̂2 , the input signal is of more importance than the transformed signal .",
"role": "user"
},
{
"content": "The paper proposes a parameter-efficient DenseNet block and a CLipSwish (concatenated LipSwish) activation function for use with the DenseNet blocks in the residual flows setting. The paper outperforms prior art (with uniform dequantization) in density estimation. Results are reported on CIFAR10 and ImageNet32 with rich ablation studies.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.STUDY OBJECTIVES: The aim of this study was to evaluate the efficacy and tolerability of novel combination naproxen sodium (NS) and diphenhydramine (DPH) in subjects with postoperative dental pain along with transient insomnia induced by 5 h sleep phase advance. The present studies aimed to demonstrate the added benefit and optimal dosages of the combination product over individual ingredients alone in improving sleep and pain.
ABSTRACT.METHODS: Each of the two studies was a two-centre, randomised, double-blind and double-dummy trial. In the first study, subjects were randomised into one of the following treatment arms: NS 440 mg/DPH 50 mg, NS 220 mg/DPH 50 mg, NS 440 mg or DPH 50 mg. In the second study, subjects received either NS 440 mg/DPH 25 mg, NS 440 mg or DPH 50 mg. The co-primary end-points in both studies were wake time after sleep onset (WASO) and sleep latency (SL) measured by actigraphy. Other secondary sleep and pain end-points were also assessed.
ABSTRACT.RESULTS: The intent-to-treat population included 712 and 267 subjects from studies one and two, respectively. In the first study, only the NS 440 mg/DPH 50 mg combination showed significant improvements in both WASO vs. NS alone (−70.3 min p = 0.0002) and SL vs. DPH alone (25.50 and 41.50 min respectively, p < 0.0001). In the second study, the NS 440 mg/DPH 25 mg combination failed to show any significant improvements vs. either component alone.
ABSTRACT.CONCLUSIONS: Only the NS 440 mg/DPH 50 mg combination demonstrated improvement in both sleep latency vs. DPH 50 mg and sleep maintenance (WASO) vs. NS 440 mg. There were no serious or unexpected adverse events reported in either study. Clinical Trial Registration: NCT01280591 (study 1); NCT01495858 (study 2)
Extracted Data: | [["Naproxen sodium + diphenhydramine (440mg/25mg)", "Naproxen sodium", "Wake time after sleep onset improvement ", "no significant difference", "the NS 440 mg/DPH 25 mg combination failed to show any significant improvements vs. either component alone."], ["Naproxen sodium + diphenhydramine (440mg/25mg)", "Diphenhydramine ", "Wake time after sleep onset improvement ", "no significant difference", "NS 440 mg/DPH 25 mg combination failed to show any significant improvements vs. either component alone."], ["Naproxen sodium + diphenhydramine (440 mg/50 mg)", "Naproxen sodium", "Wake time after sleep onset improvement ", "significantly increased", "In the first study, only the NS 440 mg/DPH 50 mg combination showed significant improvements in both WASO vs. NS alone (\u221270.3 min p = 0.0002) and SL vs. DPH alone (25.50 and 41.50 min respectively, p \u003c 0.0001)."], ["Naproxen sodium + diphenhydramine (440mg/50mg)", "Diphenhydramine", "Sleep latency improvement ", "significantly increased", "In the first study, only the NS 440 mg/DPH 50 mg combination showed significant improvements in both WASO vs. NS alone (\u221270.3 min p = 0.0002) and SL vs. DPH alone (25.50 and 41.50 min respectively, p \u003c 0.0001)."]] | science.evidence_inference.135 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.STUDY OBJECTIVES: The aim of this study was to evaluate the efficacy and tolerability of novel combination naproxen sodium (NS) and diphenhydramine (DPH) in subjects with postoperative dental pain along with transient insomnia induced by 5 h sleep phase advance. The present studies aimed to demonstrate the added benefit and optimal dosages of the combination product over individual ingredients alone in improving sleep and pain.\n\nABSTRACT.METHODS: Each of the two studies was a two-centre, randomised, double-blind and double-dummy trial. In the first study, subjects were randomised into one of the following treatment arms: NS 440 mg/DPH 50 mg, NS 220 mg/DPH 50 mg, NS 440 mg or DPH 50 mg. In the second study, subjects received either NS 440 mg/DPH 25 mg, NS 440 mg or DPH 50 mg. The co-primary end-points in both studies were wake time after sleep onset (WASO) and sleep latency (SL) measured by actigraphy. Other secondary sleep and pain end-points were also assessed.\n\nABSTRACT.RESULTS: The intent-to-treat population included 712 and 267 subjects from studies one and two, respectively. In the first study, only the NS 440 mg/DPH 50 mg combination showed significant improvements in both WASO vs. NS alone (−70.3 min p = 0.0002) and SL vs. DPH alone (25.50 and 41.50 min respectively, p < 0.0001). In the second study, the NS 440 mg/DPH 25 mg combination failed to show any significant improvements vs. either component alone.\n\nABSTRACT.CONCLUSIONS: Only the NS 440 mg/DPH 50 mg combination demonstrated improvement in both sleep latency vs. DPH 50 mg and sleep maintenance (WASO) vs. NS 440 mg. There were no serious or unexpected adverse events reported in either study. Clinical Trial Registration: NCT01280591 (study 1); NCT01495858 (study 2)\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Naproxen sodium + diphenhydramine (440mg/25mg)\", \"Naproxen sodium\", \"Wake time after sleep onset improvement \", \"no significant difference\", \"the NS 440 mg/DPH 25 mg combination failed to show any significant improvements vs. either component alone.\"], [\"Naproxen sodium + diphenhydramine (440mg/25mg)\", \"Diphenhydramine \", \"Wake time after sleep onset improvement \", \"no significant difference\", \"NS 440 mg/DPH 25 mg combination failed to show any significant improvements vs. either component alone.\"], [\"Naproxen sodium + diphenhydramine (440 mg/50 mg)\", \"Naproxen sodium\", \"Wake time after sleep onset improvement \", \"significantly increased\", \"In the first study, only the NS 440 mg/DPH 50 mg combination showed significant improvements in both WASO vs. NS alone (\\u221270.3 min p = 0.0002) and SL vs. DPH alone (25.50 and 41.50 min respectively, p \\u003c 0.0001).\"], [\"Naproxen sodium + diphenhydramine (440mg/50mg)\", \"Diphenhydramine\", \"Sleep latency improvement \", \"significantly increased\", \"In the first study, only the NS 440 mg/DPH 50 mg combination showed significant improvements in both WASO vs. NS alone (\\u221270.3 min p = 0.0002) and SL vs. DPH alone (25.50 and 41.50 min respectively, p \\u003c 0.0001).\"]]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
The microtubule-dependent delivery and secretion of matrix metalloproteases (MMPs) is partially responsible for the disassembly of adhesion sites.
Research Abstract:
Turnover of integrin-based focal adhesions (FAs) with the extracellular matrix (ECM) is essential for coordinated cell movement.
In collectively migrating human keratinocytes, FAs assemble near the leading edge, grow and mature as a result of contractile forces and disassemble underneath the advancing cell body.
We report that clustering of microtubule-associated CLASP1 and CLASP2 proteins around FAs temporally correlates with FA turnover.
CLASPs and LL5β (also known as PHLDB2), which recruits CLASPs to FAs, facilitate FA disassembly.
CLASPs are further required for FA-associated ECM degradation, and matrix metalloprotease inhibition slows FA disassembly similarly to CLASP or PHLDB2 (LL5β) depletion.
Finally, CLASP-mediated microtubule tethering at FAs establishes an FA-directed transport pathway for delivery, docking and localized fusion of exocytic vesicles near FAs.
We propose that CLASPs couple microtubule organization, vesicle transport and cell interactions with the ECM, establishing a local secretion pathway that facilitates FA turnover by severing cell-matrix connections. | {"verdict": "SUPPORT", "evidence": ["CLASPs are further required for FA-associated ECM degradation, and matrix metalloprotease inhibition slows FA disassembly similarly to CLASP or PHLDB2 (LL5\u03b2) depletion."]} | science.scifact_entailment.801 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThe microtubule-dependent delivery and secretion of matrix metalloproteases (MMPs) is partially responsible for the disassembly of adhesion sites.\n\nResearch Abstract:\nTurnover of integrin-based focal adhesions (FAs) with the extracellular matrix (ECM) is essential for coordinated cell movement.\nIn collectively migrating human keratinocytes, FAs assemble near the leading edge, grow and mature as a result of contractile forces and disassemble underneath the advancing cell body.\nWe report that clustering of microtubule-associated CLASP1 and CLASP2 proteins around FAs temporally correlates with FA turnover.\nCLASPs and LL5β (also known as PHLDB2), which recruits CLASPs to FAs, facilitate FA disassembly.\nCLASPs are further required for FA-associated ECM degradation, and matrix metalloprotease inhibition slows FA disassembly similarly to CLASP or PHLDB2 (LL5β) depletion.\nFinally, CLASP-mediated microtubule tethering at FAs establishes an FA-directed transport pathway for delivery, docking and localized fusion of exocytic vesicles near FAs.\nWe propose that CLASPs couple microtubule organization, vesicle transport and cell interactions with the ECM, establishing a local secretion pathway that facilitates FA turnover by severing cell-matrix connections.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"CLASPs are further required for FA-associated ECM degradation, and matrix metalloprotease inhibition slows FA disassembly similarly to CLASP or PHLDB2 (LL5\\u03b2) depletion.\"]}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
If the boost in performance obtained by the FNE on the <cite>Kiros et al. [1]</cite> pipeline translates to these other methods, such combination would be likely to define new state-of-the-art results on both tasks. | ["FutureWork"] | science.multicite_intent_classification.2224 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nIf the boost in performance obtained by the FNE on the <cite>Kiros et al. [1]</cite> pipeline translates to these other methods, such combination would be likely to define new state-of-the-art results on both tasks.",
"role": "user"
},
{
"content": "[\"FutureWork\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The models were trained with multi-domain data and we improved performance following a domainmixing approach <cite>(Britz et al., 2017)</cite> . | ["Uses"] | science.multicite_intent_classification.1002 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe models were trained with multi-domain data and we improved performance following a domainmixing approach <cite>(Britz et al., 2017)</cite> .",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
]
|
science.discomat_te | | No. | Tg (degC)+-(3degC) | Tx (degC)+-(3degC) | DT (Tx -Tg) (degC)+-(6degC) | n (+-0.0003) at 632.8nm |
| (6) | 347 | 398 | 51 | 2.0403 |
| (7) | 370 | 419 | 49 | 1.9896 |
| (8) | 363 | 407 | 44 | 2.0142 |
| (9) | 378 | 430 | 52 | 1.9884 |
Caption: Thermal property and refractive index of 5% fluoride (chloride) doped glasses.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "6", "type": "Other"}
{"value": "347", "type": "Other"}
{"value": "398", "type": "Other"}
{"value": "51", "type": "Other"}
{"value": "2.0403", "type": "Other"}
{"value": "7", "type": "Other"}
{"value": "370", "type": "Other"}
{"value": "419", "type": "Other"}
{"value": "49", "type": "Other"}
{"value": "1.9896", "type": "Other"}
{"value": "8", "type": "Other"}
{"value": "363", "type": "Other"}
{"value": "407", "type": "Other"}
{"value": "44", "type": "Other"}
{"value": "2.0142", "type": "Other"}
{"value": "9", "type": "Other"}
{"value": "378", "type": "Other"}
{"value": "430", "type": "Other"}
{"value": "52", "type": "Other"}
{"value": "1.9884", "type": "Other"} | science.discomat_te.57 | [
{
"content": "| No. | Tg (degC)+-(3degC) | Tx (degC)+-(3degC) | DT (Tx -Tg) (degC)+-(6degC) | n (+-0.0003) at 632.8nm |\n| (6) | 347 | 398 | 51 | 2.0403 |\n| (7) | 370 | 419 | 49 | 1.9896 |\n| (8) | 363 | 407 | 44 | 2.0142 |\n| (9) | 378 | 430 | 52 | 1.9884 |\n\nCaption: Thermal property and refractive index of 5% fluoride (chloride) doped glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"347\", \"type\": \"Other\"}\n{\"value\": \"398\", \"type\": \"Other\"}\n{\"value\": \"51\", \"type\": \"Other\"}\n{\"value\": \"2.0403\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Other\"}\n{\"value\": \"370\", \"type\": \"Other\"}\n{\"value\": \"419\", \"type\": \"Other\"}\n{\"value\": \"49\", \"type\": \"Other\"}\n{\"value\": \"1.9896\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"363\", \"type\": \"Other\"}\n{\"value\": \"407\", \"type\": \"Other\"}\n{\"value\": \"44\", \"type\": \"Other\"}\n{\"value\": \"2.0142\", \"type\": \"Other\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"378\", \"type\": \"Other\"}\n{\"value\": \"430\", \"type\": \"Other\"}\n{\"value\": \"52\", \"type\": \"Other\"}\n{\"value\": \"1.9884\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
T cell receptor /CD3 microdomains are required to induce the immunologic synapse.
Research Abstract:
T cell receptor (TCR) signaling is initiated and sustained in microclusters; however, it's not known whether signaling also occurs in the TCR-rich central supramolecular activation cluster (cSMAC).
We showed that the cSMAC formed by fusion of microclusters contained more CD45 than microclusters and is a site enriched in lysobisphosphatidic acid, a lipid involved in sorting ubiquitinated membrane proteins for degradation.
Calcium signaling via TCR was blocked within 2 min by anti-MHCp treatment and 1 min by latrunculin-A treatment.
TCR-MHCp interactions in the cSMAC survived these perturbations for 10 min and hence were not sufficient to sustain signaling.
TCR microclusters were also resistant to disruption by anti-MHCp and latrunculin-A treatments.
We propose that TCR signaling is sustained by stabilized microclusters and is terminated in the cSMAC, a structure from which TCR are sorted for degradation.
Our studies reveal a role for F-actin in TCR signaling beyond microcluster formation. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.739 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nT cell receptor /CD3 microdomains are required to induce the immunologic synapse.\n\nResearch Abstract:\nT cell receptor (TCR) signaling is initiated and sustained in microclusters; however, it's not known whether signaling also occurs in the TCR-rich central supramolecular activation cluster (cSMAC).\nWe showed that the cSMAC formed by fusion of microclusters contained more CD45 than microclusters and is a site enriched in lysobisphosphatidic acid, a lipid involved in sorting ubiquitinated membrane proteins for degradation.\nCalcium signaling via TCR was blocked within 2 min by anti-MHCp treatment and 1 min by latrunculin-A treatment.\nTCR-MHCp interactions in the cSMAC survived these perturbations for 10 min and hence were not sufficient to sustain signaling.\nTCR microclusters were also resistant to disruption by anti-MHCp and latrunculin-A treatments.\nWe propose that TCR signaling is sustained by stabilized microclusters and is terminated in the cSMAC, a structure from which TCR are sorted for degradation.\nOur studies reveal a role for F-actin in TCR signaling beyond microcluster formation.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
While most previous efforts for nested entity recognition were limited to named entities,<cite> Lu and Roth (2015)</cite> addressed the problem of nested entity mention detection where mentions can either be named, nominal or pronominal. | ["Background"] | science.multicite_intent_classification.1059 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWhile most previous efforts for nested entity recognition were limited to named entities,<cite> Lu and Roth (2015)</cite> addressed the problem of nested entity mention detection where mentions can either be named, nominal or pronominal.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Homozygous deletion of murine Sbds gene from osterix-expressing mesenchymal stem and progenitor cells (MPCs) induces oxidative stress.
Research Abstract:
Mesenchymal niche cells may drive tissue failure and malignant transformation in the hematopoietic system, but the underlying molecular mechanisms and relevance to human disease remain poorly defined.
Here, we show that perturbation of mesenchymal cells in a mouse model of the pre-leukemic disorder Shwachman-Diamond syndrome (SDS) induces mitochondrial dysfunction, oxidative stress, and activation of DNA damage responses in hematopoietic stem and progenitor cells.
Massive parallel RNA sequencing of highly purified mesenchymal cells in the SDS mouse model and a range of human pre-leukemic syndromes identified p53-S100A8/9-TLR inflammatory signaling as a common driving mechanism of genotoxic stress.
Transcriptional activation of this signaling axis in the mesenchymal niche predicted leukemic evolution and progression-free survival in myelodysplastic syndrome (MDS), the principal leukemia predisposition syndrome.
Collectively, our findings identify mesenchymal niche-induced genotoxic stress in heterotypic stem and progenitor cells through inflammatory signaling as a targetable determinant of disease outcome in human pre-leukemia. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.337 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nHomozygous deletion of murine Sbds gene from osterix-expressing mesenchymal stem and progenitor cells (MPCs) induces oxidative stress.\n\nResearch Abstract:\nMesenchymal niche cells may drive tissue failure and malignant transformation in the hematopoietic system, but the underlying molecular mechanisms and relevance to human disease remain poorly defined.\nHere, we show that perturbation of mesenchymal cells in a mouse model of the pre-leukemic disorder Shwachman-Diamond syndrome (SDS) induces mitochondrial dysfunction, oxidative stress, and activation of DNA damage responses in hematopoietic stem and progenitor cells.\nMassive parallel RNA sequencing of highly purified mesenchymal cells in the SDS mouse model and a range of human pre-leukemic syndromes identified p53-S100A8/9-TLR inflammatory signaling as a common driving mechanism of genotoxic stress.\nTranscriptional activation of this signaling axis in the mesenchymal niche predicted leukemic evolution and progression-free survival in myelodysplastic syndrome (MDS), the principal leukemia predisposition syndrome.\nCollectively, our findings identify mesenchymal niche-induced genotoxic stress in heterotypic stem and progenitor cells through inflammatory signaling as a targetable determinant of disease outcome in human pre-leukemia.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Adversarial Learning for Chinese NER from Crowd Annotations
To quickly obtain new labeled data, we can choose crowdsourcing as an alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. In this paper, we propose an approach to performing crowd annotation learning for Chinese Named Entity Recognition (NER) to make full use of the noisy sequence labels from multiple annotators. Inspired by adversarial learning, our approach uses a common Bi-LSTM and a private Bi-LSTM for representing annotator-generic and -specific information. The annotator-generic information is the common knowledge for entities easily mastered by the crowd. Finally, we build our Chinese NE tagger based on the LSTM-CRF model. In our experiments, we create two data sets for Chinese NER tasks from two domains. The experimental results show that our system achieves better scores than strong baseline systems.
Introduction
There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.
In order to quickly obtain new training data, we can use crowdsourcing as one alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. It is one biggest challenge to build a powerful NER system on such a low quality annotated data. Although we can obtain high quality annotations for each input sentence by majority voting, it can be a waste of human labors to achieve such a goal, especially for some ambiguous sentences which may require a number of annotations to reach an agreement. Thus majority work directly build models on crowd annotations, trying to model the differences among annotators, for example, some of the annotators may be more trustful BIBREF3 , BIBREF4 .
Here we focus mainly on the Chinese NER, which is more difficult than NER for other languages such as English for the lack of morphological variations such as capitalization and in particular the uncertainty in word segmentation. The Chinese NE taggers trained on news domain often perform poor in other domains. Although we can alleviate the problem by using character-level tagging to resolve the problem of poor word segmentation performances BIBREF5 , still there exists a large gap when the target domain changes, especially for the texts of social media. Thus, in order to get a good tagger for new domains and also for the conditions of new entity types, we require large amounts of labeled data. Therefore, crowdsourcing is a reasonable solution for these situations.
In this paper, we propose an approach to training a Chinese NER system on the crowd-annotated data. Our goal is to extract additional annotator independent features by adversarial training, alleviating the annotation noises of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7 . They use it to reduce the negative influences of the input divergences among different domains or languages, while we use adversarial training to reduce the negative influences brought by different crowd annotators. To our best knowledge, we are the first to apply adversarial training for crowd annotation learning.
In the learning framework, we perform adversarial training between the basic NER and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM by the crowd-annotated NE label sequence which reflects the mind of the crowd annotators who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss and meanwhile minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker invariant and NER sensitive.
For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:
CRF Tagging
Finally we feed the resulting features $\mathbf {h}_t^{\text{ner}}, t\in [1, n]$ into a CRF layer directly for NER decoding. CRF tagging is one globally normalized model, aiming to find the best output sequence considering the dependencies between successive labels. In the sequence labeling setting for NER, the output label of one position has a strong dependency on the label of the previous position. For example, the label before “I-XX” must be either “B-XX” or “I-XX”, where “XX” should be exactly the same.
CRF involves two parts for prediction. First we should compute the scores for each label based $\mathbf {h}_t^{\text{ner}}$ , resulting in $\mathbf {o}_t^{\text{ner}}$ , whose dimension is the number of output labels. The other part is a transition matrix $\mathbf {T}$ which defines the scores of two successive labels. $\mathbf {T}$ is also a model parameter. Based on $\mathbf {o}_t^{\text{ner}}$ and $\mathbf {T}$ , we use the Viterbi algorithm to find the best-scoring label sequence.
We can formalize the CRF tagging process as follows:
$$\begin{split}
& \mathbf {o}_t^{\text{ner}} = \mathbf {W}^{\text{ner}} \mathbf {h}_t^{\text{ner}}, \text{~~~~} t \in [1,n] \\
& \text{score}(\mathbf {X}, \mathbf {y}) = \sum _{t = 1}^{n}(\mathbf {o}_{t,y_t} + T_{y_{t-1},y_t}) \\
& \mathbf {y}^{\text{ner}} = \mathop {arg~max}_{\mathbf {y}}\big (\text{score}(\mathbf {X}, \mathbf {y}))\big ), \\
\end{split}$$ (Eq. 8)
where $\text{score}(\cdot )$ is the scoring function for a given output label sequence $\mathbf {y} = y_1y_2 \cdots y_n$ based on input $\mathbf {X}$ , $\mathbf {y}^{\text{ner}}$ is the resulting label sequence, $\mathbf {W}^{\text{ner}}$ is a model parameter.
Data Sets
With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences.
Labeled Data: DL-PS. In Dialog domain (DL), we collect raw sentences from a chatbot application. And then we randomly select 20K sentences as our pool and hire 43 students to annotate the sentences. We ask the annotators to label two types of entities: Person-Name and Song-Name. The annotators label the sentences independently. In particular, each sentence is assigned to three annotators for this data. Although the setting can be wasteful of labor, we can use the resulting dataset to test several well-known baselines such as majority voting.
After annotation, we remove some illegal sentences reported by the annotators. Finally, we have 16,948 sentences annotated by the students. Table 1 shows the information of annotated data. The average Kappa value among the annotators is 0.6033, indicating that the crowd annotators have moderate agreement on identifying entities on this data.
In order to evaluate the system performances, we create a set of corpus with gold annotations. Concretely, we randomly select 1,000 sentences from the final dataset and let two experts generate the gold annotations. Among them, we use 300 sentences as the development set and the remaining 700 as the test set. The rest sentences with only student annotations are used as the training set.
Labeled data: EC-MT and EC-UQ. In E-commerce domain (EC), we collect raw sentences from two types of texts: one is titles of merchandise entries (EC-MT) and another is user queries (EC-UQ). The annotators label five types of entities: Brand, Product, Model, Material, and Specification. These five types of entities are very important for E-commerce platform, for example building knowledge graph of merchandises. Five students participate the annotations for this domain since the number of sentences is small. We use the similar strategy as DL-PS to annotate the sentences, except that only two annotators are assigned for each sentence, because we aim to test the system performances under very small duplicated annotations.
Finally, we obtain 2,337 sentences for EC-MT and 2,300 for EC-UQ. Table 1 shows the information of annotated results. Similarly, we produce the development and test datasets for system evaluation, by randomly selecting 400 sentences and letting two experts to generate the groundtruth annotations. Among them, we use 100 sentences as the development set and the remaining 300 as the test set. The rest sentences with only crowdsourcing annotations are used as the training set.
Unlabeled data. The vector representations of characters are basic inputs of our baseline and proposed models, which are obtained by the looking-up table $\mathbf {E}^W$ . As introduced before, we can use pretrained embeddings from large-scale raw corpus to initialize the table. In order to pretrain the character embeddings, we use one large-scale unlabeled data from the user-generated content in Internet. Totally, we obtain a number of 5M sentences. Finally, we use the tool word2vec to pretrain the character embeddings based on the unlabeled dataset in our experiments.
Settings
For evaluation, we use the entity-level metrics of Precision (P), Recall (R), and their F1 value in our experiments, treating one tagged entity as correct only when it matches the gold entity exactly.
There are several hyper-parameters in the baseline LSTM-CRF and our final models. We set them empirically by the development performances. Concretely, we set the dimension size of the character embeddings by 100, the dimension size of the NE label embeddings by 50, and the dimension sizes of all the other hidden features by 200.
We exploit online training with a mini-batch size 128 to learn model parameters. The max-epoch iteration is set by 200, and the best-epoch model is chosen according to the development performances. We use RMSprop BIBREF28 with a learning rate $10^{-3}$ to update model parameters, and use $l_2$ -regularization by a parameter $10^{-5}$ . We adopt the dropout technique to avoid overfitting by a drop value of $0.2$ .
Main Results
In this section, we show the model performances of our proposed crowdsourcing learning system (ALCrowd), and meanwhile compare it with the other systems mentioned above. Table 2 shows the experimental results on the DL-PS datasets and Table 3 shows the experiment results on the EC-MT and EC-UQ datasets, respectively.
The results of CRF and LSTM-CRF mean that the crowd annotation is an alternative solution with low cost for labeling data that could be used for training a NER system even there are some inconsistencies. Compared with CRF, LSTM-CRF achieves much better performances on all the three data, showing +6.12 F1 improvement on DL-PS, +4.51 on EC-MT, and +9.19 on EC-UQ. This indicates that LSTM-CRF is a very strong baseline system, demonstrating the effectiveness of neural network.
Interestingly, when compared with CRF and LSTM-CRF, CRF-VT and LSTM-CRF-VT trained on the voted version perform worse in the DL-PS dataset. This trend is also mentioned in BIBREF4 nguyen2017aggregating. This fact shows that the majority voting method might be unsuitable for our task. There are two possible reasons accounting for the observation. On the one hand, simple character-level voting based on three annotations for each sentence may be still not enough. In the DL-PS dataset, even with only two predefined entity types, one character can have nine NE labels. Thus the majority-voting may be incapable of handling some cases. While the cost by adding more annotations for each sentence would be greatly increased. On the other hand, the lost information produced by majority-voting may be important, at least the ambiguous annotations denote that the input sentence is difficult for NER. The normal CRF and LSTM-CRF models without discard any annotations can differentiate these difficult contexts through learning.
Three crowd-annotation learning systems provide better performances than their counterpart systems, (CRF-MA VS CRF) and (LSTM-Crowd/ALCrowd VS LSTM-CRF). Compared with the strong baseline LSTM-CRF, ALCrowd shows its advantage with +1.08 F1 improvements on DL-PS, +1.24 on EC-MT, and +2.38 on EC-UQ, respectively. This indicates that adding the crowd-annotation learning is quite useful for building NER systems. In addition, ALCrowd also outperforms LSTM-Crowd on all the datasets consistently, demonstrating the high effectiveness of ALCrowd in extracting worker independent features. Among all the systems, ALCrowd performs the best, and significantly better than all the other models (the p-value is below $10^{-5}$ by using t-test). The results indicate that with the help of adversarial training, our system can learn a better feature representation from crowd annotation.
Conclusions
In this paper, we presented an approach to performing crowd annotation learning based on the idea of adversarial training for Chinese Named Entity Recognition (NER). In our approach, we use a common and private Bi-LSTMs for representing annotator-generic and -specific information, and learn a label Bi-LSTM from the crowd-annotated NE label sequences. Finally, the proposed approach adopts a LSTM-CRF model to perform tagging. In our experiments, we create two data sets for Chinese NER tasks in the dialog and e-commerce domains. The experimental results show that the proposed approach outperforms strong baseline systems.
Figure 1: The framework of the proposed model, which consists of two parts.
Table 1: Statistics of labeled datasets.
Table 2: Main results on the DL-PS data.
Figure 3: Case studies of different systems, where named entities are illustrated by square brackets.
Figure 2: Comparisons by using different character embeddings, where the Y-axis shows the F1 values
Query: What crowdsourcing platform is used?
JSON Response: | {"answer": "They did not use any platform, instead they hired undergraduate students to do the annotation.", "evidence": ["With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences."]} | science.qasper_abstractive_qa.3.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAdversarial Learning for Chinese NER from Crowd Annotations\n\nTo quickly obtain new labeled data, we can choose crowdsourcing as an alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. In this paper, we propose an approach to performing crowd annotation learning for Chinese Named Entity Recognition (NER) to make full use of the noisy sequence labels from multiple annotators. Inspired by adversarial learning, our approach uses a common Bi-LSTM and a private Bi-LSTM for representing annotator-generic and -specific information. The annotator-generic information is the common knowledge for entities easily mastered by the crowd. Finally, we build our Chinese NE tagger based on the LSTM-CRF model. In our experiments, we create two data sets for Chinese NER tasks from two domains. The experimental results show that our system achieves better scores than strong baseline systems.\n\nIntroduction\nThere has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.\nIn order to quickly obtain new training data, we can use crowdsourcing as one alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. It is one biggest challenge to build a powerful NER system on such a low quality annotated data. Although we can obtain high quality annotations for each input sentence by majority voting, it can be a waste of human labors to achieve such a goal, especially for some ambiguous sentences which may require a number of annotations to reach an agreement. Thus majority work directly build models on crowd annotations, trying to model the differences among annotators, for example, some of the annotators may be more trustful BIBREF3 , BIBREF4 .\nHere we focus mainly on the Chinese NER, which is more difficult than NER for other languages such as English for the lack of morphological variations such as capitalization and in particular the uncertainty in word segmentation. The Chinese NE taggers trained on news domain often perform poor in other domains. Although we can alleviate the problem by using character-level tagging to resolve the problem of poor word segmentation performances BIBREF5 , still there exists a large gap when the target domain changes, especially for the texts of social media. Thus, in order to get a good tagger for new domains and also for the conditions of new entity types, we require large amounts of labeled data. Therefore, crowdsourcing is a reasonable solution for these situations.\nIn this paper, we propose an approach to training a Chinese NER system on the crowd-annotated data. Our goal is to extract additional annotator independent features by adversarial training, alleviating the annotation noises of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7 . They use it to reduce the negative influences of the input divergences among different domains or languages, while we use adversarial training to reduce the negative influences brought by different crowd annotators. To our best knowledge, we are the first to apply adversarial training for crowd annotation learning.\nIn the learning framework, we perform adversarial training between the basic NER and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM by the crowd-annotated NE label sequence which reflects the mind of the crowd annotators who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss and meanwhile minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker invariant and NER sensitive.\nFor evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:\n\nCRF Tagging\nFinally we feed the resulting features $\\mathbf {h}_t^{\\text{ner}}, t\\in [1, n]$ into a CRF layer directly for NER decoding. CRF tagging is one globally normalized model, aiming to find the best output sequence considering the dependencies between successive labels. In the sequence labeling setting for NER, the output label of one position has a strong dependency on the label of the previous position. For example, the label before “I-XX” must be either “B-XX” or “I-XX”, where “XX” should be exactly the same.\nCRF involves two parts for prediction. First we should compute the scores for each label based $\\mathbf {h}_t^{\\text{ner}}$ , resulting in $\\mathbf {o}_t^{\\text{ner}}$ , whose dimension is the number of output labels. The other part is a transition matrix $\\mathbf {T}$ which defines the scores of two successive labels. $\\mathbf {T}$ is also a model parameter. Based on $\\mathbf {o}_t^{\\text{ner}}$ and $\\mathbf {T}$ , we use the Viterbi algorithm to find the best-scoring label sequence.\nWe can formalize the CRF tagging process as follows: \n$$\\begin{split}\n& \\mathbf {o}_t^{\\text{ner}} = \\mathbf {W}^{\\text{ner}} \\mathbf {h}_t^{\\text{ner}}, \\text{~~~~} t \\in [1,n] \\\\\n& \\text{score}(\\mathbf {X}, \\mathbf {y}) = \\sum _{t = 1}^{n}(\\mathbf {o}_{t,y_t} + T_{y_{t-1},y_t}) \\\\\n& \\mathbf {y}^{\\text{ner}} = \\mathop {arg~max}_{\\mathbf {y}}\\big (\\text{score}(\\mathbf {X}, \\mathbf {y}))\\big ), \\\\\n\\end{split}$$ (Eq. 8) \nwhere $\\text{score}(\\cdot )$ is the scoring function for a given output label sequence $\\mathbf {y} = y_1y_2 \\cdots y_n$ based on input $\\mathbf {X}$ , $\\mathbf {y}^{\\text{ner}}$ is the resulting label sequence, $\\mathbf {W}^{\\text{ner}}$ is a model parameter.\n\nData Sets\nWith the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences.\nLabeled Data: DL-PS. In Dialog domain (DL), we collect raw sentences from a chatbot application. And then we randomly select 20K sentences as our pool and hire 43 students to annotate the sentences. We ask the annotators to label two types of entities: Person-Name and Song-Name. The annotators label the sentences independently. In particular, each sentence is assigned to three annotators for this data. Although the setting can be wasteful of labor, we can use the resulting dataset to test several well-known baselines such as majority voting.\nAfter annotation, we remove some illegal sentences reported by the annotators. Finally, we have 16,948 sentences annotated by the students. Table 1 shows the information of annotated data. The average Kappa value among the annotators is 0.6033, indicating that the crowd annotators have moderate agreement on identifying entities on this data.\nIn order to evaluate the system performances, we create a set of corpus with gold annotations. Concretely, we randomly select 1,000 sentences from the final dataset and let two experts generate the gold annotations. Among them, we use 300 sentences as the development set and the remaining 700 as the test set. The rest sentences with only student annotations are used as the training set.\nLabeled data: EC-MT and EC-UQ. In E-commerce domain (EC), we collect raw sentences from two types of texts: one is titles of merchandise entries (EC-MT) and another is user queries (EC-UQ). The annotators label five types of entities: Brand, Product, Model, Material, and Specification. These five types of entities are very important for E-commerce platform, for example building knowledge graph of merchandises. Five students participate the annotations for this domain since the number of sentences is small. We use the similar strategy as DL-PS to annotate the sentences, except that only two annotators are assigned for each sentence, because we aim to test the system performances under very small duplicated annotations.\nFinally, we obtain 2,337 sentences for EC-MT and 2,300 for EC-UQ. Table 1 shows the information of annotated results. Similarly, we produce the development and test datasets for system evaluation, by randomly selecting 400 sentences and letting two experts to generate the groundtruth annotations. Among them, we use 100 sentences as the development set and the remaining 300 as the test set. The rest sentences with only crowdsourcing annotations are used as the training set.\nUnlabeled data. The vector representations of characters are basic inputs of our baseline and proposed models, which are obtained by the looking-up table $\\mathbf {E}^W$ . As introduced before, we can use pretrained embeddings from large-scale raw corpus to initialize the table. In order to pretrain the character embeddings, we use one large-scale unlabeled data from the user-generated content in Internet. Totally, we obtain a number of 5M sentences. Finally, we use the tool word2vec to pretrain the character embeddings based on the unlabeled dataset in our experiments.\n\nSettings\nFor evaluation, we use the entity-level metrics of Precision (P), Recall (R), and their F1 value in our experiments, treating one tagged entity as correct only when it matches the gold entity exactly.\nThere are several hyper-parameters in the baseline LSTM-CRF and our final models. We set them empirically by the development performances. Concretely, we set the dimension size of the character embeddings by 100, the dimension size of the NE label embeddings by 50, and the dimension sizes of all the other hidden features by 200.\nWe exploit online training with a mini-batch size 128 to learn model parameters. The max-epoch iteration is set by 200, and the best-epoch model is chosen according to the development performances. We use RMSprop BIBREF28 with a learning rate $10^{-3}$ to update model parameters, and use $l_2$ -regularization by a parameter $10^{-5}$ . We adopt the dropout technique to avoid overfitting by a drop value of $0.2$ .\n\nMain Results\nIn this section, we show the model performances of our proposed crowdsourcing learning system (ALCrowd), and meanwhile compare it with the other systems mentioned above. Table 2 shows the experimental results on the DL-PS datasets and Table 3 shows the experiment results on the EC-MT and EC-UQ datasets, respectively.\nThe results of CRF and LSTM-CRF mean that the crowd annotation is an alternative solution with low cost for labeling data that could be used for training a NER system even there are some inconsistencies. Compared with CRF, LSTM-CRF achieves much better performances on all the three data, showing +6.12 F1 improvement on DL-PS, +4.51 on EC-MT, and +9.19 on EC-UQ. This indicates that LSTM-CRF is a very strong baseline system, demonstrating the effectiveness of neural network.\nInterestingly, when compared with CRF and LSTM-CRF, CRF-VT and LSTM-CRF-VT trained on the voted version perform worse in the DL-PS dataset. This trend is also mentioned in BIBREF4 nguyen2017aggregating. This fact shows that the majority voting method might be unsuitable for our task. There are two possible reasons accounting for the observation. On the one hand, simple character-level voting based on three annotations for each sentence may be still not enough. In the DL-PS dataset, even with only two predefined entity types, one character can have nine NE labels. Thus the majority-voting may be incapable of handling some cases. While the cost by adding more annotations for each sentence would be greatly increased. On the other hand, the lost information produced by majority-voting may be important, at least the ambiguous annotations denote that the input sentence is difficult for NER. The normal CRF and LSTM-CRF models without discard any annotations can differentiate these difficult contexts through learning.\nThree crowd-annotation learning systems provide better performances than their counterpart systems, (CRF-MA VS CRF) and (LSTM-Crowd/ALCrowd VS LSTM-CRF). Compared with the strong baseline LSTM-CRF, ALCrowd shows its advantage with +1.08 F1 improvements on DL-PS, +1.24 on EC-MT, and +2.38 on EC-UQ, respectively. This indicates that adding the crowd-annotation learning is quite useful for building NER systems. In addition, ALCrowd also outperforms LSTM-Crowd on all the datasets consistently, demonstrating the high effectiveness of ALCrowd in extracting worker independent features. Among all the systems, ALCrowd performs the best, and significantly better than all the other models (the p-value is below $10^{-5}$ by using t-test). The results indicate that with the help of adversarial training, our system can learn a better feature representation from crowd annotation.\n\nConclusions\nIn this paper, we presented an approach to performing crowd annotation learning based on the idea of adversarial training for Chinese Named Entity Recognition (NER). In our approach, we use a common and private Bi-LSTMs for representing annotator-generic and -specific information, and learn a label Bi-LSTM from the crowd-annotated NE label sequences. Finally, the proposed approach adopts a LSTM-CRF model to perform tagging. In our experiments, we create two data sets for Chinese NER tasks in the dialog and e-commerce domains. The experimental results show that the proposed approach outperforms strong baseline systems.\n\nFigure 1: The framework of the proposed model, which consists of two parts.\n\nTable 1: Statistics of labeled datasets.\n\nTable 2: Main results on the DL-PS data.\n\nFigure 3: Case studies of different systems, where named entities are illustrated by square brackets.\n\nFigure 2: Comparisons by using different character embeddings, where the Y-axis shows the F1 values\n\nQuery: What crowdsourcing platform is used?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"They did not use any platform, instead they hired undergraduate students to do the annotation.\", \"evidence\": [\"With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences.\"]}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
PGPS : Coupling Policy Gradient with Population-based Search
Main Content:
1 INTRODUCTION In Reinforcement Learning ( RL ) , an agent interacts with the environment , and its goal is to find the policy that maximizes the objective function , which is generally defined as a cumulative discounted reward . Recently , many researchers have worked on combining deep neural networks and a gradient-based RL algorithm , generally known as Deep Reinforcement Learning ( DRL ) . This approach has achieved great success not only in the discrete action domain , such as in Go ( Silver et al. , 2017 ) and Atari games ( Mnih et al. , 2015 ; 2016 ) , but also in the continuous action domain , such as in Robot control ( Fujimoto et al. , 2018 ; Lillicrap et al. , 2015 ; Schulman et al. , 2015 ) . However , it is difficult to use the gradient-based method for the objective function ( J ) , which includes “ many wide flat regions ” since the gradient ( ∇θJ ) is near zero at a flat point . Figure 1 is an extreme case consisting of only flat regions , which is called a piece-wise constant function . This problem remains an unsolved issue in gradient-based DRL with continuous control domains ( Colas et al. , 2018 ) . The Swimmer in a MuJoCo environment ( Todorov et al. , 2012 ) has already been reported to be hard to use the gradient-based method ( Jung et al. , 2020 ; Liu et al. , 2019 ) . Our experiment shows that the objective function of Swimmer includes wide flat regions ( Appendix A ) . The population-based Evolutionary Approach ( EA ) , which is an alternative to the gradient-based method , has also shown successful results in various control tasks ( Conti et al. , 2018 ; Liu et al. , 2019 ; Salimans et al. , 2017 ; Such et al. , 2017 ) . As a population-based search , the EA generates a population of agents to explore policy , and the population is regenerated with improvement in each generation . The EA is also known as a direct policy search ( Schmidhuber & Zhao , 1998 ) because it directly searches by perturbing the parameter of policy . In Figure 1 , the Cross-Entropy Method ( CEM ) as a kind of population-based search is simply described , where the current population sampled from the target distribution is evaluated . Then the distribution is updated to the direction for generating a more promising population . Not depending on the gradient , these approaches are robust to flat or deceptive gradients ( Staines & Barber , 2013 ; Liu et al. , 2019 ) . However , the EA is sample inefficient because it requires a Monte-Carlo evaluation , and the previous results and data generally can not be reused . The off-policy Policy Gradient ( PG ) algorithm uses the data from arbitrary policies to train its actor and critic functions . It generates exciting potential by combining the EA and PG , where the data which is discarded in a standard EA is directly used to train the PG ’ s functions . Khadka & Tumer ( 2018 ) and Pourchot & Sigaud ( 2018 ) introduced a framework combining the EA and off-policy PG . However , the framework of ( Khadka & Tumer , 2018 ) is less efficient to train the policy for general tasks than the PG algorithm alone , and the framework of ( Pourchot & Sigaud , 2018 ) is unsuitable to train the policy for a task providing a deceptive gradient . In this paper , we propose another hybrid algorithm , called Policy Gradient with Population-based Search ( PGPS ) in which the CEM and Twin Delayed Deep Deterministic Policy Gradient ( TD3 ) ( Fujimoto et al. , 2018 ) are combined . It is as robust to a deceptive gradient as the CEM and more efficient to train the policy for general tasks than TD3 . To be robust to a deceptive gradient , the proposed algorithm is constructed in a way similar to the one in ( Khadka & Tumer , 2018 ) , where the TD3 is trained using data from the CEM and periodically participates in the CEM population as an individual ( PG guides EA ) . However , in this basic framework , the TD3 sometimes falls into the inferior solution and inefficiently searches . To get the TD3 out of the inferior solution , we let the EA guide the TD3 by guided policy learning ( Jung et al. , 2020 ) ( EA guides PG ) . Furthermore , the TD3 critic contributes to generating a more promising population by filtering the set of actors sampled from CEM ( Q-critic filtering ) . Lastly , to control the trade-off between the frequency of search and stable estimation , we used evaluation step scheduling in the process of population evaluation ( Increasing evaluation steps ) . It carries out frequent searches when searching far from the optimal , whereas it carries out stable estimation when searching close to the optimal . These approaches bring out more synergies between the CEM and the TD3 while maintaining both the population-based search and the gradient-based search . Consequently , the proposed algorithm is not only robust to a deceptive gradient , but also produces outstanding performances with a low additional computational cost . 2 RELATED WORKS . Recently , beyond the view of an alternative approach , few attempts have been proposed in the form of A supporting B . An attempt is to use EA to fill a replay buffer with diverse samples . In Colas et al . ( 2018 ) , a Goal Exploration Process ( GEP ) , a kind of EA , is firstly applied to search the policy and to fill a replay buffer with the diverse samples , and then the off-policy PG algorithm is sequentially used for fine tuning the parameters of the policy . Another attempt is to combine a population-based approach and PG for efficiently searching a good policy or the good hyper-parameters of an algorithm in parallel multi-learners setting . These applications generally consist of periodically evaluating the population , followed by distributing good knowledge to the other learners . To find the best architecture and hyper-parameters , Jaderberg et al . ( 2017 ) proposed a Population-Based Training ( PBT ) method in which the current best knowledge is periodically transferred to PG learners . Gangwani & Peng ( 2017 ) developed the distilled crossover using imitation learning and mutation based on the PG . Proposed operators transfer the information on current good policies into the next population without destructive change to the neural network . Jung et al . ( 2020 ) introduced a soft-manner guided policy learning to fuse the knowledge of the best policy with other identical multiple learners while maintaining a more extensive search area for the exploration . The idea of combining the population-based EA and off-policy PG was recently introduced by Khadka & Tumer ( 2018 ) . Their approach was called Evolutionary-Guided Reinforcement Learning ( ERL ) in which the Genetic Algorithm ( GA ) and the Deep Deterministic Policy Gradient ( DDPG ) ( Lillicrap et al. , 2015 ) are combined . In ERL frameworks , the GA transfers the experience from evaluation into the DDPG through a replay buffer , and the DDPG transfers the knowledge learned from the policy gradient into the GA by periodically injecting a PG actor into the GA population . Khadka et al . ( 2019 ) expanded the PG algorithm of ERL from a single DDPG learner to multiple TD3 learners with a resource manager . Bodnar et al . ( 2019 ) revised the GA ’ s crossover and mutation to the distilled crossover and proximal mutation inspired by ( Gangwani & Peng , 2017 ) and ( Lehman et al. , 2018 ) to prevent the destruction of neural networks . Pourchot & Sigaud ( 2018 ) introduced another framework , which combines the CEM and the TD3 . In this framework , the TD3 algorithm has only a critic function trained using the experience from the CEM . In order to propagate the knowledge learned by policy gradient to the CEM , half of the population is updated to the direction indicated by the TD3 critic for a fixed number of steps , followed by the evaluation . The policy gradient for half of the population not only enhances the gradient-based learning , but also deteriorate the CEM ’ s robustness over a deceptive gradient . In this paper , we introduce another hybrid algorithm , in which the CEM and the TD3 are combined as in CEMRL ( Pourchot & Sigaud , 2018 ) . However , the TD3 has both actor and critic , which are trained by a gradient from experiences generated by CEM . And then , the TD3 actor periodically participates in CEM population as in ERL ( Khadka & Tumer , 2018 ) . This structure is an effective way to maintain a direct policy search of CEM . To enhance the performance , we introduced new interactions processes between the CEM and TD3 instead of carrying out a policy gradient for numerous individual actors . 3 BACKGROUNDS . Twin Delayed Deep Deterministic Policy Gradient ( TD3 ) RL framework has an agent interacting with an environment generally defined by a Markov Decision Process ( MDP ) . At each timestep t , an agent receives the state st , and takes an action at according to the policy π , and then receives a reward rt and the next state st+1 at next time step t + 1 . The goal of RL is to find the policy that maximizes the discounted cumulative return Rt = ∑∞ k=t γ k−trk where γ is a discounted factor . Off-policy RL can use the data from arbitrary policies to train its actor and critic functions repeatedly , which is a key point for improving recent gradient-based RL . Silver et al . ( 2014 ) introduced the off-policy Deterministic Policy Gradient ( DPG ) , which has an advantage for high-dimensional action spaces . The DDPG ( Lillicrap et al. , 2015 ) was extended from the DPG to apply it to a deep neural network . TD3 ( Fujimoto et al. , 2018 ) is an advanced version of the DDPG , which suffers from the overestimation bias of the critic . To correct this bias , two critics are introduced , and then the critic with the lowest state-action value is taken during the critic update as in the Double Deep Q-Network ( DDQN ) ( Van Hasselt et al. , 2016 ) . Figure 2 ( a ) represents the architecture of the TD3 . Cross Entropy Method ( CEM ) The Evolutionary Approach ( EA ) is a heuristic search method inspired by nature , where the current population is evaluated , and the next population is regenerated using the current evaluation result in order to produce a higher Return , which is also known as Fitness and defined as a cumulative sum of immediate reward for a fixed number of steps . The Estimation of Distribution Algorithm ( EDA ) is a class of the EA : It updates the target distribution to generate a better population . Depending on the update method for the distribution , EDAs are classified as a CEM ( De Boer et al. , 2005 ) , a Covariance Matrix Adaptation Evolutionary Strategy ( Hansen , 2016 ) , an Evolutionary Strategy ( Salimans et al. , 2017 ) , and a Natural Evolutionary Strategy ( Wierstra et al. , 2014 ) . We used the CEM as one part of our proposed algorithm . As shown in Figure 2 ( b ) , the CEM procedures are as follows : The population is sampled from the multivariate Gaussian N ( µ , Σ ) and evaluated , and for the top K , which is smaller than the population size ( N ) , high performing actors are used to update a new mean ( µ ) and covariance ( Σ ) of the target distribution . The weight can be given to each actor according to the rank of the Return ( Hansen , 2016 ) . The elite actor can be passed to the next population , which is known as elitism . The more detailed procedure is reported in ( De Boer et al. , 2005 ) . In this paper , we used a diagonal covariance to reduce the parameters . Population-guided Parallel Policy Search ( P3S ) The Guided Policy Learning ( GPL ) is commonly used when the elite policy leads some sub-policies to a better solution in multiple policies setting . Teh et al . ( 2017 ) introduced GPL for joint learning of numerous tasks in which a common policy encourages local policies to act better . Jung et al . ( 2020 ) proposed a soft-manner GPL , called the Population-guided Parallel Policy Search , for multiple identical learners with the same objective , where a population is evaluated periodically . Then sub-policies are trained to maximize their critic value and to concurrently minimize the distance from the elite policy for the next period . For this purpose , Augmented Loss ( 2 ) is used to train the sub-policies instead of Original Loss ( 1 ) . Original Loss : LO ( π ) = Es∼SS [ −Qπ ( s , π ( s ) ) ] ( 1 ) Augmented Loss : LA ( π , πelite , β ) = Es∼SS [ −Qπ ( s , π ( s ) ) + β||π ( s ) − πelite ( s ) ||22 ] ( 2 ) where π is a trained policy , Qπ is a critic function depending on π , πelite is the elite policy , SS is the set of states , and ||π ( s ) − πelite ( s ) ||22 is the Euclidean distance measure between the trained policy and the elite policy . β is a distance weight and is controlled adaptively . In this paper , we used a revised GPL inspired by P3S so that the CEM elite actor guides the TD3 to better space . | The paper proposes a new method combining evolutionary methods and RL. In particular, the authors combine CEM and TD3 in PGPS. PGPS maintains a population of policies, which interact with the environment to collect data filling the replay buffer. The data in replay buffer is then used to train TD3. PGPS enables information flow in both directions: when the TD3 policy performs poorly, the elite policy from the population is used to guide TD3 by an imitation learning loss; The TD3 critic helps select top policies in the population and the TD3 actor is also included in the population. The experiments on simple Mujoco domains demonstrate the utility of PGPS and the ablation study analyzes the utility of each part of PGPS. | science.mup_single_document_summarization.1052 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nPGPS : Coupling Policy Gradient with Population-based Search\n\nMain Content:\n1 INTRODUCTION In Reinforcement Learning ( RL ) , an agent interacts with the environment , and its goal is to find the policy that maximizes the objective function , which is generally defined as a cumulative discounted reward . Recently , many researchers have worked on combining deep neural networks and a gradient-based RL algorithm , generally known as Deep Reinforcement Learning ( DRL ) . This approach has achieved great success not only in the discrete action domain , such as in Go ( Silver et al. , 2017 ) and Atari games ( Mnih et al. , 2015 ; 2016 ) , but also in the continuous action domain , such as in Robot control ( Fujimoto et al. , 2018 ; Lillicrap et al. , 2015 ; Schulman et al. , 2015 ) . However , it is difficult to use the gradient-based method for the objective function ( J ) , which includes “ many wide flat regions ” since the gradient ( ∇θJ ) is near zero at a flat point . Figure 1 is an extreme case consisting of only flat regions , which is called a piece-wise constant function . This problem remains an unsolved issue in gradient-based DRL with continuous control domains ( Colas et al. , 2018 ) . The Swimmer in a MuJoCo environment ( Todorov et al. , 2012 ) has already been reported to be hard to use the gradient-based method ( Jung et al. , 2020 ; Liu et al. , 2019 ) . Our experiment shows that the objective function of Swimmer includes wide flat regions ( Appendix A ) . The population-based Evolutionary Approach ( EA ) , which is an alternative to the gradient-based method , has also shown successful results in various control tasks ( Conti et al. , 2018 ; Liu et al. , 2019 ; Salimans et al. , 2017 ; Such et al. , 2017 ) . As a population-based search , the EA generates a population of agents to explore policy , and the population is regenerated with improvement in each generation . The EA is also known as a direct policy search ( Schmidhuber & Zhao , 1998 ) because it directly searches by perturbing the parameter of policy . In Figure 1 , the Cross-Entropy Method ( CEM ) as a kind of population-based search is simply described , where the current population sampled from the target distribution is evaluated . Then the distribution is updated to the direction for generating a more promising population . Not depending on the gradient , these approaches are robust to flat or deceptive gradients ( Staines & Barber , 2013 ; Liu et al. , 2019 ) . However , the EA is sample inefficient because it requires a Monte-Carlo evaluation , and the previous results and data generally can not be reused . The off-policy Policy Gradient ( PG ) algorithm uses the data from arbitrary policies to train its actor and critic functions . It generates exciting potential by combining the EA and PG , where the data which is discarded in a standard EA is directly used to train the PG ’ s functions . Khadka & Tumer ( 2018 ) and Pourchot & Sigaud ( 2018 ) introduced a framework combining the EA and off-policy PG . However , the framework of ( Khadka & Tumer , 2018 ) is less efficient to train the policy for general tasks than the PG algorithm alone , and the framework of ( Pourchot & Sigaud , 2018 ) is unsuitable to train the policy for a task providing a deceptive gradient . In this paper , we propose another hybrid algorithm , called Policy Gradient with Population-based Search ( PGPS ) in which the CEM and Twin Delayed Deep Deterministic Policy Gradient ( TD3 ) ( Fujimoto et al. , 2018 ) are combined . It is as robust to a deceptive gradient as the CEM and more efficient to train the policy for general tasks than TD3 . To be robust to a deceptive gradient , the proposed algorithm is constructed in a way similar to the one in ( Khadka & Tumer , 2018 ) , where the TD3 is trained using data from the CEM and periodically participates in the CEM population as an individual ( PG guides EA ) . However , in this basic framework , the TD3 sometimes falls into the inferior solution and inefficiently searches . To get the TD3 out of the inferior solution , we let the EA guide the TD3 by guided policy learning ( Jung et al. , 2020 ) ( EA guides PG ) . Furthermore , the TD3 critic contributes to generating a more promising population by filtering the set of actors sampled from CEM ( Q-critic filtering ) . Lastly , to control the trade-off between the frequency of search and stable estimation , we used evaluation step scheduling in the process of population evaluation ( Increasing evaluation steps ) . It carries out frequent searches when searching far from the optimal , whereas it carries out stable estimation when searching close to the optimal . These approaches bring out more synergies between the CEM and the TD3 while maintaining both the population-based search and the gradient-based search . Consequently , the proposed algorithm is not only robust to a deceptive gradient , but also produces outstanding performances with a low additional computational cost . 2 RELATED WORKS . Recently , beyond the view of an alternative approach , few attempts have been proposed in the form of A supporting B . An attempt is to use EA to fill a replay buffer with diverse samples . In Colas et al . ( 2018 ) , a Goal Exploration Process ( GEP ) , a kind of EA , is firstly applied to search the policy and to fill a replay buffer with the diverse samples , and then the off-policy PG algorithm is sequentially used for fine tuning the parameters of the policy . Another attempt is to combine a population-based approach and PG for efficiently searching a good policy or the good hyper-parameters of an algorithm in parallel multi-learners setting . These applications generally consist of periodically evaluating the population , followed by distributing good knowledge to the other learners . To find the best architecture and hyper-parameters , Jaderberg et al . ( 2017 ) proposed a Population-Based Training ( PBT ) method in which the current best knowledge is periodically transferred to PG learners . Gangwani & Peng ( 2017 ) developed the distilled crossover using imitation learning and mutation based on the PG . Proposed operators transfer the information on current good policies into the next population without destructive change to the neural network . Jung et al . ( 2020 ) introduced a soft-manner guided policy learning to fuse the knowledge of the best policy with other identical multiple learners while maintaining a more extensive search area for the exploration . The idea of combining the population-based EA and off-policy PG was recently introduced by Khadka & Tumer ( 2018 ) . Their approach was called Evolutionary-Guided Reinforcement Learning ( ERL ) in which the Genetic Algorithm ( GA ) and the Deep Deterministic Policy Gradient ( DDPG ) ( Lillicrap et al. , 2015 ) are combined . In ERL frameworks , the GA transfers the experience from evaluation into the DDPG through a replay buffer , and the DDPG transfers the knowledge learned from the policy gradient into the GA by periodically injecting a PG actor into the GA population . Khadka et al . ( 2019 ) expanded the PG algorithm of ERL from a single DDPG learner to multiple TD3 learners with a resource manager . Bodnar et al . ( 2019 ) revised the GA ’ s crossover and mutation to the distilled crossover and proximal mutation inspired by ( Gangwani & Peng , 2017 ) and ( Lehman et al. , 2018 ) to prevent the destruction of neural networks . Pourchot & Sigaud ( 2018 ) introduced another framework , which combines the CEM and the TD3 . In this framework , the TD3 algorithm has only a critic function trained using the experience from the CEM . In order to propagate the knowledge learned by policy gradient to the CEM , half of the population is updated to the direction indicated by the TD3 critic for a fixed number of steps , followed by the evaluation . The policy gradient for half of the population not only enhances the gradient-based learning , but also deteriorate the CEM ’ s robustness over a deceptive gradient . In this paper , we introduce another hybrid algorithm , in which the CEM and the TD3 are combined as in CEMRL ( Pourchot & Sigaud , 2018 ) . However , the TD3 has both actor and critic , which are trained by a gradient from experiences generated by CEM . And then , the TD3 actor periodically participates in CEM population as in ERL ( Khadka & Tumer , 2018 ) . This structure is an effective way to maintain a direct policy search of CEM . To enhance the performance , we introduced new interactions processes between the CEM and TD3 instead of carrying out a policy gradient for numerous individual actors . 3 BACKGROUNDS . Twin Delayed Deep Deterministic Policy Gradient ( TD3 ) RL framework has an agent interacting with an environment generally defined by a Markov Decision Process ( MDP ) . At each timestep t , an agent receives the state st , and takes an action at according to the policy π , and then receives a reward rt and the next state st+1 at next time step t + 1 . The goal of RL is to find the policy that maximizes the discounted cumulative return Rt = ∑∞ k=t γ k−trk where γ is a discounted factor . Off-policy RL can use the data from arbitrary policies to train its actor and critic functions repeatedly , which is a key point for improving recent gradient-based RL . Silver et al . ( 2014 ) introduced the off-policy Deterministic Policy Gradient ( DPG ) , which has an advantage for high-dimensional action spaces . The DDPG ( Lillicrap et al. , 2015 ) was extended from the DPG to apply it to a deep neural network . TD3 ( Fujimoto et al. , 2018 ) is an advanced version of the DDPG , which suffers from the overestimation bias of the critic . To correct this bias , two critics are introduced , and then the critic with the lowest state-action value is taken during the critic update as in the Double Deep Q-Network ( DDQN ) ( Van Hasselt et al. , 2016 ) . Figure 2 ( a ) represents the architecture of the TD3 . Cross Entropy Method ( CEM ) The Evolutionary Approach ( EA ) is a heuristic search method inspired by nature , where the current population is evaluated , and the next population is regenerated using the current evaluation result in order to produce a higher Return , which is also known as Fitness and defined as a cumulative sum of immediate reward for a fixed number of steps . The Estimation of Distribution Algorithm ( EDA ) is a class of the EA : It updates the target distribution to generate a better population . Depending on the update method for the distribution , EDAs are classified as a CEM ( De Boer et al. , 2005 ) , a Covariance Matrix Adaptation Evolutionary Strategy ( Hansen , 2016 ) , an Evolutionary Strategy ( Salimans et al. , 2017 ) , and a Natural Evolutionary Strategy ( Wierstra et al. , 2014 ) . We used the CEM as one part of our proposed algorithm . As shown in Figure 2 ( b ) , the CEM procedures are as follows : The population is sampled from the multivariate Gaussian N ( µ , Σ ) and evaluated , and for the top K , which is smaller than the population size ( N ) , high performing actors are used to update a new mean ( µ ) and covariance ( Σ ) of the target distribution . The weight can be given to each actor according to the rank of the Return ( Hansen , 2016 ) . The elite actor can be passed to the next population , which is known as elitism . The more detailed procedure is reported in ( De Boer et al. , 2005 ) . In this paper , we used a diagonal covariance to reduce the parameters . Population-guided Parallel Policy Search ( P3S ) The Guided Policy Learning ( GPL ) is commonly used when the elite policy leads some sub-policies to a better solution in multiple policies setting . Teh et al . ( 2017 ) introduced GPL for joint learning of numerous tasks in which a common policy encourages local policies to act better . Jung et al . ( 2020 ) proposed a soft-manner GPL , called the Population-guided Parallel Policy Search , for multiple identical learners with the same objective , where a population is evaluated periodically . Then sub-policies are trained to maximize their critic value and to concurrently minimize the distance from the elite policy for the next period . For this purpose , Augmented Loss ( 2 ) is used to train the sub-policies instead of Original Loss ( 1 ) . Original Loss : LO ( π ) = Es∼SS [ −Qπ ( s , π ( s ) ) ] ( 1 ) Augmented Loss : LA ( π , πelite , β ) = Es∼SS [ −Qπ ( s , π ( s ) ) + β||π ( s ) − πelite ( s ) ||22 ] ( 2 ) where π is a trained policy , Qπ is a critic function depending on π , πelite is the elite policy , SS is the set of states , and ||π ( s ) − πelite ( s ) ||22 is the Euclidean distance measure between the trained policy and the elite policy . β is a distance weight and is controlled adaptively . In this paper , we used a revised GPL inspired by P3S so that the CEM elite actor guides the TD3 to better space .",
"role": "user"
},
{
"content": "The paper proposes a new method combining evolutionary methods and RL. In particular, the authors combine CEM and TD3 in PGPS. PGPS maintains a population of policies, which interact with the environment to collect data filling the replay buffer. The data in replay buffer is then used to train TD3. PGPS enables information flow in both directions: when the TD3 policy performs poorly, the elite policy from the population is used to guide TD3 by an imitation learning loss; The TD3 critic helps select top policies in the population and the TD3 actor is also included in the population. The experiments on simple Mujoco domains demonstrate the utility of PGPS and the ablation study analyzes the utility of each part of PGPS.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Subthreshold depression is highly prevalent in the general population and causes great loss to society especially in the form of reduced productivity while at work (presenteeism). We developed a highly-structured manualized eight-session cognitive-behavioral program with a focus on subthreshold depression in the workplace and to be administered via telephone by trained psychotherapists (tCBT).
ABSTRACT.METHODS: We conducted a parallel-group, non-blinded randomized controlled trial of tCBT in addition to the pre-existing Employee Assistance Program (EAP) versus EAP alone among workers with subthreshold depression at a large manufacturing company in Japan. The primary outcomes were depression severity as measured with Beck Depression Inventory-II (BDI-II) and presenteeism as measured with World Health Organization Health and Work Productivity Questionnaire (HPQ). In the course of the trial the follow-up period was shortened in order to increase acceptability of the study.
ABSTRACT.RESULTS: The planned sample size was 108 per arm but the trial was stopped early due to low accrual. Altogether 118 subjects were randomized to tCBT+EAP (n = 58) and to EAP alone (n = 60). The BDI-II scores fell from the mean of 17.3 at baseline to 11.0 in the intervention group and to 15.7 in the control group after 4 months (p<0.001, Effect size = 0.69, 95%CI: 0.32 to 1.05). However, there was no statistically significant decrease in absolute and relative presenteeism (p = 0.44, ES = 0.15, −0.21 to 0.52, and p = 0.50, ES = 0.02, −0.34 to 0.39, respectively).
ABSTRACT.CONCLUSION: Remote CBT, including tCBT, may provide easy access to quality-assured effective psychotherapy for people in the work force who present with subthreshold depression. Further studies are needed to evaluate the effectiveness of this approach in longer terms. The study was funded by Sekisui Chemicals Co. Ltd.
ABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov NCT00885014
Extracted Data: | [["trained psychotherapists in addition to the pre-existing Employee Assistance Program (EAP)", "Employee Assistance Program", "Beck Depression Inventory-II (BDI-II) after 4 months", "significantly decreased", "The BDI-II scores fell from the mean of 17.3 at baseline to 11.0 in the intervention group and to 15.7 in the control group after 4 months (p\u003c0.001, Effect size = 0.69, 95%CI: 0.32 to 1.05). "]] | science.evidence_inference.5 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Subthreshold depression is highly prevalent in the general population and causes great loss to society especially in the form of reduced productivity while at work (presenteeism). We developed a highly-structured manualized eight-session cognitive-behavioral program with a focus on subthreshold depression in the workplace and to be administered via telephone by trained psychotherapists (tCBT).\n\nABSTRACT.METHODS: We conducted a parallel-group, non-blinded randomized controlled trial of tCBT in addition to the pre-existing Employee Assistance Program (EAP) versus EAP alone among workers with subthreshold depression at a large manufacturing company in Japan. The primary outcomes were depression severity as measured with Beck Depression Inventory-II (BDI-II) and presenteeism as measured with World Health Organization Health and Work Productivity Questionnaire (HPQ). In the course of the trial the follow-up period was shortened in order to increase acceptability of the study.\n\nABSTRACT.RESULTS: The planned sample size was 108 per arm but the trial was stopped early due to low accrual. Altogether 118 subjects were randomized to tCBT+EAP (n = 58) and to EAP alone (n = 60). The BDI-II scores fell from the mean of 17.3 at baseline to 11.0 in the intervention group and to 15.7 in the control group after 4 months (p<0.001, Effect size = 0.69, 95%CI: 0.32 to 1.05). However, there was no statistically significant decrease in absolute and relative presenteeism (p = 0.44, ES = 0.15, −0.21 to 0.52, and p = 0.50, ES = 0.02, −0.34 to 0.39, respectively).\n\nABSTRACT.CONCLUSION: Remote CBT, including tCBT, may provide easy access to quality-assured effective psychotherapy for people in the work force who present with subthreshold depression. Further studies are needed to evaluate the effectiveness of this approach in longer terms. The study was funded by Sekisui Chemicals Co. Ltd.\n\nABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov NCT00885014\n\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"trained psychotherapists in addition to the pre-existing Employee Assistance Program (EAP)\", \"Employee Assistance Program\", \"Beck Depression Inventory-II (BDI-II) after 4 months\", \"significantly decreased\", \"The BDI-II scores fell from the mean of 17.3 at baseline to 11.0 in the intervention group and to 15.7 in the control group after 4 months (p\\u003c0.001, Effect size = 0.69, 95%CI: 0.32 to 1.05). \"]]",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Towards Understanding Neural Machine Translation with Word Importance
Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory. In this work, we propose to address this gap by focusing on understanding the input-output behavior of NMT models. Specifically, we measure the word importance by attributing the NMT output to every input word through a gradient-based method. We validate the approach on a couple of perturbation operations, language pairs, and model architectures, demonstrating its superiority on identifying input words with higher influence on translation performance. Encouragingly, the calculated importance can serve as indicators of input words that are under-translated by NMT models. Furthermore, our analysis reveals that words of certain syntactic categories have higher importance while the categories vary across language pairs, which can inspire better design principles of NMT architectures for multi-lingual translation.
Introduction ::: Contributions
Our main contributions are:
Our study demonstrates the necessity and effectiveness of exploiting the intermediate gradients for estimating word importance.
We find that word importance is useful for understanding NMT by identifying under-translated words.
We provide empirical support for the design principle of NMT architectures: essential inductive bias (e.g., language characteristics) should be considered for model design.
Related Work ::: Exploiting Gradients for Model Interpretation
The intermediate gradients have proven to be useful in interpreting deep learning models, such as NLP models BIBREF14, BIBREF15 and computer vision models BIBREF16, BIBREF9. Among all gradient-based approaches, the integrated gradients BIBREF9 is appealing since it does not need any instrumentation of the architecture and can be computed easily by calling gradient operations. In this work, we employ the IG method to interpret NMT models and reveal several interesting findings, which can potentially help debug NMT models and design better architectures for specific language pairs.
Approach ::: Neural Machine Translation
In machine translation task, a NMT model $F$: $\textbf {x} \rightarrow \textbf {y}$ maximizes the probability of a target sequence $\textbf {y} = \lbrace y_1,...,y_N\rbrace $ given a source sentence $\textbf {x} = \lbrace x_1,...,x_M\rbrace $:
where $\mathbf {\theta }$ is the model parameter and $\textbf {y}_{<n}$ is a partial translation. At each time step n, the model generates an output word of the highest probability based on the source sentence $\textbf {x}$ and the partial translation $\textbf {y}_{<n}$. The training objective is to minimize the negative log-likelihood loss on the training corpus. During the inference, beam search is employed to decode a more optimal translation. In this study, we investigate the contribution of each input word $x_m$ to the translated sentence ${\bf y}$.
Approach ::: Integrated Gradients
In this work, we resort to a gradient-based method, integrated gradients BIBREF9 (IG), which was originally proposed to attribute the model predictions to input features. It exploits the handy model gradient information by integrating first-order derivatives. IG is implementation invariant and does not require neural models to be differentiable or smooth, thereby is suitable for complex neural networks like Transformer. In this work, we use IG to estimate the word importance in an input sentence precisely.
Formally, let $\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\textbf {x}^{\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\textbf {x})_n$ is the model output (i.e., $P(y_n|\textbf {y}_{<n},\textbf {x})$) at time step $n$. Integrated gradients is then defined as the integral of gradients along the straightline path from the baseline $\textbf {x}^{\prime }$ to the input $\textbf {x}$. In detail, the contribution of the $m^{th}$ word in $\textbf {x}$ to the prediction of $F(\textbf {x})_n$ is defined as follows.
where $\frac{\partial {F(\textbf {x})_n}}{\partial {\textbf {x}_m}}$ is the gradient of $F(\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. In this paper, as suggested, the baseline input $\textbf {x}^{\prime }$ is set as a sequence of zero embeddings that has the same sequence length $M$. In this way, we can compute the contribution of a specific input word to a designated output word. Since the above formula is intractable for deep neural models, we approximate it by summing the gradients along a multi-step path from baseline $\textbf {x}^{\prime }$ to the input x.
where $S$ denotes the number of steps that are uniformly distributed along the path. The IG will be more accurate if a larger S is used. In our preliminary experiments, we varied the steps and found 300 steps yielding fairly good performance.
Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function. Figure FIGREF13 illustrates an example of the calculated word importance and the contribution matrix, where an English sentence is translated into a French sentence using the Transformer model. A negative contribution value indicates that the input word has negative effects on the output word.
Experiment ::: Implementation
We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. We implement the Attribution method based on the Fairseq-py BIBREF19 framework for the above models. All models are trained on the training corpus for 100k steps under the standard settings, which achieve comparable translation results. All the following experiments are conducted on the test dataset, and we estimate the input word importance using the model generated hypotheses.
In the following experiments, we compare IG (Attribution) with several black-box methods (i.e., Content, Frequency, Attention) as introduced in Section SECREF8. In Section SECREF21, to ensure that the translation performance decrease attributes to the selected words instead of the perturbation operations, we randomly select the same number of words to perturb (Random), which serves as a baseline. Since there is no ranking for content words, we randomly select a set of content words as important words. To avoid the potential bias introduced by randomness (i.e., Random and Content), we repeat the experiments for 10 times and report the averaged results. We calculate the Attention importance in a similar manner as the Attribution, except that the attention scores use a max operation due to the better performance.
Experiment ::: Evaluation
We evaluate the effectiveness of estimating word importance by the translation performance decrease. More specifically, unlike the usual way, we measure the decrease of translation performance when perturbing a set of important words that are of top-most word importance in a sentence. The more translation performance degrades, the more important the word is.
We use the standard BLEU score as the evaluation metric for translation performance. To make the conclusion more convincing, we conduct experiments on different types of synthetic perturbations (Section SECREF21), as well as different NMT architectures and language pairs (Section SECREF27). In addition, we compare with a supervised erasure method, which requires ground-truth translations for scoring word importance (Section SECREF30).
Experiment ::: Results on Different Perturbations ::: Important words are more influential on translation performance than the others.
Under three different perturbations, perturbing words of top-most importance leads to lower BLEU scores than Random selected words. It confirms the existence of important words, which have greater impacts on translation performance. Furthermore, perturbing important words identified by Attribution outperforms the Random method by a large margin (more than 4.0 BLEU under 5 operations).
Experiment ::: Results on Different NMT Architecture and Language Pairs ::: Different Language Pairs and Directions
We further conduct experiments on another two language pairs (i.e., English$\Rightarrow $French, English$\Rightarrow $Japanese in Figures FIGREF20(b, c)) as well as the reverse directions (Figures FIGREF20(d, e, f)) using Transformer under the Mask perturbation. In all the cases, Attribution shows the best performance while Random achieves the worst result. More specifically, Attribution method shows similar translation quality degradation on all three language-pairs, which declines to around the half of the original BLEU score with five operations.
Analysis ::: Effect on Detecting Translation Errors
In this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. Given 500 Chinese$\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total.
Table TABREF32 lists the accuracy of detecting under-translation errors by comparing words of least importance and human-annotated under-translated words. As seen, our Attribution method consistently and significantly outperforms both Erasure and Attention approaches. By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. Although the accuracy is not high, it is worth noting that our under-translation method is very simple and straightforward. This is potentially useful for debugging NMT models, e.g., automatic post-editing with constraint decoding BIBREF26, BIBREF27.
Analysis ::: Analysis on Linguistic Properties
In this section, we analyze the linguistic characteristics of important words identified by the attribution-based approach. Specifically, we investigate several representative sets of linguistic properties, including POS tags, and fertility, and depth in a syntactic parse tree. In these analyses, we multiply the word importance with the corresponding sentence length for fair comparison. We use a decision tree based regression model to calculate the correlation between the importance and linguistic properties.
Table TABREF34 lists the correlations, where a higher value indicates a stronger correlation. We find that the syntactic information is almost independent of the word importance value. Instead, the word importance strongly correlates with the POS tags and fertility features, and these features in total contribute over 95%. Therefore, in the following analyses, we mainly focus on the POS tags (Table TABREF35) and fertility properties (Table TABREF36). For better illustration, we calculate the distribution over the linguistic property based on both the Attribution importance (“Attr.”) and the word frequency (“Count”) inside a sentence. The larger the relative increase between these two values, the more important the linguistic property is.
Discussion and Conclusion
We approach understanding NMT by investigating the word importance via a gradient-based method, which bridges the gap between word importance and translation performance. Empirical results show that the gradient-based method is superior to several black-box methods in estimating the word importance. Further analyses show that important words are of distinct syntactic categories on different language pairs, which might support the viewpoint that essential inductive bias should be introduced into the model design BIBREF28. Our study also suggests the possibility of detecting the notorious under-translation problem via the gradient-based method.
This paper is an initiating step towards the general understanding of NMT models, which may bring some potential improvements, such as
Interactive MT and Constraint Decoding BIBREF29, BIBREF26: The model pays more attention to the detected unimportant words, which are possibly under-translated;
Adaptive Input Embedding BIBREF30: We can extend the adaptive softmax BIBREF31 to the input embedding of variable capacity – more important words are assigned with more capacity;
NMT Architecture Design: The language-specific inductive bias (e.g., different behaviors on POS) should be incorporated into the model design.
We can also explore other applications of word importance to improve NMT models, such as more tailored training methods. In general, model interpretability can build trust in model predictions, help error diagnosis and facilitate model refinement. We expect our work could shed light on the NMT model understanding and benefit the model improvement.
There are many possible ways to implement the general idea of exploiting gradients for model interpretation. The aim of this paper is not to explore this whole space but simply to show that some fairly straightforward implementations work well. Our approach can benefit from advanced exploitation of the gradients or other useful intermediate information, which we leave to the future work.
Analyses on Reverse Directions
2
We analyze the distribution of syntactic categories and word fertility on the same language pairs with reverse directions, i.e., English$\Rightarrow $Chinese, French$\Rightarrow $English, and Japanese$\Rightarrow $English. The results are shown in Table TABREF43 and Table TABREF44 respectively, where we observe similar findings as before. We use the Stanford POS tagger to parse the English and French input sentences, and use the Kytea to parse the Japanese input sentences.
Analyses on Reverse Directions ::: Word Fertility
The word fertility also shows similar trend as the previously reported results, where one-to-many fertility is more important and null-aligned fertility is less important. Interestingly, many-to-one fertility shows an increasing trend on Japanese$\Rightarrow $English translation, but the proportion is relatively small.
In summary, the findings on language pairs with reverse directions still agree with the findings in the paper, which further confirms the generality of our experimental findings.
Figure 2: Effect of three types of synthetic perturbations on Chinese⇒English translation using the Transformer.
Figure 3: Effect of the Mask perturbation on (a) Chinese⇒English translation using the RNN-Search model, (b, c, d, e, f) other language pairs and directions using Transformer model.
Figure 4: Effect of Attribution and Erasure methods on Chinese⇒English translation with Mask perturbation.
Table 1: F1 accuracy of detecting under-translation errors with the estimated word importance.
Table 3: Distribution of syntactic categories (e.g. content words vs. content-free words) based on word count (“Count”) and Attribution importance (“Attri.”). “4” denotes relative change over the count-based distribution.
Table 4: Distributions of word fertility and their relative change based on Attribution importance and word count.
Table 5: Distribution of syntactic categories with reverse directions based on word count (“Count”) and Attribution importance (“Attri.”). “4” denotes relative change over the count-based distribution.
Table 6: Distributions of word fertility and relative changes with reverse directions.
Query: How do their models decide how much improtance to give to the output words?
JSON Response: | {"answer": "They compute the gradient of the output at each time step with respect to the input words to decide the importance.", "evidence": ["Formally, let $\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\textbf {x}^{\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\textbf {y}_{\u003cn},\\textbf {x})$) at time step $n$. Integrated gradients is then defined as the integral of gradients along the straightline path from the baseline $\\textbf {x}^{\\prime }$ to the input $\\textbf {x}$. In detail, the contribution of the $m^{th}$ word in $\\textbf {x}$ to the prediction of $F(\\textbf {x})_n$ is defined as follows.", "where $\\frac{\\partial {F(\\textbf {x})_n}}{\\partial {\\textbf {x}_m}}$ is the gradient of $F(\\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. In this paper, as suggested, the baseline input $\\textbf {x}^{\\prime }$ is set as a sequence of zero embeddings that has the same sequence length $M$. In this way, we can compute the contribution of a specific input word to a designated output word. Since the above formula is intractable for deep neural models, we approximate it by summing the gradients along a multi-step path from baseline $\\textbf {x}^{\\prime }$ to the input x."]} | science.qasper_abstractive_qa.147.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nTowards Understanding Neural Machine Translation with Word Importance\n\nAlthough neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory. In this work, we propose to address this gap by focusing on understanding the input-output behavior of NMT models. Specifically, we measure the word importance by attributing the NMT output to every input word through a gradient-based method. We validate the approach on a couple of perturbation operations, language pairs, and model architectures, demonstrating its superiority on identifying input words with higher influence on translation performance. Encouragingly, the calculated importance can serve as indicators of input words that are under-translated by NMT models. Furthermore, our analysis reveals that words of certain syntactic categories have higher importance while the categories vary across language pairs, which can inspire better design principles of NMT architectures for multi-lingual translation.\n\nIntroduction ::: Contributions\nOur main contributions are:\nOur study demonstrates the necessity and effectiveness of exploiting the intermediate gradients for estimating word importance.\nWe find that word importance is useful for understanding NMT by identifying under-translated words.\nWe provide empirical support for the design principle of NMT architectures: essential inductive bias (e.g., language characteristics) should be considered for model design.\n\nRelated Work ::: Exploiting Gradients for Model Interpretation\nThe intermediate gradients have proven to be useful in interpreting deep learning models, such as NLP models BIBREF14, BIBREF15 and computer vision models BIBREF16, BIBREF9. Among all gradient-based approaches, the integrated gradients BIBREF9 is appealing since it does not need any instrumentation of the architecture and can be computed easily by calling gradient operations. In this work, we employ the IG method to interpret NMT models and reveal several interesting findings, which can potentially help debug NMT models and design better architectures for specific language pairs.\n\nApproach ::: Neural Machine Translation\nIn machine translation task, a NMT model $F$: $\\textbf {x} \\rightarrow \\textbf {y}$ maximizes the probability of a target sequence $\\textbf {y} = \\lbrace y_1,...,y_N\\rbrace $ given a source sentence $\\textbf {x} = \\lbrace x_1,...,x_M\\rbrace $:\nwhere $\\mathbf {\\theta }$ is the model parameter and $\\textbf {y}_{<n}$ is a partial translation. At each time step n, the model generates an output word of the highest probability based on the source sentence $\\textbf {x}$ and the partial translation $\\textbf {y}_{<n}$. The training objective is to minimize the negative log-likelihood loss on the training corpus. During the inference, beam search is employed to decode a more optimal translation. In this study, we investigate the contribution of each input word $x_m$ to the translated sentence ${\\bf y}$.\n\nApproach ::: Integrated Gradients\nIn this work, we resort to a gradient-based method, integrated gradients BIBREF9 (IG), which was originally proposed to attribute the model predictions to input features. It exploits the handy model gradient information by integrating first-order derivatives. IG is implementation invariant and does not require neural models to be differentiable or smooth, thereby is suitable for complex neural networks like Transformer. In this work, we use IG to estimate the word importance in an input sentence precisely.\nFormally, let $\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\textbf {x}^{\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\textbf {y}_{<n},\\textbf {x})$) at time step $n$. Integrated gradients is then defined as the integral of gradients along the straightline path from the baseline $\\textbf {x}^{\\prime }$ to the input $\\textbf {x}$. In detail, the contribution of the $m^{th}$ word in $\\textbf {x}$ to the prediction of $F(\\textbf {x})_n$ is defined as follows.\nwhere $\\frac{\\partial {F(\\textbf {x})_n}}{\\partial {\\textbf {x}_m}}$ is the gradient of $F(\\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. In this paper, as suggested, the baseline input $\\textbf {x}^{\\prime }$ is set as a sequence of zero embeddings that has the same sequence length $M$. In this way, we can compute the contribution of a specific input word to a designated output word. Since the above formula is intractable for deep neural models, we approximate it by summing the gradients along a multi-step path from baseline $\\textbf {x}^{\\prime }$ to the input x.\nwhere $S$ denotes the number of steps that are uniformly distributed along the path. The IG will be more accurate if a larger S is used. In our preliminary experiments, we varied the steps and found 300 steps yielding fairly good performance.\nFollowing the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \\times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function. Figure FIGREF13 illustrates an example of the calculated word importance and the contribution matrix, where an English sentence is translated into a French sentence using the Transformer model. A negative contribution value indicates that the input word has negative effects on the output word.\n\nExperiment ::: Implementation\nWe choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. We implement the Attribution method based on the Fairseq-py BIBREF19 framework for the above models. All models are trained on the training corpus for 100k steps under the standard settings, which achieve comparable translation results. All the following experiments are conducted on the test dataset, and we estimate the input word importance using the model generated hypotheses.\nIn the following experiments, we compare IG (Attribution) with several black-box methods (i.e., Content, Frequency, Attention) as introduced in Section SECREF8. In Section SECREF21, to ensure that the translation performance decrease attributes to the selected words instead of the perturbation operations, we randomly select the same number of words to perturb (Random), which serves as a baseline. Since there is no ranking for content words, we randomly select a set of content words as important words. To avoid the potential bias introduced by randomness (i.e., Random and Content), we repeat the experiments for 10 times and report the averaged results. We calculate the Attention importance in a similar manner as the Attribution, except that the attention scores use a max operation due to the better performance.\n\nExperiment ::: Evaluation\nWe evaluate the effectiveness of estimating word importance by the translation performance decrease. More specifically, unlike the usual way, we measure the decrease of translation performance when perturbing a set of important words that are of top-most word importance in a sentence. The more translation performance degrades, the more important the word is.\nWe use the standard BLEU score as the evaluation metric for translation performance. To make the conclusion more convincing, we conduct experiments on different types of synthetic perturbations (Section SECREF21), as well as different NMT architectures and language pairs (Section SECREF27). In addition, we compare with a supervised erasure method, which requires ground-truth translations for scoring word importance (Section SECREF30).\n\nExperiment ::: Results on Different Perturbations ::: Important words are more influential on translation performance than the others.\nUnder three different perturbations, perturbing words of top-most importance leads to lower BLEU scores than Random selected words. It confirms the existence of important words, which have greater impacts on translation performance. Furthermore, perturbing important words identified by Attribution outperforms the Random method by a large margin (more than 4.0 BLEU under 5 operations).\n\nExperiment ::: Results on Different NMT Architecture and Language Pairs ::: Different Language Pairs and Directions\nWe further conduct experiments on another two language pairs (i.e., English$\\Rightarrow $French, English$\\Rightarrow $Japanese in Figures FIGREF20(b, c)) as well as the reverse directions (Figures FIGREF20(d, e, f)) using Transformer under the Mask perturbation. In all the cases, Attribution shows the best performance while Random achieves the worst result. More specifically, Attribution method shows similar translation quality degradation on all three language-pairs, which declines to around the half of the original BLEU score with five operations.\n\nAnalysis ::: Effect on Detecting Translation Errors\nIn this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total.\nTable TABREF32 lists the accuracy of detecting under-translation errors by comparing words of least importance and human-annotated under-translated words. As seen, our Attribution method consistently and significantly outperforms both Erasure and Attention approaches. By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. Although the accuracy is not high, it is worth noting that our under-translation method is very simple and straightforward. This is potentially useful for debugging NMT models, e.g., automatic post-editing with constraint decoding BIBREF26, BIBREF27.\n\nAnalysis ::: Analysis on Linguistic Properties\nIn this section, we analyze the linguistic characteristics of important words identified by the attribution-based approach. Specifically, we investigate several representative sets of linguistic properties, including POS tags, and fertility, and depth in a syntactic parse tree. In these analyses, we multiply the word importance with the corresponding sentence length for fair comparison. We use a decision tree based regression model to calculate the correlation between the importance and linguistic properties.\nTable TABREF34 lists the correlations, where a higher value indicates a stronger correlation. We find that the syntactic information is almost independent of the word importance value. Instead, the word importance strongly correlates with the POS tags and fertility features, and these features in total contribute over 95%. Therefore, in the following analyses, we mainly focus on the POS tags (Table TABREF35) and fertility properties (Table TABREF36). For better illustration, we calculate the distribution over the linguistic property based on both the Attribution importance (“Attr.”) and the word frequency (“Count”) inside a sentence. The larger the relative increase between these two values, the more important the linguistic property is.\n\nDiscussion and Conclusion\nWe approach understanding NMT by investigating the word importance via a gradient-based method, which bridges the gap between word importance and translation performance. Empirical results show that the gradient-based method is superior to several black-box methods in estimating the word importance. Further analyses show that important words are of distinct syntactic categories on different language pairs, which might support the viewpoint that essential inductive bias should be introduced into the model design BIBREF28. Our study also suggests the possibility of detecting the notorious under-translation problem via the gradient-based method.\nThis paper is an initiating step towards the general understanding of NMT models, which may bring some potential improvements, such as\nInteractive MT and Constraint Decoding BIBREF29, BIBREF26: The model pays more attention to the detected unimportant words, which are possibly under-translated;\nAdaptive Input Embedding BIBREF30: We can extend the adaptive softmax BIBREF31 to the input embedding of variable capacity – more important words are assigned with more capacity;\nNMT Architecture Design: The language-specific inductive bias (e.g., different behaviors on POS) should be incorporated into the model design.\nWe can also explore other applications of word importance to improve NMT models, such as more tailored training methods. In general, model interpretability can build trust in model predictions, help error diagnosis and facilitate model refinement. We expect our work could shed light on the NMT model understanding and benefit the model improvement.\nThere are many possible ways to implement the general idea of exploiting gradients for model interpretation. The aim of this paper is not to explore this whole space but simply to show that some fairly straightforward implementations work well. Our approach can benefit from advanced exploitation of the gradients or other useful intermediate information, which we leave to the future work.\n\nAnalyses on Reverse Directions\n2\nWe analyze the distribution of syntactic categories and word fertility on the same language pairs with reverse directions, i.e., English$\\Rightarrow $Chinese, French$\\Rightarrow $English, and Japanese$\\Rightarrow $English. The results are shown in Table TABREF43 and Table TABREF44 respectively, where we observe similar findings as before. We use the Stanford POS tagger to parse the English and French input sentences, and use the Kytea to parse the Japanese input sentences.\n\nAnalyses on Reverse Directions ::: Word Fertility\nThe word fertility also shows similar trend as the previously reported results, where one-to-many fertility is more important and null-aligned fertility is less important. Interestingly, many-to-one fertility shows an increasing trend on Japanese$\\Rightarrow $English translation, but the proportion is relatively small.\nIn summary, the findings on language pairs with reverse directions still agree with the findings in the paper, which further confirms the generality of our experimental findings.\n\nFigure 2: Effect of three types of synthetic perturbations on Chinese⇒English translation using the Transformer.\n\nFigure 3: Effect of the Mask perturbation on (a) Chinese⇒English translation using the RNN-Search model, (b, c, d, e, f) other language pairs and directions using Transformer model.\n\nFigure 4: Effect of Attribution and Erasure methods on Chinese⇒English translation with Mask perturbation.\n\nTable 1: F1 accuracy of detecting under-translation errors with the estimated word importance.\n\nTable 3: Distribution of syntactic categories (e.g. content words vs. content-free words) based on word count (“Count”) and Attribution importance (“Attri.”). “4” denotes relative change over the count-based distribution.\n\nTable 4: Distributions of word fertility and their relative change based on Attribution importance and word count.\n\nTable 5: Distribution of syntactic categories with reverse directions based on word count (“Count”) and Attribution importance (“Attri.”). “4” denotes relative change over the count-based distribution.\n\nTable 6: Distributions of word fertility and relative changes with reverse directions.\n\nQuery: How do their models decide how much improtance to give to the output words?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"They compute the gradient of the output at each time step with respect to the input words to decide the importance.\", \"evidence\": [\"Formally, let $\\\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\\\textbf {x}^{\\\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\\\textbf {y}_{\\u003cn},\\\\textbf {x})$) at time step $n$. Integrated gradients is then defined as the integral of gradients along the straightline path from the baseline $\\\\textbf {x}^{\\\\prime }$ to the input $\\\\textbf {x}$. In detail, the contribution of the $m^{th}$ word in $\\\\textbf {x}$ to the prediction of $F(\\\\textbf {x})_n$ is defined as follows.\", \"where $\\\\frac{\\\\partial {F(\\\\textbf {x})_n}}{\\\\partial {\\\\textbf {x}_m}}$ is the gradient of $F(\\\\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. In this paper, as suggested, the baseline input $\\\\textbf {x}^{\\\\prime }$ is set as a sequence of zero embeddings that has the same sequence length $M$. In this way, we can compute the contribution of a specific input word to a designated output word. Since the above formula is intractable for deep neural models, we approximate it by summing the gradients along a multi-step path from baseline $\\\\textbf {x}^{\\\\prime }$ to the input x.\"]}",
"role": "assistant"
}
]
|
science.discomat_te | | Modifier cations | DF in 1020 m-2 | Dr in 10-10 m | DN MeO | Ternary crystalline compounds |
| Ca2+-Mg2+ | 0.10-0.16 | 0.28 | ~2 | None |
| Ba2+-Na+ | 0.05-0.07 | 0.45 | ~3 | NaBaP3O9; Na4BaP6O18 |
| Ba2+-Sr2+ | 0.04 | 0.16 | ~1 | None |
| K+-Na+ | 0.04-0.06 | 0.35 | ~1 | Na2KP3O9 |
Caption: Difference in modifier cation field strengths, DF, in ionic radii, Dr, in coordination numbers, DN MeO and crystalline compounds known in the ternary systems a
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.10", "type": "Other"}
{"value": "0.16", "type": "Other"}
{"value": "0.28", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.07", "type": "Other"}
{"value": "0.45", "type": "Other"}
{"value": "0.04", "type": "Other"}
{"value": "0.16", "type": "Other"}
{"value": "0.04", "type": "Other"}
{"value": "0.06", "type": "Other"}
{"value": "0.35", "type": "Other"} | science.discomat_te.8 | [
{
"content": "| Modifier cations | DF in 1020 m-2 | Dr in 10-10 m | DN MeO | Ternary crystalline compounds |\n| Ca2+-Mg2+ | 0.10-0.16 | 0.28 | ~2 | None |\n| Ba2+-Na+ | 0.05-0.07 | 0.45 | ~3 | NaBaP3O9; Na4BaP6O18 |\n| Ba2+-Sr2+ | 0.04 | 0.16 | ~1 | None |\n| K+-Na+ | 0.04-0.06 | 0.35 | ~1 | Na2KP3O9 |\n\nCaption: Difference in modifier cation field strengths, DF, in ionic radii, Dr, in coordination numbers, DN MeO and crystalline compounds known in the ternary systems a\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.10\", \"type\": \"Other\"}\n{\"value\": \"0.16\", \"type\": \"Other\"}\n{\"value\": \"0.28\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.07\", \"type\": \"Other\"}\n{\"value\": \"0.45\", \"type\": \"Other\"}\n{\"value\": \"0.04\", \"type\": \"Other\"}\n{\"value\": \"0.16\", \"type\": \"Other\"}\n{\"value\": \"0.04\", \"type\": \"Other\"}\n{\"value\": \"0.06\", \"type\": \"Other\"}\n{\"value\": \"0.35\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.discomat_te | | | QW | QP | SC | AT |
| 2901 | | | | |
| 2904 | | | | |
| 3190 | | | | |
Caption: Studied samples (checked) from three melts made using various heat treatments
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "2901", "type": "Other"}
{"value": "2904", "type": "Other"}
{"value": "3190", "type": "Other"} | science.discomat_te.150 | [
{
"content": "| | QW | QP | SC | AT |\n| 2901 | | | | |\n| 2904 | | | | |\n| 3190 | | | | |\n\nCaption: Studied samples (checked) from three melts made using various heat treatments\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"2901\", \"type\": \"Other\"}\n{\"value\": \"2904\", \"type\": \"Other\"}\n{\"value\": \"3190\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Early diagnosis and appropriate treatment of infections in cirrhosis are crucial. As new guidelines in this context, particularly for health care-associated (HCA) infections, would be needed, we performed a trial documenting whether an empirical broad-spectrum antibiotic therapy is more effective than the standard one for these infections. Because of the higher daily cost of broad-spectrum than standard antibiotics, we performed a cost analysis to compare: 1) total drug costs, 2) profitability of hospital admissions.
ABSTRACT.METHODS: This retrospective observational analysis was performed on patients enrolled in the trial NCT01820026, in which consecutive cirrhotic patients with HCA infections were randomly assigned to a standard vs a broad-spectrum treatment. Antibiotic daily doses, days of treatment, length of hospital stay, and DRG (diagnosis-related group) were recorded from the clinical trial medical records. The profitability of hospitalizations was calculated considering DRG tariffs divided by length of hospital stay.
ABSTRACT.RESULTS: We considered 84 patients (42 for each group). The standard therapy allowed to obtain a first-line treatment cost lower than in the broad-spectrum therapy. Anyway, the latter, being related to a lower failure rate (19% vs 57.1%), resulted in cost saving in terms of cumulative antibiotic costs (first- and second-line treatments). The mean cost saving per patient for the broad-spectrum arm was €44.18 (−37.6%), with a total cost saving of about €2,000. Compared to standard group, we observed a statistically significant reduction in hospital stay from 17.8 to 11.8 days (p<0.002) for patients treated with broad-spectrum antibiotics. The distribution of DRG tariffs was similar in the two groups. According to DRG, the shorter length of hospital stay of the broad-spectrum group involved a higher mean profitable daily cost than standard group (€345.61 vs €252.23; +37%).
ABSTRACT.CONCLUSION: Our study supports the idea that the use of a broad-spectrum empirical treatment for HCA infections in cirrhosis would be cost-saving and that hospitals need to be aware of the clinical and economic consequences of a wrong antibiotic treatment in this setting.
Extracted Data: | [["broad-spectrum treatment", "standard", "hospital stay", "significantly decreased", "Compared to standard group, we observed a statistically significant reduction in hospital stay from 17.8 to 11.8 days (p\u0026lt;0.002) for patients treated with broad-spectrum antibiotics."], ["broad-spectrum treatment", "standard", "hospital stay", "significantly decreased", "statistically significant reduction in hospital stay from 17.8 to 11.8 days (p\u003c0.002) for patients treated with broad-spectrum antibiotics"]] | science.evidence_inference.225 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Early diagnosis and appropriate treatment of infections in cirrhosis are crucial. As new guidelines in this context, particularly for health care-associated (HCA) infections, would be needed, we performed a trial documenting whether an empirical broad-spectrum antibiotic therapy is more effective than the standard one for these infections. Because of the higher daily cost of broad-spectrum than standard antibiotics, we performed a cost analysis to compare: 1) total drug costs, 2) profitability of hospital admissions.\n\nABSTRACT.METHODS: This retrospective observational analysis was performed on patients enrolled in the trial NCT01820026, in which consecutive cirrhotic patients with HCA infections were randomly assigned to a standard vs a broad-spectrum treatment. Antibiotic daily doses, days of treatment, length of hospital stay, and DRG (diagnosis-related group) were recorded from the clinical trial medical records. The profitability of hospitalizations was calculated considering DRG tariffs divided by length of hospital stay.\n\nABSTRACT.RESULTS: We considered 84 patients (42 for each group). The standard therapy allowed to obtain a first-line treatment cost lower than in the broad-spectrum therapy. Anyway, the latter, being related to a lower failure rate (19% vs 57.1%), resulted in cost saving in terms of cumulative antibiotic costs (first- and second-line treatments). The mean cost saving per patient for the broad-spectrum arm was €44.18 (−37.6%), with a total cost saving of about €2,000. Compared to standard group, we observed a statistically significant reduction in hospital stay from 17.8 to 11.8 days (p<0.002) for patients treated with broad-spectrum antibiotics. The distribution of DRG tariffs was similar in the two groups. According to DRG, the shorter length of hospital stay of the broad-spectrum group involved a higher mean profitable daily cost than standard group (€345.61 vs €252.23; +37%).\n\nABSTRACT.CONCLUSION: Our study supports the idea that the use of a broad-spectrum empirical treatment for HCA infections in cirrhosis would be cost-saving and that hospitals need to be aware of the clinical and economic consequences of a wrong antibiotic treatment in this setting.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"broad-spectrum treatment\", \"standard\", \"hospital stay\", \"significantly decreased\", \"Compared to standard group, we observed a statistically significant reduction in hospital stay from 17.8 to 11.8 days (p\\u0026lt;0.002) for patients treated with broad-spectrum antibiotics.\"], [\"broad-spectrum treatment\", \"standard\", \"hospital stay\", \"significantly decreased\", \"statistically significant reduction in hospital stay from 17.8 to 11.8 days (p\\u003c0.002) for patients treated with broad-spectrum antibiotics\"]]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
The RANK-RANK-Ligand (RANKL) pathway is involved in the development of Aire-expressing medullary thymic epithelial cells (mTECs).
Research Abstract:
Medullary thymic epithelial cells (mTECs) establish T cell self-tolerance through the expression of autoimmune regulator (Aire) and peripheral tissue-specific self-antigens.
However, signals underlying mTEC development remain largely unclear.
Here, we demonstrate crucial regulation of mTEC development by receptor activator of NF-kappaB (RANK) and CD40 signals.
Whereas only RANK signaling was essential for mTEC development during embryogenesis, in postnatal mice, cooperation between CD40 and RANK signals was required for mTEC development to successfully establish the medullary microenvironment.
Ligation of RANK or CD40 on fetal thymic stroma in vitro induced mTEC development in a tumor necrosis factor-associated factor 6 (TRAF6)-, NF-kappaB inducing kinase (NIK)-, and IkappaB kinase beta (IKKbeta)-dependent manner.
These results show that developmental-stage-dependent cooperation between RANK and CD40 promotes mTEC development, thereby establishing self-tolerance. | {"verdict": "SUPPORT", "evidence": ["Here, we demonstrate crucial regulation of mTEC development by receptor activator of NF-kappaB (RANK) and CD40 signals.", "Whereas only RANK signaling was essential for mTEC development during embryogenesis, in postnatal mice, cooperation between CD40 and RANK signals was required for mTEC development to successfully establish the medullary microenvironment.", "Ligation of RANK or CD40 on fetal thymic stroma in vitro induced mTEC development in a tumor necrosis factor-associated factor 6 (TRAF6)-, NF-kappaB inducing kinase (NIK)-, and IkappaB kinase beta (IKKbeta)-dependent manner.", "These results show that developmental-stage-dependent cooperation between RANK and CD40 promotes mTEC development, thereby establishing self-tolerance."]} | science.scifact_entailment.770 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThe RANK-RANK-Ligand (RANKL) pathway is involved in the development of Aire-expressing medullary thymic epithelial cells (mTECs).\n\nResearch Abstract:\nMedullary thymic epithelial cells (mTECs) establish T cell self-tolerance through the expression of autoimmune regulator (Aire) and peripheral tissue-specific self-antigens.\nHowever, signals underlying mTEC development remain largely unclear.\nHere, we demonstrate crucial regulation of mTEC development by receptor activator of NF-kappaB (RANK) and CD40 signals.\nWhereas only RANK signaling was essential for mTEC development during embryogenesis, in postnatal mice, cooperation between CD40 and RANK signals was required for mTEC development to successfully establish the medullary microenvironment.\nLigation of RANK or CD40 on fetal thymic stroma in vitro induced mTEC development in a tumor necrosis factor-associated factor 6 (TRAF6)-, NF-kappaB inducing kinase (NIK)-, and IkappaB kinase beta (IKKbeta)-dependent manner.\nThese results show that developmental-stage-dependent cooperation between RANK and CD40 promotes mTEC development, thereby establishing self-tolerance.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Here, we demonstrate crucial regulation of mTEC development by receptor activator of NF-kappaB (RANK) and CD40 signals.\", \"Whereas only RANK signaling was essential for mTEC development during embryogenesis, in postnatal mice, cooperation between CD40 and RANK signals was required for mTEC development to successfully establish the medullary microenvironment.\", \"Ligation of RANK or CD40 on fetal thymic stroma in vitro induced mTEC development in a tumor necrosis factor-associated factor 6 (TRAF6)-, NF-kappaB inducing kinase (NIK)-, and IkappaB kinase beta (IKKbeta)-dependent manner.\", \"These results show that developmental-stage-dependent cooperation between RANK and CD40 promotes mTEC development, thereby establishing self-tolerance.\"]}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Weakly-Supervised Learning of Disentangled and Interpretable Skills for Hierarchical Reinforcement Learning
Main Content:
1 INTRODUCTION . Deep reinforcement learning ( RL ) has achieved great success for various applications , ranging from playing games ( Mnih et al. , 2013 ; Silver et al. , 2016 ) to complex locomotion and robots control ( Lillicrap et al. , 2015 ; Schulman et al. , 2015 ; 2017 ; Haarnoja et al. , 2017 ) . However , several challenges such as sparse rewards or inadaptability to unlearned tasks still hinder its practical usages in real-world problems . To alleviate these challenges , hierarchical RL ( Sutton et al. , 1999 ; Dietterich , 2000 ) has been studied where an agent pre-learns reusable skills from prior experiences and hierarchically solve higher-level problems by combining the skills . Two issues need to be resolved for the successful deployment of the hierarchical RL ; how to learn useful skills and how to effectively make use of the skills for various downstream tasks . A possible approach for skills that can be applicable to various downstream tasks is to learn without task-specific rewards ( Eysenbach et al. , 2018 ) . Another way to achieve the useful skills is to make them predictable . To learn those skills , ( Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ) proposed to combine model-free and model-based RL approaches , where a skill-based predictive model , a dynamics model over the latent space , is trained together with a skill-based policy network.By using the predictive model for model-based planning during testing time , these works showed to efficiently solve various downstream tasks without the need to learn additional higher-level policies . However , since they did not consider how the skill is embedded into the latent space , the factors consisting of the skill often are entangled when the skill is a continuous latent variable . Compared to the entangled one , the skill consisting of disentangled factors has several advantages in its applicability in that the factors can be separately interpreted and handled . In this paper , we introduce a novel WEakly-supervised learning approach for learning Disentangled and Interpretable Skills ( WEDIS ) from the continuous latent representations of trajectories that are composed of several generative factors , e.g. , speed , direction , and curvature . To this end , we propose a weakly-supervised trajectory variational autoencoder ( WET-VAE ) model that is an extension of the trajectory VAE ( Co-Reyes et al. , 2018 ) consisting of a recurrent neural network ( RNN ) . We leverage the weak labels ( Margonis et al. , 2020 ) to enforce an inductive bias on the model , which explicitly enforces the trajectory representations to be disentangled into factors of interest that we intend the model to learn . To train the WET-VAE , we first synthetically generate a trajectory dataset by the combination of several factors of interest , because the trajectories obtained by an online exploration are likely to contain meaningless samples such as random walks . With the trajectory dataset , the WET-VAE model is trained apart from a policy network . It is worthy of noting that while this is similar to imitation learning , our data acquisition is much simpler than collecting expert demonstration . Sequentially , we train a skill-based policy network with the WET-VAE fixed . Given the latent representations as skills , the skill-based policy network is trained to generate similar trajectories with the decoder of the WET-VAE by minimizing the KL divergence between two trajectory distributions . However , training a policy to generate a trajectory given a skill is difficult since it is unlikely to explore the corresponding trajectory in the training procedure . Instead , we propose to train the policy network with the single-step transitions and perform the trajectory-level behaviors in the test time , which can be achieved with the knowledge of the learned skills . This simplifies the training procedure of the policy , and also allows for a sample-efficient large-scale planning strategy with the scaled trajectories . In experiments in Mujoco Ant environment , we show that our disentangled and interpretable skills are effective in solving challenging sparse reward and long-horizon problems in 2D navigation in mazes . 2 RELATED WORKS . Numerous approaches ( Sutton et al. , 1999 ; Bacon et al. , 2017 ; Florensa et al. , 2017 ; Hausman et al. , 2018 ; Haarnoja et al. , 2018 ; Eysenbach et al. , 2018 ; Shankar et al. , 2019 ; Shankar & Gupta , 2020 ; Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ) have explored on learning reusable skills in RL to solve challenging long-horizon or sparse reward problems . ( Sutton et al. , 1999 ) pioneered a way to control higher-level abstraction by introducing an option-framework , which learns low-level primitives in a top-down manner . ( Bacon et al. , 2017 ) proposed an option-critic architecture that learns sub-policies of options . Also , several works ( Florensa et al. , 2017 ; Hausman et al. , 2018 ; Haarnoja et al. , 2018 ) introduced to learn skills with multiple tasks in a bottom-up manner . However , designing reward functions still requires expert knowledge and such task-specific rewards may limit a generalization ability of the agent to the downstream tasks . To overcome this issue , recent works ( Eysenbach et al. , 2018 ; Achiam et al. , 2018 ; Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ; Campos et al. , 2020 ) proposed an unsupervised framework that does not require a hand-specified reward function . Model-based RL methods ( Levine et al. , 2016 ; Nagabandi et al. , 2018 ; Chua et al. , 2018 ; Ha & Schmidhuber , 2018 ) aim to learn a dynamics model of the environment . While these works are capable of solving unlearned tasks without the needs of an additional learning via planning through the dynamics model , they are often at the risk of falling into over-fitting due to a huge capacity of the required data to explore the environment . Instead of learning the underlying dynamics , some methods ( Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ) attempted to combine the model-free and model-based RL for learning a skill-based predictive model and a skill-based policy . Despite the improved results , they still suffer from the lack of the interpretability of the skills . Learning disentangled latent representations of factors of variation within dataset is beneficial to a variety of downstream tasks such as few-shot classification and data generation , thanks to the interpretability of the disentangled factors . ( Higgins et al. , 2016 ) proposed β-VAE , an unsupervised method to learn the disentangled representations by modifying the weight of the KL-divergence term of the VAE ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) greater than one . Afterwards , while several variants ( Kim & Mnih , 2018 ; Chen et al. , 2018 ) improved the β-VAE by introducing a total correlation ( TC ) term , ( Locatello et al. , 2019a ) pointed out the inherent limitation of the purely unsupervised approaches and emphasized the need of an inductive bias . Recent works ( Locatello et al. , 2019b ; Shu et al. , 2019 ; Locatello et al. , 2020 ; Margonis et al. , 2020 ) proposed various forms of weak supervision to encourage the inductive bias to learn the disentangled representations . While there are various categories on the weak labels , we used them in terms of ones that 1 ) are roughly divided into fewer classes and 2 ) can be obtained with programming by using the knowledge on the factors without the need for manual labeling . 3 PRELIMINARIES . Consider a Markov decision process ( MDP ) ( S , A , P , r , ρ0 , γ ) , where S is a set of states , A is a set of action , P : S × A × S → R+ is a transition probability distribution , r : S × A→ R is a reward function , ρ0 : S → R+ is an initial state distribution and γ ∈ ( 0 , 1 ) is a discount factor . We denote a stochastic policy as π : S × A→ R+ . RL has a goal of maximizing the expected discounted sum of rewards for an episode horizon HE : η ( π ) = Eπ [ HE∑ t=0 γtr ( st , at ) ] ( 1 ) Variational autoencoder ( VAE ) optimizes variational the lower bound of the marginal likelihood of dataset . Given an observed datapoint x , the variational lowerbound is defined as : log pθ ( x ) ≥ L ( θ , φ ; x ) = Eqφ ( z |x ) [ log pθ ( x |z ) ] −DKL ( qφ ( z |x ) ‖p ( z ) ) , ( 2 ) where p ( z ) is a prior distribution of a latent variable z , the decoder pθ ( x |z ) is a generative model given a latent z parameterized by θ , and the encoder qφ ( z |x ) is an approximate posterior distribution parameterized by φ . In Equation 2 , the first term is the reconstruction term of the autoencoder , and the second term is the KL divergence regularization . In our work , we will focus on the aspect of the generative model of the decoder . 4 WEAKLY SUPERVISED LEARNING OF DISENTANGLED AND INTERPRETABLE SKILL ( WEDIS ) . Our framework consists of three stages ; 1 ) generating trajectory training data with factors of interest 2 ) training the WET-VAE model , whose decoder is used for the predictive model and 3 ) training a policy network to generate the similar trajectories with the predictive model conditioned on skills . The generation process of the trajectory dataset is explained in Appendix A.1.1 due to the lack of space . As a notation , we will use superscript for factors and subscript for time steps . The WEDIS algorithm is summarized in Figure 2 . 4.1 LEARNING DISENTANGLED AND INTERPRETABLE REPRESENTATIONS OF TRAJECTORY To learn the temporally extended behaviors , ( CoReyes et al. , 2018 ) proposed a trajectory VAE model consisting of the RNN architecture . The trajectory VAE learns latent representations of trajectories , which will be used as skills for a policy . However , this model , which learns the representations in the unsupervised manner , does not consider which factors of variation of a trajectory are embedded in the latent space . Thus , the factors that are often entangled make the interpretation of the representations difficult , exposing limitations in further applicability of the learned skills . To address this , we propose a weakly-supervised trajectory VAE ( WET-VAE ) model that leverages an inductive bias in the form of weak supervision ( Margonis et al. , 2020 ) to explicitly enforce the model to learn the disentangled representations consisting of desired factors , yielding interpretable skills . Consider a latent-variable generative model p ( τ |z ) to generate a trajectory τ given a latent variable z . We assume the fixed initial state s0 at the origin as when given other initial states we can obtain the next states with a linear translation based on the initial states such that p ( s|s0 , z ) = p ( s − s0|z ) . Considering M factors of interest to generate trajectories , the weak supervision can be provided by simply adding a set of M weak labels y = { y1 , ... , yM } to the generative model , where each label ym is one-hot encoded vector for each factor . The idea is that a latent representation z ∈ RM , which can generate the trajectories based on the M disentangled generative factors , should also be able to reconstruct the factors . Assuming that the trajectory and the factors that are represented as the multiple labels satisfy conditional independence with respect to a given z , the generative model is extended with the labels p ( τ , y |z ) = p ( τ |z ) p ( y1|z ) · · ·p ( yM |z ) . Then , the variational lower bound of the marginal joint distribution p ( τ , y ) can be formulated as follows : L ( θ , φ ; τ , y ) = Eqφ ( z |τ , y ) [ log pθ ( τ , y |z ) ] −DKL ( qφ ( z |τ , y ) ‖p ( z ) ) = Eqφ ( z |τ , y ) [ T∑ t=1 log pθ ( st |s1 : t−1 , z ) + M∑ m=1 log pθ ( y m |z ) ] −DKL ( qφ ( z |τ , y ) ‖p ( z ) ) , ( 3 ) where pθ ( τ |z ) = pθ ( s1|z ) pθ ( s2|s1 , z ) · · · pθ ( sT |s1 , s2 , ... , sT−1 , z ) . Since pθ ( ym |z ) can be understood as a classifier for each factor , the factors should be distinctly embedded in a latent representation z for high classification probability . As a result , this enforces a disentangled representation of the factors . Practically , the scales of the values of the log-likelihoods of the states and labels are different due to the difference in dimensionality . To fill the gap , we introduce a balancing weight γ inspired by ( Margonis et al. , 2020 ) . We also use a weight β > 1 to emphasize the KL divergence term for better disentanglement in the spirit of the β-VAE ( Higgins et al. , 2016 ) . Then , the final objective function becomes : L ( θ , φ ; τ , y , β , γ ) = Eqφ ( z |τ , y ) [ T∑ t=1 log pθ ( st |s1 : t−1 , z ) + γ · M∑ m=1 log pθ ( y m |z ) ] − β ·DKL ( qφ ( z |τ , y ) ‖p ( z ) ) ( 4 ) The WET-VAE model is trained to maximize Equation 4 . To handle the sequential data , we use the RNN architecture with LSTMs as in Figure 1 . With the support of the weak supervision , this model can learn the disentangled representations of trajectories that consist of factors of variation contributing over different time steps . | The paper at hand proposes a new framework for pre-training skill policies (WEDIS), and use them for control in a hierarchical setup with MPC. The main idea is that skill policies should follow a set of generated trajectories based on some salient factors. For this, the authors train a VAE that will then provide both the control variables (latent encoding) and a predictive model of the trajectory (decoder). The skill policy is trained to match the trajectories of the predictive model, and the predictive model is then used for MPC. | science.mup_single_document_summarization.5780 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nWeakly-Supervised Learning of Disentangled and Interpretable Skills for Hierarchical Reinforcement Learning\n\nMain Content:\n1 INTRODUCTION . Deep reinforcement learning ( RL ) has achieved great success for various applications , ranging from playing games ( Mnih et al. , 2013 ; Silver et al. , 2016 ) to complex locomotion and robots control ( Lillicrap et al. , 2015 ; Schulman et al. , 2015 ; 2017 ; Haarnoja et al. , 2017 ) . However , several challenges such as sparse rewards or inadaptability to unlearned tasks still hinder its practical usages in real-world problems . To alleviate these challenges , hierarchical RL ( Sutton et al. , 1999 ; Dietterich , 2000 ) has been studied where an agent pre-learns reusable skills from prior experiences and hierarchically solve higher-level problems by combining the skills . Two issues need to be resolved for the successful deployment of the hierarchical RL ; how to learn useful skills and how to effectively make use of the skills for various downstream tasks . A possible approach for skills that can be applicable to various downstream tasks is to learn without task-specific rewards ( Eysenbach et al. , 2018 ) . Another way to achieve the useful skills is to make them predictable . To learn those skills , ( Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ) proposed to combine model-free and model-based RL approaches , where a skill-based predictive model , a dynamics model over the latent space , is trained together with a skill-based policy network.By using the predictive model for model-based planning during testing time , these works showed to efficiently solve various downstream tasks without the need to learn additional higher-level policies . However , since they did not consider how the skill is embedded into the latent space , the factors consisting of the skill often are entangled when the skill is a continuous latent variable . Compared to the entangled one , the skill consisting of disentangled factors has several advantages in its applicability in that the factors can be separately interpreted and handled . In this paper , we introduce a novel WEakly-supervised learning approach for learning Disentangled and Interpretable Skills ( WEDIS ) from the continuous latent representations of trajectories that are composed of several generative factors , e.g. , speed , direction , and curvature . To this end , we propose a weakly-supervised trajectory variational autoencoder ( WET-VAE ) model that is an extension of the trajectory VAE ( Co-Reyes et al. , 2018 ) consisting of a recurrent neural network ( RNN ) . We leverage the weak labels ( Margonis et al. , 2020 ) to enforce an inductive bias on the model , which explicitly enforces the trajectory representations to be disentangled into factors of interest that we intend the model to learn . To train the WET-VAE , we first synthetically generate a trajectory dataset by the combination of several factors of interest , because the trajectories obtained by an online exploration are likely to contain meaningless samples such as random walks . With the trajectory dataset , the WET-VAE model is trained apart from a policy network . It is worthy of noting that while this is similar to imitation learning , our data acquisition is much simpler than collecting expert demonstration . Sequentially , we train a skill-based policy network with the WET-VAE fixed . Given the latent representations as skills , the skill-based policy network is trained to generate similar trajectories with the decoder of the WET-VAE by minimizing the KL divergence between two trajectory distributions . However , training a policy to generate a trajectory given a skill is difficult since it is unlikely to explore the corresponding trajectory in the training procedure . Instead , we propose to train the policy network with the single-step transitions and perform the trajectory-level behaviors in the test time , which can be achieved with the knowledge of the learned skills . This simplifies the training procedure of the policy , and also allows for a sample-efficient large-scale planning strategy with the scaled trajectories . In experiments in Mujoco Ant environment , we show that our disentangled and interpretable skills are effective in solving challenging sparse reward and long-horizon problems in 2D navigation in mazes . 2 RELATED WORKS . Numerous approaches ( Sutton et al. , 1999 ; Bacon et al. , 2017 ; Florensa et al. , 2017 ; Hausman et al. , 2018 ; Haarnoja et al. , 2018 ; Eysenbach et al. , 2018 ; Shankar et al. , 2019 ; Shankar & Gupta , 2020 ; Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ) have explored on learning reusable skills in RL to solve challenging long-horizon or sparse reward problems . ( Sutton et al. , 1999 ) pioneered a way to control higher-level abstraction by introducing an option-framework , which learns low-level primitives in a top-down manner . ( Bacon et al. , 2017 ) proposed an option-critic architecture that learns sub-policies of options . Also , several works ( Florensa et al. , 2017 ; Hausman et al. , 2018 ; Haarnoja et al. , 2018 ) introduced to learn skills with multiple tasks in a bottom-up manner . However , designing reward functions still requires expert knowledge and such task-specific rewards may limit a generalization ability of the agent to the downstream tasks . To overcome this issue , recent works ( Eysenbach et al. , 2018 ; Achiam et al. , 2018 ; Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ; Campos et al. , 2020 ) proposed an unsupervised framework that does not require a hand-specified reward function . Model-based RL methods ( Levine et al. , 2016 ; Nagabandi et al. , 2018 ; Chua et al. , 2018 ; Ha & Schmidhuber , 2018 ) aim to learn a dynamics model of the environment . While these works are capable of solving unlearned tasks without the needs of an additional learning via planning through the dynamics model , they are often at the risk of falling into over-fitting due to a huge capacity of the required data to explore the environment . Instead of learning the underlying dynamics , some methods ( Co-Reyes et al. , 2018 ; Sharma et al. , 2019 ) attempted to combine the model-free and model-based RL for learning a skill-based predictive model and a skill-based policy . Despite the improved results , they still suffer from the lack of the interpretability of the skills . Learning disentangled latent representations of factors of variation within dataset is beneficial to a variety of downstream tasks such as few-shot classification and data generation , thanks to the interpretability of the disentangled factors . ( Higgins et al. , 2016 ) proposed β-VAE , an unsupervised method to learn the disentangled representations by modifying the weight of the KL-divergence term of the VAE ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) greater than one . Afterwards , while several variants ( Kim & Mnih , 2018 ; Chen et al. , 2018 ) improved the β-VAE by introducing a total correlation ( TC ) term , ( Locatello et al. , 2019a ) pointed out the inherent limitation of the purely unsupervised approaches and emphasized the need of an inductive bias . Recent works ( Locatello et al. , 2019b ; Shu et al. , 2019 ; Locatello et al. , 2020 ; Margonis et al. , 2020 ) proposed various forms of weak supervision to encourage the inductive bias to learn the disentangled representations . While there are various categories on the weak labels , we used them in terms of ones that 1 ) are roughly divided into fewer classes and 2 ) can be obtained with programming by using the knowledge on the factors without the need for manual labeling . 3 PRELIMINARIES . Consider a Markov decision process ( MDP ) ( S , A , P , r , ρ0 , γ ) , where S is a set of states , A is a set of action , P : S × A × S → R+ is a transition probability distribution , r : S × A→ R is a reward function , ρ0 : S → R+ is an initial state distribution and γ ∈ ( 0 , 1 ) is a discount factor . We denote a stochastic policy as π : S × A→ R+ . RL has a goal of maximizing the expected discounted sum of rewards for an episode horizon HE : η ( π ) = Eπ [ HE∑ t=0 γtr ( st , at ) ] ( 1 ) Variational autoencoder ( VAE ) optimizes variational the lower bound of the marginal likelihood of dataset . Given an observed datapoint x , the variational lowerbound is defined as : log pθ ( x ) ≥ L ( θ , φ ; x ) = Eqφ ( z |x ) [ log pθ ( x |z ) ] −DKL ( qφ ( z |x ) ‖p ( z ) ) , ( 2 ) where p ( z ) is a prior distribution of a latent variable z , the decoder pθ ( x |z ) is a generative model given a latent z parameterized by θ , and the encoder qφ ( z |x ) is an approximate posterior distribution parameterized by φ . In Equation 2 , the first term is the reconstruction term of the autoencoder , and the second term is the KL divergence regularization . In our work , we will focus on the aspect of the generative model of the decoder . 4 WEAKLY SUPERVISED LEARNING OF DISENTANGLED AND INTERPRETABLE SKILL ( WEDIS ) . Our framework consists of three stages ; 1 ) generating trajectory training data with factors of interest 2 ) training the WET-VAE model , whose decoder is used for the predictive model and 3 ) training a policy network to generate the similar trajectories with the predictive model conditioned on skills . The generation process of the trajectory dataset is explained in Appendix A.1.1 due to the lack of space . As a notation , we will use superscript for factors and subscript for time steps . The WEDIS algorithm is summarized in Figure 2 . 4.1 LEARNING DISENTANGLED AND INTERPRETABLE REPRESENTATIONS OF TRAJECTORY To learn the temporally extended behaviors , ( CoReyes et al. , 2018 ) proposed a trajectory VAE model consisting of the RNN architecture . The trajectory VAE learns latent representations of trajectories , which will be used as skills for a policy . However , this model , which learns the representations in the unsupervised manner , does not consider which factors of variation of a trajectory are embedded in the latent space . Thus , the factors that are often entangled make the interpretation of the representations difficult , exposing limitations in further applicability of the learned skills . To address this , we propose a weakly-supervised trajectory VAE ( WET-VAE ) model that leverages an inductive bias in the form of weak supervision ( Margonis et al. , 2020 ) to explicitly enforce the model to learn the disentangled representations consisting of desired factors , yielding interpretable skills . Consider a latent-variable generative model p ( τ |z ) to generate a trajectory τ given a latent variable z . We assume the fixed initial state s0 at the origin as when given other initial states we can obtain the next states with a linear translation based on the initial states such that p ( s|s0 , z ) = p ( s − s0|z ) . Considering M factors of interest to generate trajectories , the weak supervision can be provided by simply adding a set of M weak labels y = { y1 , ... , yM } to the generative model , where each label ym is one-hot encoded vector for each factor . The idea is that a latent representation z ∈ RM , which can generate the trajectories based on the M disentangled generative factors , should also be able to reconstruct the factors . Assuming that the trajectory and the factors that are represented as the multiple labels satisfy conditional independence with respect to a given z , the generative model is extended with the labels p ( τ , y |z ) = p ( τ |z ) p ( y1|z ) · · ·p ( yM |z ) . Then , the variational lower bound of the marginal joint distribution p ( τ , y ) can be formulated as follows : L ( θ , φ ; τ , y ) = Eqφ ( z |τ , y ) [ log pθ ( τ , y |z ) ] −DKL ( qφ ( z |τ , y ) ‖p ( z ) ) = Eqφ ( z |τ , y ) [ T∑ t=1 log pθ ( st |s1 : t−1 , z ) + M∑ m=1 log pθ ( y m |z ) ] −DKL ( qφ ( z |τ , y ) ‖p ( z ) ) , ( 3 ) where pθ ( τ |z ) = pθ ( s1|z ) pθ ( s2|s1 , z ) · · · pθ ( sT |s1 , s2 , ... , sT−1 , z ) . Since pθ ( ym |z ) can be understood as a classifier for each factor , the factors should be distinctly embedded in a latent representation z for high classification probability . As a result , this enforces a disentangled representation of the factors . Practically , the scales of the values of the log-likelihoods of the states and labels are different due to the difference in dimensionality . To fill the gap , we introduce a balancing weight γ inspired by ( Margonis et al. , 2020 ) . We also use a weight β > 1 to emphasize the KL divergence term for better disentanglement in the spirit of the β-VAE ( Higgins et al. , 2016 ) . Then , the final objective function becomes : L ( θ , φ ; τ , y , β , γ ) = Eqφ ( z |τ , y ) [ T∑ t=1 log pθ ( st |s1 : t−1 , z ) + γ · M∑ m=1 log pθ ( y m |z ) ] − β ·DKL ( qφ ( z |τ , y ) ‖p ( z ) ) ( 4 ) The WET-VAE model is trained to maximize Equation 4 . To handle the sequential data , we use the RNN architecture with LSTMs as in Figure 1 . With the support of the weak supervision , this model can learn the disentangled representations of trajectories that consist of factors of variation contributing over different time steps .",
"role": "user"
},
{
"content": "The paper at hand proposes a new framework for pre-training skill policies (WEDIS), and use them for control in a hierarchical setup with MPC. The main idea is that skill policies should follow a set of generated trajectories based on some salient factors. For this, the authors train a VAE that will then provide both the control variables (latent encoding) and a predictive model of the trajectory (decoder). The skill policy is trained to match the trajectories of the predictive model, and the predictive model is then used for MPC.",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: HiTSelect: a comprehensive tool for high-complexity-pooled screen analysis.
Genetic screens of an unprecedented scale have recently been made possible by the availability of high-complexity libraries of synthetic oligonucleotides designed to mediate either gene knockdown or gene knockout, coupled with next-generation sequencing. However, several sources of random noise and statistical biases complicate the interpretation of the resulting high-throughput data. We developed HiTSelect, a comprehensive analysis pipeline for rigorously selecting screen hits and identifying functionally relevant genes and pathways by addressing off-target effects, controlling for variance in both gene silencing efficiency and sequencing depth of coverage and integrating relevant metadata. We document the superior performance of HiTSelect using data from both genome-wide RNAi and CRISPR/Cas9 screens. HiTSelect is implemented as an open-source package, with a user-friendly interface for data visualization and pathway exploration. Binary executables are available at http://sourceforge.net/projects/hitselect/, and the source code is available at https://github.com/diazlab/HiTSelect.
MAGeCK enables robust identification of essential genes from genome-scale CRISPR/Cas9 knockout screens.
We propose the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method for prioritizing single-guide RNAs, genes and pathways in genome-scale CRISPR/Cas9 knockout screens. MAGeCK demonstrates better performance compared with existing methods, identifies both positively and negatively selected genes simultaneously, and reports robust results across different experimental conditions. Using public datasets, MAGeCK identified novel essential genes and pathways, including EGFR in vemurafenib-treated A375 cells harboring a BRAF mutation. MAGeCK also detected cell type-specific essential genes, including BCR and ABL1, in KBM7 cells bearing a BCR-ABL fusion, and IGF1R in HL-60 cells, which depends on the insulin signaling pathway for proliferation.
We propose the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method for prioritizing single-guide RNAs, genes and pathways in genome-scale CRISPR/Cas9 knockout screens.
We propose the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method for prioritizing single-guide RNAs, genes and pathways in genome-scale CRISPR/Cas9 knockout screens.
We document the superior performance of HiTSelect using data from both genome-wide RNAi and CRISPR/Cas9 screens.
Question: Which algorithms have been developed for analysing CRISPR/Cas9 knockout screens data? | ["HiTSelect", "MAGeCK"] | science.bioasq_list_qa.692 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: HiTSelect: a comprehensive tool for high-complexity-pooled screen analysis.\nGenetic screens of an unprecedented scale have recently been made possible by the availability of high-complexity libraries of synthetic oligonucleotides designed to mediate either gene knockdown or gene knockout, coupled with next-generation sequencing. However, several sources of random noise and statistical biases complicate the interpretation of the resulting high-throughput data. We developed HiTSelect, a comprehensive analysis pipeline for rigorously selecting screen hits and identifying functionally relevant genes and pathways by addressing off-target effects, controlling for variance in both gene silencing efficiency and sequencing depth of coverage and integrating relevant metadata. We document the superior performance of HiTSelect using data from both genome-wide RNAi and CRISPR/Cas9 screens. HiTSelect is implemented as an open-source package, with a user-friendly interface for data visualization and pathway exploration. Binary executables are available at http://sourceforge.net/projects/hitselect/, and the source code is available at https://github.com/diazlab/HiTSelect.\nMAGeCK enables robust identification of essential genes from genome-scale CRISPR/Cas9 knockout screens.\nWe propose the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method for prioritizing single-guide RNAs, genes and pathways in genome-scale CRISPR/Cas9 knockout screens. MAGeCK demonstrates better performance compared with existing methods, identifies both positively and negatively selected genes simultaneously, and reports robust results across different experimental conditions. Using public datasets, MAGeCK identified novel essential genes and pathways, including EGFR in vemurafenib-treated A375 cells harboring a BRAF mutation. MAGeCK also detected cell type-specific essential genes, including BCR and ABL1, in KBM7 cells bearing a BCR-ABL fusion, and IGF1R in HL-60 cells, which depends on the insulin signaling pathway for proliferation.\nWe propose the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method for prioritizing single-guide RNAs, genes and pathways in genome-scale CRISPR/Cas9 knockout screens.\nWe propose the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method for prioritizing single-guide RNAs, genes and pathways in genome-scale CRISPR/Cas9 knockout screens.\nWe document the superior performance of HiTSelect using data from both genome-wide RNAi and CRISPR/Cas9 screens.\n\nQuestion: Which algorithms have been developed for analysing CRISPR/Cas9 knockout screens data?",
"role": "user"
},
{
"content": "[\"HiTSelect\", \"MAGeCK\"]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
VUT: Versatile UI Transformer for Multimodal Multi-Task User Interface Modeling
Main Content:
1 INTRODUCTION . Modern graphical user interfaces specifically touchscreen mobile UIs enable a rich problem space for modeling where the input is inherently multimodal , which consists of several distinct types of data . A user interface screen exists in both a visual form , i.e. , a screenshot , and a structural representation , i.e. , a tree-like view hierarchy . Based on graphical user interfaces , there is a wide spectrum of modeling tasks that either directly enhance user experiences or advance the development of intelligent user interfaces . For example , previous work developed models and datasets for grounding a language command into an executable UI action ( Li et al. , 2020a ) , generating language description for accessibility on mobile devices ( Li et al. , 2020b ; Wang et al. , 2021 ) , and understanding the usability of user interfaces ( Swearngin & Li , 2019 ) or identifying the objects on the screen ( Zhang et al. , 2021 ) . Previous work has also started learning effective representation of user interface screens ( He et al. , 2020 ; Li et al. , 2021a ) , which can potentially benefit downstream tasks . Although these previous works have made progress in addressing individual problems , it is important to investigate the feasibility of learning all these tasks with a single model . In addition to achieving a scientific understanding of how these UI tasks are related , it is extremely valuable to obtain such a multi-task model , which can potentially reduce the number of models that need to be developed and deployed . This is crucial for mobile devices that have limited computing resources . In this work , we propose VUT—Versatile UI Transformer , which handles three types of data : images , structures ( view hierarchies ) and language , and simultaneously performs five unique tasks that are representative in the UI modeling literature , including UI object detection , natural language command grounding , widget captioning , screen summarization and UI tappability prediction . A major challenge we need to address is how to unify these distinct tasks as well as their heterogeneous datasets such that they can be learned by a single model . To this end , we devise a general formulation for UI modeling tasks based on five inherent types of information that define a task . We also aim to design a compact model architecture such that it remains stable for addressing a diverse and potentially growing set of tasks , for which we make each model component multi-purpose . Specifically , VUT comprises two Transformer architectures ( Figure 1 ) : the Image-Structure model , and the Question-Answer model . The Image-Structure model encodes the entire screenshot of a UI along its view hierarchy tree , with early fusion of the two modalities , which is guided by a focus map when a given object is inquired . In addition to being the UI encoder , the Image-Structure model predicts UI objects when the view hierarchy is absent on the input , which achieves the UI object detection task . The Question-Answer model encodes a question while attending to the UI encodings from the Image-Structure model . It decodes a text answer when the task response is language , e.g. , widget captioning ( Li et al. , 2020c ) or screen summarization ( Wang et al. , 2021 ) . For the grounding task whose output is an object reference , the Question-Answer model serves as the question encoder and its hidden state is used to locate UI elements to be acted upon . We highlight the relation of VUT with previous works in Table 1 , and discuss their differences further in the following section . We experiment with our model on 5 datasets , and compare the accuracy of VUT when it is trained alone for each task and jointly with multiple tasks . Our experiments show that VUT is able to perform all the five tasks simultaneously and achieve the performance on par with or surpass that when each task is learned alone . The main contributions of our work are as follows . • We formulate multi-modal multi-task learning for a new domain—graphical user interfaces— with one model to accomplish a wide range of tasks for enhancing mobile user experiences . • We design VUT based on a two-tower Transformer architecture , one for handling imagestructure and the other for language data , where each Transformer is multi-purpose by both encoding and decoding its own modality , with cross-tower attention . • We experiment with VUT on 5 distinct UI tasks , and thoroughly investigated the effect of these tasks when learned alone or jointly with ablations and analysis , which show the feasibility for achieving diverse UI tasks using a single model , which offers the value for reducing the number of models and storage footprints needed for deployment ( Appendix E ) . 2 RELATED WORK . Extensive work has been conducted in multi-modal modeling with vision and languages ( Li et al. , 2019 ; Lu et al. , 2019 ; Hu & Singh , 2021 ; Lu et al. , 2020 ; Tan & Bansal , 2019 ; Kim et al. , 2021 ; Zhou et al. , 2020 ; Gupta et al. , 2021 ) . Existing works differ in the form of input they consume and the objectives of modeling . One category of work focuses on pretraining to learn an effective cross-modality representation for downstream tasks and the other directly learns multiple tasks end-to-end ( Table 1 ) . VUT belongs to the latter . In terms of the forms of multimodal data these models consume , most models handle image and text input . However , a unique form of data in UI modeling is the structure input of view hierarchies , which only VUT , UIBert and ActionBert use . Many existing works feed object regions , instead of the entire image to the model , which requires a pretrained object detection model ( Lu et al. , 2019 ; Bai et al. , 2021 ; He et al. , 2020 ; Zhou et al. , 2020 ; Tan & Bansal , 2019 ; Lu et al. , 2020 ) or address tasks only regarding the entire image ( Kim et al. , 2021 ; Hu & Singh , 2021 ) . Although ActionBert ( He et al. , 2020 ) and UIBert ( Bai et al. , 2021 ) also address the UI domain , they are targeted for representation learning , and do not support multiple tasks simultaneously . As a result , they do not deal with language input of task descriptions . Their text input is those scraped from the UI screen , e.g. , using OCR . In addition , these models require predetermined object regions similar to many BERT-based multi-modal models . In contrast , object detection is one of the tasks that VUT addresses . In terms of modeling techniques , we designed a novel Transformer architecture for multi-task modeling of the UI domain , based on building blocks previously proposed for natural images and language , e.g. , ( Hu et al. , 2020 ; Lu et al. , 2020 ) . the work that is closely related to ours is GPVI ( Gupta et al. , 2021 ) , which uses DETR ( Carion et al. , 2020 ) for object detection , and ViBERT ( Lu et al. , 2019 ) for multimodal modeling . In addition to the obvious deviation our work , e.g. , VUT uses structure input but GPV-I does not , there are several important architecture differences . While GPV-I directly embeds DETR , an encoder-decoder model , into its architecture , VUT uses a single tower design where both the image and object queries are fed to the same Transformer encoder . This design choice is motivated by our goal to achieve a compact architecture , which the Image-Structure model serves both image-structure encoding and object detection ( when the structure input is absent in the input ) . As shown in our experiment , the single tower architecture of VUT ’ s Image-Structure model showed clear advantage over the encoder-decoder architecture in DETR for the UI object detection task . To address the unique domin of UI tasks , we also introduce focus map to guide the model towards the object being inquired . VUT ’ s question-answer Transformer is designed based on existing auto-regressive multi-task language models ( Raffel et al. , 2019 ; Brown et al. , 2020 ) where a question or a command is fed to the model as a prefix , and the responses are decoded token by token . One difference is that for the language command grounding task , instead of generating a language response , the last hidden state of the language model is used , as a question encoding , to retrieve a UI object on the screen . 3 PROBLEM FORMULATION . A graphical user interface contains a collection of UI elements for fulfilling a coherent set of tasks . There are often five types of data involved to formulate a UI task : < S , V , T , Q , A > ( Figure 1 ) . S is the screenshot image that captures the visual appearance of the UI screen . V is the view hierarchy tree that represents the underlying structure of the screen . T is the target object on the screen to be inquired . Q is the natural language description of the task , which can be an open-ended question such as `` What is the caption of the element ? `` , a yes-or-no question such as `` Does the object look clickable ? '' or a command such as `` Click on the Next button. '' . See the full list of Q used in our experiments in Appendix A . Finally , A is the natural language answer to the question Q when the form of the response for the task is supposed to be natural language . Depending on each task setup , these data types appear as either input or output . We elaborate on the formation of each task here , and use F to denote the function for achieving each task . 3.1 UI OBJECT DETECTION . Given the screenshot image , S , the task is to detect each UI element on the screen , such as Text Field , Toggle Button , or Image View . This task is similar to the typical object detection task in natural images ( Carion et al. , 2020 ) or recent UI object detection work ( Zhang et al. , 2021 ) . However , our task is more challenging in that it needs to detect different types of container objects , which determine how UI objects are visually structured of the screen , such as Linear Layout , Frame Layout or List . In total there are 21 types of leaf or non-leaf objects in a view hierarchy . See Appendix D for the full list of objects we detect . UI object detection is important for improving accessibility and enabling other intelligent features such as UI adaptation when view hierarchy is not available . As a screen understanding task , UI object detection is beneficial to other UI modeling tasks as we will show in our experiments . The task is formulated as the follow ( Equation 1 ) . V = F ( S , V∅ , T∅ , Q∅ ) ( 1 ) Note that this task is achieved solely based on the single-tower Image-Structure Transformer ( Figure 1 ) and does not rely on the question-answer model . V∅ , T∅ and Q∅ represent each type of data masked out or missing in the input . | The paper proposes an architecture for graphical user interfaces which involve multi-modal inputs (UI screenshots, Hierarchy structures, Natural Language) and multi-task learning (UI Object Detection, Widget Captioning, Screen summarization, Language grounding, and Tappability).The proposed architecture consists of seperate transformer blocks to encode image and text modalities. The two transformer blocks attends to each other to produce multi-modal outputs, which is then used for downstream tasks. The authors that with the proposed architecture, training on all tasks simultaneously is better than training on individual task alone. | science.mup_single_document_summarization.4990 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nVUT: Versatile UI Transformer for Multimodal Multi-Task User Interface Modeling \n\nMain Content:\n1 INTRODUCTION . Modern graphical user interfaces specifically touchscreen mobile UIs enable a rich problem space for modeling where the input is inherently multimodal , which consists of several distinct types of data . A user interface screen exists in both a visual form , i.e. , a screenshot , and a structural representation , i.e. , a tree-like view hierarchy . Based on graphical user interfaces , there is a wide spectrum of modeling tasks that either directly enhance user experiences or advance the development of intelligent user interfaces . For example , previous work developed models and datasets for grounding a language command into an executable UI action ( Li et al. , 2020a ) , generating language description for accessibility on mobile devices ( Li et al. , 2020b ; Wang et al. , 2021 ) , and understanding the usability of user interfaces ( Swearngin & Li , 2019 ) or identifying the objects on the screen ( Zhang et al. , 2021 ) . Previous work has also started learning effective representation of user interface screens ( He et al. , 2020 ; Li et al. , 2021a ) , which can potentially benefit downstream tasks . Although these previous works have made progress in addressing individual problems , it is important to investigate the feasibility of learning all these tasks with a single model . In addition to achieving a scientific understanding of how these UI tasks are related , it is extremely valuable to obtain such a multi-task model , which can potentially reduce the number of models that need to be developed and deployed . This is crucial for mobile devices that have limited computing resources . In this work , we propose VUT—Versatile UI Transformer , which handles three types of data : images , structures ( view hierarchies ) and language , and simultaneously performs five unique tasks that are representative in the UI modeling literature , including UI object detection , natural language command grounding , widget captioning , screen summarization and UI tappability prediction . A major challenge we need to address is how to unify these distinct tasks as well as their heterogeneous datasets such that they can be learned by a single model . To this end , we devise a general formulation for UI modeling tasks based on five inherent types of information that define a task . We also aim to design a compact model architecture such that it remains stable for addressing a diverse and potentially growing set of tasks , for which we make each model component multi-purpose . Specifically , VUT comprises two Transformer architectures ( Figure 1 ) : the Image-Structure model , and the Question-Answer model . The Image-Structure model encodes the entire screenshot of a UI along its view hierarchy tree , with early fusion of the two modalities , which is guided by a focus map when a given object is inquired . In addition to being the UI encoder , the Image-Structure model predicts UI objects when the view hierarchy is absent on the input , which achieves the UI object detection task . The Question-Answer model encodes a question while attending to the UI encodings from the Image-Structure model . It decodes a text answer when the task response is language , e.g. , widget captioning ( Li et al. , 2020c ) or screen summarization ( Wang et al. , 2021 ) . For the grounding task whose output is an object reference , the Question-Answer model serves as the question encoder and its hidden state is used to locate UI elements to be acted upon . We highlight the relation of VUT with previous works in Table 1 , and discuss their differences further in the following section . We experiment with our model on 5 datasets , and compare the accuracy of VUT when it is trained alone for each task and jointly with multiple tasks . Our experiments show that VUT is able to perform all the five tasks simultaneously and achieve the performance on par with or surpass that when each task is learned alone . The main contributions of our work are as follows . • We formulate multi-modal multi-task learning for a new domain—graphical user interfaces— with one model to accomplish a wide range of tasks for enhancing mobile user experiences . • We design VUT based on a two-tower Transformer architecture , one for handling imagestructure and the other for language data , where each Transformer is multi-purpose by both encoding and decoding its own modality , with cross-tower attention . • We experiment with VUT on 5 distinct UI tasks , and thoroughly investigated the effect of these tasks when learned alone or jointly with ablations and analysis , which show the feasibility for achieving diverse UI tasks using a single model , which offers the value for reducing the number of models and storage footprints needed for deployment ( Appendix E ) . 2 RELATED WORK . Extensive work has been conducted in multi-modal modeling with vision and languages ( Li et al. , 2019 ; Lu et al. , 2019 ; Hu & Singh , 2021 ; Lu et al. , 2020 ; Tan & Bansal , 2019 ; Kim et al. , 2021 ; Zhou et al. , 2020 ; Gupta et al. , 2021 ) . Existing works differ in the form of input they consume and the objectives of modeling . One category of work focuses on pretraining to learn an effective cross-modality representation for downstream tasks and the other directly learns multiple tasks end-to-end ( Table 1 ) . VUT belongs to the latter . In terms of the forms of multimodal data these models consume , most models handle image and text input . However , a unique form of data in UI modeling is the structure input of view hierarchies , which only VUT , UIBert and ActionBert use . Many existing works feed object regions , instead of the entire image to the model , which requires a pretrained object detection model ( Lu et al. , 2019 ; Bai et al. , 2021 ; He et al. , 2020 ; Zhou et al. , 2020 ; Tan & Bansal , 2019 ; Lu et al. , 2020 ) or address tasks only regarding the entire image ( Kim et al. , 2021 ; Hu & Singh , 2021 ) . Although ActionBert ( He et al. , 2020 ) and UIBert ( Bai et al. , 2021 ) also address the UI domain , they are targeted for representation learning , and do not support multiple tasks simultaneously . As a result , they do not deal with language input of task descriptions . Their text input is those scraped from the UI screen , e.g. , using OCR . In addition , these models require predetermined object regions similar to many BERT-based multi-modal models . In contrast , object detection is one of the tasks that VUT addresses . In terms of modeling techniques , we designed a novel Transformer architecture for multi-task modeling of the UI domain , based on building blocks previously proposed for natural images and language , e.g. , ( Hu et al. , 2020 ; Lu et al. , 2020 ) . the work that is closely related to ours is GPVI ( Gupta et al. , 2021 ) , which uses DETR ( Carion et al. , 2020 ) for object detection , and ViBERT ( Lu et al. , 2019 ) for multimodal modeling . In addition to the obvious deviation our work , e.g. , VUT uses structure input but GPV-I does not , there are several important architecture differences . While GPV-I directly embeds DETR , an encoder-decoder model , into its architecture , VUT uses a single tower design where both the image and object queries are fed to the same Transformer encoder . This design choice is motivated by our goal to achieve a compact architecture , which the Image-Structure model serves both image-structure encoding and object detection ( when the structure input is absent in the input ) . As shown in our experiment , the single tower architecture of VUT ’ s Image-Structure model showed clear advantage over the encoder-decoder architecture in DETR for the UI object detection task . To address the unique domin of UI tasks , we also introduce focus map to guide the model towards the object being inquired . VUT ’ s question-answer Transformer is designed based on existing auto-regressive multi-task language models ( Raffel et al. , 2019 ; Brown et al. , 2020 ) where a question or a command is fed to the model as a prefix , and the responses are decoded token by token . One difference is that for the language command grounding task , instead of generating a language response , the last hidden state of the language model is used , as a question encoding , to retrieve a UI object on the screen . 3 PROBLEM FORMULATION . A graphical user interface contains a collection of UI elements for fulfilling a coherent set of tasks . There are often five types of data involved to formulate a UI task : < S , V , T , Q , A > ( Figure 1 ) . S is the screenshot image that captures the visual appearance of the UI screen . V is the view hierarchy tree that represents the underlying structure of the screen . T is the target object on the screen to be inquired . Q is the natural language description of the task , which can be an open-ended question such as `` What is the caption of the element ? `` , a yes-or-no question such as `` Does the object look clickable ? '' or a command such as `` Click on the Next button. '' . See the full list of Q used in our experiments in Appendix A . Finally , A is the natural language answer to the question Q when the form of the response for the task is supposed to be natural language . Depending on each task setup , these data types appear as either input or output . We elaborate on the formation of each task here , and use F to denote the function for achieving each task . 3.1 UI OBJECT DETECTION . Given the screenshot image , S , the task is to detect each UI element on the screen , such as Text Field , Toggle Button , or Image View . This task is similar to the typical object detection task in natural images ( Carion et al. , 2020 ) or recent UI object detection work ( Zhang et al. , 2021 ) . However , our task is more challenging in that it needs to detect different types of container objects , which determine how UI objects are visually structured of the screen , such as Linear Layout , Frame Layout or List . In total there are 21 types of leaf or non-leaf objects in a view hierarchy . See Appendix D for the full list of objects we detect . UI object detection is important for improving accessibility and enabling other intelligent features such as UI adaptation when view hierarchy is not available . As a screen understanding task , UI object detection is beneficial to other UI modeling tasks as we will show in our experiments . The task is formulated as the follow ( Equation 1 ) . V = F ( S , V∅ , T∅ , Q∅ ) ( 1 ) Note that this task is achieved solely based on the single-tower Image-Structure Transformer ( Figure 1 ) and does not rely on the question-answer model . V∅ , T∅ and Q∅ represent each type of data masked out or missing in the input .",
"role": "user"
},
{
"content": "The paper proposes an architecture for graphical user interfaces which involve multi-modal inputs (UI screenshots, Hierarchy structures, Natural Language) and multi-task learning (UI Object Detection, Widget Captioning, Screen summarization, Language grounding, and Tappability).The proposed architecture consists of seperate transformer blocks to encode image and text modalities. The two transformer blocks attends to each other to produce multi-modal outputs, which is then used for downstream tasks. The authors that with the proposed architecture, training on all tasks simultaneously is better than training on individual task alone.",
"role": "assistant"
}
]
|
science.discomat_te | | ID | Mol% | Mol% | Mol% | Mol% | NC | NC' |
| ID | SiO2 | Na2O | CaO | P2O5 | NC | NC' |
| Series I | Series I | Series I | Series I | Series I | Series I | Series I |
| ICIE1 | 49.46 | 26.38 | 23.08 | 1.07 | 2.04 | 2.13 |
| ICSW2 | 47.84 | 26.67 | 23.33 | 2.16 | 2.00 | 2.18 |
| ICSW3 | 44.47 | 27.26 | 23.85 | 4.42 | 1.92 | 2.30 |
| ICSW5 | 40.96 | 27.87 | 24.39 | 6.78 | 1.83 | 2.44 |
| ICSW4 | 37.28 | 28.52 | 24.95 | 9.25 | 1.75 | 2.62 |
| | | | | | | |
| Series II | Series II | Series II | Series II | Series II | Series II | Series II |
| ICSW1 | 51.06 | 26.10 | 22.84 | 0.00 | 2.08 | 2.08 |
| ICSW6 | 48.98 | 26.67 | 23.33 | 1.02 | 2.00 | 2.08 |
| ICSW7 | 47.07 | 27.19 | 23.78 | 1.95 | 1.92 | 2.08 |
| ICSW8 | 43.66 | 28.12 | 24.60 | 3.62 | 1.79 | 2.08 |
| ICSW10 | 40.71 | 28.91 | 25.31 | 5.07 | 1.67 | 2.08 |
| ICSW9 | 38.14 | 29.62 | 25.91 | 6.33 | 1.56 | 2.08 |
Caption: Glass compositions from series I (fixed Na2O:CaO=1:0.87) and II (charge balanced), showing mol%, network connectivity (NC) assuming P2O5 enters the glass network and modified network connectivity (NC') assuming isolated orthophosphate units
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "49.46", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICIE1"}
{"value": "26.38", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICIE1"}
{"value": "23.08", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICIE1"}
{"value": "1.07", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICIE1"}
{"value": "2.04", "type": "Other"}
{"value": "2.13", "type": "Other"}
{"value": "47.84", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW2"}
{"value": "26.67", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW2"}
{"value": "23.33", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW2"}
{"value": "2.16", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW2"}
{"value": "2.00", "type": "Other"}
{"value": "2.18", "type": "Other"}
{"value": "44.47", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW3"}
{"value": "27.26", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW3"}
{"value": "23.85", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW3"}
{"value": "4.42", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW3"}
{"value": "1.92", "type": "Other"}
{"value": "2.30", "type": "Other"}
{"value": "40.96", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW5"}
{"value": "27.87", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW5"}
{"value": "24.39", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW5"}
{"value": "6.78", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW5"}
{"value": "1.83", "type": "Other"}
{"value": "2.44", "type": "Other"}
{"value": "37.28", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW4"}
{"value": "28.52", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW4"}
{"value": "24.95", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW4"}
{"value": "9.25", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW4"}
{"value": "1.75", "type": "Other"}
{"value": "2.62", "type": "Other"}
{"value": "51.06", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW1"}
{"value": "26.10", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW1"}
{"value": "22.84", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW1"}
{"value": "0.00", "type": "Other"}
{"value": "2.08", "type": "Other"}
{"value": "2.08", "type": "Other"}
{"value": "48.98", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW6"}
{"value": "26.67", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW6"}
{"value": "23.33", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW6"}
{"value": "1.02", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW6"}
{"value": "2.00", "type": "Other"}
{"value": "2.08", "type": "Other"}
{"value": "47.07", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW7"}
{"value": "27.19", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW7"}
{"value": "23.78", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW7"}
{"value": "1.95", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW7"}
{"value": "1.92", "type": "Other"}
{"value": "2.08", "type": "Other"}
{"value": "43.66", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW8"}
{"value": "28.12", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW8"}
{"value": "24.60", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW8"}
{"value": "3.62", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW8"}
{"value": "1.79", "type": "Other"}
{"value": "2.08", "type": "Other"}
{"value": "40.71", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW10"}
{"value": "28.91", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW10"}
{"value": "25.31", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW10"}
{"value": "5.07", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW10"}
{"value": "1.67", "type": "Other"}
{"value": "2.08", "type": "Other"}
{"value": "38.14", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "ICSW9"}
{"value": "29.62", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "ICSW9"}
{"value": "25.91", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "ICSW9"}
{"value": "6.33", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "ICSW9"}
{"value": "1.56", "type": "Other"}
{"value": "2.08", "type": "Other"} | science.discomat_te.314 | [
{
"content": "| ID | Mol% | Mol% | Mol% | Mol% | NC | NC' |\n| ID | SiO2 | Na2O | CaO | P2O5 | NC | NC' |\n| Series I | Series I | Series I | Series I | Series I | Series I | Series I |\n| ICIE1 | 49.46 | 26.38 | 23.08 | 1.07 | 2.04 | 2.13 |\n| ICSW2 | 47.84 | 26.67 | 23.33 | 2.16 | 2.00 | 2.18 |\n| ICSW3 | 44.47 | 27.26 | 23.85 | 4.42 | 1.92 | 2.30 |\n| ICSW5 | 40.96 | 27.87 | 24.39 | 6.78 | 1.83 | 2.44 |\n| ICSW4 | 37.28 | 28.52 | 24.95 | 9.25 | 1.75 | 2.62 |\n| | | | | | | |\n| Series II | Series II | Series II | Series II | Series II | Series II | Series II |\n| ICSW1 | 51.06 | 26.10 | 22.84 | 0.00 | 2.08 | 2.08 |\n| ICSW6 | 48.98 | 26.67 | 23.33 | 1.02 | 2.00 | 2.08 |\n| ICSW7 | 47.07 | 27.19 | 23.78 | 1.95 | 1.92 | 2.08 |\n| ICSW8 | 43.66 | 28.12 | 24.60 | 3.62 | 1.79 | 2.08 |\n| ICSW10 | 40.71 | 28.91 | 25.31 | 5.07 | 1.67 | 2.08 |\n| ICSW9 | 38.14 | 29.62 | 25.91 | 6.33 | 1.56 | 2.08 |\n\nCaption: Glass compositions from series I (fixed Na2O:CaO=1:0.87) and II (charge balanced), showing mol%, network connectivity (NC) assuming P2O5 enters the glass network and modified network connectivity (NC') assuming isolated orthophosphate units\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"49.46\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICIE1\"}\n{\"value\": \"26.38\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICIE1\"}\n{\"value\": \"23.08\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICIE1\"}\n{\"value\": \"1.07\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICIE1\"}\n{\"value\": \"2.04\", \"type\": \"Other\"}\n{\"value\": \"2.13\", \"type\": \"Other\"}\n{\"value\": \"47.84\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW2\"}\n{\"value\": \"26.67\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW2\"}\n{\"value\": \"23.33\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW2\"}\n{\"value\": \"2.16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW2\"}\n{\"value\": \"2.00\", \"type\": \"Other\"}\n{\"value\": \"2.18\", \"type\": \"Other\"}\n{\"value\": \"44.47\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW3\"}\n{\"value\": \"27.26\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW3\"}\n{\"value\": \"23.85\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW3\"}\n{\"value\": \"4.42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW3\"}\n{\"value\": \"1.92\", \"type\": \"Other\"}\n{\"value\": \"2.30\", \"type\": \"Other\"}\n{\"value\": \"40.96\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW5\"}\n{\"value\": \"27.87\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW5\"}\n{\"value\": \"24.39\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW5\"}\n{\"value\": \"6.78\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW5\"}\n{\"value\": \"1.83\", \"type\": \"Other\"}\n{\"value\": \"2.44\", \"type\": \"Other\"}\n{\"value\": \"37.28\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW4\"}\n{\"value\": \"28.52\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW4\"}\n{\"value\": \"24.95\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW4\"}\n{\"value\": \"9.25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW4\"}\n{\"value\": \"1.75\", \"type\": \"Other\"}\n{\"value\": \"2.62\", \"type\": \"Other\"}\n{\"value\": \"51.06\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW1\"}\n{\"value\": \"26.10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW1\"}\n{\"value\": \"22.84\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW1\"}\n{\"value\": \"0.00\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}\n{\"value\": \"48.98\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW6\"}\n{\"value\": \"26.67\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW6\"}\n{\"value\": \"23.33\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW6\"}\n{\"value\": \"1.02\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW6\"}\n{\"value\": \"2.00\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}\n{\"value\": \"47.07\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW7\"}\n{\"value\": \"27.19\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW7\"}\n{\"value\": \"23.78\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW7\"}\n{\"value\": \"1.95\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW7\"}\n{\"value\": \"1.92\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}\n{\"value\": \"43.66\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW8\"}\n{\"value\": \"28.12\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW8\"}\n{\"value\": \"24.60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW8\"}\n{\"value\": \"3.62\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW8\"}\n{\"value\": \"1.79\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}\n{\"value\": \"40.71\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW10\"}\n{\"value\": \"28.91\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW10\"}\n{\"value\": \"25.31\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW10\"}\n{\"value\": \"5.07\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW10\"}\n{\"value\": \"1.67\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}\n{\"value\": \"38.14\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"ICSW9\"}\n{\"value\": \"29.62\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"ICSW9\"}\n{\"value\": \"25.91\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"ICSW9\"}\n{\"value\": \"6.33\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"ICSW9\"}\n{\"value\": \"1.56\", \"type\": \"Other\"}\n{\"value\": \"2.08\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Hypoxia in renal disease with proteinuria and/or glomerular hypertension. Despite the increasing need to identify and quantify tissue oxygenation at the cellular level, relatively few methods have been available. In this study, we developed a new hypoxia-responsive reporter vector using a hypoxia-responsive element of the 5' vascular endothelial growth factor untranslated region and generated a novel hypoxia-sensing transgenic rat. We then applied this animal model to the detection of tubulointerstitial hypoxia in the diseased kidney. With this model, we were able to identify diffuse cortical hypoxia in the puromycin aminonucleoside-induced nephrotic syndrome and focal and segmental hypoxia in the remnant kidney model. Expression of the hypoxia-responsive transgene increased throughout the observation period, reaching 2.2-fold at 2 weeks in the puromycin aminonucleoside model and 2.6-fold at 4 weeks in the remnant kidney model, whereas that of vascular endothelial growth factor showed a mild decrease, reflecting distinct behaviors of the two genes. The degree of hypoxia showed a positive correlation with microscopic tubulointerstitial injury in both models. Finally, we identified the localization of proliferating cell nuclear antigen-positive, ED-1-positive, and terminal dUTP nick-end labeled-positive cells in the hypoxic cortical area in the remnant kidney model. We propose here a possible pathological tie between chronic tubulointerstitial hypoxia and progressive glomerular diseases. | {"CellLine": [], "Chemical": ["dUTP", "puromycin aminonucleoside"], "Disease": ["hypoxic", "glomerular diseases", "nephrotic syndrome", "Hypoxia", "diseased kidney", "tubulointerstitial injury", "hypertension", "hypoxia", "proteinuria", "renal disease"], "Gene": ["vascular endothelial growth factor", "ED-1"], "Species": ["rat"], "Variant": []} | science.biored_ner.55 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nHypoxia in renal disease with proteinuria and/or glomerular hypertension. Despite the increasing need to identify and quantify tissue oxygenation at the cellular level, relatively few methods have been available. In this study, we developed a new hypoxia-responsive reporter vector using a hypoxia-responsive element of the 5' vascular endothelial growth factor untranslated region and generated a novel hypoxia-sensing transgenic rat. We then applied this animal model to the detection of tubulointerstitial hypoxia in the diseased kidney. With this model, we were able to identify diffuse cortical hypoxia in the puromycin aminonucleoside-induced nephrotic syndrome and focal and segmental hypoxia in the remnant kidney model. Expression of the hypoxia-responsive transgene increased throughout the observation period, reaching 2.2-fold at 2 weeks in the puromycin aminonucleoside model and 2.6-fold at 4 weeks in the remnant kidney model, whereas that of vascular endothelial growth factor showed a mild decrease, reflecting distinct behaviors of the two genes. The degree of hypoxia showed a positive correlation with microscopic tubulointerstitial injury in both models. Finally, we identified the localization of proliferating cell nuclear antigen-positive, ED-1-positive, and terminal dUTP nick-end labeled-positive cells in the hypoxic cortical area in the remnant kidney model. We propose here a possible pathological tie between chronic tubulointerstitial hypoxia and progressive glomerular diseases.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"dUTP\", \"puromycin aminonucleoside\"], \"Disease\": [\"hypoxic\", \"glomerular diseases\", \"nephrotic syndrome\", \"Hypoxia\", \"diseased kidney\", \"tubulointerstitial injury\", \"hypertension\", \"hypoxia\", \"proteinuria\", \"renal disease\"], \"Gene\": [\"vascular endothelial growth factor\", \"ED-1\"], \"Species\": [\"rat\"], \"Variant\": []}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
At least 85% of patients exposed to radiation have activated markers of myofibroblasts.
Research Abstract:
Previous studies investigating the role of smooth muscle cells (SMCs) and macrophages in the pathogenesis of atherosclerosis have provided controversial results owing to the use of unreliable methods for clearly identifying each of these cell types.
Here, using Myh11-CreERT2 ROSA floxed STOP eYFP Apoe−/− mice to perform SMC lineage tracing, we find that traditional methods for detecting SMCs based on immunostaining for SMC markers fail to detect >80% of SMC-derived cells within advanced atherosclerotic lesions.
These unidentified SMC-derived cells exhibit phenotypes of other cell lineages, including macrophages and mesenchymal stem cells (MSCs).
SMC-specific conditional knockout of Krüppel-like factor 4 (Klf4) resulted in reduced numbers of SMC-derived MSC- and macrophage-like cells, a marked reduction in lesion size, and increases in multiple indices of plaque stability, including an increase in fibrous cap thickness as compared to wild-type controls.
On the basis of in vivo KLF4 chromatin immunoprecipitation–sequencing (ChIP-seq) analyses and studies of cholesterol-treated cultured SMCs, we identified >800 KLF4 target genes, including many that regulate pro-inflammatory responses of SMCs.
Our findings indicate that the contribution of SMCs to atherosclerotic plaques has been greatly underestimated, and that KLF4-dependent transitions in SMC phenotype are critical in lesion pathogenesis. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.97 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nAt least 85% of patients exposed to radiation have activated markers of myofibroblasts.\n\nResearch Abstract:\nPrevious studies investigating the role of smooth muscle cells (SMCs) and macrophages in the pathogenesis of atherosclerosis have provided controversial results owing to the use of unreliable methods for clearly identifying each of these cell types.\nHere, using Myh11-CreERT2 ROSA floxed STOP eYFP Apoe−/− mice to perform SMC lineage tracing, we find that traditional methods for detecting SMCs based on immunostaining for SMC markers fail to detect >80% of SMC-derived cells within advanced atherosclerotic lesions.\nThese unidentified SMC-derived cells exhibit phenotypes of other cell lineages, including macrophages and mesenchymal stem cells (MSCs).\nSMC-specific conditional knockout of Krüppel-like factor 4 (Klf4) resulted in reduced numbers of SMC-derived MSC- and macrophage-like cells, a marked reduction in lesion size, and increases in multiple indices of plaque stability, including an increase in fibrous cap thickness as compared to wild-type controls.\nOn the basis of in vivo KLF4 chromatin immunoprecipitation–sequencing (ChIP-seq) analyses and studies of cholesterol-treated cultured SMCs, we identified >800 KLF4 target genes, including many that regulate pro-inflammatory responses of SMCs.\nOur findings indicate that the contribution of SMCs to atherosclerotic plaques has been greatly underestimated, and that KLF4-dependent transitions in SMC phenotype are critical in lesion pathogenesis.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Characterizing Political Fake News in Twitter by its Meta-Data
This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.
Unnamed section
10pt
1.10pt
[ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.
]
Defining Fake news
Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news.
With respect to social sciences, efforts from psychology, political science and sociology, have been dedicated to understand why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people ability to discern misinformation.
In relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely shared than legitimate articles coming from trusted sources, and they tend to exhibit a larger level of polarization.
The conceptual issue of how to define fake news is a serious and unresolved issue. As the focus of our work is not attempting to offer light on this, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7 . The five categories they described, together with illustrative examples from our dataset, are as follows:
Research Hypotheses
Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:
Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset.
Exposure.
Characterization.
Polarization.
Results
The sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window.
The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth.
The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered.
Exposure
Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time.
However, in terms of retweets, Figure FIGREF25 shows no apparent difference between containing fake news or not containing them. That is confirmed by the Kolmogorov-Smirnoff test, which does not discard the hypothesis that the associated distributions are equal.
In relation to the number of favourites, users that generated at least a viral tweet containing fake news appear to have, on average, less favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the difference are not statistically significant.
Finally, the number of hashtags used in viral fake news appears to be larger than those in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news.
Characterization
We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have been already deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts.
Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends than those distributing tweets with no fake news. However, the density distribution of friends from the accounts (Figure FIGREF29 ) shows that there is indeed a statistically significant difference in their distributions.
If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30 ). In fact, such differences are statistically significant.
A useful representation for friends and followers is the ratio between friends/followers. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. The distribution of those accounts not generating fake news is more evenly distributed.
With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain 1 mention, whereas other tweets tend to have two). Such differences are statistically significant.
The analysis (Figure FIGREF34 ) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant.
On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 ).
Polarization
Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates.
Discussion
As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them:
These findings (related to our initial hypothesis in Table TABREF44 ) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other type of content. This notion seems to resonate with our findings showing that a number of accounts spreading fake news have already been deleted or suspended by Twitter by the time of writing. If one considers that researchers using different data have found similar results BIBREF9 , it appears that the lifetime of accounts, together with the age of the questioned viral content could be useful to identify fake news. In the light of this finding, accounts newly created should probably put under higher scrutiny than older ones. This in fact, would be a nice a-priori bias for a Bayesian classifier.
Accounts spreading fake news appear to have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising).
Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Alcott et al. BIBREF9 . This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above mentioned parameters from meta-data, may prove useful for identifying fake news.
Figure 1: Distribution of the date of creation of the tweets that were viral on November 8th. For clarity, the image only shows the year 2016, and no more than 150 tweets per day.
Figure 2: Density distributions of achieved retweets for tweets in our dataset 1)containing fake news and 2)not containing them. No differences are apparent.
Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.
Figure 3: Density distributions of the number of favourites that the user generating the tweet has. The differences are not statistically significant.
Figure 5: Tweets labelled as containing fake news mostly come from non-verified users. This contrasts with the opposite pattern for tweets non containing them (which mostly originate from verified accounts).
Figure 6: Density distributions (for tweets labelled as containing fake news, and tweets labelled as not containing them) of the number of friends that the user generating the tweet has. Difference is statistically significant.
Figure 9: Density distribution of friends/followers ratio. Note that they do not follow a normal distribution. A higher friends/followers ratio exists for accounts that has at least produced a tweet labelled as containing fake news.
Figure 10: Number of mentions within tweets labelled as containing fake news and tweets not containing them. There is almost a similar distribution of 1 and 2 mentions for tweets containing fake news. This contrasts with tweets not containing fake news, in which 2 mentions is much more common.
Figure 8: Density distribution of friends/followers ratio, showing quartiles. Accounts that generate fake news tend to have a higher ratio value.
Figure 12: Number of URLs embedded within viral tweets (with fake news vs. without them). Differences are statistically significant with α = 0.05
Query: How is the ground truth for fake news established?
JSON Response: | {"answer": "Ground truth is not established in the paper", "evidence": ["The 1327 `viral\u0027 tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news\u0027. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth."]} | science.qasper_abstractive_qa.252.4 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCharacterizing Political Fake News in Twitter by its Meta-Data\n\nThis article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.\n\nUnnamed section\n10pt\n1.10pt\n[ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.\n]\n\nDefining Fake news\nOur research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news.\nWith respect to social sciences, efforts from psychology, political science and sociology, have been dedicated to understand why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people ability to discern misinformation.\nIn relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely shared than legitimate articles coming from trusted sources, and they tend to exhibit a larger level of polarization.\nThe conceptual issue of how to define fake news is a serious and unresolved issue. As the focus of our work is not attempting to offer light on this, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7 . The five categories they described, together with illustrative examples from our dataset, are as follows:\n\nResearch Hypotheses\nPrevious works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:\nTaking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset.\nExposure.\nCharacterization.\nPolarization.\n\nResults\nThe sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window.\nThe 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth.\nThe following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered.\n\nExposure\nFigure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time.\nHowever, in terms of retweets, Figure FIGREF25 shows no apparent difference between containing fake news or not containing them. That is confirmed by the Kolmogorov-Smirnoff test, which does not discard the hypothesis that the associated distributions are equal.\nIn relation to the number of favourites, users that generated at least a viral tweet containing fake news appear to have, on average, less favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the difference are not statistically significant.\nFinally, the number of hashtags used in viral fake news appears to be larger than those in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news.\n\nCharacterization\nWe found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have been already deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts.\nTurning to friends, accounts distributing fake news appear to have, on average, the same number of friends than those distributing tweets with no fake news. However, the density distribution of friends from the accounts (Figure FIGREF29 ) shows that there is indeed a statistically significant difference in their distributions.\nIf we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30 ). In fact, such differences are statistically significant.\nA useful representation for friends and followers is the ratio between friends/followers. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. The distribution of those accounts not generating fake news is more evenly distributed.\nWith respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain 1 mention, whereas other tweets tend to have two). Such differences are statistically significant.\nThe analysis (Figure FIGREF34 ) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant.\nOn the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 ).\n\nPolarization\nFinally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates.\n\nDiscussion\nAs a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them:\nThese findings (related to our initial hypothesis in Table TABREF44 ) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other type of content. This notion seems to resonate with our findings showing that a number of accounts spreading fake news have already been deleted or suspended by Twitter by the time of writing. If one considers that researchers using different data have found similar results BIBREF9 , it appears that the lifetime of accounts, together with the age of the questioned viral content could be useful to identify fake news. In the light of this finding, accounts newly created should probably put under higher scrutiny than older ones. This in fact, would be a nice a-priori bias for a Bayesian classifier.\nAccounts spreading fake news appear to have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising).\nFinally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Alcott et al. BIBREF9 . This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above mentioned parameters from meta-data, may prove useful for identifying fake news.\n\nFigure 1: Distribution of the date of creation of the tweets that were viral on November 8th. For clarity, the image only shows the year 2016, and no more than 150 tweets per day.\n\nFigure 2: Density distributions of achieved retweets for tweets in our dataset 1)containing fake news and 2)not containing them. No differences are apparent.\n\nTable 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.\n\nFigure 3: Density distributions of the number of favourites that the user generating the tweet has. The differences are not statistically significant.\n\nFigure 5: Tweets labelled as containing fake news mostly come from non-verified users. This contrasts with the opposite pattern for tweets non containing them (which mostly originate from verified accounts).\n\nFigure 6: Density distributions (for tweets labelled as containing fake news, and tweets labelled as not containing them) of the number of friends that the user generating the tweet has. Difference is statistically significant.\n\nFigure 9: Density distribution of friends/followers ratio. Note that they do not follow a normal distribution. A higher friends/followers ratio exists for accounts that has at least produced a tweet labelled as containing fake news.\n\nFigure 10: Number of mentions within tweets labelled as containing fake news and tweets not containing them. There is almost a similar distribution of 1 and 2 mentions for tweets containing fake news. This contrasts with tweets not containing fake news, in which 2 mentions is much more common.\n\nFigure 8: Density distribution of friends/followers ratio, showing quartiles. Accounts that generate fake news tend to have a higher ratio value.\n\nFigure 12: Number of URLs embedded within viral tweets (with fake news vs. without them). Differences are statistically significant with α = 0.05\n\nQuery: How is the ground truth for fake news established?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Ground truth is not established in the paper\", \"evidence\": [\"The 1327 `viral\\u0027 tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news\\u0027. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth.\"]}",
"role": "assistant"
}
]
|
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
Past work of generating referring expressions mainly utilized attributes of objects and binary relations between objects. However, such an approach does not work well when there is no distinctive attribute among objects. To overcome this limitation, this paper proposes a method utilizing the perceptual groups of objects and n-ary relations among them. The key is to identify groups of objects that are naturally recognized by humans. We conducted psychological experiments with 42 subjects to collect referring expressions in such situations, and built a generation algorithm based on the results. The evaluation using another 23 subjects showed that the proposed method could effectively generate proper referring expressions. | Output Entities:
{"Generic": [], "Material": [], "Method": ["generation algorithm"], "Metric": [], "OtherScientificTerm": ["referring expressions", "binary relations", "n-ary relations"], "Task": []} | science.scierc_ner.1 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nPast work of generating referring expressions mainly utilized attributes of objects and binary relations between objects. However, such an approach does not work well when there is no distinctive attribute among objects. To overcome this limitation, this paper proposes a method utilizing the perceptual groups of objects and n-ary relations among them. The key is to identify groups of objects that are naturally recognized by humans. We conducted psychological experiments with 42 subjects to collect referring expressions in such situations, and built a generation algorithm based on the results. The evaluation using another 23 subjects showed that the proposed method could effectively generate proper referring expressions.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [], \"Method\": [\"generation algorithm\"], \"Metric\": [], \"OtherScientificTerm\": [\"referring expressions\", \"binary relations\", \"n-ary relations\"], \"Task\": []}",
"role": "assistant"
}
]
|
science.discomat_te | | Name | SiO2 | B2O3 | Na2O | S (cm2/g) |
| B11 | 77.8 | 11.3 | 10.9 | 482 |
| B13 | 73.3 | 13.6 | 13.1 | 395 |
| B15 | 71.1 | 15.0 | 13.8 | 373 |
| B17 | 68.0 | 16.3 | 15.6 | 388 |
| B19 | 63.7 | 19.3 | 17.0 | - |
| B21 | 58.5 | 22.0 | 19.5 | 431 |
| B23 | 53.6 | 24.2 | 22.2 | - |
Caption: Composition (% molar oxide) and specific surface area of the borosilicate powders
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "77.8", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B11"}
{"value": "11.3", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B11"}
{"value": "10.9", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B11"}
{"value": "482", "type": "Other"}
{"value": "73.3", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B13"}
{"value": "13.6", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B13"}
{"value": "13.1", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B13"}
{"value": "395", "type": "Other"}
{"value": "71.1", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B15"}
{"value": "15.0", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B15"}
{"value": "13.8", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B15"}
{"value": "373", "type": "Other"}
{"value": "68.0", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B17"}
{"value": "16.3", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B17"}
{"value": "15.6", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B17"}
{"value": "388", "type": "Other"}
{"value": "63.7", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B19"}
{"value": "19.3", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B19"}
{"value": "17.0", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B19"}
{"value": "58.5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B21"}
{"value": "22.0", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B21"}
{"value": "19.5", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B21"}
{"value": "431", "type": "Other"}
{"value": "53.6", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "B23"}
{"value": "24.2", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B23"}
{"value": "22.2", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "B23"} | science.discomat_te.263 | [
{
"content": "| Name | SiO2 | B2O3 | Na2O | S (cm2/g) |\n| B11 | 77.8 | 11.3 | 10.9 | 482 |\n| B13 | 73.3 | 13.6 | 13.1 | 395 |\n| B15 | 71.1 | 15.0 | 13.8 | 373 |\n| B17 | 68.0 | 16.3 | 15.6 | 388 |\n| B19 | 63.7 | 19.3 | 17.0 | - |\n| B21 | 58.5 | 22.0 | 19.5 | 431 |\n| B23 | 53.6 | 24.2 | 22.2 | - |\n\nCaption: Composition (% molar oxide) and specific surface area of the borosilicate powders\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"77.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B11\"}\n{\"value\": \"11.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B11\"}\n{\"value\": \"10.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B11\"}\n{\"value\": \"482\", \"type\": \"Other\"}\n{\"value\": \"73.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B13\"}\n{\"value\": \"13.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B13\"}\n{\"value\": \"13.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B13\"}\n{\"value\": \"395\", \"type\": \"Other\"}\n{\"value\": \"71.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"15.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"13.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B15\"}\n{\"value\": \"373\", \"type\": \"Other\"}\n{\"value\": \"68.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B17\"}\n{\"value\": \"16.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B17\"}\n{\"value\": \"15.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B17\"}\n{\"value\": \"388\", \"type\": \"Other\"}\n{\"value\": \"63.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B19\"}\n{\"value\": \"19.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B19\"}\n{\"value\": \"17.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B19\"}\n{\"value\": \"58.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B21\"}\n{\"value\": \"22.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B21\"}\n{\"value\": \"19.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B21\"}\n{\"value\": \"431\", \"type\": \"Other\"}\n{\"value\": \"53.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"B23\"}\n{\"value\": \"24.2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"B23\"}\n{\"value\": \"22.2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"B23\"}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
3 <cite>Goldberg (2019)</cite> showed that in his experiments the base BERT model performed better than the larger model, so we restrict our attention to the base model. | ["Uses"] | science.multicite_intent_classification.521 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\n3 <cite>Goldberg (2019)</cite> showed that in his experiments the base BERT model performed better than the larger model, so we restrict our attention to the base model.",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
]
|
science.discomat_te | | | SiO2 | PbO | K2O | BaO | ZrO2 |
| Wt% | 28 (<1%) | 66 (<8%) | 0.3 (<2%) | 4.1 (<5%) | 1.3 (<6%) |
| Mol% | 58 | 36.8 | 0.4 | 3.4 | 1.4 |
Caption: Glass composition (uncertainty of the measure - Analysis SARM, CRPG Nancy)
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "28", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"}
{"value": "66", "type": "Glass_Compound_Amount", "constituent": "PbO", "unit": "wt", "material": "xx"}
{"value": "0.3", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "wt", "material": "xx"}
{"value": "4.1", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "wt", "material": "xx"}
{"value": "1.3", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "wt", "material": "xx"}
{"value": "58", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "xx"}
{"value": "36.8", "type": "Glass_Compound_Amount", "constituent": "PbO", "unit": "mol", "material": "xx"}
{"value": "0.4", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "mol", "material": "xx"}
{"value": "3.4", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "xx"}
{"value": "1.4", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "mol", "material": "xx"} | science.discomat_te.84 | [
{
"content": "| | SiO2 | PbO | K2O | BaO | ZrO2 |\n| Wt% | 28 (<1%) | 66 (<8%) | 0.3 (<2%) | 4.1 (<5%) | 1.3 (<6%) |\n| Mol% | 58 | 36.8 | 0.4 | 3.4 | 1.4 |\n\nCaption: Glass composition (uncertainty of the measure - Analysis SARM, CRPG Nancy)\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"28\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"66\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbO\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"0.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"K2O\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"4.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"1.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZrO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"58\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"36.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"K2O\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"3.4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"1.4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZrO2\", \"unit\": \"mol\", \"material\": \"xx\"}",
"role": "assistant"
}
]
|
science.discomat_te | | Samples | Measured density (g/cm3) (+-0.001) | Vickers hardness (GPa) (+-0.05) | Measured half crack length (mm) (+-0.1) | Fracture toughness (MPam1/2) (+-0.05) | Brittleness (mm-1/2) (+-0.2) |
| CPAS-0 | 2.842 | 3.83 | 194 | 0.50 | 7.7 |
| CPAS-10 | 2.856 | 3.92 | 187 | 0.52 | 7.5 |
| CPAS-20 | 2.871 | 4.08 | 179 | 0.55 | 7.4 |
| CPAS-30 | 2.881 | 4.17 | 171 | 0.58 | 7.2 |
| CPAS-40 | 2.886 | 4.25 | 164 | 0.61 | 6.9 |
| CPAS-50 | 2.891 | 4.38 | 159 | 0.63 | 6.8 |
Caption: Variation of measured density, Vickers hardness, fracture toughness and brittleness of CPAS glass samples with duration of heat treatment.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "2.842", "type": "Other"}
{"value": "3.83", "type": "Other"}
{"value": "194", "type": "Other"}
{"value": "0.50", "type": "Other"}
{"value": "7.7", "type": "Other"}
{"value": "2.856", "type": "Other"}
{"value": "3.92", "type": "Other"}
{"value": "187", "type": "Other"}
{"value": "0.52", "type": "Other"}
{"value": "7.5", "type": "Other"}
{"value": "2.871", "type": "Other"}
{"value": "4.08", "type": "Other"}
{"value": "179", "type": "Other"}
{"value": "0.55", "type": "Other"}
{"value": "7.4", "type": "Other"}
{"value": "2.881", "type": "Other"}
{"value": "4.17", "type": "Other"}
{"value": "171", "type": "Other"}
{"value": "0.58", "type": "Other"}
{"value": "7.2", "type": "Other"}
{"value": "2.886", "type": "Other"}
{"value": "4.25", "type": "Other"}
{"value": "164", "type": "Other"}
{"value": "0.61", "type": "Other"}
{"value": "6.9", "type": "Other"}
{"value": "2.891", "type": "Other"}
{"value": "4.38", "type": "Other"}
{"value": "159", "type": "Other"}
{"value": "0.63", "type": "Other"}
{"value": "6.8", "type": "Other"} | science.discomat_te.202 | [
{
"content": "| Samples | Measured density (g/cm3) (+-0.001) | Vickers hardness (GPa) (+-0.05) | Measured half crack length (mm) (+-0.1) | Fracture toughness (MPam1/2) (+-0.05) | Brittleness (mm-1/2) (+-0.2) |\n| CPAS-0 | 2.842 | 3.83 | 194 | 0.50 | 7.7 |\n| CPAS-10 | 2.856 | 3.92 | 187 | 0.52 | 7.5 |\n| CPAS-20 | 2.871 | 4.08 | 179 | 0.55 | 7.4 |\n| CPAS-30 | 2.881 | 4.17 | 171 | 0.58 | 7.2 |\n| CPAS-40 | 2.886 | 4.25 | 164 | 0.61 | 6.9 |\n| CPAS-50 | 2.891 | 4.38 | 159 | 0.63 | 6.8 |\n\nCaption: Variation of measured density, Vickers hardness, fracture toughness and brittleness of CPAS glass samples with duration of heat treatment.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"2.842\", \"type\": \"Other\"}\n{\"value\": \"3.83\", \"type\": \"Other\"}\n{\"value\": \"194\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"7.7\", \"type\": \"Other\"}\n{\"value\": \"2.856\", \"type\": \"Other\"}\n{\"value\": \"3.92\", \"type\": \"Other\"}\n{\"value\": \"187\", \"type\": \"Other\"}\n{\"value\": \"0.52\", \"type\": \"Other\"}\n{\"value\": \"7.5\", \"type\": \"Other\"}\n{\"value\": \"2.871\", \"type\": \"Other\"}\n{\"value\": \"4.08\", \"type\": \"Other\"}\n{\"value\": \"179\", \"type\": \"Other\"}\n{\"value\": \"0.55\", \"type\": \"Other\"}\n{\"value\": \"7.4\", \"type\": \"Other\"}\n{\"value\": \"2.881\", \"type\": \"Other\"}\n{\"value\": \"4.17\", \"type\": \"Other\"}\n{\"value\": \"171\", \"type\": \"Other\"}\n{\"value\": \"0.58\", \"type\": \"Other\"}\n{\"value\": \"7.2\", \"type\": \"Other\"}\n{\"value\": \"2.886\", \"type\": \"Other\"}\n{\"value\": \"4.25\", \"type\": \"Other\"}\n{\"value\": \"164\", \"type\": \"Other\"}\n{\"value\": \"0.61\", \"type\": \"Other\"}\n{\"value\": \"6.9\", \"type\": \"Other\"}\n{\"value\": \"2.891\", \"type\": \"Other\"}\n{\"value\": \"4.38\", \"type\": \"Other\"}\n{\"value\": \"159\", \"type\": \"Other\"}\n{\"value\": \"0.63\", \"type\": \"Other\"}\n{\"value\": \"6.8\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Attenuation of methamphetamine-induced nigrostriatal dopaminergic neurotoxicity in mice by lipopolysaccharide pretreatment. Immunological activation has been proposed to play a role in methamphetamine-induced dopaminergic terminal damage. In this study, we examined the roles of lipopolysaccharide, a pro-inflammatory and inflammatory factor, treatment in modulating the methamphetamine-induced nigrostriatal dopamine neurotoxicity. Lipopolysaccharide pretreatment did not affect the basal body temperature or methamphetamine-elicited hyperthermia three days later. Such systemic lipopolysaccharide treatment mitigated methamphetamine-induced striatal dopamine and 3,4-dihydroxyphenylacetic acid depletions in a dose-dependent manner. As the most potent dose (1 mg/kg) of lipopolysaccharide was administered two weeks, one day before or after the methamphetamine dosing regimen, methamphetamine-induced striatal dopamine and 3,4-dihydroxyphenylacetic acid depletions remained unaltered. Moreover, systemic lipopolysaccharide pretreatment (1 mg/kg) attenuated local methamphetamine infusion-produced dopamine and 3,4-dihydroxyphenylacetic acid depletions in the striatum, indicating that the protective effect of lipopolysaccharide is less likely due to interrupted peripheral distribution or metabolism of methamphetamine. We concluded a critical time window for systemic lipopolysaccharide pretreatment in exerting effective protection against methamphetamine-induced nigrostriatal dopamine neurotoxicity. | {"CellLine": [], "Chemical": ["Lipopolysaccharide", "dopaminergic", "lipopolysaccharide", "dopamine", "3,4-dihydroxyphenylacetic acid", "methamphetamine"], "Disease": ["hyperthermia", "inflammatory", "dopaminergic terminal damage", "neurotoxicity"], "Gene": [], "Species": ["mice"], "Variant": []} | science.biored_ner.88 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nAttenuation of methamphetamine-induced nigrostriatal dopaminergic neurotoxicity in mice by lipopolysaccharide pretreatment. Immunological activation has been proposed to play a role in methamphetamine-induced dopaminergic terminal damage. In this study, we examined the roles of lipopolysaccharide, a pro-inflammatory and inflammatory factor, treatment in modulating the methamphetamine-induced nigrostriatal dopamine neurotoxicity. Lipopolysaccharide pretreatment did not affect the basal body temperature or methamphetamine-elicited hyperthermia three days later. Such systemic lipopolysaccharide treatment mitigated methamphetamine-induced striatal dopamine and 3,4-dihydroxyphenylacetic acid depletions in a dose-dependent manner. As the most potent dose (1 mg/kg) of lipopolysaccharide was administered two weeks, one day before or after the methamphetamine dosing regimen, methamphetamine-induced striatal dopamine and 3,4-dihydroxyphenylacetic acid depletions remained unaltered. Moreover, systemic lipopolysaccharide pretreatment (1 mg/kg) attenuated local methamphetamine infusion-produced dopamine and 3,4-dihydroxyphenylacetic acid depletions in the striatum, indicating that the protective effect of lipopolysaccharide is less likely due to interrupted peripheral distribution or metabolism of methamphetamine. We concluded a critical time window for systemic lipopolysaccharide pretreatment in exerting effective protection against methamphetamine-induced nigrostriatal dopamine neurotoxicity.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"Lipopolysaccharide\", \"dopaminergic\", \"lipopolysaccharide\", \"dopamine\", \"3,4-dihydroxyphenylacetic acid\", \"methamphetamine\"], \"Disease\": [\"hyperthermia\", \"inflammatory\", \"dopaminergic terminal damage\", \"neurotoxicity\"], \"Gene\": [], \"Species\": [\"mice\"], \"Variant\": []}",
"role": "assistant"
}
]
|
science.discomat_te | | Oxide | Glass | Glass |
| Oxide | A | F |
| SiO2 | 60 | 55.5 |
| Al2O3 | 14 | 10.2 |
| MgO | 10 | 9.3 |
| CaO | 16 | 14.8 |
| TiO2 | - | 3.7 |
| B2O3 | - | 2.8 |
| K2O | - | 2.8 |
| Na2O | - | 0.9 |
Caption: Compositions of glasses (wt%)
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "60", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "A"}
{"value": "55.5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "F"}
{"value": "14", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "A"}
{"value": "10.2", "type": "Other"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "A"}
{"value": "9.3", "type": "Other"}
{"value": "16", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "wt", "material": "A"}
{"value": "14.8", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "wt", "material": "F"}
{"value": "3.7", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "wt", "material": "F"}
{"value": "2.8", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "F"}
{"value": "2.8", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "wt", "material": "F"}
{"value": "0.9", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "wt", "material": "F"} | science.discomat_te.46 | [
{
"content": "| Oxide | Glass | Glass |\n| Oxide | A | F |\n| SiO2 | 60 | 55.5 |\n| Al2O3 | 14 | 10.2 |\n| MgO | 10 | 9.3 |\n| CaO | 16 | 14.8 |\n| TiO2 | - | 3.7 |\n| B2O3 | - | 2.8 |\n| K2O | - | 2.8 |\n| Na2O | - | 0.9 |\n\nCaption: Compositions of glasses (wt%)\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"55.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"F\"}\n{\"value\": \"14\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"10.2\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"9.3\", \"type\": \"Other\"}\n{\"value\": \"16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"14.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"wt\", \"material\": \"F\"}\n{\"value\": \"3.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TiO2\", \"unit\": \"wt\", \"material\": \"F\"}\n{\"value\": \"2.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"F\"}\n{\"value\": \"2.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"K2O\", \"unit\": \"wt\", \"material\": \"F\"}\n{\"value\": \"0.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"wt\", \"material\": \"F\"}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND AND OBJECTIVES: Bilastine is a novel second-generation antihistamine for the symptomatic treatment of allergic rhinitis and urticaria. The objective of this study was to evaluate the pharmacokinetics, pharmacodynamics, and tolerability of bilastine following single and multiple oral doses in healthy Japanese subjects. The pharmacokinetic and pharmacodynamic profiles were compared with those reported in Caucasian subjects.
ABSTRACT.METHODS: In a single-blind, randomized, placebo-controlled, parallel-group, single- and multiple-ascending dose study, bilastine tablets were administered at single doses of 10, 20, and 50 mg (Part I), and once daily for 14 days at 20 and 50 mg (Part II).
ABSTRACT.RESULTS: After single oral doses, maximum plasma concentrations (C
max) were reached at 1.0–1.5 h postdose. Plasma exposure [C
max and area under the plasma concentration-time curve (AUC)] increased dose-proportionally at single doses of 10–50 mg. In repeated-dose administration, no remarkable differences were observed between Day 1 and Day 14 for C
max or AUC. For inhibitory effects on wheal and flare response, bilastine 20 and 50 mg showed significant inhibition from 1.5 h after administration as compared with placebo, and the significant effect persisted for 24 h after administration. The rates of adverse events (AEs) were comparable between bilastine and placebo in both Part I and Part II. In addition, no dose- or administration period-dependent tendency of increase in rate of AEs or worsening of severity was observed.
ABSTRACT.CONCLUSION: Bilastine exhibits similar single- and multiple-dose pharmacokinetic and pharmacodynamic characteristics in healthy Japanese subjects compared with those observed in Caucasian subjects in previous studies.
ABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s40261-016-0447-2) contains supplementary material, which is available to authorized users.
Extracted Data: | [["Bilastine 10 mg", "Placebo", "Adverse effects", "no significant difference", "The rates of adverse events (AEs) were comparable between bilastine and placebo in both Part I and Part II."]] | science.evidence_inference.75 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND AND OBJECTIVES: Bilastine is a novel second-generation antihistamine for the symptomatic treatment of allergic rhinitis and urticaria. The objective of this study was to evaluate the pharmacokinetics, pharmacodynamics, and tolerability of bilastine following single and multiple oral doses in healthy Japanese subjects. The pharmacokinetic and pharmacodynamic profiles were compared with those reported in Caucasian subjects.\n\nABSTRACT.METHODS: In a single-blind, randomized, placebo-controlled, parallel-group, single- and multiple-ascending dose study, bilastine tablets were administered at single doses of 10, 20, and 50 mg (Part I), and once daily for 14 days at 20 and 50 mg (Part II).\n\nABSTRACT.RESULTS: After single oral doses, maximum plasma concentrations (C\nmax) were reached at 1.0–1.5 h postdose. Plasma exposure [C\nmax and area under the plasma concentration-time curve (AUC)] increased dose-proportionally at single doses of 10–50 mg. In repeated-dose administration, no remarkable differences were observed between Day 1 and Day 14 for C\nmax or AUC. For inhibitory effects on wheal and flare response, bilastine 20 and 50 mg showed significant inhibition from 1.5 h after administration as compared with placebo, and the significant effect persisted for 24 h after administration. The rates of adverse events (AEs) were comparable between bilastine and placebo in both Part I and Part II. In addition, no dose- or administration period-dependent tendency of increase in rate of AEs or worsening of severity was observed.\n\nABSTRACT.CONCLUSION: Bilastine exhibits similar single- and multiple-dose pharmacokinetic and pharmacodynamic characteristics in healthy Japanese subjects compared with those observed in Caucasian subjects in previous studies.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s40261-016-0447-2) contains supplementary material, which is available to authorized users.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Bilastine 10 mg\", \"Placebo\", \"Adverse effects\", \"no significant difference\", \"The rates of adverse events (AEs) were comparable between bilastine and placebo in both Part I and Part II.\"]]",
"role": "assistant"
}
]
|
science.discomat_te | | Nominal | Nominal | Actual (from ICP) | Actual (from ICP) |
| CuO | TeO2 | CuO | TeO2 |
| 0.10 | 0.90 | 0.092 | 0.908 |
| 0.20 | 0.80 | 0.169 | 0.831 |
| 0.30 | 0.70 | 0.272 | 0.728 |
| 0.40 | 0.60 | 0.357 | 0.643 |
| 0.50 | 0.50 | 0.491 | 0.509 |
Caption: Nominal and actual composition (molar fraction) of various tellurite glasses containing CuO
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.10", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.90", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.092", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.908", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.20", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.80", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.169", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.831", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.30", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.70", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.272", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.728", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.40", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.60", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.357", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.643", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.50", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.50", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"}
{"value": "0.491", "type": "Glass_Compound_Amount", "constituent": "CuO", "unit": "mol", "material": "xx"}
{"value": "0.509", "type": "Glass_Compound_Amount", "constituent": "TeO2", "unit": "mol", "material": "xx"} | science.discomat_te.394 | [
{
"content": "| Nominal | Nominal | Actual (from ICP) | Actual (from ICP) |\n| CuO | TeO2 | CuO | TeO2 |\n| 0.10 | 0.90 | 0.092 | 0.908 |\n| 0.20 | 0.80 | 0.169 | 0.831 |\n| 0.30 | 0.70 | 0.272 | 0.728 |\n| 0.40 | 0.60 | 0.357 | 0.643 |\n| 0.50 | 0.50 | 0.491 | 0.509 |\n\nCaption: Nominal and actual composition (molar fraction) of various tellurite glasses containing CuO\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.90\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.092\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.908\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.80\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.169\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.831\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.70\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.272\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.728\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.357\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.643\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.491\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuO\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.509\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"TeO2\", \"unit\": \"mol\", \"material\": \"xx\"}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Epidemiological disease burden from noncommunicable diseases is more prevalent in high economic settings.
Research Abstract:
BACKGROUND The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 provides an up-to-date synthesis of the evidence for risk factor exposure and the attributable burden of disease.
By providing national and subnational assessments spanning the past 25 years, this study can inform debates on the importance of addressing risks in context.
METHODS We used the comparative risk assessment framework developed for previous iterations of the Global Burden of Disease Study to estimate attributable deaths, disability-adjusted life-years (DALYs), and trends in exposure by age group, sex, year, and geography for 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks from 1990 to 2015.
This study included 388 risk-outcome pairs that met World Cancer Research Fund-defined criteria for convincing or probable evidence.
We extracted relative risk and exposure estimates from randomised controlled trials, cohorts, pooled cohorts, household surveys, census data, satellite data, and other sources.
We used statistical models to pool data, adjust for bias, and incorporate covariates.
We developed a metric that allows comparisons of exposure across risk factors-the summary exposure value.
Using the counterfactual scenario of theoretical minimum risk level, we estimated the portion of deaths and DALYs that could be attributed to a given risk.
We decomposed trends in attributable burden into contributions from population growth, population age structure, risk exposure, and risk-deleted cause-specific DALY rates.
We characterised risk exposure in relation to a Socio-demographic Index (SDI).
FINDINGS Between 1990 and 2015, global exposure to unsafe sanitation, household air pollution, childhood underweight, childhood stunting, and smoking each decreased by more than 25%.
Global exposure for several occupational risks, high body-mass index (BMI), and drug use increased by more than 25% over the same period.
All risks jointly evaluated in 2015 accounted for 57·8% (95% CI 56·6-58·8) of global deaths and 41·2% (39·8-42·8) of DALYs.
In 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211·8 million [192·7 million to 231·1 million] global DALYs), smoking (148·6 million [134·2 million to 163·1 million]), high fasting plasma glucose (143·1 million [125·1 million to 163·5 million]), high BMI (120·1 million [83·8 million to 158·4 million]), childhood undernutrition (113·3 million [103·9 million to 123·4 million]), ambient particulate matter (103·1 million [90·8 million to 115·1 million]), high total cholesterol (88·7 million [74·6 million to 105·7 million]), household air pollution (85·6 million [66·7 million to 106·1 million]), alcohol use (85·0 million [77·2 million to 93·0 million]), and diets high in sodium (83·0 million [49·3 million to 127·5 million]).
From 1990 to 2015, attributable DALYs declined for micronutrient deficiencies, childhood undernutrition, unsafe sanitation and water, and household air pollution; reductions in risk-deleted DALY rates rather than reductions in exposure drove these declines.
Rising exposure contributed to notable increases in attributable DALYs from high BMI, high fasting plasma glucose, occupational carcinogens, and drug use.
Environmental risks and childhood undernutrition declined steadily with SDI; low physical activity, high BMI, and high fasting plasma glucose increased with SDI.
In 119 countries, metabolic risks, such as high BMI and fasting plasma glucose, contributed the most attributable DALYs in 2015.
Regionally, smoking still ranked among the leading five risk factors for attributable DALYs in 109 countries; childhood underweight and unsafe sex remained primary drivers of early death and disability in much of sub-Saharan Africa.
INTERPRETATION Declines in some key environmental risks have contributed to declines in critical infectious diseases.
Some risks appear to be invariant to SDI.
Increasing risks, including high BMI, high fasting plasma glucose, drug use, and some occupational exposures, contribute to rising burden from some conditions, but also provide opportunities for intervention.
Some highly preventable risks, such as smoking, remain major causes of attributable DALYs, even as exposure is declining.
Public policy makers need to pay attention to the risks that are increasingly major contributors to global burden.
FUNDING Bill & Melinda Gates Foundation. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.257 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nEpidemiological disease burden from noncommunicable diseases is more prevalent in high economic settings.\n\nResearch Abstract:\nBACKGROUND The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 provides an up-to-date synthesis of the evidence for risk factor exposure and the attributable burden of disease.\nBy providing national and subnational assessments spanning the past 25 years, this study can inform debates on the importance of addressing risks in context.\nMETHODS We used the comparative risk assessment framework developed for previous iterations of the Global Burden of Disease Study to estimate attributable deaths, disability-adjusted life-years (DALYs), and trends in exposure by age group, sex, year, and geography for 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks from 1990 to 2015.\nThis study included 388 risk-outcome pairs that met World Cancer Research Fund-defined criteria for convincing or probable evidence.\nWe extracted relative risk and exposure estimates from randomised controlled trials, cohorts, pooled cohorts, household surveys, census data, satellite data, and other sources.\nWe used statistical models to pool data, adjust for bias, and incorporate covariates.\nWe developed a metric that allows comparisons of exposure across risk factors-the summary exposure value.\nUsing the counterfactual scenario of theoretical minimum risk level, we estimated the portion of deaths and DALYs that could be attributed to a given risk.\nWe decomposed trends in attributable burden into contributions from population growth, population age structure, risk exposure, and risk-deleted cause-specific DALY rates.\nWe characterised risk exposure in relation to a Socio-demographic Index (SDI).\nFINDINGS Between 1990 and 2015, global exposure to unsafe sanitation, household air pollution, childhood underweight, childhood stunting, and smoking each decreased by more than 25%.\nGlobal exposure for several occupational risks, high body-mass index (BMI), and drug use increased by more than 25% over the same period.\nAll risks jointly evaluated in 2015 accounted for 57·8% (95% CI 56·6-58·8) of global deaths and 41·2% (39·8-42·8) of DALYs.\nIn 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211·8 million [192·7 million to 231·1 million] global DALYs), smoking (148·6 million [134·2 million to 163·1 million]), high fasting plasma glucose (143·1 million [125·1 million to 163·5 million]), high BMI (120·1 million [83·8 million to 158·4 million]), childhood undernutrition (113·3 million [103·9 million to 123·4 million]), ambient particulate matter (103·1 million [90·8 million to 115·1 million]), high total cholesterol (88·7 million [74·6 million to 105·7 million]), household air pollution (85·6 million [66·7 million to 106·1 million]), alcohol use (85·0 million [77·2 million to 93·0 million]), and diets high in sodium (83·0 million [49·3 million to 127·5 million]).\nFrom 1990 to 2015, attributable DALYs declined for micronutrient deficiencies, childhood undernutrition, unsafe sanitation and water, and household air pollution; reductions in risk-deleted DALY rates rather than reductions in exposure drove these declines.\nRising exposure contributed to notable increases in attributable DALYs from high BMI, high fasting plasma glucose, occupational carcinogens, and drug use.\nEnvironmental risks and childhood undernutrition declined steadily with SDI; low physical activity, high BMI, and high fasting plasma glucose increased with SDI.\nIn 119 countries, metabolic risks, such as high BMI and fasting plasma glucose, contributed the most attributable DALYs in 2015.\nRegionally, smoking still ranked among the leading five risk factors for attributable DALYs in 109 countries; childhood underweight and unsafe sex remained primary drivers of early death and disability in much of sub-Saharan Africa.\nINTERPRETATION Declines in some key environmental risks have contributed to declines in critical infectious diseases.\nSome risks appear to be invariant to SDI.\nIncreasing risks, including high BMI, high fasting plasma glucose, drug use, and some occupational exposures, contribute to rising burden from some conditions, but also provide opportunities for intervention.\nSome highly preventable risks, such as smoking, remain major causes of attributable DALYs, even as exposure is declining.\nPublic policy makers need to pay attention to the risks that are increasingly major contributors to global burden.\nFUNDING Bill & Melinda Gates Foundation.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.discomat_te | | No. | t 1, ns | t 2, ns | I2, a.u. | Fit | t B, ns | t - , ns | k d, ns-1 |
| 1 | 0.24 | 0.34 | 0.55 | 1.227 | 0.289 | 0.298 | 0.68 |
| 2 | 0.22 | 0.35 | 0.12 | 1.112 | 0.233 | 0.239 | 0.20 |
| 3 | 0.22 | 0.37 | 0.54 | 0.990 | 0.282 | 0.303 | 1.00 |
| 4 | 0.22 | 0.37 | 0.54 | 0.988 | 0.282 | 0.304 | 1.02 |
| 5 | 0.22 | 0.37 | 0.54 | 0.986 | 0.282 | 0.303 | 1.04 |
| 6 | 0.22 | 0.37 | 0.54 | 0.984 | 0.282 | 0.303 | 1.06 |
| 7 | 0.22 | 0.37 | 0.54 | 0.980 | 0.281 | 0.303 | 1.09 |
| 8 | 0.22 | 0.37 | 0.54 | 1.023 | 0.274 | 0.294 | 1.00 |
| 9 | 0.21 | 0.37 | 0.58 | 1.006 | 0.278 | 0.302 | 1.27 |
| 10 | 0.21 | 0.37 | 0.59 | 0.933 | 0.278 | 0.303 | 1.29 |
| 11 | 0.20 | 0.37 | 0.60 | 1.004 | 0.276 | 0.302 | 1.37 |
| 12 | 0.20 | 0.37 | 0.60 | 1.004 | 0.275 | 0.302 | 1.39 |
| 13 | 0.20 | 0.37 | 0.60 | 1.004 | 0.275 | 0.302 | 1.41 |
| 14 | 0.20 | 0.37 | 0.60 | 1.037 | 0.272 | 0.300 | 1.46 |
| 15 | 0.20 | 0.37 | 0.60 | 1.003 | 0.274 | 0.302 | 1.48 |
| 16 | 0.19 | 0.37 | 0.62 | 1.002 | 0.271 | 0.300 | 1.58 |
Caption: Two-component fitting parameters of LT computer program [11], describing positron annihilation in g-As2Se3 within two-state positron trapping model
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "1", "type": "Other"}
{"value": "0.24", "type": "Other"}
{"value": "0.34", "type": "Other"}
{"value": "0.55", "type": "Other"}
{"value": "1.227", "type": "Other"}
{"value": "0.289", "type": "Other"}
{"value": "0.298", "type": "Other"}
{"value": "0.68", "type": "Other"}
{"value": "2", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.35", "type": "Other"}
{"value": "0.12", "type": "Other"}
{"value": "1.112", "type": "Other"}
{"value": "0.233", "type": "Other"}
{"value": "0.239", "type": "Other"}
{"value": "0.20", "type": "Other"}
{"value": "3", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "0.990", "type": "Other"}
{"value": "0.282", "type": "Other"}
{"value": "0.303", "type": "Other"}
{"value": "1.00", "type": "Other"}
{"value": "4", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "0.988", "type": "Other"}
{"value": "0.282", "type": "Other"}
{"value": "0.304", "type": "Other"}
{"value": "1.02", "type": "Other"}
{"value": "5", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "0.986", "type": "Other"}
{"value": "0.282", "type": "Other"}
{"value": "0.303", "type": "Other"}
{"value": "1.04", "type": "Other"}
{"value": "6", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "0.984", "type": "Other"}
{"value": "0.282", "type": "Other"}
{"value": "0.303", "type": "Other"}
{"value": "1.06", "type": "Other"}
{"value": "7", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "0.980", "type": "Other"}
{"value": "0.281", "type": "Other"}
{"value": "0.303", "type": "Other"}
{"value": "1.09", "type": "Other"}
{"value": "8", "type": "Other"}
{"value": "0.22", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "1.023", "type": "Other"}
{"value": "0.274", "type": "Other"}
{"value": "0.294", "type": "Other"}
{"value": "1.00", "type": "Other"}
{"value": "9", "type": "Other"}
{"value": "0.21", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.58", "type": "Other"}
{"value": "1.006", "type": "Other"}
{"value": "0.278", "type": "Other"}
{"value": "0.302", "type": "Other"}
{"value": "1.27", "type": "Other"}
{"value": "10", "type": "Other"}
{"value": "0.21", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.59", "type": "Other"}
{"value": "0.933", "type": "Other"}
{"value": "0.278", "type": "Other"}
{"value": "0.303", "type": "Other"}
{"value": "1.29", "type": "Other"}
{"value": "11", "type": "Other"}
{"value": "0.20", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.60", "type": "Other"}
{"value": "1.004", "type": "Other"}
{"value": "0.276", "type": "Other"}
{"value": "0.302", "type": "Other"}
{"value": "1.37", "type": "Other"}
{"value": "12", "type": "Other"}
{"value": "0.20", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.60", "type": "Other"}
{"value": "1.004", "type": "Other"}
{"value": "0.275", "type": "Other"}
{"value": "0.302", "type": "Other"}
{"value": "1.39", "type": "Other"}
{"value": "13", "type": "Other"}
{"value": "0.20", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.60", "type": "Other"}
{"value": "1.004", "type": "Other"}
{"value": "0.275", "type": "Other"}
{"value": "0.302", "type": "Other"}
{"value": "1.41", "type": "Other"}
{"value": "14", "type": "Other"}
{"value": "0.20", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.60", "type": "Other"}
{"value": "1.037", "type": "Other"}
{"value": "0.272", "type": "Other"}
{"value": "0.300", "type": "Other"}
{"value": "1.46", "type": "Other"}
{"value": "15", "type": "Other"}
{"value": "0.20", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.60", "type": "Other"}
{"value": "1.003", "type": "Other"}
{"value": "0.274", "type": "Other"}
{"value": "0.302", "type": "Other"}
{"value": "1.48", "type": "Other"}
{"value": "16", "type": "Other"}
{"value": "0.19", "type": "Other"}
{"value": "0.37", "type": "Other"}
{"value": "0.62", "type": "Other"}
{"value": "1.002", "type": "Other"}
{"value": "0.271", "type": "Other"}
{"value": "0.300", "type": "Other"}
{"value": "1.58", "type": "Other"} | science.discomat_te.79 | [
{
"content": "| No. | t 1, ns | t 2, ns | I2, a.u. | Fit | t B, ns | t - , ns | k d, ns-1 |\n| 1 | 0.24 | 0.34 | 0.55 | 1.227 | 0.289 | 0.298 | 0.68 |\n| 2 | 0.22 | 0.35 | 0.12 | 1.112 | 0.233 | 0.239 | 0.20 |\n| 3 | 0.22 | 0.37 | 0.54 | 0.990 | 0.282 | 0.303 | 1.00 |\n| 4 | 0.22 | 0.37 | 0.54 | 0.988 | 0.282 | 0.304 | 1.02 |\n| 5 | 0.22 | 0.37 | 0.54 | 0.986 | 0.282 | 0.303 | 1.04 |\n| 6 | 0.22 | 0.37 | 0.54 | 0.984 | 0.282 | 0.303 | 1.06 |\n| 7 | 0.22 | 0.37 | 0.54 | 0.980 | 0.281 | 0.303 | 1.09 |\n| 8 | 0.22 | 0.37 | 0.54 | 1.023 | 0.274 | 0.294 | 1.00 |\n| 9 | 0.21 | 0.37 | 0.58 | 1.006 | 0.278 | 0.302 | 1.27 |\n| 10 | 0.21 | 0.37 | 0.59 | 0.933 | 0.278 | 0.303 | 1.29 |\n| 11 | 0.20 | 0.37 | 0.60 | 1.004 | 0.276 | 0.302 | 1.37 |\n| 12 | 0.20 | 0.37 | 0.60 | 1.004 | 0.275 | 0.302 | 1.39 |\n| 13 | 0.20 | 0.37 | 0.60 | 1.004 | 0.275 | 0.302 | 1.41 |\n| 14 | 0.20 | 0.37 | 0.60 | 1.037 | 0.272 | 0.300 | 1.46 |\n| 15 | 0.20 | 0.37 | 0.60 | 1.003 | 0.274 | 0.302 | 1.48 |\n| 16 | 0.19 | 0.37 | 0.62 | 1.002 | 0.271 | 0.300 | 1.58 |\n\nCaption: Two-component fitting parameters of LT computer program [11], describing positron annihilation in g-As2Se3 within two-state positron trapping model\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"0.24\", \"type\": \"Other\"}\n{\"value\": \"0.34\", \"type\": \"Other\"}\n{\"value\": \"0.55\", \"type\": \"Other\"}\n{\"value\": \"1.227\", \"type\": \"Other\"}\n{\"value\": \"0.289\", \"type\": \"Other\"}\n{\"value\": \"0.298\", \"type\": \"Other\"}\n{\"value\": \"0.68\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.35\", \"type\": \"Other\"}\n{\"value\": \"0.12\", \"type\": \"Other\"}\n{\"value\": \"1.112\", \"type\": \"Other\"}\n{\"value\": \"0.233\", \"type\": \"Other\"}\n{\"value\": \"0.239\", \"type\": \"Other\"}\n{\"value\": \"0.20\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"0.990\", \"type\": \"Other\"}\n{\"value\": \"0.282\", \"type\": \"Other\"}\n{\"value\": \"0.303\", \"type\": \"Other\"}\n{\"value\": \"1.00\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"0.988\", \"type\": \"Other\"}\n{\"value\": \"0.282\", \"type\": \"Other\"}\n{\"value\": \"0.304\", \"type\": \"Other\"}\n{\"value\": \"1.02\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"0.986\", \"type\": \"Other\"}\n{\"value\": \"0.282\", \"type\": \"Other\"}\n{\"value\": \"0.303\", \"type\": \"Other\"}\n{\"value\": \"1.04\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"0.984\", \"type\": \"Other\"}\n{\"value\": \"0.282\", \"type\": \"Other\"}\n{\"value\": \"0.303\", \"type\": \"Other\"}\n{\"value\": \"1.06\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"0.980\", \"type\": \"Other\"}\n{\"value\": \"0.281\", \"type\": \"Other\"}\n{\"value\": \"0.303\", \"type\": \"Other\"}\n{\"value\": \"1.09\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"0.22\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"1.023\", \"type\": \"Other\"}\n{\"value\": \"0.274\", \"type\": \"Other\"}\n{\"value\": \"0.294\", \"type\": \"Other\"}\n{\"value\": \"1.00\", \"type\": \"Other\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"0.21\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.58\", \"type\": \"Other\"}\n{\"value\": \"1.006\", \"type\": \"Other\"}\n{\"value\": \"0.278\", \"type\": \"Other\"}\n{\"value\": \"0.302\", \"type\": \"Other\"}\n{\"value\": \"1.27\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"0.21\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.59\", \"type\": \"Other\"}\n{\"value\": \"0.933\", \"type\": \"Other\"}\n{\"value\": \"0.278\", \"type\": \"Other\"}\n{\"value\": \"0.303\", \"type\": \"Other\"}\n{\"value\": \"1.29\", \"type\": \"Other\"}\n{\"value\": \"11\", \"type\": \"Other\"}\n{\"value\": \"0.20\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.60\", \"type\": \"Other\"}\n{\"value\": \"1.004\", \"type\": \"Other\"}\n{\"value\": \"0.276\", \"type\": \"Other\"}\n{\"value\": \"0.302\", \"type\": \"Other\"}\n{\"value\": \"1.37\", \"type\": \"Other\"}\n{\"value\": \"12\", \"type\": \"Other\"}\n{\"value\": \"0.20\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.60\", \"type\": \"Other\"}\n{\"value\": \"1.004\", \"type\": \"Other\"}\n{\"value\": \"0.275\", \"type\": \"Other\"}\n{\"value\": \"0.302\", \"type\": \"Other\"}\n{\"value\": \"1.39\", \"type\": \"Other\"}\n{\"value\": \"13\", \"type\": \"Other\"}\n{\"value\": \"0.20\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.60\", \"type\": \"Other\"}\n{\"value\": \"1.004\", \"type\": \"Other\"}\n{\"value\": \"0.275\", \"type\": \"Other\"}\n{\"value\": \"0.302\", \"type\": \"Other\"}\n{\"value\": \"1.41\", \"type\": \"Other\"}\n{\"value\": \"14\", \"type\": \"Other\"}\n{\"value\": \"0.20\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.60\", \"type\": \"Other\"}\n{\"value\": \"1.037\", \"type\": \"Other\"}\n{\"value\": \"0.272\", \"type\": \"Other\"}\n{\"value\": \"0.300\", \"type\": \"Other\"}\n{\"value\": \"1.46\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"0.20\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.60\", \"type\": \"Other\"}\n{\"value\": \"1.003\", \"type\": \"Other\"}\n{\"value\": \"0.274\", \"type\": \"Other\"}\n{\"value\": \"0.302\", \"type\": \"Other\"}\n{\"value\": \"1.48\", \"type\": \"Other\"}\n{\"value\": \"16\", \"type\": \"Other\"}\n{\"value\": \"0.19\", \"type\": \"Other\"}\n{\"value\": \"0.37\", \"type\": \"Other\"}\n{\"value\": \"0.62\", \"type\": \"Other\"}\n{\"value\": \"1.002\", \"type\": \"Other\"}\n{\"value\": \"0.271\", \"type\": \"Other\"}\n{\"value\": \"0.300\", \"type\": \"Other\"}\n{\"value\": \"1.58\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: Nonetheless, infliximab and adalimumab are the only biological agents that have been approved for this group of patients.
In September 2013, the European Medicines Agency approved two infliximab biosimilars for treatment of adult and paediatric inflammatory bowel disease patients,
The introduction of infliximab into clinical practice is one of the most significant advances in the care of patients who have IBD. Infliximab has become an important part of the medical armamentarium to treat extraintestinal manifestations that often are refractory to other medications and are a significant cause of morbidity in these patients. Two other TNF inhibitors recently have demonstrated efficacy in CD: certolizumab pegol and adalimumab.
Infliximab, a chimeric monoclonal antibody to tumor necrosis factor-alpha, is effective for induction and maintenance of remission of CD and UC. The role of infliximab for EIMs related to IBD has been less studied, but it is likely as effective.
The Italian Group for the Study of Inflammatory Bowel Disease (IG-IBD) has prepared clinical practice guidelines to help physicians prescribe corticosteroids and immunosuppressive drugs for these patients.
lthough the TNF inhibitor infliximab is known to improve IBD outcomes in many different ways, several questions remain regarding the optimal way to employ this drug in the clinic,
Infliximab biosimilars for the treatment of IBD have been available in Europe and Asia for a few years and are expected to become available in the United States within the next 1 to 2 years
Question: List 2 approved drug treatments for Inflammatory Bowel Disease (IBD). | ["infliximab (IFX)", "adalimumab (ADA)"] | science.bioasq_list_qa.523 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Nonetheless, infliximab and adalimumab are the only biological agents that have been approved for this group of patients.\nIn September 2013, the European Medicines Agency approved two infliximab biosimilars for treatment of adult and paediatric inflammatory bowel disease patients,\nThe introduction of infliximab into clinical practice is one of the most significant advances in the care of patients who have IBD. Infliximab has become an important part of the medical armamentarium to treat extraintestinal manifestations that often are refractory to other medications and are a significant cause of morbidity in these patients. Two other TNF inhibitors recently have demonstrated efficacy in CD: certolizumab pegol and adalimumab.\nInfliximab, a chimeric monoclonal antibody to tumor necrosis factor-alpha, is effective for induction and maintenance of remission of CD and UC. The role of infliximab for EIMs related to IBD has been less studied, but it is likely as effective.\nThe Italian Group for the Study of Inflammatory Bowel Disease (IG-IBD) has prepared clinical practice guidelines to help physicians prescribe corticosteroids and immunosuppressive drugs for these patients.\nlthough the TNF inhibitor infliximab is known to improve IBD outcomes in many different ways, several questions remain regarding the optimal way to employ this drug in the clinic,\nInfliximab biosimilars for the treatment of IBD have been available in Europe and Asia for a few years and are expected to become available in the United States within the next 1 to 2 years\n\nQuestion: List 2 approved drug treatments for Inflammatory Bowel Disease (IBD).",
"role": "user"
},
{
"content": "[\"infliximab (IFX)\", \"adalimumab (ADA)\"]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Autoregressive Quantile Flows for Predictive Uncertainty Estimation
Main Content:
1 INTRODUCTION . Reasoning about uncertainty via the language of probability is important in many application domains of machine learning , including medicine ( Saria , 2018 ) , robotics ( Chua et al. , 2018 ; Buckman et al. , 2018 ) , and operations research ( Van Roy et al. , 1997 ) . Especially important is the estimation of predictive uncertainties ( e.g. , confidence intervals around forecasts ) ; in applications such as clinical diagnosis ( Jiang et al. , 2012 ) or decision support systems ( Werling et al. , 2015 ) , estimating uncertainty can be as important as obtaining high accuracy ( Kuleshov and Liang , 2015 ) . Normalizing flows ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ; Kingma et al. , 2016 ) are a popular framework for defining probabilistic models , and can be used for density estimation ( Papamakarios et al. , 2017 ) , out-of-distribution detection ( Nalisnick et al. , 2019 ) , content generation ( Kingma and Dhariwal , 2018 ) , and more . Flows feature tractable posterior inference and maximum likelihood estimation ; however , maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . This paper takes a step towards addressing this limitation of normalizing flows by proposing new objectives that contribute towards alleviating the computational cost of calculating determinants of Jacobians . Specifically , we argue for training flows using an objective that is different from classical maximum likelihood and is instead based on proper scoring rules ( Gneiting and Raftery , 2007 ) , a standard tool in the statistics literature for evaluating the quality of probabilistic forecasts . We show that this objective can be used to train normalizing flows and that it simplifies the computation of Jacobians in certain types of flows . We introduce autoregressive quantile flows ( AQFs ) , a framework that combines the above learning objective with a set of architectural choices inspired by classical autoregressive flows . Quantile flows possess characteristics that represent an improvement over existing flow models—including supporting neural architectures that simultaneously provide fast training and sampling— in addition to the usual benefits of flows ( exact posterior inference and density estimation ) . Interestingly , quantile flows can be interpreted as extensions of quantile functions to multiple dimensions . We use AQFs as the basis for quantile flow regression ( QFR ) , an approach to predictive uncertainty estimation in which a probabilistic model directly outputs a normalizing flow as the predictive distribution . The QFR approach enables neural networks to output highly expressive probabilistic predictions that make very little assumptions on the form of the predicted variable and that improve uncertainty estimates in probabilistic and Bayesian models . In the one-dimensional case , our approach yields quantile function regression and cumulative distribution function regression , two simple , general , and principled approaches for flexible probabilistic forecasting in regression . In addition , we demonstrate the benefits of AQFs on probabilistic modeling tasks that include density estimation and autoregressive generation . Across our sets of experiments , we observe improved performance , and we demonstrate properties of quantile flows that traditional flow models do not possess ( e.g. , sampling with flexible neural parameterizations ) . Contributions . In summary , this work ( 1 ) introduces new objectives for flow models that simplify the computation of determinants of Jacobians , which in turn greatly simplifies the implementation of flow models and extends the class of models that can be used to parameterize flows . We also ( 2 ) define autoregressive quantile flows based on this objective , and highlight new architectures supported by this framework . Finally , ( 3 ) we deploy AQFs as part of quantile flow regression , and show that this approach improves upon existing methods for predictive uncertainty estimation . 2 BACKGROUND . Notation . Our goal is to learn a probabilistic model p ( y ) ∈ ∆ ( Rd ) in the space ∆ ( Rd ) of distributions over a high-dimensional y ∈ Rd ; we use yj ∈ R to denote components of y . In some cases , we have access to features x ∈ X associated with y and we want to train a forecaster H : X → ∆ ( Rd ) that outputs a predictive probability over y conditioned on x . 2.1 NORMALIZING FLOWS AND AUTOREGRESSIVE GENERATIVE MODELS . A normalizing flow defines a distribution p ( y ) via an invertible mapping fθ : Rd → Rd with parameters θ ∈ Θ that describes a transformation between y and a random variable z ∈ Rd sampled from a simple prior z ∼ p ( z ) ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ) . We may compute p ( y ) via the change of variables formula p ( y ) = ∣∣∣∂fθ ( z ) −1∂z ∣∣∣ p ( z ) , where ∣∣∣∂fθ ( z ) −1∂z ∣∣∣ denotes the determinant of the inverse Jacobian of fθ . In order to fit flow-based models using maximum likelihood , we typically choose fθ to be in a family for which the Jacobian is tractable . A common way to define flows with a tractable Jacobian is via autoregressive models of the form yj = τ ( zj ; hj ) hj = cj ( y < j ) , where τ ( zj ; hj ) is an invertible transformer , a strictly monotonic function of zj , and cj is the j-th conditioner , which outputs parameters hj for the transformer . As long as τ is invertible , such autoregressive models can be used to define flows ( Papamakarios et al. , 2019 ) . 2.2 EVALUATING FORECASTS WITH PROPER SCORING RULES . A common way to represent a probabilistic forecast in the statistics and forecasting literature is via a cumulative distribution function ( CDF ) F : Rd → [ 0 , 1 ] ; any probability distribution can be represented this way , including discrete distributions . Since F is monotonically increasing in each coordinate , when y is one dimensional , we may define its inverse Q : [ 0 , 1 ] → R called the quantile function ( QF ) , defined as Q ( α ) = inf { y′ ∈ R | F ( y′ ) ≥ α } . In the statistics literature , the quality of forecasts is often evaluated using proper scoring rules ( or proper scores ; Gneiting and Raftery ( 2007 ) ) . For example , when predictions take the form of CDFs , a popular scoring rule is the continuous ranked probability score ( CRPS ) , defined for two CDFs F and G as CRPS ( F , G ) = ∫ y ( F ( y ) −G ( y ) ) 2 dy . When we only have samples y1 , ... , ym from G , we can generalize this score as 1m ∑m i=1 ∫ y ( F ( y ) − I ( y − yi ) ) 2 dy . Alternatively , we can evaluate the α-th quantile Q ( α ) of a QF Q via the check score L : R × R → R+ defined as Lα ( y , f ) = α ( y − f ) if y ≥ f and ( 1 − α ) ( f − y ) otherwise . The check score also provides a consistent estimator for the conditional quantile of any distribution . 3 TAKING STEPS BEYOND MAXIMUM LIKELIHOOD LEARNING OF FLOWS . Maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . In this paper , we argue for training flows using objectives based on proper scoring rules ( Gneiting and Raftery , 2007 ) . 3.1 LEARNING SIMPLE FLOWS WITH PROPER SCORING RULES . We begin with the one dimensional setting , where a flow fθ : R→ R is a bijective mapping that can be interpreted as a QF . Alternatively , the reverse flow f−1θ can be interpreted as a CDF . We will use Qθ , Fθ to denote fθ and f−1θ , respectively ; our goal is to fit these models from data . In order to fit models of the cumulative distribution and the quantile function , we propose objectives based on proper scoring rules . We propose fitting models Fθ of the CDF using the CRPS : L ( 1 ) ( Fθ , yi ) : = CRPS ( Fθ , yi ) = ∫ ∞ −∞ ( Fθ ( y ) − I ( yi ≤ y ) ) 2 dy . ( 1 ) When dealing with a model Qθ of the QF , we propose an objective based on the expected check score L ( 2 ) ( Qθ , yi ) : = ∫ 1 0 Lα ( Qθ ( α ) , yi ) dα , ( 2 ) where Lα is a check score targeting quantile α . We refer to this objective as the quantile loss . This objective has been used previously to train value functions in reinforcement learning as well as conditional distributions in autoregressive models ( Dabney et al. , 2018a ; b ) . In this paper , we describe its application to modeling aleatoric predictive uncertainties . The parametric form of Qθ or Fθ can be any class of strictly monotonic ( hence invertible ) functions . Previous works have relied on affine or piecewise linear functions ( Wehenkel and Louppe , 2021 ) , sum-of-squares ( Jaini et al. , 2019 ) , monotonic neural networks ( Huang et al. , 2018 ; Cao et al. , 2019 ) , and other models . Any of these choices suits our framework ; we provide more details below . Equivalence Between the CRPS and Quantile Losses So far , we have described two methods for fitting a one-dimensional flow model . Their objectives are actually equivalent . Proposition 1 . For a CDF F : R→ [ 0 , 1 ] and y′ ∈ R , the CRPS and quantile losses are equivalent : L ( 1 ) ( F , y′ ) = a · L ( 2 ) ( F−1 , y′ ) + b a , b ∈ R , a > 0 ( 3 ) This fact appears to be part of statistics folk knowledge , and we have only ever seen it stated briefly in some works . We provide a complete proof in the appendix . See ( Laio and Tamea , 2007 ) for another argument . If the models Fθ , Qθ are analytically invertible ( e.g. , they are piecewise linear ) , we are free to choose fitting the CDF or its inverse . Other representations for F will not lead to analytically invertible models , which require choosing a training direction , as we discuss below . Practical Implementation . The quantile and the CRPS losses both involve a potentially intractable integral . We approximate the integrals using Monte-Carlo ; this allows us to obtain gradients using backpropagation . For the quantile loss , we sample α uniformly at random in [ 0 , 1 ] ; for the CRPS loss , we choose a reasonable range of y ( usually , centered around yi ) and sample uniformly in that range . This approach works well in practice and avoids the complexity of alternative methods such as quadrature ( Durkan et al. , 2019 ) . | This paper proposes a novel framework for training flow models named Autoregressive Quantile Flows (AQF). The proposed method utilizes a new objective by evaluating forecasts with proper scoring rules, including the continuous ranked probability score and the check score. The advantages of the proposed objective are 1) it could avoid the explicit calculation of the determinant of the Jacobian matrix and 2) it could also provide uncertainty estimation for predictions. Experiments on multiple tasks including regression, object detection, time series forecasting, and generation validate the effectiveness of this framework. | science.mup_single_document_summarization.3882 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAutoregressive Quantile Flows for Predictive Uncertainty Estimation\n\nMain Content:\n1 INTRODUCTION . Reasoning about uncertainty via the language of probability is important in many application domains of machine learning , including medicine ( Saria , 2018 ) , robotics ( Chua et al. , 2018 ; Buckman et al. , 2018 ) , and operations research ( Van Roy et al. , 1997 ) . Especially important is the estimation of predictive uncertainties ( e.g. , confidence intervals around forecasts ) ; in applications such as clinical diagnosis ( Jiang et al. , 2012 ) or decision support systems ( Werling et al. , 2015 ) , estimating uncertainty can be as important as obtaining high accuracy ( Kuleshov and Liang , 2015 ) . Normalizing flows ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ; Kingma et al. , 2016 ) are a popular framework for defining probabilistic models , and can be used for density estimation ( Papamakarios et al. , 2017 ) , out-of-distribution detection ( Nalisnick et al. , 2019 ) , content generation ( Kingma and Dhariwal , 2018 ) , and more . Flows feature tractable posterior inference and maximum likelihood estimation ; however , maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . This paper takes a step towards addressing this limitation of normalizing flows by proposing new objectives that contribute towards alleviating the computational cost of calculating determinants of Jacobians . Specifically , we argue for training flows using an objective that is different from classical maximum likelihood and is instead based on proper scoring rules ( Gneiting and Raftery , 2007 ) , a standard tool in the statistics literature for evaluating the quality of probabilistic forecasts . We show that this objective can be used to train normalizing flows and that it simplifies the computation of Jacobians in certain types of flows . We introduce autoregressive quantile flows ( AQFs ) , a framework that combines the above learning objective with a set of architectural choices inspired by classical autoregressive flows . Quantile flows possess characteristics that represent an improvement over existing flow models—including supporting neural architectures that simultaneously provide fast training and sampling— in addition to the usual benefits of flows ( exact posterior inference and density estimation ) . Interestingly , quantile flows can be interpreted as extensions of quantile functions to multiple dimensions . We use AQFs as the basis for quantile flow regression ( QFR ) , an approach to predictive uncertainty estimation in which a probabilistic model directly outputs a normalizing flow as the predictive distribution . The QFR approach enables neural networks to output highly expressive probabilistic predictions that make very little assumptions on the form of the predicted variable and that improve uncertainty estimates in probabilistic and Bayesian models . In the one-dimensional case , our approach yields quantile function regression and cumulative distribution function regression , two simple , general , and principled approaches for flexible probabilistic forecasting in regression . In addition , we demonstrate the benefits of AQFs on probabilistic modeling tasks that include density estimation and autoregressive generation . Across our sets of experiments , we observe improved performance , and we demonstrate properties of quantile flows that traditional flow models do not possess ( e.g. , sampling with flexible neural parameterizations ) . Contributions . In summary , this work ( 1 ) introduces new objectives for flow models that simplify the computation of determinants of Jacobians , which in turn greatly simplifies the implementation of flow models and extends the class of models that can be used to parameterize flows . We also ( 2 ) define autoregressive quantile flows based on this objective , and highlight new architectures supported by this framework . Finally , ( 3 ) we deploy AQFs as part of quantile flow regression , and show that this approach improves upon existing methods for predictive uncertainty estimation . 2 BACKGROUND . Notation . Our goal is to learn a probabilistic model p ( y ) ∈ ∆ ( Rd ) in the space ∆ ( Rd ) of distributions over a high-dimensional y ∈ Rd ; we use yj ∈ R to denote components of y . In some cases , we have access to features x ∈ X associated with y and we want to train a forecaster H : X → ∆ ( Rd ) that outputs a predictive probability over y conditioned on x . 2.1 NORMALIZING FLOWS AND AUTOREGRESSIVE GENERATIVE MODELS . A normalizing flow defines a distribution p ( y ) via an invertible mapping fθ : Rd → Rd with parameters θ ∈ Θ that describes a transformation between y and a random variable z ∈ Rd sampled from a simple prior z ∼ p ( z ) ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ) . We may compute p ( y ) via the change of variables formula p ( y ) = ∣∣∣∂fθ ( z ) −1∂z ∣∣∣ p ( z ) , where ∣∣∣∂fθ ( z ) −1∂z ∣∣∣ denotes the determinant of the inverse Jacobian of fθ . In order to fit flow-based models using maximum likelihood , we typically choose fθ to be in a family for which the Jacobian is tractable . A common way to define flows with a tractable Jacobian is via autoregressive models of the form yj = τ ( zj ; hj ) hj = cj ( y < j ) , where τ ( zj ; hj ) is an invertible transformer , a strictly monotonic function of zj , and cj is the j-th conditioner , which outputs parameters hj for the transformer . As long as τ is invertible , such autoregressive models can be used to define flows ( Papamakarios et al. , 2019 ) . 2.2 EVALUATING FORECASTS WITH PROPER SCORING RULES . A common way to represent a probabilistic forecast in the statistics and forecasting literature is via a cumulative distribution function ( CDF ) F : Rd → [ 0 , 1 ] ; any probability distribution can be represented this way , including discrete distributions . Since F is monotonically increasing in each coordinate , when y is one dimensional , we may define its inverse Q : [ 0 , 1 ] → R called the quantile function ( QF ) , defined as Q ( α ) = inf { y′ ∈ R | F ( y′ ) ≥ α } . In the statistics literature , the quality of forecasts is often evaluated using proper scoring rules ( or proper scores ; Gneiting and Raftery ( 2007 ) ) . For example , when predictions take the form of CDFs , a popular scoring rule is the continuous ranked probability score ( CRPS ) , defined for two CDFs F and G as CRPS ( F , G ) = ∫ y ( F ( y ) −G ( y ) ) 2 dy . When we only have samples y1 , ... , ym from G , we can generalize this score as 1m ∑m i=1 ∫ y ( F ( y ) − I ( y − yi ) ) 2 dy . Alternatively , we can evaluate the α-th quantile Q ( α ) of a QF Q via the check score L : R × R → R+ defined as Lα ( y , f ) = α ( y − f ) if y ≥ f and ( 1 − α ) ( f − y ) otherwise . The check score also provides a consistent estimator for the conditional quantile of any distribution . 3 TAKING STEPS BEYOND MAXIMUM LIKELIHOOD LEARNING OF FLOWS . Maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . In this paper , we argue for training flows using objectives based on proper scoring rules ( Gneiting and Raftery , 2007 ) . 3.1 LEARNING SIMPLE FLOWS WITH PROPER SCORING RULES . We begin with the one dimensional setting , where a flow fθ : R→ R is a bijective mapping that can be interpreted as a QF . Alternatively , the reverse flow f−1θ can be interpreted as a CDF . We will use Qθ , Fθ to denote fθ and f−1θ , respectively ; our goal is to fit these models from data . In order to fit models of the cumulative distribution and the quantile function , we propose objectives based on proper scoring rules . We propose fitting models Fθ of the CDF using the CRPS : L ( 1 ) ( Fθ , yi ) : = CRPS ( Fθ , yi ) = ∫ ∞ −∞ ( Fθ ( y ) − I ( yi ≤ y ) ) 2 dy . ( 1 ) When dealing with a model Qθ of the QF , we propose an objective based on the expected check score L ( 2 ) ( Qθ , yi ) : = ∫ 1 0 Lα ( Qθ ( α ) , yi ) dα , ( 2 ) where Lα is a check score targeting quantile α . We refer to this objective as the quantile loss . This objective has been used previously to train value functions in reinforcement learning as well as conditional distributions in autoregressive models ( Dabney et al. , 2018a ; b ) . In this paper , we describe its application to modeling aleatoric predictive uncertainties . The parametric form of Qθ or Fθ can be any class of strictly monotonic ( hence invertible ) functions . Previous works have relied on affine or piecewise linear functions ( Wehenkel and Louppe , 2021 ) , sum-of-squares ( Jaini et al. , 2019 ) , monotonic neural networks ( Huang et al. , 2018 ; Cao et al. , 2019 ) , and other models . Any of these choices suits our framework ; we provide more details below . Equivalence Between the CRPS and Quantile Losses So far , we have described two methods for fitting a one-dimensional flow model . Their objectives are actually equivalent . Proposition 1 . For a CDF F : R→ [ 0 , 1 ] and y′ ∈ R , the CRPS and quantile losses are equivalent : L ( 1 ) ( F , y′ ) = a · L ( 2 ) ( F−1 , y′ ) + b a , b ∈ R , a > 0 ( 3 ) This fact appears to be part of statistics folk knowledge , and we have only ever seen it stated briefly in some works . We provide a complete proof in the appendix . See ( Laio and Tamea , 2007 ) for another argument . If the models Fθ , Qθ are analytically invertible ( e.g. , they are piecewise linear ) , we are free to choose fitting the CDF or its inverse . Other representations for F will not lead to analytically invertible models , which require choosing a training direction , as we discuss below . Practical Implementation . The quantile and the CRPS losses both involve a potentially intractable integral . We approximate the integrals using Monte-Carlo ; this allows us to obtain gradients using backpropagation . For the quantile loss , we sample α uniformly at random in [ 0 , 1 ] ; for the CRPS loss , we choose a reasonable range of y ( usually , centered around yi ) and sample uniformly in that range . This approach works well in practice and avoids the complexity of alternative methods such as quadrature ( Durkan et al. , 2019 ) .",
"role": "user"
},
{
"content": "This paper proposes a novel framework for training flow models named Autoregressive Quantile Flows (AQF). The proposed method utilizes a new objective by evaluating forecasts with proper scoring rules, including the continuous ranked probability score and the check score. The advantages of the proposed objective are 1) it could avoid the explicit calculation of the determinant of the Jacobian matrix and 2) it could also provide uncertainty estimation for predictions. Experiments on multiple tasks including regression, object detection, time series forecasting, and generation validate the effectiveness of this framework.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Opioid-based patient controlled analgesia (PCA) provides adequate pain control following spinal surgeries at the expense of increased risk of postoperative nausea and vomiting (PONV). We evaluated the efficacy of dexamethasone added to ramosetron, which is a newly developed five-hydroxytryptamine receptor 3 antagonist with a higher receptor affinity and longer action duration compared to its congeners, on preventing PONV in highly susceptible patients receiving opioid-based IV PCA after spinal surgery.
ABSTRACT.METHODS: One hundred nonsmoking female patients undergoing spinal surgery were randomly allocated to either a ramosetron group (group R) or a ramosetron plus dexamethasone group (group RD)., Normal saline (1 ml) or 5 mg of dexamethasone was injected before anesthetic induction, while at the end of the surgery, ramosetron (0.3 mg) was administered to all patients and fentanyl-based IV PCA was continued for 48 hrs. The incidence and severity of PONV, pain score and the amount of rescue antiemetics were assessed for 48 hours after surgery.
ABSTRACT.RESULTS: The number of patients with moderate to severe nausea (20 vs. 10, P = 0.029), and overall incidence of vomiting (13 vs. 5, P = 0.037) were significantly lower in the group RD than in the group R, respectively. Rescue antiemetic was used less in the RD group without significance.
ABSTRACT.CONCLUSIONS: Combination of ramosetron and dexamethasone significantly reduced the incidence of moderate to severe nausea and vomiting compared to ramosetron alone in highly susceptible patients receiving opioid-based IV PCA after surgery.
Extracted Data: | [["ramosetron group (group R", "ramosetron plus dexamethasone group (group RD)", "moderate to severe nausea", "significantly increased", "The number of patients with moderate to severe nausea (20 vs. 10, P = 0.029), and overall incidence of vomiting (13 vs. 5, P = 0.037) were significantly lower in the group RD than in the group R, respectively."], ["ramosetron group (group R", "ramosetron plus dexamethasone group (group RD)", "The overall incidence of vomiting", "significantly increased", "The number of patients with moderate to severe nausea (20 vs. 10, P = 0.029), and overall incidence of vomiting (13 vs. 5, P = 0.037) were significantly lower in the group RD than in the group R, respectively."]] | science.evidence_inference.81 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Opioid-based patient controlled analgesia (PCA) provides adequate pain control following spinal surgeries at the expense of increased risk of postoperative nausea and vomiting (PONV). We evaluated the efficacy of dexamethasone added to ramosetron, which is a newly developed five-hydroxytryptamine receptor 3 antagonist with a higher receptor affinity and longer action duration compared to its congeners, on preventing PONV in highly susceptible patients receiving opioid-based IV PCA after spinal surgery.\n\nABSTRACT.METHODS: One hundred nonsmoking female patients undergoing spinal surgery were randomly allocated to either a ramosetron group (group R) or a ramosetron plus dexamethasone group (group RD)., Normal saline (1 ml) or 5 mg of dexamethasone was injected before anesthetic induction, while at the end of the surgery, ramosetron (0.3 mg) was administered to all patients and fentanyl-based IV PCA was continued for 48 hrs. The incidence and severity of PONV, pain score and the amount of rescue antiemetics were assessed for 48 hours after surgery.\n\nABSTRACT.RESULTS: The number of patients with moderate to severe nausea (20 vs. 10, P = 0.029), and overall incidence of vomiting (13 vs. 5, P = 0.037) were significantly lower in the group RD than in the group R, respectively. Rescue antiemetic was used less in the RD group without significance.\n\nABSTRACT.CONCLUSIONS: Combination of ramosetron and dexamethasone significantly reduced the incidence of moderate to severe nausea and vomiting compared to ramosetron alone in highly susceptible patients receiving opioid-based IV PCA after surgery.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"ramosetron group (group R\", \"ramosetron plus dexamethasone group (group RD)\", \"moderate to severe nausea\", \"significantly increased\", \"The number of patients with moderate to severe nausea (20 vs. 10, P = 0.029), and overall incidence of vomiting (13 vs. 5, P = 0.037) were significantly lower in the group RD than in the group R, respectively.\"], [\"ramosetron group (group R\", \"ramosetron plus dexamethasone group (group RD)\", \"The overall incidence of vomiting\", \"significantly increased\", \"The number of patients with moderate to severe nausea (20 vs. 10, P = 0.029), and overall incidence of vomiting (13 vs. 5, P = 0.037) were significantly lower in the group RD than in the group R, respectively.\"]]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
This search method matches with the best reward from<cite> Chen and Bansal (2018)</cite> . | ["Similarities"] | science.multicite_intent_classification.331 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThis search method matches with the best reward from<cite> Chen and Bansal (2018)</cite> .",
"role": "user"
},
{
"content": "[\"Similarities\"]",
"role": "assistant"
}
]
|
science.discomat_te | | This research | This research | This research | Corresponding bands in terms of Scholze's approach | Corresponding bands in terms of Scholze's approach |
| Oscillator no. in Table 2 | Inherent band frequencies (cm-1) | Assignments | Band locations (cm-1) | Assignments |
| 3 | 1539-1606 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied |
| 5 | 1670-1745 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied |
| 8 | 2030-2107 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | | |
| 9 | 2302+-8 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | 2350+-50 | n (Si)O-HO1-(Si) mode, O1- being related to Q0 species |
| 4 | 1674 | d H2O mode | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied |
| 6 | 1807-1840 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied |
| 7 | 1903-1927 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied |
| 10 | 2542+-15 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | Missed | - |
| 11 | 2750+-11 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | n(Si)O-HO1-(Si) mode, O1- being related to Q3 and/or Q2 species | |
| | | | n(Si)O-HO1-(Si) mode, O1- being related to Q3 and/or Q2 species | |
| 12 | 2910+-3 | Two components of the nas H2O mode | Missed | - |
| 13 | 2959+-7 | Two components of the nas H2O mode | Missed | - |
| | | | | |
| 14 | 3200+-9 | Two components of the ns H2O mode | Missed | - |
| 15 | 3393+-6 | Two components of the ns H2O mode | Missed | - |
| | | | | |
| 16 | 3524+-14 | Two components of the n(Si)O-HOH(Si) mode that corresponds to the weakest hydrogen bonding (the one with an oxygen from the neighboring hydroxyl group) | 3500+-200 | n (Si)O-H mode not affected by the hydrogen bonding |
| 17 | 3595+-14 | Two components of the n(Si)O-HOH(Si) mode that corresponds to the weakest hydrogen bonding (the one with an oxygen from the neighboring hydroxyl group) | Missed | n (Si)O-H mode not affected by the hydrogen bonding |
Caption: Assignments of the first order water-related bands in the IR spectra of silicate glasses developed in our research in comparison to those assumed by Scholze [4-6]
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "3", "type": "Other"}
{"value": "1539", "type": "Other"}
{"value": "1606", "type": "Other"}
{"value": "5", "type": "Other"}
{"value": "1670", "type": "Other"}
{"value": "1745", "type": "Other"}
{"value": "8", "type": "Other"}
{"value": "2030", "type": "Other"}
{"value": "2107", "type": "Other"}
{"value": "9", "type": "Other"}
{"value": "2302", "type": "Other"}
{"value": "2350", "type": "Other"}
{"value": "4", "type": "Other"}
{"value": "1674", "type": "Other"}
{"value": "6", "type": "Other"}
{"value": "1807", "type": "Other"}
{"value": "1840", "type": "Other"}
{"value": "7", "type": "Other"}
{"value": "1903", "type": "Other"}
{"value": "1927", "type": "Other"}
{"value": "10", "type": "Other"}
{"value": "2542", "type": "Other"}
{"value": "11", "type": "Other"}
{"value": "2750", "type": "Other"}
{"value": "12", "type": "Other"}
{"value": "2910", "type": "Other"}
{"value": "13", "type": "Other"}
{"value": "2959", "type": "Other"}
{"value": "14", "type": "Other"}
{"value": "3200", "type": "Other"}
{"value": "15", "type": "Other"}
{"value": "3393", "type": "Other"}
{"value": "16", "type": "Other"}
{"value": "3524", "type": "Other"}
{"value": "3500", "type": "Other"}
{"value": "17", "type": "Other"}
{"value": "3595", "type": "Other"} | science.discomat_te.228 | [
{
"content": "| This research | This research | This research | Corresponding bands in terms of Scholze's approach | Corresponding bands in terms of Scholze's approach |\n| Oscillator no. in Table 2 | Inherent band frequencies (cm-1) | Assignments | Band locations (cm-1) | Assignments |\n| 3 | 1539-1606 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied |\n| 5 | 1670-1745 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied | Region was assumed to contain the glass matrix fundamentals only and, so, was not studied |\n| 8 | 2030-2107 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | | |\n| 9 | 2302+-8 | Four components of the n(Si)O-HO1-(Si) {II} mode that corresponds to the strongest hydrogen bonding (the Q n species containing O1- cannot be specified) | 2350+-50 | n (Si)O-HO1-(Si) mode, O1- being related to Q0 species |\n| 4 | 1674 | d H2O mode | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied |\n| 6 | 1807-1840 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied |\n| 7 | 1903-1927 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied | Regionwasassumedtocontaintheglassmatrix fundamentalsonlyand,so,wasnotstudied |\n| 10 | 2542+-15 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | Missed | - |\n| 11 | 2750+-11 | Four components of the n(Si)O-HO1-(Si) {I} mode that corresponds to the somewhat weaker hydrogen bonding than is the case for the above n(Si)O-HO1-(Si) {II} mode (the Q n species containing O1- cannot be specified) | n(Si)O-HO1-(Si) mode, O1- being related to Q3 and/or Q2 species | |\n| | | | n(Si)O-HO1-(Si) mode, O1- being related to Q3 and/or Q2 species | |\n| 12 | 2910+-3 | Two components of the nas H2O mode | Missed | - |\n| 13 | 2959+-7 | Two components of the nas H2O mode | Missed | - |\n| | | | | |\n| 14 | 3200+-9 | Two components of the ns H2O mode | Missed | - |\n| 15 | 3393+-6 | Two components of the ns H2O mode | Missed | - |\n| | | | | |\n| 16 | 3524+-14 | Two components of the n(Si)O-HOH(Si) mode that corresponds to the weakest hydrogen bonding (the one with an oxygen from the neighboring hydroxyl group) | 3500+-200 | n (Si)O-H mode not affected by the hydrogen bonding |\n| 17 | 3595+-14 | Two components of the n(Si)O-HOH(Si) mode that corresponds to the weakest hydrogen bonding (the one with an oxygen from the neighboring hydroxyl group) | Missed | n (Si)O-H mode not affected by the hydrogen bonding |\n\nCaption: Assignments of the first order water-related bands in the IR spectra of silicate glasses developed in our research in comparison to those assumed by Scholze [4-6]\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"1539\", \"type\": \"Other\"}\n{\"value\": \"1606\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"1670\", \"type\": \"Other\"}\n{\"value\": \"1745\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"2030\", \"type\": \"Other\"}\n{\"value\": \"2107\", \"type\": \"Other\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"2302\", \"type\": \"Other\"}\n{\"value\": \"2350\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"1674\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"1807\", \"type\": \"Other\"}\n{\"value\": \"1840\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Other\"}\n{\"value\": \"1903\", \"type\": \"Other\"}\n{\"value\": \"1927\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"2542\", \"type\": \"Other\"}\n{\"value\": \"11\", \"type\": \"Other\"}\n{\"value\": \"2750\", \"type\": \"Other\"}\n{\"value\": \"12\", \"type\": \"Other\"}\n{\"value\": \"2910\", \"type\": \"Other\"}\n{\"value\": \"13\", \"type\": \"Other\"}\n{\"value\": \"2959\", \"type\": \"Other\"}\n{\"value\": \"14\", \"type\": \"Other\"}\n{\"value\": \"3200\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"3393\", \"type\": \"Other\"}\n{\"value\": \"16\", \"type\": \"Other\"}\n{\"value\": \"3524\", \"type\": \"Other\"}\n{\"value\": \"3500\", \"type\": \"Other\"}\n{\"value\": \"17\", \"type\": \"Other\"}\n{\"value\": \"3595\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Structured Embedding Models for Grouped Data
Word embeddings are a powerful approach for analyzing language, and exponential family embeddings (EFE) extend them to other types of data. Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data. We study how the word usage of U.S. Congressional speeches varies across states and party affiliation, how words are used differently across sections of the ArXiv, and how the co-purchase patterns of groceries can vary across seasons. Key to the success of our method is that the groups share statistical information. We develop two sharing strategies: hierarchical modeling and amortization. We demonstrate the benefits of this approach in empirical studies of speeches, abstracts, and shopping baskets. We show how S-EFE enables group-specific interpretation of word usage, and outperforms EFE in predicting held-out data.
Introduction
Word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 are unsupervised learning methods for capturing latent semantic structure in language. Word embedding methods analyze text data to learn distributed representations of the vocabulary that capture its co-occurrence statistics. These representations are useful for reasoning about word usage and meaning BIBREF7 , BIBREF8 . Word embeddings have also been extended to data beyond text BIBREF9 , BIBREF10 , such as items in a grocery store or neurons in the brain. efe is a probabilistic perspective on embeddings that encompasses many existing methods and opens the door to bringing expressive probabilistic modeling BIBREF11 , BIBREF12 to the problem of learning distributed representations.
We develop sefe, an extension of efe for studying how embeddings can vary across groups of related data. We will study several examples: in U.S. Congressional speeches, word usage can vary across states or party affiliations; in scientific literature, the usage patterns of technical terms can vary across fields; in supermarket shopping data, co-purchase patterns of items can vary across seasons of the year. We will see that sefe discovers a per-group embedding representation of objects. While the naïve approach of fitting an individual embedding model for each group would typically suffer from lack of data—especially in groups for which fewer observations are available—we develop two methods that can share information across groups.
Figure FIGREF1 illustrates the kind of variation that we can capture. We fit an sefe to ArXiv abstracts grouped into different sections, such as computer science (cs), quantitative finance (q-fin), and nonlinear sciences (nlin). sefe results in a per-section embedding of each term in the vocabulary. Using the fitted embeddings, we illustrate similar words to the word 1.10intelligence. We can see that how 1.10intelligence is used varies by field: in computer science the most similar words include 1.10artificial and 1.10ai; in finance, similar words include 1.10abilities and 1.10consciousness.
In more detail, embedding methods posit two representation vectors for each term in the vocabulary; an embedding vector and a context vector. (We use the language of text for concreteness; as we mentioned, efe extend to other types of data.) The idea is that the conditional probability of each observed word depends on the interaction between the embedding vector and the context vectors of the surrounding words. In sefe, we posit a separate set of embedding vectors for each group but a shared set of context vectors; this ensures that the embedding vectors are in the same space.
We propose two methods to share statistical strength among the embedding vectors. The first approach is based on hierarchical modeling BIBREF13 , which assumes that the group-specific embedding representations are tied through a global embedding. The second approach is based on amortization BIBREF14 , BIBREF15 , which considers that the individual embeddings are the output of a deterministic function of a global embedding representation. We use stochastic optimization to fit large data sets.
Our work relates closely to two threads of research in the embedding literature. One is embedding methods that study how language evolves over time BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Time can be thought of as a type of “group”, though with evolutionary structure that we do not consider. The second thread is multilingual embeddings BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 ; our approach is different in that most words appear in all groups and we are interested in the variations of the embeddings across those groups.
Our contributions are thus as follows. We introduce the sefe model, extending efe to grouped data. We present two techniques to share statistical strength among the embedding vectors, one based on hierarchical modeling and one based on amortization. We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood. Qualitatively, we demonstrate how sefe discovers which words are used most differently across U.S. states and political parties, and show how word usage changes in different ArXiv disciplines.
Model Description
In this section, we develop sefe, a model that builds on efe BIBREF10 to capture semantic variations across groups of data. In embedding models, we represent each object (e.g., a word in text, or an item in shopping data) using two sets of vectors, an embedding vector and a context vector. In this paper, we are interested in how the embeddings vary across groups of data, and for each object we want to learn a separate embedding vector for each group. Having a separate embedding for each group allows us to study how the usage of a word like 1.10intelligence varies across categories of the ArXiv, or which words are used most differently by U.S. Senators depending on which state they are from and whether they are Democrats or Republicans.
The sefe model extends efe to grouped data, by having the embedding vectors be specific for each group, while sharing the context vectors across all groups. We review the efe model in Section SECREF4 . We then formalize the idea of sharing the context vectors in Section SECREF8 , where we present two approaches to build a hierarchical structure over the group-specific embeddings.
Background: Exponential Family Embeddings
In exponential family embeddings, we have a collection of objects, and our goal is to learn a vector representation of these objects based on their co-occurrence patterns.
Let us consider a dataset represented as a (typically sparse) matrix INLINEFORM0 , where columns are datapoints and rows are objects. For example, in text, each column corresponds to a location in the text, and each entry INLINEFORM1 is a binary variable that indicates whether word INLINEFORM2 appears at location INLINEFORM3 .
In efe, we represent each object INLINEFORM0 with two sets of vectors, embeddings vectors INLINEFORM1 and context vectors INLINEFORM2 , and we posit a probability distribution of data entries INLINEFORM3 in which these vectors interact. The definition of the efe model requires three ingredients: a context, a conditional exponential family, and a parameter sharing structure. We next describe these three components.
Exponential family embeddings learn the vector representation of objects based on the conditional probability of each observation, conditioned on the observations in its context. The context INLINEFORM0 gives the indices of the observations that appear in the conditional probability distribution of INLINEFORM1 . The definition of the context varies across applications. In text, it corresponds to the set of words in a fixed-size window centered at location INLINEFORM2 .
Given the context INLINEFORM0 and the corresponding observations INLINEFORM1 indexed by INLINEFORM2 , the distribution for INLINEFORM3 is in the exponential family, DISPLAYFORM0
with sufficient statistics INLINEFORM0 and natural parameter INLINEFORM1 . The parameter vectors interact in the conditional probability distributions of each observation INLINEFORM2 as follows. The embedding vectors INLINEFORM3 and the context vectors INLINEFORM4 are combined to form the natural parameter, DISPLAYFORM0
where INLINEFORM0 is the link function. Exponential family embeddings can be understood as a bank of glm. The context vectors are combined to give the covariates, and the “regression coefficients” are the embedding vectors. In Eq. EQREF6 , the link function INLINEFORM1 plays the same role as in glm and is a modeling choice. We use the identity link function.
The third ingredient of the efe model is the parameter sharing structure, which indicates how the embedding vectors are shared across observations. In the standard efe model, we use INLINEFORM0 and INLINEFORM1 for all columns of INLINEFORM2 . That is, each unique object INLINEFORM3 has a shared representation across all instances.
The objective function. In efe, we maximize the objective function, which is given by the sum of the log-conditional likelihoods in Eq. EQREF5 . In addition, we add an INLINEFORM0 -regularization term (we use the notation of the log Gaussian pdf) over the embedding and context vectors, yielding DISPLAYFORM0
Note that maximizing the regularized conditional likelihood is not equivalent to maximum a posteriori. Rather, it is similar to maximization of the pseudo-likelihood in conditionally specified models BIBREF26 , BIBREF10 .
Empirical Study
In this section, we describe the experimental study. We fit the sefe model on three datasets and compare it against the efe BIBREF10 . Our quantitative results show that sharing the context vectors provides better results, and that amortization and hierarchical structure give further improvements.
Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .
ArXiv papers: This dataset contains the abstracts of papers published on the ArXiv under the 19 different tags between April 2007 and June 2015. We treat each tag as a group and fit sefe with the goal of uncovering which words have the strongest shift in usage. We split the abstracts into training, validation, and test sets, with proportions of INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
Senate speeches: This dataset contains U.S. Senate speeches from 1994 to mid 2009. In contrast to the ArXiv collection, it is a transcript of spoken language. We group the data into state of origin of the speaker and his or her party affiliation. Only affiliations with the Republican and Democratic Party are considered. As a result, there are 83 groups (Republicans from Alabama, Democrats from Alabama, Republicans from Arkansas, etc.). Some of the state/party combinations are not available in the data, as some of the 50 states have only had Senators with the same party affiliation. We split the speeches into training ( INLINEFORM0 ), validation ( INLINEFORM1 ), and testing ( INLINEFORM2 ).
Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.
For the text corpora, we fix the vocabulary to the 15k most frequent terms and remove all words that are not in the vocabulary. Following BIBREF2 , we additionally remove each word with probability INLINEFORM0 , where INLINEFORM1 is the word frequency. This downsamples especially the frequent words and speeds up training. (Sizes reported in Table TABREF17 are the number of words remaining after preprocessing.)
Models. Our goal is to fit the sefe model on these datasets. For the text data, we use the Bernoulli distribution as the conditional exponential family, while for the shopping data we use the Poisson distribution, which is more appropriate for count data.
On each dataset, we compare four approaches based on sefe with two efe BIBREF10 baselines. All are fit using sgd BIBREF34 . In particular, we compare the following methods:
Figure 1: (a) INTELLIGENCE is used differently across the ArXiv sections. Words with the closest embedding to the query are listed for 5 sections. (The embeddings were obtained by fitting an amortized S-EFE.) The method automatically orders the sections along the horizontal axis by their similarity in the usage of INTELLIGENCE. See Section 3 additional for details. (b) Graphical representation of S-EFE for data in S categories. The embedding vectors ρ(s)v are specific to each group, and the context vectors αv are shared across all categories.
Table 1: Group structure and size of the three corpora analyzed in Section 3.
Table 3: List of the three most different words for different groups for the Congressional speeches. S-EFE uncovers which words are used most differently by Republican Senators (red) and Democratic Senators (blue) from different states. The complete table is in the Appendix.
Query: What experiments are used to demonstrate the benefits of this approach?
JSON Response: | {"answer": "Calculate test log-likelihood on the three considered datasets", "evidence": ["Our contributions are thus as follows. We introduce the sefe model, extending efe to grouped data. We present two techniques to share statistical strength among the embedding vectors, one based on hierarchical modeling and one based on amortization. We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood. Qualitatively, we demonstrate how sefe discovers which words are used most differently across U.S. states and political parties, and show how word usage changes in different ArXiv disciplines."]} | science.qasper_abstractive_qa.276.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nStructured Embedding Models for Grouped Data\n\nWord embeddings are a powerful approach for analyzing language, and exponential family embeddings (EFE) extend them to other types of data. Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data. We study how the word usage of U.S. Congressional speeches varies across states and party affiliation, how words are used differently across sections of the ArXiv, and how the co-purchase patterns of groceries can vary across seasons. Key to the success of our method is that the groups share statistical information. We develop two sharing strategies: hierarchical modeling and amortization. We demonstrate the benefits of this approach in empirical studies of speeches, abstracts, and shopping baskets. We show how S-EFE enables group-specific interpretation of word usage, and outperforms EFE in predicting held-out data.\n\nIntroduction\nWord embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 are unsupervised learning methods for capturing latent semantic structure in language. Word embedding methods analyze text data to learn distributed representations of the vocabulary that capture its co-occurrence statistics. These representations are useful for reasoning about word usage and meaning BIBREF7 , BIBREF8 . Word embeddings have also been extended to data beyond text BIBREF9 , BIBREF10 , such as items in a grocery store or neurons in the brain. efe is a probabilistic perspective on embeddings that encompasses many existing methods and opens the door to bringing expressive probabilistic modeling BIBREF11 , BIBREF12 to the problem of learning distributed representations.\nWe develop sefe, an extension of efe for studying how embeddings can vary across groups of related data. We will study several examples: in U.S. Congressional speeches, word usage can vary across states or party affiliations; in scientific literature, the usage patterns of technical terms can vary across fields; in supermarket shopping data, co-purchase patterns of items can vary across seasons of the year. We will see that sefe discovers a per-group embedding representation of objects. While the naïve approach of fitting an individual embedding model for each group would typically suffer from lack of data—especially in groups for which fewer observations are available—we develop two methods that can share information across groups.\nFigure FIGREF1 illustrates the kind of variation that we can capture. We fit an sefe to ArXiv abstracts grouped into different sections, such as computer science (cs), quantitative finance (q-fin), and nonlinear sciences (nlin). sefe results in a per-section embedding of each term in the vocabulary. Using the fitted embeddings, we illustrate similar words to the word 1.10intelligence. We can see that how 1.10intelligence is used varies by field: in computer science the most similar words include 1.10artificial and 1.10ai; in finance, similar words include 1.10abilities and 1.10consciousness.\nIn more detail, embedding methods posit two representation vectors for each term in the vocabulary; an embedding vector and a context vector. (We use the language of text for concreteness; as we mentioned, efe extend to other types of data.) The idea is that the conditional probability of each observed word depends on the interaction between the embedding vector and the context vectors of the surrounding words. In sefe, we posit a separate set of embedding vectors for each group but a shared set of context vectors; this ensures that the embedding vectors are in the same space.\nWe propose two methods to share statistical strength among the embedding vectors. The first approach is based on hierarchical modeling BIBREF13 , which assumes that the group-specific embedding representations are tied through a global embedding. The second approach is based on amortization BIBREF14 , BIBREF15 , which considers that the individual embeddings are the output of a deterministic function of a global embedding representation. We use stochastic optimization to fit large data sets.\nOur work relates closely to two threads of research in the embedding literature. One is embedding methods that study how language evolves over time BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Time can be thought of as a type of “group”, though with evolutionary structure that we do not consider. The second thread is multilingual embeddings BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 ; our approach is different in that most words appear in all groups and we are interested in the variations of the embeddings across those groups.\nOur contributions are thus as follows. We introduce the sefe model, extending efe to grouped data. We present two techniques to share statistical strength among the embedding vectors, one based on hierarchical modeling and one based on amortization. We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood. Qualitatively, we demonstrate how sefe discovers which words are used most differently across U.S. states and political parties, and show how word usage changes in different ArXiv disciplines.\n\nModel Description\nIn this section, we develop sefe, a model that builds on efe BIBREF10 to capture semantic variations across groups of data. In embedding models, we represent each object (e.g., a word in text, or an item in shopping data) using two sets of vectors, an embedding vector and a context vector. In this paper, we are interested in how the embeddings vary across groups of data, and for each object we want to learn a separate embedding vector for each group. Having a separate embedding for each group allows us to study how the usage of a word like 1.10intelligence varies across categories of the ArXiv, or which words are used most differently by U.S. Senators depending on which state they are from and whether they are Democrats or Republicans.\nThe sefe model extends efe to grouped data, by having the embedding vectors be specific for each group, while sharing the context vectors across all groups. We review the efe model in Section SECREF4 . We then formalize the idea of sharing the context vectors in Section SECREF8 , where we present two approaches to build a hierarchical structure over the group-specific embeddings.\n\nBackground: Exponential Family Embeddings\nIn exponential family embeddings, we have a collection of objects, and our goal is to learn a vector representation of these objects based on their co-occurrence patterns.\nLet us consider a dataset represented as a (typically sparse) matrix INLINEFORM0 , where columns are datapoints and rows are objects. For example, in text, each column corresponds to a location in the text, and each entry INLINEFORM1 is a binary variable that indicates whether word INLINEFORM2 appears at location INLINEFORM3 .\nIn efe, we represent each object INLINEFORM0 with two sets of vectors, embeddings vectors INLINEFORM1 and context vectors INLINEFORM2 , and we posit a probability distribution of data entries INLINEFORM3 in which these vectors interact. The definition of the efe model requires three ingredients: a context, a conditional exponential family, and a parameter sharing structure. We next describe these three components.\nExponential family embeddings learn the vector representation of objects based on the conditional probability of each observation, conditioned on the observations in its context. The context INLINEFORM0 gives the indices of the observations that appear in the conditional probability distribution of INLINEFORM1 . The definition of the context varies across applications. In text, it corresponds to the set of words in a fixed-size window centered at location INLINEFORM2 .\nGiven the context INLINEFORM0 and the corresponding observations INLINEFORM1 indexed by INLINEFORM2 , the distribution for INLINEFORM3 is in the exponential family, DISPLAYFORM0 \n with sufficient statistics INLINEFORM0 and natural parameter INLINEFORM1 . The parameter vectors interact in the conditional probability distributions of each observation INLINEFORM2 as follows. The embedding vectors INLINEFORM3 and the context vectors INLINEFORM4 are combined to form the natural parameter, DISPLAYFORM0 \n where INLINEFORM0 is the link function. Exponential family embeddings can be understood as a bank of glm. The context vectors are combined to give the covariates, and the “regression coefficients” are the embedding vectors. In Eq. EQREF6 , the link function INLINEFORM1 plays the same role as in glm and is a modeling choice. We use the identity link function.\nThe third ingredient of the efe model is the parameter sharing structure, which indicates how the embedding vectors are shared across observations. In the standard efe model, we use INLINEFORM0 and INLINEFORM1 for all columns of INLINEFORM2 . That is, each unique object INLINEFORM3 has a shared representation across all instances.\nThe objective function. In efe, we maximize the objective function, which is given by the sum of the log-conditional likelihoods in Eq. EQREF5 . In addition, we add an INLINEFORM0 -regularization term (we use the notation of the log Gaussian pdf) over the embedding and context vectors, yielding DISPLAYFORM0 \nNote that maximizing the regularized conditional likelihood is not equivalent to maximum a posteriori. Rather, it is similar to maximization of the pseudo-likelihood in conditionally specified models BIBREF26 , BIBREF10 .\n\nEmpirical Study\nIn this section, we describe the experimental study. We fit the sefe model on three datasets and compare it against the efe BIBREF10 . Our quantitative results show that sharing the context vectors provides better results, and that amortization and hierarchical structure give further improvements.\nData. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .\nArXiv papers: This dataset contains the abstracts of papers published on the ArXiv under the 19 different tags between April 2007 and June 2015. We treat each tag as a group and fit sefe with the goal of uncovering which words have the strongest shift in usage. We split the abstracts into training, validation, and test sets, with proportions of INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.\nSenate speeches: This dataset contains U.S. Senate speeches from 1994 to mid 2009. In contrast to the ArXiv collection, it is a transcript of spoken language. We group the data into state of origin of the speaker and his or her party affiliation. Only affiliations with the Republican and Democratic Party are considered. As a result, there are 83 groups (Republicans from Alabama, Democrats from Alabama, Republicans from Arkansas, etc.). Some of the state/party combinations are not available in the data, as some of the 50 states have only had Senators with the same party affiliation. We split the speeches into training ( INLINEFORM0 ), validation ( INLINEFORM1 ), and testing ( INLINEFORM2 ).\nGrocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.\nFor the text corpora, we fix the vocabulary to the 15k most frequent terms and remove all words that are not in the vocabulary. Following BIBREF2 , we additionally remove each word with probability INLINEFORM0 , where INLINEFORM1 is the word frequency. This downsamples especially the frequent words and speeds up training. (Sizes reported in Table TABREF17 are the number of words remaining after preprocessing.)\nModels. Our goal is to fit the sefe model on these datasets. For the text data, we use the Bernoulli distribution as the conditional exponential family, while for the shopping data we use the Poisson distribution, which is more appropriate for count data.\nOn each dataset, we compare four approaches based on sefe with two efe BIBREF10 baselines. All are fit using sgd BIBREF34 . In particular, we compare the following methods:\n\nFigure 1: (a) INTELLIGENCE is used differently across the ArXiv sections. Words with the closest embedding to the query are listed for 5 sections. (The embeddings were obtained by fitting an amortized S-EFE.) The method automatically orders the sections along the horizontal axis by their similarity in the usage of INTELLIGENCE. See Section 3 additional for details. (b) Graphical representation of S-EFE for data in S categories. The embedding vectors ρ(s)v are specific to each group, and the context vectors αv are shared across all categories.\n\nTable 1: Group structure and size of the three corpora analyzed in Section 3.\n\nTable 3: List of the three most different words for different groups for the Congressional speeches. S-EFE uncovers which words are used most differently by Republican Senators (red) and Democratic Senators (blue) from different states. The complete table is in the Appendix.\n\nQuery: What experiments are used to demonstrate the benefits of this approach?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Calculate test log-likelihood on the three considered datasets\", \"evidence\": [\"Our contributions are thus as follows. We introduce the sefe model, extending efe to grouped data. We present two techniques to share statistical strength among the embedding vectors, one based on hierarchical modeling and one based on amortization. We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood. Qualitatively, we demonstrate how sefe discovers which words are used most differently across U.S. states and political parties, and show how word usage changes in different ArXiv disciplines.\"]}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
2 Technically, the method of Smith et al. <cite>[19]</cite> learns two projection functions f S ( S |θ S ) and f S ( T |θ T ), projecting the source and target monolingual embedding spaces, respectively, to the new shared space. | ["Background"] | science.multicite_intent_classification.408 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\n2 Technically, the method of Smith et al. <cite>[19]</cite> learns two projection functions f S ( S |θ S ) and f S ( T |θ T ), projecting the source and target monolingual embedding spaces, respectively, to the new shared space.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Tuberculosis incidence occurs at higher rates in more sparsely populated areas in the UK.
Research Abstract:
OBJECTIVE To examine the effect of ethnicity on the relation between tuberculosis and deprivation.
DESIGN Retrospective ecological study comparing incidence of tuberculosis in white and south Asian residents of the 39 electoral wards in Birmingham with ethnic specific indices of deprivation.
SETTING Birmingham, 1989-93.
SUBJECTS 1516 notified cases of tuberculosis.
MAIN OUTCOME MEASURES Rates of tuberculosis and measures of deprivation.
RESULTS Univariate analysis showed significant associations of tuberculosis rates for the whole population with several indices of deprivation (P<0.01) and with the proportion of the population of south Asian origin (P<0.01).
All deprivation covariates were positively associated with each other but on multiple regression, higher level of overcrowding was independently associated with tuberculosis rates.
For the white population, overcrowding was associated with tuberculosis rates independently of other variables (P=0.0036).
No relation with deprivation was found for the south Asian population in either single or multivariable analyses.
CONCLUSIONS Poverty is significantly associated with tuberculosis in the white population, but no such relation exists for those of Asian ethnicity.
These findings suggest that causal factors, and therefore potential interventions, will also differ by ethnic group. | {"verdict": "CONTRADICT", "evidence": ["All deprivation covariates were positively associated with each other but on multiple regression, higher level of overcrowding was independently associated with tuberculosis rates.", "For the white population, overcrowding was associated with tuberculosis rates independently of other variables (P=0.0036)."]} | science.scifact_entailment.858 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nTuberculosis incidence occurs at higher rates in more sparsely populated areas in the UK.\n\nResearch Abstract:\nOBJECTIVE To examine the effect of ethnicity on the relation between tuberculosis and deprivation.\nDESIGN Retrospective ecological study comparing incidence of tuberculosis in white and south Asian residents of the 39 electoral wards in Birmingham with ethnic specific indices of deprivation.\nSETTING Birmingham, 1989-93.\nSUBJECTS 1516 notified cases of tuberculosis.\nMAIN OUTCOME MEASURES Rates of tuberculosis and measures of deprivation.\nRESULTS Univariate analysis showed significant associations of tuberculosis rates for the whole population with several indices of deprivation (P<0.01) and with the proportion of the population of south Asian origin (P<0.01).\nAll deprivation covariates were positively associated with each other but on multiple regression, higher level of overcrowding was independently associated with tuberculosis rates.\nFor the white population, overcrowding was associated with tuberculosis rates independently of other variables (P=0.0036).\nNo relation with deprivation was found for the south Asian population in either single or multivariable analyses.\nCONCLUSIONS Poverty is significantly associated with tuberculosis in the white population, but no such relation exists for those of Asian ethnicity.\nThese findings suggest that causal factors, and therefore potential interventions, will also differ by ethnic group.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"All deprivation covariates were positively associated with each other but on multiple regression, higher level of overcrowding was independently associated with tuberculosis rates.\", \"For the white population, overcrowding was associated with tuberculosis rates independently of other variables (P=0.0036).\"]}",
"role": "assistant"
}
]
|
science.discomat_te | | Glasses | SiO2 (mol%) | Na2O (mol%) | CaO (mol%) | T g (degC) | r (gcm-3) | a (10-6 degC-1) | E (GPa) | n | H (GPa) | K c (MPam1/2) |
| Float glass | 70.9 | 12.8 | 10.1 | 562 | 2.55 | 9.3 | 71.5 | 0.21 | 6.12 | 0.72 |
| SLS 1 | 71 | 17.5 | 11.5 | 558 | 2.53 | 10.9 | 70.2 | 0.202 | 5.35 | 0.71 |
| SLS 2 | 74 | 15.7 | 10.3 | 559 | 2.50 | 10.1 | 69.3 | 0.188 | 5.11 | 0.70 |
| SLS 3 | 77 | 13.9 | 9.1 | 560 | 2.47 | 9.1 | 68.6 | 0.181 | 5.15 | 0.76 |
| SLS 4 | 80 | 12.1 | 7.9 | 561 | 2.43 | 7.7 | 68.1 | 0.172 | 5.19 | 0.82 |
| Experimental error | - | - | - | +-2 | +-0.005 | +-0.2x10-6 | +-1 | +-0.01 | +-0.01 | +-0.02 |
Caption: Composition and physical properties of glasses
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "70.9", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "Float glass"}
{"value": "12.8", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "Float glass"}
{"value": "10.1", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "Float glass"}
{"value": "562", "type": "Other"}
{"value": "2.55", "type": "Other"}
{"value": "9.3", "type": "Other"}
{"value": "71.5", "type": "Other"}
{"value": "0.21", "type": "Other"}
{"value": "6.12", "type": "Other"}
{"value": "0.72", "type": "Other"}
{"value": "71", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "SLS 1"}
{"value": "17.5", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "SLS 1"}
{"value": "11.5", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "SLS 1"}
{"value": "558", "type": "Other"}
{"value": "2.53", "type": "Other"}
{"value": "10.9", "type": "Other"}
{"value": "70.2", "type": "Other"}
{"value": "0.202", "type": "Other"}
{"value": "5.35", "type": "Other"}
{"value": "0.71", "type": "Other"}
{"value": "74", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "SLS 2"}
{"value": "15.7", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "SLS 2"}
{"value": "10.3", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "SLS 2"}
{"value": "559", "type": "Other"}
{"value": "2.50", "type": "Other"}
{"value": "10.1", "type": "Other"}
{"value": "69.3", "type": "Other"}
{"value": "0.188", "type": "Other"}
{"value": "5.11", "type": "Other"}
{"value": "0.70", "type": "Other"}
{"value": "77", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "SLS 3"}
{"value": "13.9", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "SLS 3"}
{"value": "9.1", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "SLS 3"}
{"value": "560", "type": "Other"}
{"value": "2.47", "type": "Other"}
{"value": "9.1", "type": "Other"}
{"value": "68.6", "type": "Other"}
{"value": "0.181", "type": "Other"}
{"value": "5.15", "type": "Other"}
{"value": "0.76", "type": "Other"}
{"value": "80", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "SLS 4"}
{"value": "12.1", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "SLS 4"}
{"value": "7.9", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "SLS 4"}
{"value": "561", "type": "Other"}
{"value": "2.43", "type": "Other"}
{"value": "7.7", "type": "Other"}
{"value": "68.1", "type": "Other"}
{"value": "0.172", "type": "Other"}
{"value": "5.19", "type": "Other"}
{"value": "0.82", "type": "Other"} | science.discomat_te.236 | [
{
"content": "| Glasses | SiO2 (mol%) | Na2O (mol%) | CaO (mol%) | T g (degC) | r (gcm-3) | a (10-6 degC-1) | E (GPa) | n | H (GPa) | K c (MPam1/2) |\n| Float glass | 70.9 | 12.8 | 10.1 | 562 | 2.55 | 9.3 | 71.5 | 0.21 | 6.12 | 0.72 |\n| SLS 1 | 71 | 17.5 | 11.5 | 558 | 2.53 | 10.9 | 70.2 | 0.202 | 5.35 | 0.71 |\n| SLS 2 | 74 | 15.7 | 10.3 | 559 | 2.50 | 10.1 | 69.3 | 0.188 | 5.11 | 0.70 |\n| SLS 3 | 77 | 13.9 | 9.1 | 560 | 2.47 | 9.1 | 68.6 | 0.181 | 5.15 | 0.76 |\n| SLS 4 | 80 | 12.1 | 7.9 | 561 | 2.43 | 7.7 | 68.1 | 0.172 | 5.19 | 0.82 |\n| Experimental error | - | - | - | +-2 | +-0.005 | +-0.2x10-6 | +-1 | +-0.01 | +-0.01 | +-0.02 |\n\nCaption: Composition and physical properties of glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"70.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"Float glass\"}\n{\"value\": \"12.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"Float glass\"}\n{\"value\": \"10.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"Float glass\"}\n{\"value\": \"562\", \"type\": \"Other\"}\n{\"value\": \"2.55\", \"type\": \"Other\"}\n{\"value\": \"9.3\", \"type\": \"Other\"}\n{\"value\": \"71.5\", \"type\": \"Other\"}\n{\"value\": \"0.21\", \"type\": \"Other\"}\n{\"value\": \"6.12\", \"type\": \"Other\"}\n{\"value\": \"0.72\", \"type\": \"Other\"}\n{\"value\": \"71\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"SLS 1\"}\n{\"value\": \"17.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"SLS 1\"}\n{\"value\": \"11.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"SLS 1\"}\n{\"value\": \"558\", \"type\": \"Other\"}\n{\"value\": \"2.53\", \"type\": \"Other\"}\n{\"value\": \"10.9\", \"type\": \"Other\"}\n{\"value\": \"70.2\", \"type\": \"Other\"}\n{\"value\": \"0.202\", \"type\": \"Other\"}\n{\"value\": \"5.35\", \"type\": \"Other\"}\n{\"value\": \"0.71\", \"type\": \"Other\"}\n{\"value\": \"74\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"SLS 2\"}\n{\"value\": \"15.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"SLS 2\"}\n{\"value\": \"10.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"SLS 2\"}\n{\"value\": \"559\", \"type\": \"Other\"}\n{\"value\": \"2.50\", \"type\": \"Other\"}\n{\"value\": \"10.1\", \"type\": \"Other\"}\n{\"value\": \"69.3\", \"type\": \"Other\"}\n{\"value\": \"0.188\", \"type\": \"Other\"}\n{\"value\": \"5.11\", \"type\": \"Other\"}\n{\"value\": \"0.70\", \"type\": \"Other\"}\n{\"value\": \"77\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"SLS 3\"}\n{\"value\": \"13.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"SLS 3\"}\n{\"value\": \"9.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"SLS 3\"}\n{\"value\": \"560\", \"type\": \"Other\"}\n{\"value\": \"2.47\", \"type\": \"Other\"}\n{\"value\": \"9.1\", \"type\": \"Other\"}\n{\"value\": \"68.6\", \"type\": \"Other\"}\n{\"value\": \"0.181\", \"type\": \"Other\"}\n{\"value\": \"5.15\", \"type\": \"Other\"}\n{\"value\": \"0.76\", \"type\": \"Other\"}\n{\"value\": \"80\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"SLS 4\"}\n{\"value\": \"12.1\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"SLS 4\"}\n{\"value\": \"7.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"SLS 4\"}\n{\"value\": \"561\", \"type\": \"Other\"}\n{\"value\": \"2.43\", \"type\": \"Other\"}\n{\"value\": \"7.7\", \"type\": \"Other\"}\n{\"value\": \"68.1\", \"type\": \"Other\"}\n{\"value\": \"0.172\", \"type\": \"Other\"}\n{\"value\": \"5.19\", \"type\": \"Other\"}\n{\"value\": \"0.82\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
From English To Foreign Languages: Transferring Pre-trained Language Models
Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT base model within a day and a foreign BERT large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.
Introduction
Pre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.
Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations BIBREF7 that supports 93 languages. Among the three models, LASER is based on neural machine translation approach and strictly requires parallel data to train.
Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:
We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU.
We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer competitive performance or even better that mBERT.
We illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task.
Bilingual Pre-trained LMs ::: Fine-tuning Target Embeddings
After initializing foreign word-embeddings, we replace English word-embeddings in the English pre-trained LM with foreign word-embeddings to obtain the foreign LM. We then fine-tune only foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\Psi ()$ is good at capturing association, the purpose of this step is to further optimize target embeddings such that the target LM can utilized the trained encoder for association task. For example, if the words Albert Camus presented in a French input sequence, the self-attention in the encoder more likely attends to words absurde and existentialisme once their embeddings are tuned.
Bilingual Pre-trained LMs ::: Fine-tuning Bilingual LM
We create a bilingual LM by plugging foreign language specific parameters to the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for foreign language. The encoder layer in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that in can learn syntactic aspects (i.e., word order, morphological agreement) of the target languages.
Zero-shot Experiments ::: Data
We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource.
For experiments that use parallel data to initialize foreign specific parameters, we use the same datasets in the work of BIBREF6. Specifically, we use United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word-vectors from the scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline.
We tokenize English using the provided tokenizer from pre-trained models. For target languages, we use fastBPE to learn 30,000 BPE codes and 50,000 codes when transferring from BERT and RoBERTa respectively. We truncate the BPE vocabulary of foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa.
We use XNLI dataset BIBREF9 for classification task and Universal Dependencies v2.4 BIBREF21 for parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian) ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese).
Zero-shot Experiments ::: Data ::: Remark on BPE
BIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performances for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that is in both English and foreign vocabulary has two different embeddings.
Zero-shot Experiments ::: Estimating translation probabilities
Since pre-trained models operate on subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13.
Estimating subword translation probabilities from aligned word vectors requires an additional processing step since the provided vectors from fastText are not at subword level. We use the following approximation to obtain subword vectors: the vector $_s$ of subword $s$ is the weighted average of all the aligned word vectors $_{w_i}$ that have $s$ as an subword
where $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \sum _{w_j:\, s\in w_j} p(w_j)$. We take the top 50,000 words in each aligned word-vectors to compute subword vectors.
In both cases, not all the words in the foreign vocabulary can be initialized from the English word-embeddings. Those words are initialized randomly from a Gaussian $\mathcal {N}(0, {1}{d^2})$.
Zero-shot Experiments ::: Hyper-parameters
In all the experiments, we tune RAMEN$_{\textsc {base}}$ for 175,000 updates and RAMEN$_{\textsc {large}}$ for 275,000 updates where the first 25,000 updates are for language specific parameters. The sequence length is set to 256. The mini-batch size are 64 and 24 when tuning language specific parameters using RAMEN$_{\textsc {base}}$ and RAMEN$_{\textsc {large}}$ respectively. For tuning bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\textsc {base}}$ and 24 for RAMEN$_{\textsc {large}}$ where half of the batch are English sequences and the other half are foreign sequences. This strategy of balancing mini-batch has been used in multilingual neural machine translation BIBREF23, BIBREF24.
We optimize RAMEN$_{\textsc {base}}$ using Lookahead optimizer BIBREF25 wrapped around Adam with the learning rate of $10^{-4}$, the number of fast weight updates $k=5$, and interpolation parameter $\alpha =0.5$. We choose Lookahead optimizer because it has been shown to be robust to the initial parameters of the based optimizer (Adam). For Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4000 updates and then follow an inverse square root decay. All RAMEN$_{\textsc {large}}$ models are optimized with Adam due to memory limit.
When fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32, Adam's learning rate of $10^{-5}$. The number of epochs are set to 4 and 50 for XNLI and UD tasks respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\textsc {base}}$ model is trained within a day and each RAMEN$_{\textsc {large}}$ is trained within two days.
Results ::: Universal Dependency Parsing
We build on top of RAMEN a graph-based dependency parser BIBREF27. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing.
We first look at the fairest comparison between mBERT and monolingually initialized RAMEN$_{\textsc {base}}$+BERT. The latter outperforms the former on five languages except Arabic. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\textsc {large}}$ renders a modest improvement over RAMEN$_{\textsc {base}}$.
Analysis ::: Impact of initialization
Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\textsc {base}}$+RoBERTa with foreign word-embeddings are initialized randomly from $\mathcal {N}(0, {1}{d^2})$. For a fair comparison, we use the same hyper-parameters in §SECREF27. Table TABREF36 shows the results of XNLI and UD parsing of random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers random initialization.
Table 1: Zero-shot classification results on XNLI. ì indicates parallel data is used. RAMEN only uses parallel data for initialization. The best results are marked in bold.
Table 2: LAS scores for zero-shot dependency parsing. ì indicates parallel data is used for initialization. Punctuation are removed during the evaluation. The best results are marked in bold.
Table 3: Comparison between random initialization (rnd) of language specific parameters and initialization using aligned fastText vectors (vec).
Table 4: Evaluation in supervised UD parsing. The scores are LAS.
Figure 2: Accuracy and LAS evaluated at each checkpoints.
Table 6: Comparison between b-BERT trained from scratch for 1,000,000 updates and RAMEN trained for 175,000 updates.
Query: How is the model transferred to other languages?
JSON Response: | {"answer": "Build a bilingual language model, learn the target language specific parameters starting from a pretrained English LM , fine-tune both English and target model to obtain the bilingual LM.", "evidence": ["Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:"]} | science.qasper_abstractive_qa.221.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nFrom English To Foreign Languages: Transferring Pre-trained Language Models\n\nPre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT base model within a day and a foreign BERT large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.\n\nIntroduction\nPre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.\nAlthough multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations BIBREF7 that supports 93 languages. Among the three models, LASER is based on neural machine translation approach and strictly requires parallel data to train.\nDo multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:\nWe propose a fast adaptation method for obtaining a bilingual BERT$_{\\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU.\nWe evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer competitive performance or even better that mBERT.\nWe illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task.\n\nBilingual Pre-trained LMs ::: Fine-tuning Target Embeddings\nAfter initializing foreign word-embeddings, we replace English word-embeddings in the English pre-trained LM with foreign word-embeddings to obtain the foreign LM. We then fine-tune only foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\\Psi ()$ is good at capturing association, the purpose of this step is to further optimize target embeddings such that the target LM can utilized the trained encoder for association task. For example, if the words Albert Camus presented in a French input sequence, the self-attention in the encoder more likely attends to words absurde and existentialisme once their embeddings are tuned.\n\nBilingual Pre-trained LMs ::: Fine-tuning Bilingual LM\nWe create a bilingual LM by plugging foreign language specific parameters to the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for foreign language. The encoder layer in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that in can learn syntactic aspects (i.e., word order, morphological agreement) of the target languages.\n\nZero-shot Experiments ::: Data\nWe evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource.\nFor experiments that use parallel data to initialize foreign specific parameters, we use the same datasets in the work of BIBREF6. Specifically, we use United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word-vectors from the scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline.\nWe tokenize English using the provided tokenizer from pre-trained models. For target languages, we use fastBPE to learn 30,000 BPE codes and 50,000 codes when transferring from BERT and RoBERTa respectively. We truncate the BPE vocabulary of foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa.\nWe use XNLI dataset BIBREF9 for classification task and Universal Dependencies v2.4 BIBREF21 for parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian) ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese).\n\nZero-shot Experiments ::: Data ::: Remark on BPE\nBIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performances for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that is in both English and foreign vocabulary has two different embeddings.\n\nZero-shot Experiments ::: Estimating translation probabilities\nSince pre-trained models operate on subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13.\nEstimating subword translation probabilities from aligned word vectors requires an additional processing step since the provided vectors from fastText are not at subword level. We use the following approximation to obtain subword vectors: the vector $_s$ of subword $s$ is the weighted average of all the aligned word vectors $_{w_i}$ that have $s$ as an subword\nwhere $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \\sum _{w_j:\\, s\\in w_j} p(w_j)$. We take the top 50,000 words in each aligned word-vectors to compute subword vectors.\nIn both cases, not all the words in the foreign vocabulary can be initialized from the English word-embeddings. Those words are initialized randomly from a Gaussian $\\mathcal {N}(0, {1}{d^2})$.\n\nZero-shot Experiments ::: Hyper-parameters\nIn all the experiments, we tune RAMEN$_{\\textsc {base}}$ for 175,000 updates and RAMEN$_{\\textsc {large}}$ for 275,000 updates where the first 25,000 updates are for language specific parameters. The sequence length is set to 256. The mini-batch size are 64 and 24 when tuning language specific parameters using RAMEN$_{\\textsc {base}}$ and RAMEN$_{\\textsc {large}}$ respectively. For tuning bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\\textsc {base}}$ and 24 for RAMEN$_{\\textsc {large}}$ where half of the batch are English sequences and the other half are foreign sequences. This strategy of balancing mini-batch has been used in multilingual neural machine translation BIBREF23, BIBREF24.\nWe optimize RAMEN$_{\\textsc {base}}$ using Lookahead optimizer BIBREF25 wrapped around Adam with the learning rate of $10^{-4}$, the number of fast weight updates $k=5$, and interpolation parameter $\\alpha =0.5$. We choose Lookahead optimizer because it has been shown to be robust to the initial parameters of the based optimizer (Adam). For Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4000 updates and then follow an inverse square root decay. All RAMEN$_{\\textsc {large}}$ models are optimized with Adam due to memory limit.\nWhen fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32, Adam's learning rate of $10^{-5}$. The number of epochs are set to 4 and 50 for XNLI and UD tasks respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\\textsc {base}}$ model is trained within a day and each RAMEN$_{\\textsc {large}}$ is trained within two days.\n\nResults ::: Universal Dependency Parsing\nWe build on top of RAMEN a graph-based dependency parser BIBREF27. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing.\nWe first look at the fairest comparison between mBERT and monolingually initialized RAMEN$_{\\textsc {base}}$+BERT. The latter outperforms the former on five languages except Arabic. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\\textsc {large}}$ renders a modest improvement over RAMEN$_{\\textsc {base}}$.\n\nAnalysis ::: Impact of initialization\nInitializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\\textsc {base}}$+RoBERTa with foreign word-embeddings are initialized randomly from $\\mathcal {N}(0, {1}{d^2})$. For a fair comparison, we use the same hyper-parameters in §SECREF27. Table TABREF36 shows the results of XNLI and UD parsing of random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers random initialization.\n\nTable 1: Zero-shot classification results on XNLI. ì indicates parallel data is used. RAMEN only uses parallel data for initialization. The best results are marked in bold.\n\nTable 2: LAS scores for zero-shot dependency parsing. ì indicates parallel data is used for initialization. Punctuation are removed during the evaluation. The best results are marked in bold.\n\nTable 3: Comparison between random initialization (rnd) of language specific parameters and initialization using aligned fastText vectors (vec).\n\nTable 4: Evaluation in supervised UD parsing. The scores are LAS.\n\nFigure 2: Accuracy and LAS evaluated at each checkpoints.\n\nTable 6: Comparison between b-BERT trained from scratch for 1,000,000 updates and RAMEN trained for 175,000 updates.\n\nQuery: How is the model transferred to other languages?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Build a bilingual language model, learn the target language specific parameters starting from a pretrained English LM , fine-tune both English and target model to obtain the bilingual LM.\", \"evidence\": [\"Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:\"]}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Increased conversion of PGE 2 to PPARy ligand 15-ket-PGE 2 causes accumulation of PGE.
Research Abstract:
Cystic fibrosis is caused by mutations in the cystic fibrosis transmembrane conductance regulator (encoded by Cftr) that impair its role as an apical chloride channel that supports bicarbonate transport.
Individuals with cystic fibrosis show retained, thickened mucus that plugs airways and obstructs luminal organs as well as numerous other abnormalities that include inflammation of affected organs, alterations in lipid metabolism and insulin resistance.
Here we show that colonic epithelial cells and whole lung tissue from Cftr-deficient mice show a defect in peroxisome proliferator-activated receptor-gamma (PPAR-gamma, encoded by Pparg) function that contributes to a pathological program of gene expression.
Lipidomic analysis of colonic epithelial cells suggests that this defect results in part from reduced amounts of the endogenous PPAR-gamma ligand 15-keto-prostaglandin E(2) (15-keto-PGE(2)).
Treatment of Cftr-deficient mice with the synthetic PPAR-gamma ligand rosiglitazone partially normalizes the altered gene expression pattern associated with Cftr deficiency and reduces disease severity.
Rosiglitazone has no effect on chloride secretion in the colon, but it increases expression of the genes encoding carbonic anhydrases 4 and 2 (Car4 and Car2), increases bicarbonate secretion and reduces mucus retention.
These studies reveal a reversible defect in PPAR-gamma signaling in Cftr-deficient cells that can be pharmacologically corrected to ameliorate the severity of the cystic fibrosis phenotype in mice. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.388 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nIncreased conversion of PGE 2 to PPARy ligand 15-ket-PGE 2 causes accumulation of PGE.\n\nResearch Abstract:\nCystic fibrosis is caused by mutations in the cystic fibrosis transmembrane conductance regulator (encoded by Cftr) that impair its role as an apical chloride channel that supports bicarbonate transport.\nIndividuals with cystic fibrosis show retained, thickened mucus that plugs airways and obstructs luminal organs as well as numerous other abnormalities that include inflammation of affected organs, alterations in lipid metabolism and insulin resistance.\nHere we show that colonic epithelial cells and whole lung tissue from Cftr-deficient mice show a defect in peroxisome proliferator-activated receptor-gamma (PPAR-gamma, encoded by Pparg) function that contributes to a pathological program of gene expression.\nLipidomic analysis of colonic epithelial cells suggests that this defect results in part from reduced amounts of the endogenous PPAR-gamma ligand 15-keto-prostaglandin E(2) (15-keto-PGE(2)).\nTreatment of Cftr-deficient mice with the synthetic PPAR-gamma ligand rosiglitazone partially normalizes the altered gene expression pattern associated with Cftr deficiency and reduces disease severity.\nRosiglitazone has no effect on chloride secretion in the colon, but it increases expression of the genes encoding carbonic anhydrases 4 and 2 (Car4 and Car2), increases bicarbonate secretion and reduces mucus retention.\nThese studies reveal a reversible defect in PPAR-gamma signaling in Cftr-deficient cells that can be pharmacologically corrected to ameliorate the severity of the cystic fibrosis phenotype in mice.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: Our program Sherlocc, detects statistically relevant conserved rare codon clusters and produces a user-friendly HTML output.
LaTcOm: a web server for visualizing rare codon clusters in coding sequences
We present LaTcOm, a new web tool, which offers several alternative methods for 'rare codon cluster' (RCC) identification
three RCC detection schemes are implemented: the recently described %MinMax algorithm and a simplified sliding window approach, along with a novel modification of a linear-time algorithm for the detection of maximally scoring subsequences tailored to the RCC detection problem
The PAUSE software has been developed as a new tool to study translational control over protein targeting. This makes it possible to correlate the position of clusters of rare codons in a gene, predicted to cause a translational pause, with the position of hydrophobic stretches in the encoded protein, predicted to span a membrane or to act as a cleavable signal for targeting to the secretory pathway.
Question: Which are currently available software tools for detecting rare codon clusters in coding sequences? | ["LaTcOm", "%MinMax", "PAUSE", "Sherlocc", "Sliding Window"] | science.bioasq_list_qa.391 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Our program Sherlocc, detects statistically relevant conserved rare codon clusters and produces a user-friendly HTML output.\nLaTcOm: a web server for visualizing rare codon clusters in coding sequences\nWe present LaTcOm, a new web tool, which offers several alternative methods for 'rare codon cluster' (RCC) identification\nthree RCC detection schemes are implemented: the recently described %MinMax algorithm and a simplified sliding window approach, along with a novel modification of a linear-time algorithm for the detection of maximally scoring subsequences tailored to the RCC detection problem\nThe PAUSE software has been developed as a new tool to study translational control over protein targeting. This makes it possible to correlate the position of clusters of rare codons in a gene, predicted to cause a translational pause, with the position of hydrophobic stretches in the encoded protein, predicted to span a membrane or to act as a cleavable signal for targeting to the secretory pathway.\n\nQuestion: Which are currently available software tools for detecting rare codon clusters in coding sequences?",
"role": "user"
},
{
"content": "[\"LaTcOm\", \"%MinMax\", \"PAUSE\", \"Sherlocc\", \"Sliding Window\"]",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Identification of novel mutations and sequence variants in the SOX2 and CHX10 genes in patients with anophthalmia/microphthalmia. PURPOSE: Mutations in the SOX2 and CHX10 genes have been reported in patients with anophthalmia and/or microphthalmia. In this study, we evaluated 34 anophthalmic/microphthalmic patient DNA samples (two sets of siblings included) for mutations and sequence variants in SOX2 and CHX10. METHODS: Conformational sensitive gel electrophoresis (CSGE) was used for the initial SOX2 and CHX10 screening of 34 affected individuals (two sets of siblings), five unaffected family members, and 80 healthy controls. Patient samples containing heteroduplexes were selected for sequence analysis. Base pair changes in SOX2 and CHX10 were confirmed by sequencing bidirectionally in patient samples. RESULTS: Two novel heterozygous mutations and two sequence variants (one known) in SOX2 were identified in this cohort. Mutation c.310 G>T (p. Glu104X), found in one patient, was in the region encoding the high mobility group (HMG) DNA-binding domain and resulted in a change from glutamic acid to a stop codon. The second mutation, noted in two affected siblings, was a single nucleotide deletion c.549delC (p. Pro184ArgfsX19) in the region encoding the activation domain, resulting in a frameshift and premature termination of the coding sequence. The shortened protein products may result in the loss of function. In addition, a novel nucleotide substitution c.*557G>A was identified in the 3'-untranslated region in one patient. The relationship between the nucleotide change and the protein function is indeterminate. A known single nucleotide polymorphism (c. *469 C>A, SNP rs11915160) was also detected in 2 of the 34 patients. Screening of CHX10 identified two synonymous sequence variants, c.471 C>T (p.Ser157Ser, rs35435463) and c.579 G>A (p. Gln193Gln, novel SNP), and one non-synonymous sequence variant, c.871 G>A (p. Asp291Asn, novel SNP). The non-synonymous polymorphism was also present in healthy controls, suggesting non-causality. CONCLUSIONS: These results support the role of SOX2 in ocular development. Loss of SOX2 function results in severe eye malformation. CHX10 was not implicated with microphthalmia/anophthalmia in our patient cohort. | {"CellLine": [], "Chemical": [], "Disease": ["eye malformation", "microphthalmia", "anophthalmic", "anophthalmia", "microphthalmic"], "Gene": ["SOX2", "CHX10"], "Species": ["patient", "Patient", "patients"], "Variant": ["p. Asp291Asn", "c.471 C\u003eT", "rs11915160", "c.549delC", "c.310 G\u003eT", "p.Ser157Ser", "rs35435463", "p. Gln193Gln", "c.871 G\u003eA", "p. Pro184ArgfsX19", "c.*557G\u003eA", "c. *469 C\u003eA", "c.579 G\u003eA", "p. Glu104X", "glutamic acid to a stop codon"]} | science.biored_ner.57 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nIdentification of novel mutations and sequence variants in the SOX2 and CHX10 genes in patients with anophthalmia/microphthalmia. PURPOSE: Mutations in the SOX2 and CHX10 genes have been reported in patients with anophthalmia and/or microphthalmia. In this study, we evaluated 34 anophthalmic/microphthalmic patient DNA samples (two sets of siblings included) for mutations and sequence variants in SOX2 and CHX10. METHODS: Conformational sensitive gel electrophoresis (CSGE) was used for the initial SOX2 and CHX10 screening of 34 affected individuals (two sets of siblings), five unaffected family members, and 80 healthy controls. Patient samples containing heteroduplexes were selected for sequence analysis. Base pair changes in SOX2 and CHX10 were confirmed by sequencing bidirectionally in patient samples. RESULTS: Two novel heterozygous mutations and two sequence variants (one known) in SOX2 were identified in this cohort. Mutation c.310 G>T (p. Glu104X), found in one patient, was in the region encoding the high mobility group (HMG) DNA-binding domain and resulted in a change from glutamic acid to a stop codon. The second mutation, noted in two affected siblings, was a single nucleotide deletion c.549delC (p. Pro184ArgfsX19) in the region encoding the activation domain, resulting in a frameshift and premature termination of the coding sequence. The shortened protein products may result in the loss of function. In addition, a novel nucleotide substitution c.*557G>A was identified in the 3'-untranslated region in one patient. The relationship between the nucleotide change and the protein function is indeterminate. A known single nucleotide polymorphism (c. *469 C>A, SNP rs11915160) was also detected in 2 of the 34 patients. Screening of CHX10 identified two synonymous sequence variants, c.471 C>T (p.Ser157Ser, rs35435463) and c.579 G>A (p. Gln193Gln, novel SNP), and one non-synonymous sequence variant, c.871 G>A (p. Asp291Asn, novel SNP). The non-synonymous polymorphism was also present in healthy controls, suggesting non-causality. CONCLUSIONS: These results support the role of SOX2 in ocular development. Loss of SOX2 function results in severe eye malformation. CHX10 was not implicated with microphthalmia/anophthalmia in our patient cohort.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"eye malformation\", \"microphthalmia\", \"anophthalmic\", \"anophthalmia\", \"microphthalmic\"], \"Gene\": [\"SOX2\", \"CHX10\"], \"Species\": [\"patient\", \"Patient\", \"patients\"], \"Variant\": [\"p. Asp291Asn\", \"c.471 C\\u003eT\", \"rs11915160\", \"c.549delC\", \"c.310 G\\u003eT\", \"p.Ser157Ser\", \"rs35435463\", \"p. Gln193Gln\", \"c.871 G\\u003eA\", \"p. Pro184ArgfsX19\", \"c.*557G\\u003eA\", \"c. *469 C\\u003eA\", \"c.579 G\\u003eA\", \"p. Glu104X\", \"glutamic acid to a stop codon\"]}",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Post-transcriptional handling of mitochondrial transcripts occurs in mitochondrial RNA granules.
Research Abstract:
Human mitochondrial ribosomes are specialized in the synthesis of 13 proteins, which are fundamental components of the oxidative phosphorylation system.
The pathway of mitoribosome biogenesis, the compartmentalization of the process, and factors involved remain largely unknown.
Here, we have identified the DEAD-box protein DDX28 as an RNA granule component essential for the biogenesis of the mitoribosome large subunit (mt-LSU).
DDX28 interacts with the 16S rRNA and the mt-LSU.
RNAi-mediated DDX28 silencing in HEK293T cells does not affect mitochondrial mRNA stability or 16S rRNA processing or modification.
However, it leads to reduced levels of 16S rRNA and mt-LSU proteins, impaired mt-LSU assembly, deeply attenuated mitochondrial protein synthesis, and consequent failure to assemble oxidative phosphorylation complexes.
Our findings identify DDX28 as essential during the early stages of mitoribosome mt-LSU biogenesis, a process that takes place mainly near the mitochondrial nucleoids, in the compartment defined by the RNA granules. | {"verdict": "SUPPORT", "evidence": ["Here, we have identified the DEAD-box protein DDX28 as an RNA granule component essential for the biogenesis of the mitoribosome large subunit (mt-LSU).", "Our findings identify DDX28 as essential during the early stages of mitoribosome mt-LSU biogenesis, a process that takes place mainly near the mitochondrial nucleoids, in the compartment defined by the RNA granules."]} | science.scifact_entailment.613 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPost-transcriptional handling of mitochondrial transcripts occurs in mitochondrial RNA granules.\n\nResearch Abstract:\nHuman mitochondrial ribosomes are specialized in the synthesis of 13 proteins, which are fundamental components of the oxidative phosphorylation system.\nThe pathway of mitoribosome biogenesis, the compartmentalization of the process, and factors involved remain largely unknown.\nHere, we have identified the DEAD-box protein DDX28 as an RNA granule component essential for the biogenesis of the mitoribosome large subunit (mt-LSU).\nDDX28 interacts with the 16S rRNA and the mt-LSU.\nRNAi-mediated DDX28 silencing in HEK293T cells does not affect mitochondrial mRNA stability or 16S rRNA processing or modification.\nHowever, it leads to reduced levels of 16S rRNA and mt-LSU proteins, impaired mt-LSU assembly, deeply attenuated mitochondrial protein synthesis, and consequent failure to assemble oxidative phosphorylation complexes.\nOur findings identify DDX28 as essential during the early stages of mitoribosome mt-LSU biogenesis, a process that takes place mainly near the mitochondrial nucleoids, in the compartment defined by the RNA granules.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Here, we have identified the DEAD-box protein DDX28 as an RNA granule component essential for the biogenesis of the mitoribosome large subunit (mt-LSU).\", \"Our findings identify DDX28 as essential during the early stages of mitoribosome mt-LSU biogenesis, a process that takes place mainly near the mitochondrial nucleoids, in the compartment defined by the RNA granules.\"]}",
"role": "assistant"
}
]
|
science.discomat_te | | Gd2O3 (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) |
| +-0.05% | +-0.05% | +-0.05% | +-0.05% | +-0.05% | +-0.05% |
| | SiO2 | Al2O3 | B2O3 | Na2O | ZrO2 |
| 0.92 | 54.47 | 5.86 | 23.76 | 11.82 | 3.16 |
| 1.83 | 53.97 | 5.81 | 23.55 | 11.71 | 3.13 |
| 2.71 | 53.48 | 5.76 | 23.33 | 11.61 | 3.11 |
| 4.44 | 52.53 | 5.66 | 22.92 | 11.40 | 3.05 |
Caption: Compositions of base 5 oxide aluminoborosilicate glasses and doping concentration of gadolinium oxide
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.92", "type": "Glass_Compound_Amount", "constituent": "Gd2O3", "unit": "wt", "material": "xx"}
{"value": "54.47", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"}
{"value": "5.86", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "xx"}
{"value": "23.76", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "xx"}
{"value": "11.82", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "wt", "material": "xx"}
{"value": "3.16", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "wt", "material": "xx"}
{"value": "1.83", "type": "Glass_Compound_Amount", "constituent": "Gd2O3", "unit": "wt", "material": "xx"}
{"value": "53.97", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"}
{"value": "5.81", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "xx"}
{"value": "23.55", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "xx"}
{"value": "11.71", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "wt", "material": "xx"}
{"value": "3.13", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "wt", "material": "xx"}
{"value": "2.71", "type": "Glass_Compound_Amount", "constituent": "Gd2O3", "unit": "wt", "material": "xx"}
{"value": "53.48", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"}
{"value": "5.76", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "xx"}
{"value": "23.33", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "xx"}
{"value": "11.61", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "wt", "material": "xx"}
{"value": "3.11", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "wt", "material": "xx"}
{"value": "4.44", "type": "Glass_Compound_Amount", "constituent": "Gd2O3", "unit": "wt", "material": "xx"}
{"value": "52.53", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"}
{"value": "5.66", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "xx"}
{"value": "22.92", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "xx"}
{"value": "11.40", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "wt", "material": "xx"}
{"value": "3.05", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "wt", "material": "xx"} | science.discomat_te.267 | [
{
"content": "| Gd2O3 (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) | Base 5 oxide aluminoborosilicate glass composition (wt%) |\n| +-0.05% | +-0.05% | +-0.05% | +-0.05% | +-0.05% | +-0.05% |\n| | SiO2 | Al2O3 | B2O3 | Na2O | ZrO2 |\n| 0.92 | 54.47 | 5.86 | 23.76 | 11.82 | 3.16 |\n| 1.83 | 53.97 | 5.81 | 23.55 | 11.71 | 3.13 |\n| 2.71 | 53.48 | 5.76 | 23.33 | 11.61 | 3.11 |\n| 4.44 | 52.53 | 5.66 | 22.92 | 11.40 | 3.05 |\n\nCaption: Compositions of base 5 oxide aluminoborosilicate glasses and doping concentration of gadolinium oxide\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.92\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Gd2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"54.47\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"5.86\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"23.76\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"11.82\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"3.16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZrO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"1.83\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Gd2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"53.97\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"5.81\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"23.55\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"11.71\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"3.13\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZrO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"2.71\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Gd2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"53.48\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"5.76\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"23.33\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"11.61\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"3.11\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZrO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"4.44\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Gd2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"52.53\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"5.66\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"22.92\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"11.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"3.05\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZrO2\", \"unit\": \"wt\", \"material\": \"xx\"}",
"role": "assistant"
}
]
|
science.discomat_te | | Parameter | First peak | Second peak |
| Activation energy (kJ/mol) | E 1 =340 | E 2 =498 |
| Pre-exponential factor, log q (K/min) | 18.90 | 28.20 |
Caption: Crystallization kinetic parameters for MAS-T glass determined by DTA
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "18.90", "type": "Other"}
{"value": "28.20", "type": "Other"} | science.discomat_te.358 | [
{
"content": "| Parameter | First peak | Second peak |\n| Activation energy (kJ/mol) | E 1 =340 | E 2 =498 |\n| Pre-exponential factor, log q (K/min) | 18.90 | 28.20 |\n\nCaption: Crystallization kinetic parameters for MAS-T glass determined by DTA\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"18.90\", \"type\": \"Other\"}\n{\"value\": \"28.20\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
Main Content:
1 Introduction . Integer programming is one of the most broadly-applicable tools in computer science , used to formulate problems from operations research ( such as routing , scheduling , and pricing ) , machine learning ( such as adversarially-robust learning , MAP estimation , and clustering ) , and beyond . Branchand-cut ( B & C ) is the most widely-used algorithm for solving integer programs ( IPs ) . B & C is highly configurable , and with a deft configuration , it can be used to solve computationally challenging problems . Finding a good configuration , however , is a notoriously difficult problem . We study machine learning approaches to configuring policies for selecting cutting planes , which have an enormous impact on B & C ’ s performance . At a high level , B & C works by recursively partitioning the IP ’ s feasible region , searching for the locally optimal solution within each set of the partition , 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . until it can verify that it has found the globally optimal solution . An IP ’ s feasible region is defined by a set of linear inequalities Ax ≤ b and integer constraints x ∈ Zn , where n is the number of variables . By dropping the integrality constraints , we obtain the linear programming ( LP ) relaxation of the IP , which can be solved efficiently . A cutting plane is a carefully-chosen linear inequality αTx ≤ β which refines the LP relaxation ’ s feasible region without separating any integral point . Intuitively , a well-chosen cutting plane will remove a large portion of the LP relaxation ’ s feasible region , speeding up the time it takes B & C to find the optimal solution to the original IP . Cutting plane selection is a crucial task , yet it is challenging because many cutting planes and cut-selection policies have tunable parameters , and the best configuration depends intimately on the application domain . We provide the first provable guarantees for learning high-performing cutting planes and cut-selection policies , tailored to the application at hand . We model the application domain via an unknown , application-specific distribution over IPs , as is standard in the literature on using machine learning for integer programming [ e.g. , 21 , 23 , 31 , 36 , 43 ] . For example , this could be a distribution over the routing IPs that a shipping company must solve day after day . The learning algorithm ’ s input is a training set sampled from this distribution . The goal is to use this training set to learn cutting planes and cut-selection policies with strong future performance on problems from the same application but which are not already in the training set—or more formally , strong expected performance . 1.1 Summary of main contributions and overview of techniques . As our first main contribution , we provide sample complexity bounds of the following form : fixing a family of cutting planes , we bound the number of samples sufficient to ensure that for any sequence of cutting planes from the family , the average size of the B & C tree is close to the expected size of the B & C tree . We measure performance in terms of the size of the search tree B & C builds . Our guarantees apply to the parameterized family of Chvátal-Gomory ( CG ) cuts [ 10 , 17 ] , one of the most widely-used families of cutting planes . The overriding challenge is that to provide guarantees , we must analyze how the tree size changes as a function of the cut parameters . This is a sensitive function—slightly shifting the parameters can cause the tree size to shift from constant to exponential in the number of variables . Our key technical insight is that as the parameters vary , the entries of the cut ( i.e. , the vector α and offset β of the cut αTx ≤ β ) are multivariate polynomials of bounded degree . The number of terms defining the polynomials is exponential in the number of parameters , but we show that the polynomials can be embedded in a space with dimension sublinear in the number of parameters . This insight allows us to better understand tree size as a function of the parameters . We then leverage results by Balcan et al . [ 8 ] that show how to use structure exhibited by dual functions ( measuring an algorithm ’ s performance , such as its tree size , as a function of its parameters ) to derive sample complexity bounds . Our second main contribution is a sample complexity bound for learning cut-selection policies , which allow B & C to adaptively select cuts as it solves the input IP . These cut-selection policies assign a number of real-valued scores to a set of cutting planes and then apply the cut that has the maximum weighted sum of scores . Tree size is a volatile function of these weights , though we prove that it is piecewise constant , as illustrated in Figure 1 , which allows us to prove our sample complexity bound . Finally , as our third main contribution , we provide guarantees for tuning weighted combinations of scoring rules for other aspects of tree search beyond cut selection , including node and variable selection . We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any configuration from a single region , it will take the same sequence of actions . This structure allows us to prove our sample complexity bound . This is the first paper to provide guarantees for tree search configuration that apply simultaneously to multiple different aspects of the algorithm—prior research was specific to variable selection [ 5 ] . Sample complexity bounds are important because if the parameterized class of cuts or cut-selection policies that we optimize over is highly complex and the training set is too small , the learned cut or cut-selection policy might have great average empirical performance over the training set but terrible future performance . In other words , the parameter configuration procedure may overfit to the training set . The sample complexity bounds we provide are uniform-convergence : we prove that given enough samples , uniformly across all parameter settings , the difference between average and empirical performance is small . In other words , these bounds hold for any procedure one might use to optimize over the training set : manual or automated , optimal or suboptimal . No matter what parameter setting the configuration procedure comes up with , the user can be guaranteed that so long as that parameter setting has good average empirical performance over the training set , it will also have strong future performance . 1.2 Related work . Applied research on tree search configuration . Over the past decade , a substantial literature has developed on the use of machine learning for integer programming and tree search [ e.g. , 2 , 7 , 9 , 13 , 19 , 23–25 , 29 , 31–33 , 35 , 36 , 41–43 ] . This has included research that improves specific aspects of B & C such as variable selection [ 2 , 13 , 24 , 29 , 32 , 41 ] , node selection [ 19 , 35 , 44 ] , and heuristic scheduling [ 25 ] . These papers are applied , whereas we focus on providing theoretical guarantees . With respect to cutting plane selection , the focus of this paper , Sandholm [ 36 ] uses machine learning techniques to customize B & C for combinatorial auction winner determination , including cutting plane selection . Tang et al . [ 37 ] and Huang et al . [ 20 ] study machine learning approaches to cutting plane selection . The former work formulates this problem as a reinforcement learning problem and shows that their approach can outperform human-designed heuristics for a variety of tasks . The latter work studies cutting plane selection in the multiple-instance-learning framework and proposes a neural-network architecture for scoring and ranking cutting planes . Meanwhile , the focus of our paper is to provide the first provable guarantees for cutting plane selection via machine learning . Ferber et al . [ 15 ] study a problem where the IP objective vector c is unknown , but an estimate ĉ can be obtained from data . Their goal is to optimize the quality of the solutions obtained by solving the IP defined by ĉ , with respect to the true vector c. They do so by formulating the IP as a differentiable layer in a neural network . The nonconvex nature of the IP does not allow for straightforward gradient computation for the backward pass , so they obtain a continuous surrogate using cutting planes . Provable guarantees for algorithm configuration . Gupta and Roughgarden [ 18 ] initiated the study of sample complexity bounds for algorithm configuration . In research most related to ours , Balcan et al . [ 5 ] provide sample complexity bounds for learning tree search variable selection policies ( VSPs ) . They prove their bounds by showing that for any IP , hyperplanes partition the VSP parameter space into regions where the B & C tree size is a constant function of the parameters . The analysis in this paper requires new techniques because although we prove that the B & C tree size is a piecewiseconstant function of the CG cutting plane parameters , the boundaries between pieces are far more complex than hyperplanes : they are hypersurfaces defined by multivariate polynomials . Kleinberg et al . [ 26 , 27 ] and Weisz et al . [ 38 , 39 ] design configuration procedures for runtime minimization that come with theoretical guarantees . Their algorithms are designed for the case where there are a finitely-many parameter settings to choose from ( although they are still able to provide guarantees for infinite parameter spaces by running their procedure on a finite sample of configurations ; Balcan et al . [ 5 , 6 ] analyze when discretization approaches can and can not be gainfully employed ) . In contrast , our guarantees are designed for infinite parameter spaces . 2 Problem formulation . In this section we give a more detailed technical overview of branch-and-cut , as well as an overview of the tools from learning theory we use to prove sample complexity guarantees . 2.1 Branch-and-cut . We study integer programs ( IPs ) in canonical form given by max { cTx : Ax ≤ b , x ≥ 0 , x ∈ Zn } , ( 1 ) where A ∈ Zm×n , b ∈ Zm , and c ∈ Rn . Branch-and-cut ( B & C ) works by recursively partitioning the input IP ’ s feasible region , searching for the locally optimal solution within each set of the partition until it can verify that it has found the globally optimal solution . It organizes this partition as a search tree , with the input IP stored at the root . It begins by solving the LP relaxation of the input IP ; we denote the solution as x∗LP ∈ Rn . If x∗LP satisfies the IP ’ s integrality constraints ( x∗LP ∈ Zn ) , then the procedure terminates—x∗LP is the globally optimal solution . Otherwise , it uses a variable selection policy to choose a variable x [ i ] . In the left child of the root , it stores the original IP with the additional constraint that x [ i ] ≤ bx∗LP [ i ] c , and in the right child , with the additional constraint that x [ i ] ≥ dx∗LP [ i ] e. It then uses a node selection policy to select a leaf of the tree and repeats this procedure—solving the LP relaxation and branching on a variable . B & C can fathom a node , meaning that it will stop searching along that branch , if 1 ) the LP relaxation satisfies the IP ’ s integrality constraints , 2 ) the LP relaxation is infeasible , or 3 ) the objective value of the LP relaxation ’ s solution is no better than the best integral solution found thus far . We assume there is a bound κ on the size of the tree we allow B & C to build before we terminate , as is common in prior research [ 5 , 21 , 26 , 27 ] . Cutting planes are a means of ensuring that at each iteration of B & C , the solution to the LP relaxation is as close to the optimal integral solution as possible . Formally , let P = { x ∈ Rn : Ax ≤ b , x ≥ 0 } denote the feasible region obtained by taking the LP relaxation of IP ( 1 ) . Let PI = conv ( P ∩ Zn ) denote the integer hull of P . A valid cutting plane is any hyperplane αTx ≤ β such that if x is in the integer hull ( x ∈ PI ) , then x satisfies the inequality αTx ≤ β . In other words , a valid cut does not remove any integral point from the LP relaxation ’ s feasible region . A valid cutting plane separates x ∈ P \ PI if it does not satisfy the inequality , or in other words , αTx > β . At any node of the search tree , B & C can add valid cutting planes that separate the optimal solution to the node ’ s LP relaxation , thus improving the solution estimates used to prune the search tree . However , adding too many cuts will increase the time it takes to solve the LP relaxation at each node . Therefore , solvers such as SCIP [ 16 ] , the leading open-source solver , bound the number of cuts that will be applied . A famous class of cutting planes is the family of Chvátal-Gomory ( CG ) cuts1 [ 10 , 17 ] , which are parameterized by vectors u ∈ Rm . The CG cut defined by u ∈ Rm is the hyperplane buTAcx ≤ buT bc , which is guaranteed to be valid . Throughout this paper we primarily restrict our attention to u ∈ [ 0 , 1 ) m. This is without loss of generality , since the facets of P ∩ { x ∈ Rn : buTAcx ≤ buT bc ∀u ∈ Rm } can be described by the finitely many u ∈ [ 0 , 1 ) m such that uTA ∈ Zn ( Lemma 5.13 of Conforti et al . [ 11 ] ) . Some IP solvers such as SCIP use scoring rules to select among cutting planes , which are meant to measure the quality of a cut . Some commonly-used scoring rules include efficacy [ 4 ] ( score1 ) , objective parallelism [ 1 ] ( score2 ) , directed cutoff distance [ 16 ] ( score3 ) , and integral support [ 40 ] ( score4 ) ( defined in Appendix A ) . Efficacy measures the distance between the cut αTx ≤ β and x∗LP : score1 ( α Tx ≤ β ) = ( αTx∗LP − β ) / ‖α‖2 , as illustrated in Figure 2a . Objective parallelism measures the angle between the objective c and the cut ’ s normal vector α : score2 ( αTx ≤ β ) =∣∣cTα∣∣ / ( ‖α‖2 ‖c‖2 ) , as illustrated in Figures 2b and 2c . Directed cutoff distance measures the distance between the LP optimal solution and the cut in a more relevant direction than the efficacy scoring rule . Specifically , let x be the incumbent solution , which is the best-known feasible solution to the input IP . The directed cutoff distance is the distance between the hyperplane ( α , β ) and the current LP solution x∗LP along the direction of the incumbent x , as illustrated in Figures 2d and 2e : score3 ( αTx ≤ β ) = ‖x− x∗LP‖2 · ( α Tx∗LP − β ) / ∣∣αT ( x− x∗LP ) ∣∣ . SCIP uses the scoring rule 3 5score1 + 1 10score2 + 1 2score3 + 1 10score4 [ 16 ] . 1The set of CG cuts is equivalent to the set of Gomory ( fractional ) cuts [ 12 ] , another commonly studied family of cutting planes with a slightly different parameterization . | The paper focuses on the sample complexity of learning to select Chvatal-Gomory cuts for integer linear programming. We assume that there is an unknown distribution that generates ILP instances. CG cuts are parametrized by a set of weights, one per constraint. How large should the set of training instances be for one to accurately estimate the "goodness" of a given parametrization? This is the main question tackled here. Using the data-driven algorithm design framework of Balcan et al. [8], this paper shows that three flavors of the learning problem can be analyzed effectively. The main contribution is to show that there is structure to the cut generation process as its parameters vary; the space of possible cuts can be partitioned, the form of the boundaries that determine the partition is identified, and the behavior of the cut generation is constant within each region. These can be plugged into a very general PAC learning bound from Balcan et al. [8]. Additionally, the sample complexity of generic tree search is analyzed. It is shown that variable, node, and cut selection can be parameterized simultaneously, each with its own additive scoring function, and sample complexity bounds can be derived accordingly. This result generalizes a previous branching-only bound from Balcan et al. [5]. | science.mup_single_document_summarization.15842 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nSample Complexity of Tree Search Configuration: Cutting Planes and Beyond\n\nMain Content:\n1 Introduction . Integer programming is one of the most broadly-applicable tools in computer science , used to formulate problems from operations research ( such as routing , scheduling , and pricing ) , machine learning ( such as adversarially-robust learning , MAP estimation , and clustering ) , and beyond . Branchand-cut ( B & C ) is the most widely-used algorithm for solving integer programs ( IPs ) . B & C is highly configurable , and with a deft configuration , it can be used to solve computationally challenging problems . Finding a good configuration , however , is a notoriously difficult problem . We study machine learning approaches to configuring policies for selecting cutting planes , which have an enormous impact on B & C ’ s performance . At a high level , B & C works by recursively partitioning the IP ’ s feasible region , searching for the locally optimal solution within each set of the partition , 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . until it can verify that it has found the globally optimal solution . An IP ’ s feasible region is defined by a set of linear inequalities Ax ≤ b and integer constraints x ∈ Zn , where n is the number of variables . By dropping the integrality constraints , we obtain the linear programming ( LP ) relaxation of the IP , which can be solved efficiently . A cutting plane is a carefully-chosen linear inequality αTx ≤ β which refines the LP relaxation ’ s feasible region without separating any integral point . Intuitively , a well-chosen cutting plane will remove a large portion of the LP relaxation ’ s feasible region , speeding up the time it takes B & C to find the optimal solution to the original IP . Cutting plane selection is a crucial task , yet it is challenging because many cutting planes and cut-selection policies have tunable parameters , and the best configuration depends intimately on the application domain . We provide the first provable guarantees for learning high-performing cutting planes and cut-selection policies , tailored to the application at hand . We model the application domain via an unknown , application-specific distribution over IPs , as is standard in the literature on using machine learning for integer programming [ e.g. , 21 , 23 , 31 , 36 , 43 ] . For example , this could be a distribution over the routing IPs that a shipping company must solve day after day . The learning algorithm ’ s input is a training set sampled from this distribution . The goal is to use this training set to learn cutting planes and cut-selection policies with strong future performance on problems from the same application but which are not already in the training set—or more formally , strong expected performance . 1.1 Summary of main contributions and overview of techniques . As our first main contribution , we provide sample complexity bounds of the following form : fixing a family of cutting planes , we bound the number of samples sufficient to ensure that for any sequence of cutting planes from the family , the average size of the B & C tree is close to the expected size of the B & C tree . We measure performance in terms of the size of the search tree B & C builds . Our guarantees apply to the parameterized family of Chvátal-Gomory ( CG ) cuts [ 10 , 17 ] , one of the most widely-used families of cutting planes . The overriding challenge is that to provide guarantees , we must analyze how the tree size changes as a function of the cut parameters . This is a sensitive function—slightly shifting the parameters can cause the tree size to shift from constant to exponential in the number of variables . Our key technical insight is that as the parameters vary , the entries of the cut ( i.e. , the vector α and offset β of the cut αTx ≤ β ) are multivariate polynomials of bounded degree . The number of terms defining the polynomials is exponential in the number of parameters , but we show that the polynomials can be embedded in a space with dimension sublinear in the number of parameters . This insight allows us to better understand tree size as a function of the parameters . We then leverage results by Balcan et al . [ 8 ] that show how to use structure exhibited by dual functions ( measuring an algorithm ’ s performance , such as its tree size , as a function of its parameters ) to derive sample complexity bounds . Our second main contribution is a sample complexity bound for learning cut-selection policies , which allow B & C to adaptively select cuts as it solves the input IP . These cut-selection policies assign a number of real-valued scores to a set of cutting planes and then apply the cut that has the maximum weighted sum of scores . Tree size is a volatile function of these weights , though we prove that it is piecewise constant , as illustrated in Figure 1 , which allows us to prove our sample complexity bound . Finally , as our third main contribution , we provide guarantees for tuning weighted combinations of scoring rules for other aspects of tree search beyond cut selection , including node and variable selection . We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any configuration from a single region , it will take the same sequence of actions . This structure allows us to prove our sample complexity bound . This is the first paper to provide guarantees for tree search configuration that apply simultaneously to multiple different aspects of the algorithm—prior research was specific to variable selection [ 5 ] . Sample complexity bounds are important because if the parameterized class of cuts or cut-selection policies that we optimize over is highly complex and the training set is too small , the learned cut or cut-selection policy might have great average empirical performance over the training set but terrible future performance . In other words , the parameter configuration procedure may overfit to the training set . The sample complexity bounds we provide are uniform-convergence : we prove that given enough samples , uniformly across all parameter settings , the difference between average and empirical performance is small . In other words , these bounds hold for any procedure one might use to optimize over the training set : manual or automated , optimal or suboptimal . No matter what parameter setting the configuration procedure comes up with , the user can be guaranteed that so long as that parameter setting has good average empirical performance over the training set , it will also have strong future performance . 1.2 Related work . Applied research on tree search configuration . Over the past decade , a substantial literature has developed on the use of machine learning for integer programming and tree search [ e.g. , 2 , 7 , 9 , 13 , 19 , 23–25 , 29 , 31–33 , 35 , 36 , 41–43 ] . This has included research that improves specific aspects of B & C such as variable selection [ 2 , 13 , 24 , 29 , 32 , 41 ] , node selection [ 19 , 35 , 44 ] , and heuristic scheduling [ 25 ] . These papers are applied , whereas we focus on providing theoretical guarantees . With respect to cutting plane selection , the focus of this paper , Sandholm [ 36 ] uses machine learning techniques to customize B & C for combinatorial auction winner determination , including cutting plane selection . Tang et al . [ 37 ] and Huang et al . [ 20 ] study machine learning approaches to cutting plane selection . The former work formulates this problem as a reinforcement learning problem and shows that their approach can outperform human-designed heuristics for a variety of tasks . The latter work studies cutting plane selection in the multiple-instance-learning framework and proposes a neural-network architecture for scoring and ranking cutting planes . Meanwhile , the focus of our paper is to provide the first provable guarantees for cutting plane selection via machine learning . Ferber et al . [ 15 ] study a problem where the IP objective vector c is unknown , but an estimate ĉ can be obtained from data . Their goal is to optimize the quality of the solutions obtained by solving the IP defined by ĉ , with respect to the true vector c. They do so by formulating the IP as a differentiable layer in a neural network . The nonconvex nature of the IP does not allow for straightforward gradient computation for the backward pass , so they obtain a continuous surrogate using cutting planes . Provable guarantees for algorithm configuration . Gupta and Roughgarden [ 18 ] initiated the study of sample complexity bounds for algorithm configuration . In research most related to ours , Balcan et al . [ 5 ] provide sample complexity bounds for learning tree search variable selection policies ( VSPs ) . They prove their bounds by showing that for any IP , hyperplanes partition the VSP parameter space into regions where the B & C tree size is a constant function of the parameters . The analysis in this paper requires new techniques because although we prove that the B & C tree size is a piecewiseconstant function of the CG cutting plane parameters , the boundaries between pieces are far more complex than hyperplanes : they are hypersurfaces defined by multivariate polynomials . Kleinberg et al . [ 26 , 27 ] and Weisz et al . [ 38 , 39 ] design configuration procedures for runtime minimization that come with theoretical guarantees . Their algorithms are designed for the case where there are a finitely-many parameter settings to choose from ( although they are still able to provide guarantees for infinite parameter spaces by running their procedure on a finite sample of configurations ; Balcan et al . [ 5 , 6 ] analyze when discretization approaches can and can not be gainfully employed ) . In contrast , our guarantees are designed for infinite parameter spaces . 2 Problem formulation . In this section we give a more detailed technical overview of branch-and-cut , as well as an overview of the tools from learning theory we use to prove sample complexity guarantees . 2.1 Branch-and-cut . We study integer programs ( IPs ) in canonical form given by max { cTx : Ax ≤ b , x ≥ 0 , x ∈ Zn } , ( 1 ) where A ∈ Zm×n , b ∈ Zm , and c ∈ Rn . Branch-and-cut ( B & C ) works by recursively partitioning the input IP ’ s feasible region , searching for the locally optimal solution within each set of the partition until it can verify that it has found the globally optimal solution . It organizes this partition as a search tree , with the input IP stored at the root . It begins by solving the LP relaxation of the input IP ; we denote the solution as x∗LP ∈ Rn . If x∗LP satisfies the IP ’ s integrality constraints ( x∗LP ∈ Zn ) , then the procedure terminates—x∗LP is the globally optimal solution . Otherwise , it uses a variable selection policy to choose a variable x [ i ] . In the left child of the root , it stores the original IP with the additional constraint that x [ i ] ≤ bx∗LP [ i ] c , and in the right child , with the additional constraint that x [ i ] ≥ dx∗LP [ i ] e. It then uses a node selection policy to select a leaf of the tree and repeats this procedure—solving the LP relaxation and branching on a variable . B & C can fathom a node , meaning that it will stop searching along that branch , if 1 ) the LP relaxation satisfies the IP ’ s integrality constraints , 2 ) the LP relaxation is infeasible , or 3 ) the objective value of the LP relaxation ’ s solution is no better than the best integral solution found thus far . We assume there is a bound κ on the size of the tree we allow B & C to build before we terminate , as is common in prior research [ 5 , 21 , 26 , 27 ] . Cutting planes are a means of ensuring that at each iteration of B & C , the solution to the LP relaxation is as close to the optimal integral solution as possible . Formally , let P = { x ∈ Rn : Ax ≤ b , x ≥ 0 } denote the feasible region obtained by taking the LP relaxation of IP ( 1 ) . Let PI = conv ( P ∩ Zn ) denote the integer hull of P . A valid cutting plane is any hyperplane αTx ≤ β such that if x is in the integer hull ( x ∈ PI ) , then x satisfies the inequality αTx ≤ β . In other words , a valid cut does not remove any integral point from the LP relaxation ’ s feasible region . A valid cutting plane separates x ∈ P \\ PI if it does not satisfy the inequality , or in other words , αTx > β . At any node of the search tree , B & C can add valid cutting planes that separate the optimal solution to the node ’ s LP relaxation , thus improving the solution estimates used to prune the search tree . However , adding too many cuts will increase the time it takes to solve the LP relaxation at each node . Therefore , solvers such as SCIP [ 16 ] , the leading open-source solver , bound the number of cuts that will be applied . A famous class of cutting planes is the family of Chvátal-Gomory ( CG ) cuts1 [ 10 , 17 ] , which are parameterized by vectors u ∈ Rm . The CG cut defined by u ∈ Rm is the hyperplane buTAcx ≤ buT bc , which is guaranteed to be valid . Throughout this paper we primarily restrict our attention to u ∈ [ 0 , 1 ) m. This is without loss of generality , since the facets of P ∩ { x ∈ Rn : buTAcx ≤ buT bc ∀u ∈ Rm } can be described by the finitely many u ∈ [ 0 , 1 ) m such that uTA ∈ Zn ( Lemma 5.13 of Conforti et al . [ 11 ] ) . Some IP solvers such as SCIP use scoring rules to select among cutting planes , which are meant to measure the quality of a cut . Some commonly-used scoring rules include efficacy [ 4 ] ( score1 ) , objective parallelism [ 1 ] ( score2 ) , directed cutoff distance [ 16 ] ( score3 ) , and integral support [ 40 ] ( score4 ) ( defined in Appendix A ) . Efficacy measures the distance between the cut αTx ≤ β and x∗LP : score1 ( α Tx ≤ β ) = ( αTx∗LP − β ) / ‖α‖2 , as illustrated in Figure 2a . Objective parallelism measures the angle between the objective c and the cut ’ s normal vector α : score2 ( αTx ≤ β ) =∣∣cTα∣∣ / ( ‖α‖2 ‖c‖2 ) , as illustrated in Figures 2b and 2c . Directed cutoff distance measures the distance between the LP optimal solution and the cut in a more relevant direction than the efficacy scoring rule . Specifically , let x be the incumbent solution , which is the best-known feasible solution to the input IP . The directed cutoff distance is the distance between the hyperplane ( α , β ) and the current LP solution x∗LP along the direction of the incumbent x , as illustrated in Figures 2d and 2e : score3 ( αTx ≤ β ) = ‖x− x∗LP‖2 · ( α Tx∗LP − β ) / ∣∣αT ( x− x∗LP ) ∣∣ . SCIP uses the scoring rule 3 5score1 + 1 10score2 + 1 2score3 + 1 10score4 [ 16 ] . 1The set of CG cuts is equivalent to the set of Gomory ( fractional ) cuts [ 12 ] , another commonly studied family of cutting planes with a slightly different parameterization .",
"role": "user"
},
{
"content": "The paper focuses on the sample complexity of learning to select Chvatal-Gomory cuts for integer linear programming. We assume that there is an unknown distribution that generates ILP instances. CG cuts are parametrized by a set of weights, one per constraint. How large should the set of training instances be for one to accurately estimate the \"goodness\" of a given parametrization? This is the main question tackled here. Using the data-driven algorithm design framework of Balcan et al. [8], this paper shows that three flavors of the learning problem can be analyzed effectively. The main contribution is to show that there is structure to the cut generation process as its parameters vary; the space of possible cuts can be partitioned, the form of the boundaries that determine the partition is identified, and the behavior of the cut generation is constant within each region. These can be plugged into a very general PAC learning bound from Balcan et al. [8]. Additionally, the sample complexity of generic tree search is analyzed. It is shown that variable, node, and cut selection can be parameterized simultaneously, each with its own additive scoring function, and sample complexity bounds can be derived accordingly. This result generalizes a previous branching-only bound from Balcan et al. [5].",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: This pilot study evaluates the degree of side effects during high-dose chemotherapy (HD-VIC) plus autologous bone marrow transplant (HDCT) and its possible prevention by the cytoprotective thiol-derivate amifostine. Additionally, the in-patient medical costs of both treatment arms were compared. 40 patients with solid tumours were randomized to receive HD-VIC chemotherapy with or without amifostine (910 mg/m2 at day 1–3) given as a short infusion prior to carboplatin and ifosfamide. Patients were stratified according to pretreatment. HDCT consisted of an 18 h infusion of carboplatin (500 mg/m2/d over 18 h), ifosfamide (4 g/m2/d over 4 h) and etoposide (500 mg/m2/d) all given for 3 consecutive days. All patients received prophylactic application of G-CSF (5 μg kg−1 subcutaneously) to ameliorate neutropenia after treatment. Patients were monitored for nephrotoxicity, gastrointestinal side effects, haematopoietic recovery, as well as frequency of fever and infections. The median fall of the glomerular filtration rate (GFR) was 10% from baseline in the amifostine group (105 to 95 ml min−1) and 37% in the control patient group (107 to 67 ml min−1) (P< 0.01). Amifostine-treated patients revealed a less pronounced increase in albumine and low molecular weight protein urinary excretion. Stomatitis grade III/IV occurred in 25% without versus 0% of patients with amifostine (P = 0.01). Acute nausea/vomiting was frequently observed immediately during or after the application of amifostine despite intensive antiemetic prophylaxis consisting of 5-HT3-receptor antagonists/dexamethasone/trifluorpromazine. However, delayed emesis occurred more often in the control patients. Engraftment of neutrophil (> 500 μl−1) and thrombocytes (> 25 000 μl−1)were observed at days 9 versus 10 and 10 versus 12, respectively, both slightly in favour of the amifostine arm. In addition, a lower number of days with fever and a shortened duration of hospital stay were observed in the amifostine arm. The reduction of acute toxicity observed in the amifostine arm resulted in 30% savings in costs for supportive care (Euro 4396 versus Euro 3153 per patient). Taking into account the drug costs of amifostine, calculation of in-patient treatment costs from the start of chemotherapy to discharge revealed additional costs of Euro 540 per patient in the amifostine arm. This randomized pilot study indicates that both organ and haematotoxicity of HD-VIC chemotherapy can be ameliorated by the use of amifostine. Additionally, a nearly complete preservation of GFR was observed in amifostine-treated patients which may be advantageous if repetitive cycles of HDCT are planned. Larger randomized trials evaluating amifostine cytoprotection during high-dose chemotherapy are warranted. © 2001 Cancer Research Campaign http://www.bjcancer.com
Extracted Data: | [["Amifostine ", "Control", "Days with fever", "significantly decreased", "In addition, a lower number of days with fever and a shortened duration of hospital stay were observed in the amifostine arm."], ["Amifostine ", "Control", "Hospital stay", "significantly decreased", "In addition, a lower number of days with fever and a shortened duration of hospital stay were observed in the amifostine arm."], ["Amifostine ", "Control", "Grade III/IV stomatitis ", "significantly decreased", "Stomatitis grade III/IV occurred in 25% without versus 0% of patients with amifostine (P = 0.01)."], ["Amifostine ", "Control", "Glomerular filtration rate fall", "significantly decreased", "The median fall of the glomerular filtration rate (GFR) was 10% from baseline in the amifostine group (105 to 95 ml min\u22121) and 37% in the control patient group (107 to 67 ml min\u22121) (P\u003c 0.01)."]] | science.evidence_inference.144 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: This pilot study evaluates the degree of side effects during high-dose chemotherapy (HD-VIC) plus autologous bone marrow transplant (HDCT) and its possible prevention by the cytoprotective thiol-derivate amifostine. Additionally, the in-patient medical costs of both treatment arms were compared. 40 patients with solid tumours were randomized to receive HD-VIC chemotherapy with or without amifostine (910 mg/m2 at day 1–3) given as a short infusion prior to carboplatin and ifosfamide. Patients were stratified according to pretreatment. HDCT consisted of an 18 h infusion of carboplatin (500 mg/m2/d over 18 h), ifosfamide (4 g/m2/d over 4 h) and etoposide (500 mg/m2/d) all given for 3 consecutive days. All patients received prophylactic application of G-CSF (5 μg kg−1 subcutaneously) to ameliorate neutropenia after treatment. Patients were monitored for nephrotoxicity, gastrointestinal side effects, haematopoietic recovery, as well as frequency of fever and infections. The median fall of the glomerular filtration rate (GFR) was 10% from baseline in the amifostine group (105 to 95 ml min−1) and 37% in the control patient group (107 to 67 ml min−1) (P< 0.01). Amifostine-treated patients revealed a less pronounced increase in albumine and low molecular weight protein urinary excretion. Stomatitis grade III/IV occurred in 25% without versus 0% of patients with amifostine (P = 0.01). Acute nausea/vomiting was frequently observed immediately during or after the application of amifostine despite intensive antiemetic prophylaxis consisting of 5-HT3-receptor antagonists/dexamethasone/trifluorpromazine. However, delayed emesis occurred more often in the control patients. Engraftment of neutrophil (> 500 μl−1) and thrombocytes (> 25 000 μl−1)were observed at days 9 versus 10 and 10 versus 12, respectively, both slightly in favour of the amifostine arm. In addition, a lower number of days with fever and a shortened duration of hospital stay were observed in the amifostine arm. The reduction of acute toxicity observed in the amifostine arm resulted in 30% savings in costs for supportive care (Euro 4396 versus Euro 3153 per patient). Taking into account the drug costs of amifostine, calculation of in-patient treatment costs from the start of chemotherapy to discharge revealed additional costs of Euro 540 per patient in the amifostine arm. This randomized pilot study indicates that both organ and haematotoxicity of HD-VIC chemotherapy can be ameliorated by the use of amifostine. Additionally, a nearly complete preservation of GFR was observed in amifostine-treated patients which may be advantageous if repetitive cycles of HDCT are planned. Larger randomized trials evaluating amifostine cytoprotection during high-dose chemotherapy are warranted. © 2001 Cancer Research Campaign http://www.bjcancer.com\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Amifostine \", \"Control\", \"Days with fever\", \"significantly decreased\", \"In addition, a lower number of days with fever and a shortened duration of hospital stay were observed in the amifostine arm.\"], [\"Amifostine \", \"Control\", \"Hospital stay\", \"significantly decreased\", \"In addition, a lower number of days with fever and a shortened duration of hospital stay were observed in the amifostine arm.\"], [\"Amifostine \", \"Control\", \"Grade III/IV stomatitis \", \"significantly decreased\", \"Stomatitis grade III/IV occurred in 25% without versus 0% of patients with amifostine (P = 0.01).\"], [\"Amifostine \", \"Control\", \"Glomerular filtration rate fall\", \"significantly decreased\", \"The median fall of the glomerular filtration rate (GFR) was 10% from baseline in the amifostine group (105 to 95 ml min\\u22121) and 37% in the control patient group (107 to 67 ml min\\u22121) (P\\u003c 0.01).\"]]",
"role": "assistant"
}
]
|
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Autonomous Learning of Object-Centric Abstractions for High-Level Planning
Main Content:
1 INTRODUCTION . Model-based methods are a promising approach to improving sample efficiency in reinforcement learning ( RL ) . However , they require the agent to either learn a highly detailed model—which is infeasible for sufficiently complex problems ( Ho et al. , 2019 ) —or to build a compact , high-level model that abstracts away unimportant details while retaining only the information required to plan . This raises the question of how best to build such an abstract model . While recent advances have shown how to learn models of complex environments , they lack theoretical guarantees and require millions of sample interactions ( Schrittwieser et al. , 2020 ; Hafner et al. , 2021 ) . Fortunately , recent work has shown how to learn an abstraction of a task that is provably suitable for planning with a given set of skills ( Konidaris et al. , 2018 ) . However , these representations are highly task-specific and must be relearned for any new task , or even any small change to an existing task . This makes them fatally impractical , especially for agents that must solve multiple complex tasks . We extend these methods by incorporating additional structure—namely , that the world consists of objects , and that similar objects are common amongst tasks . For example , when we play video games , we solve the game quickly by leveraging our existing knowledge of objects and their affordances ( such as doors and ladders which occur across multiple levels ) ( Dubey et al. , 2018 ) . Similarly , robot manipulation tasks often use the same robot and a similar set of physical objects in different configurations . This can substantially improve learning efficiency , because an object-centric model can be reused wherever that same object appears ( within the same task , or across different tasks ) and can also be generalised across objects that behave similarly—object types . We assume that the agent is able to individuate the objects in its environment , and propose a framework for building portable object-centric abstractions given only the data collected by executing high-level skills . These abstractions specify both the abstract object attributes that support high-level planning , and an object-relative lifted transition model that can be instantiated in a new task . This reduces the number of samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously seen object types . We make the following contributions : under the assumption that the agent can individuate objects in its environment , we develop a framework for building portable , object-centric abstractions , and for estimating object types , given only the data collected by executing high-level skills . We also show how to integrate problem-specific information to instantiate these representations in a new task . This reduces the samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously-seen objects . We demonstrate our approach on a Blocks World domain and a 2D crafting domain , and then apply it to a series of Minecraft tasks where an agent autonomously learns an abstract representation of a high-dimensional task from raw pixel input . In particular , we use the probabilistic planning domain definition language ( PPDDL ) ( Younes & Littman , 2004 ) to represent our learned abstraction , which allows for the use of existing task-level planners . Our results show that an agent can leverage these portable abstractions to learn a representation of new Minecraft tasks using a diminishing number of samples , allowing it to quickly construct plans composed of hundreds of low-level actions.1 2 BACKGROUND . We assume that tasks are modelled as semi-Markov decision processesM = 〈S , O , T , R〉 where ( i ) S is the state space ; ( ii ) O ( s ) is the set of temporally-extended actions known as options available at state s ; ( iii ) T describes the transition dynamics , specifying the probability of arriving in state s′ after option o is executed from s ; and ( iv ) R specifies the reward for reaching state s′ after executing option o in state s. An option o is defined by the tuple 〈Io , πo ; βo〉 , where Io is the initiation set specifying the states where the option can be executed , πo is the option policy which specifies the action to execute , and βo the probability of the option terminating in each state ( Sutton et al. , 1999 ) . We adopt the object-centric formulation from Ugur & Piater ( 2015 ) : in a task with n objects , the state is represented by the set { fa , f1 , f2 , . . . , fn } , where fa is a vector of the agent ’ s features and fi is a vector of features particular to object i . Note that the feature vector describing each object can itself be arbitrarily complex , such as an image or voxel grid—in one of our domains , we use pixels . Our state space representation assumes that individual objects have already been factored into their constituent low-level attributes . Practically , this means that the agent is aware that the world consists of objects , but is unaware of what the objects are , or whether multiple instantiations of the same object are present . It is also easy to see that different tasks will have differing numbers of objects with potentially arbitrary ordering ; any learned abstract representation should be agnostic to this . 2.1 STATE ABSTRACTIONS FOR PLANNING . We intend to learn an abstract representation suitable for planning . Prior work has shown that a soundand complete abstract representation must necessarily be able to estimate the set of initiating and terminating states for each option ( Konidaris et al. , 2018 ) . In classical planning , this corresponds to the precondition and effect of each high-level action operator ( McDermott et al. , 1998 ) . The precondition is defined as Pre ( o ) = Pr ( s ∈ Io ) , which is a probabilistic classifier that expresses the probability that option o can be executed at state s. Similarly , the effect or image represents the distribution of states an agent may find itself in after executing an option from states drawn from some starting distribution ( Konidaris et al. , 2018 ) . Since the precondition is a probabilistic classifier and the effect is a density estimator , they can be learned directly from option execution data . We can use preconditions and effects to evaluate the probability of a sequence of options—a plan— executing successfully . Given an initial state distribution , the precondition is used to evaluate the probability that the first option can execute , and the effects are used to determine the resulting state distribution . We can apply the same logic to the subsequent options to compute the probability of the entire plan executing successfully . It follows that these representations are sufficient for evaluating the probability of successfully executing any plan ( Konidaris et al. , 2018 ) . Partitioned Options For large or continuous state spaces , estimating Pr ( s′ | s , o ) is difficult because the worst case requires learning a distribution conditioned on every state . However , if we assume that terminating states are independent of starting states , we can make the simplification Pr ( s′ | s , o ) = Pr ( s′ | o ) . These subgoal options ( Precup , 2000 ) are not overly restrictive , since they refer to options that drive an agent to some set of states with high reliability . Nonetheless , many options are not subgoal . It is often possible , however , to partition an option ’ s initiation set into a finite number of subsets , so that it is approximately subgoal when executed from any of the individual subsets . That is , we partition an option o ’ s start states into finite regions C such that Pr ( s′ | s , o , c ) ≈ Pr ( s′ | o , c ) , c ∈ C ( Konidaris et al. , 2018 ) . 1More results and videos can be found at : https : //sites.google.com/view/mine-pddl Factors We adopt the frame assumption , which states that aspects of the world not explicitly affected by an agent ’ s action remain the same ( Pasula et al. , 2004 ) . Prior work leverages this to learn a factored or STRIPS-like ( Fikes & Nilsson , 1971 ) representation by computing the option ’ s mask : the state variables explicitly changed by the option ( Konidaris et al. , 2018 ) . In our formulation , the state space is already factorised into its constituent objects , so computing the mask amounts to determining which objects are affected by a given option . 3 LEARNING OBJECT-CENTRIC REPRESENTATIONS . Although prior work ( Konidaris et al. , 2018 ) allows an agent to autonomously learn an abstract representation supporting fast task-level planning , that representation lacks generalisability—since the symbols are distributions over states in the current task , they can not be reused in new ones . This approach can be fatally expensive in complex domains , where learning an abstract model may be as hard as solving a task from scratch , and is therefore pointless if we only want to solve a single task . However , an agent able to reuse aspects of its learned representation can amortise the cost of learning over many interactions , accelerating learning in later tasks . The key question is what forms of representation support transfer in this way . We now introduce an object-centric generalisation of a learned symbolic representation that admits transfer in tasks when the state space representation consists of features centred on objects in the environment . This is common in robotics , where each object is often isolated from the environment and represented as a point cloud or subsequently a voxelised occupancy grid . Our approach builds on a significant amount of machinery , involving clustering , feature selection , classification and density estimation . We summarise our proposed approach in Figure 1 and provide a high-level description in the remainder of this section , but provide pseudocode and specific practical details in the appendix . 3.1 GENERATING A PROPOSITIONAL MODEL ( STEPS 1–2 ) ( AS IN KONIDARIS ET AL. , 2018 ) . The agent begins by collecting transition data using an exploration policy . The first step is to partition the options into approximately subgoal options . For each option o and empirical sets of initial and terminating states Ĩo and β̃o , the agent partitions Ĩo into a number of disjoint subsets , such that for each subset K ⊆ Ĩo , we have Pr ( s′ | si , o ) = Pr ( s′ | sj , o ) ∀si , sj ∈ K , s′ ∈ β̃o . In other words , the effect distribution of the option is identical , independent of the state in K from which it was executed . In practice , this can be approximated by first clustering state transition samples based on terminating states , and then assigning each cluster to a partition . Finally , pairs of partitions whose initiating states overlap are merged to handle probabilistic effects ( Konidaris et al. , 2018 ) . The agent next learns a precondition classifier for each approximately partitioned option . For each partition , the initiating states are used as positive examples , and all other states are treated as negative ones . A feature selection procedure next determines which objects are relevant to the precondition , and a classifier is fit using only those objects . A density estimator is then used to estimate the effect distribution for each partitioned option . The agent learns distributions over only the objects affected by the option , learning one estimator per object . Together these effect distributions form our propositional PPDDL vocabulary V . To construct a PPDDL representation for each partitioned option , we must specify both the precondition and effects using the state distributions ( propositions ) in V . The effects are directly specified using these distributions , and so pose no problem . However , the estimated precondition is a classifier rather than a state distribution . The agent must therefore iterate through all possible effect distributions to compute whether the skill can be executed there . To do so , we denote P as some set of propositions in V , and G ( s ; P ) as the probability that a low-level state s is drawn from the conjunction of propositions in P . Then , for an option with learned classifier Îo , we can represent the precondition with every P ∈ ℘ ( V ) such that ∫ S Îo ( s ) G ( s ; P ) ds > 0 , where ℘ ( V ) denotes the powerset of V . In other words , the agent considers every combination of effect state distributions and draws samples from their conjunction . If these samples are classified as positive by Îo , then the conjunction of P is used to represent the precondition . The preconditions and effects are now specified using distributions over state variables , where each distribution is a proposition—this is our PPDDL representation , which is sound and complete for planning . | - No Baseline Comparisons: Paper proposed a way to generate object level representation that can be used across the tasks with same objects. Authors claim that this should reduce the number of environment interactions required to solve the new task. However, there is no baseline comparison being done to figure out how sample efficient it is. | science.mup_single_document_summarization.2128 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAutonomous Learning of Object-Centric Abstractions for High-Level Planning\n\nMain Content:\n1 INTRODUCTION . Model-based methods are a promising approach to improving sample efficiency in reinforcement learning ( RL ) . However , they require the agent to either learn a highly detailed model—which is infeasible for sufficiently complex problems ( Ho et al. , 2019 ) —or to build a compact , high-level model that abstracts away unimportant details while retaining only the information required to plan . This raises the question of how best to build such an abstract model . While recent advances have shown how to learn models of complex environments , they lack theoretical guarantees and require millions of sample interactions ( Schrittwieser et al. , 2020 ; Hafner et al. , 2021 ) . Fortunately , recent work has shown how to learn an abstraction of a task that is provably suitable for planning with a given set of skills ( Konidaris et al. , 2018 ) . However , these representations are highly task-specific and must be relearned for any new task , or even any small change to an existing task . This makes them fatally impractical , especially for agents that must solve multiple complex tasks . We extend these methods by incorporating additional structure—namely , that the world consists of objects , and that similar objects are common amongst tasks . For example , when we play video games , we solve the game quickly by leveraging our existing knowledge of objects and their affordances ( such as doors and ladders which occur across multiple levels ) ( Dubey et al. , 2018 ) . Similarly , robot manipulation tasks often use the same robot and a similar set of physical objects in different configurations . This can substantially improve learning efficiency , because an object-centric model can be reused wherever that same object appears ( within the same task , or across different tasks ) and can also be generalised across objects that behave similarly—object types . We assume that the agent is able to individuate the objects in its environment , and propose a framework for building portable object-centric abstractions given only the data collected by executing high-level skills . These abstractions specify both the abstract object attributes that support high-level planning , and an object-relative lifted transition model that can be instantiated in a new task . This reduces the number of samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously seen object types . We make the following contributions : under the assumption that the agent can individuate objects in its environment , we develop a framework for building portable , object-centric abstractions , and for estimating object types , given only the data collected by executing high-level skills . We also show how to integrate problem-specific information to instantiate these representations in a new task . This reduces the samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously-seen objects . We demonstrate our approach on a Blocks World domain and a 2D crafting domain , and then apply it to a series of Minecraft tasks where an agent autonomously learns an abstract representation of a high-dimensional task from raw pixel input . In particular , we use the probabilistic planning domain definition language ( PPDDL ) ( Younes & Littman , 2004 ) to represent our learned abstraction , which allows for the use of existing task-level planners . Our results show that an agent can leverage these portable abstractions to learn a representation of new Minecraft tasks using a diminishing number of samples , allowing it to quickly construct plans composed of hundreds of low-level actions.1 2 BACKGROUND . We assume that tasks are modelled as semi-Markov decision processesM = 〈S , O , T , R〉 where ( i ) S is the state space ; ( ii ) O ( s ) is the set of temporally-extended actions known as options available at state s ; ( iii ) T describes the transition dynamics , specifying the probability of arriving in state s′ after option o is executed from s ; and ( iv ) R specifies the reward for reaching state s′ after executing option o in state s. An option o is defined by the tuple 〈Io , πo ; βo〉 , where Io is the initiation set specifying the states where the option can be executed , πo is the option policy which specifies the action to execute , and βo the probability of the option terminating in each state ( Sutton et al. , 1999 ) . We adopt the object-centric formulation from Ugur & Piater ( 2015 ) : in a task with n objects , the state is represented by the set { fa , f1 , f2 , . . . , fn } , where fa is a vector of the agent ’ s features and fi is a vector of features particular to object i . Note that the feature vector describing each object can itself be arbitrarily complex , such as an image or voxel grid—in one of our domains , we use pixels . Our state space representation assumes that individual objects have already been factored into their constituent low-level attributes . Practically , this means that the agent is aware that the world consists of objects , but is unaware of what the objects are , or whether multiple instantiations of the same object are present . It is also easy to see that different tasks will have differing numbers of objects with potentially arbitrary ordering ; any learned abstract representation should be agnostic to this . 2.1 STATE ABSTRACTIONS FOR PLANNING . We intend to learn an abstract representation suitable for planning . Prior work has shown that a soundand complete abstract representation must necessarily be able to estimate the set of initiating and terminating states for each option ( Konidaris et al. , 2018 ) . In classical planning , this corresponds to the precondition and effect of each high-level action operator ( McDermott et al. , 1998 ) . The precondition is defined as Pre ( o ) = Pr ( s ∈ Io ) , which is a probabilistic classifier that expresses the probability that option o can be executed at state s. Similarly , the effect or image represents the distribution of states an agent may find itself in after executing an option from states drawn from some starting distribution ( Konidaris et al. , 2018 ) . Since the precondition is a probabilistic classifier and the effect is a density estimator , they can be learned directly from option execution data . We can use preconditions and effects to evaluate the probability of a sequence of options—a plan— executing successfully . Given an initial state distribution , the precondition is used to evaluate the probability that the first option can execute , and the effects are used to determine the resulting state distribution . We can apply the same logic to the subsequent options to compute the probability of the entire plan executing successfully . It follows that these representations are sufficient for evaluating the probability of successfully executing any plan ( Konidaris et al. , 2018 ) . Partitioned Options For large or continuous state spaces , estimating Pr ( s′ | s , o ) is difficult because the worst case requires learning a distribution conditioned on every state . However , if we assume that terminating states are independent of starting states , we can make the simplification Pr ( s′ | s , o ) = Pr ( s′ | o ) . These subgoal options ( Precup , 2000 ) are not overly restrictive , since they refer to options that drive an agent to some set of states with high reliability . Nonetheless , many options are not subgoal . It is often possible , however , to partition an option ’ s initiation set into a finite number of subsets , so that it is approximately subgoal when executed from any of the individual subsets . That is , we partition an option o ’ s start states into finite regions C such that Pr ( s′ | s , o , c ) ≈ Pr ( s′ | o , c ) , c ∈ C ( Konidaris et al. , 2018 ) . 1More results and videos can be found at : https : //sites.google.com/view/mine-pddl Factors We adopt the frame assumption , which states that aspects of the world not explicitly affected by an agent ’ s action remain the same ( Pasula et al. , 2004 ) . Prior work leverages this to learn a factored or STRIPS-like ( Fikes & Nilsson , 1971 ) representation by computing the option ’ s mask : the state variables explicitly changed by the option ( Konidaris et al. , 2018 ) . In our formulation , the state space is already factorised into its constituent objects , so computing the mask amounts to determining which objects are affected by a given option . 3 LEARNING OBJECT-CENTRIC REPRESENTATIONS . Although prior work ( Konidaris et al. , 2018 ) allows an agent to autonomously learn an abstract representation supporting fast task-level planning , that representation lacks generalisability—since the symbols are distributions over states in the current task , they can not be reused in new ones . This approach can be fatally expensive in complex domains , where learning an abstract model may be as hard as solving a task from scratch , and is therefore pointless if we only want to solve a single task . However , an agent able to reuse aspects of its learned representation can amortise the cost of learning over many interactions , accelerating learning in later tasks . The key question is what forms of representation support transfer in this way . We now introduce an object-centric generalisation of a learned symbolic representation that admits transfer in tasks when the state space representation consists of features centred on objects in the environment . This is common in robotics , where each object is often isolated from the environment and represented as a point cloud or subsequently a voxelised occupancy grid . Our approach builds on a significant amount of machinery , involving clustering , feature selection , classification and density estimation . We summarise our proposed approach in Figure 1 and provide a high-level description in the remainder of this section , but provide pseudocode and specific practical details in the appendix . 3.1 GENERATING A PROPOSITIONAL MODEL ( STEPS 1–2 ) ( AS IN KONIDARIS ET AL. , 2018 ) . The agent begins by collecting transition data using an exploration policy . The first step is to partition the options into approximately subgoal options . For each option o and empirical sets of initial and terminating states Ĩo and β̃o , the agent partitions Ĩo into a number of disjoint subsets , such that for each subset K ⊆ Ĩo , we have Pr ( s′ | si , o ) = Pr ( s′ | sj , o ) ∀si , sj ∈ K , s′ ∈ β̃o . In other words , the effect distribution of the option is identical , independent of the state in K from which it was executed . In practice , this can be approximated by first clustering state transition samples based on terminating states , and then assigning each cluster to a partition . Finally , pairs of partitions whose initiating states overlap are merged to handle probabilistic effects ( Konidaris et al. , 2018 ) . The agent next learns a precondition classifier for each approximately partitioned option . For each partition , the initiating states are used as positive examples , and all other states are treated as negative ones . A feature selection procedure next determines which objects are relevant to the precondition , and a classifier is fit using only those objects . A density estimator is then used to estimate the effect distribution for each partitioned option . The agent learns distributions over only the objects affected by the option , learning one estimator per object . Together these effect distributions form our propositional PPDDL vocabulary V . To construct a PPDDL representation for each partitioned option , we must specify both the precondition and effects using the state distributions ( propositions ) in V . The effects are directly specified using these distributions , and so pose no problem . However , the estimated precondition is a classifier rather than a state distribution . The agent must therefore iterate through all possible effect distributions to compute whether the skill can be executed there . To do so , we denote P as some set of propositions in V , and G ( s ; P ) as the probability that a low-level state s is drawn from the conjunction of propositions in P . Then , for an option with learned classifier Îo , we can represent the precondition with every P ∈ ℘ ( V ) such that ∫ S Îo ( s ) G ( s ; P ) ds > 0 , where ℘ ( V ) denotes the powerset of V . In other words , the agent considers every combination of effect state distributions and draws samples from their conjunction . If these samples are classified as positive by Îo , then the conjunction of P is used to represent the precondition . The preconditions and effects are now specified using distributions over state variables , where each distribution is a proposition—this is our PPDDL representation , which is sound and complete for planning .",
"role": "user"
},
{
"content": "- No Baseline Comparisons: Paper proposed a way to generate object level representation that can be used across the tasks with same objects. Authors claim that this should reduce the number of environment interactions required to solve the new task. However, there is no baseline comparison being done to figure out how sample efficient it is.",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.INTRODUCTION: Intimate partner violence (IPV) violates women's human rights, and it is a serious public health concern associated with increased HIV risk. SASA!, a phased community mobilization intervention, engages communities to prevent IPV and promote gender equity. The SASA! study assessed the community-level impact of SASA! on reported HIV-related risk behaviours and relationship dynamics.
ABSTRACT.METHODS: Data were collected as part of a cluster randomized controlled trial conducted between 2007 and 2012 in eight communities in Kampala. An adjusted cluster-level intention to treat analysis, compares secondary outcomes in intervention and control communities at follow-up. The qualitative evaluation explored participants' subjective experience of SASA!. A total of 82 in-depth interviews were audio recorded at follow-up, transcribed verbatim and analyzed using thematic analysis.
ABSTRACT.RESULTS: Men in intervention communities were significantly more likely than controls to report a broad range of HIV-protective behaviours, including higher levels of condom use (aRR 2.03, 95% CI 1.22–3.39), HIV testing (aRR 1.50, 95% CI 1.13–2.00) and fewer concurrent partners (aRR 0.60, 95% CI 0.37–0.97). They were also more likely to report increased joint decision-making (aRR 1.92, 95% CI 1.27–2.91), greater male participation in household tasks (aRR 1.48, 95% CI 1.09–2.01), more open communication and greater appreciation of their partner's work inside (aRR 1.31, 95% CI 1.04–1.66) and outside (aRR 1.49, 95% CI 1.08–2.06) the home. For women, all outcomes were in the hypothesized direction, but effect sizes were smaller. Only some achieved statistical significance. Women in intervention communities were significantly more likely to report being able to refuse sex with their partners (aRR 1.16, 95% CI 1.00–1.35), joint decision-making (aRR 1.37, 95% CI 1.06–1.78) and more open communication on a number of indicators. Qualitative interviews suggest that shifts operated through broader improvements in relationships, including increased trust and cooperation, participants' greater awareness of the connections between HIV and IPV and their resultant desire to improve their relationships. Barriers to change include partial uptake of SASA!, partner resistance, fear and entrenched previous beliefs.
ABSTRACT.CONCLUSIONS: SASA! impacted positively on reported HIV-related risk behaviours and relationship dynamics at a community level, especially among men. Social change programmes focusing on IPV and gender equity could play an important role in HIV prevention efforts.
Extracted Data: | [["SASA! community mobilization intervention program", "no intervention", "women\u0027s ability to refuse sex", "significantly increased", ". Women in intervention communities were significantly more likely to report being able to refuse sex with their partners (aRR 1.16, 95% CI 1.00\u20131.35),"], ["SASA! community mobilization intervention program", "no intervention", "male reports of undergoing HIV testing", "significantly increased", "Men in intervention communities were significantly more likely than controls to report a broad range of HIV-protective behaviours, including higher levels of condom use (aRR 2.03, 95% CI 1.22\u20133.39), HIV testing (aRR 1.50, 95% CI 1.13\u20132.00) and fewer concurrent partners (aRR 0.60, 95% CI 0.37\u20130.97)."], ["SASA! community mobilization intervention program", "no intervention", "women making decisions jointly with their partner", "significantly increased", "Women in intervention communities were significantly more likely to report being able to refuse sex with their partners (aRR 1.16, 95% CI 1.00\u20131.35), joint decision-making (aRR 1.37, 95% CI 1.06\u20131.78) and more open communication on a number of indicators."]] | science.evidence_inference.25 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.INTRODUCTION: Intimate partner violence (IPV) violates women's human rights, and it is a serious public health concern associated with increased HIV risk. SASA!, a phased community mobilization intervention, engages communities to prevent IPV and promote gender equity. The SASA! study assessed the community-level impact of SASA! on reported HIV-related risk behaviours and relationship dynamics.\n\nABSTRACT.METHODS: Data were collected as part of a cluster randomized controlled trial conducted between 2007 and 2012 in eight communities in Kampala. An adjusted cluster-level intention to treat analysis, compares secondary outcomes in intervention and control communities at follow-up. The qualitative evaluation explored participants' subjective experience of SASA!. A total of 82 in-depth interviews were audio recorded at follow-up, transcribed verbatim and analyzed using thematic analysis.\n\nABSTRACT.RESULTS: Men in intervention communities were significantly more likely than controls to report a broad range of HIV-protective behaviours, including higher levels of condom use (aRR 2.03, 95% CI 1.22–3.39), HIV testing (aRR 1.50, 95% CI 1.13–2.00) and fewer concurrent partners (aRR 0.60, 95% CI 0.37–0.97). They were also more likely to report increased joint decision-making (aRR 1.92, 95% CI 1.27–2.91), greater male participation in household tasks (aRR 1.48, 95% CI 1.09–2.01), more open communication and greater appreciation of their partner's work inside (aRR 1.31, 95% CI 1.04–1.66) and outside (aRR 1.49, 95% CI 1.08–2.06) the home. For women, all outcomes were in the hypothesized direction, but effect sizes were smaller. Only some achieved statistical significance. Women in intervention communities were significantly more likely to report being able to refuse sex with their partners (aRR 1.16, 95% CI 1.00–1.35), joint decision-making (aRR 1.37, 95% CI 1.06–1.78) and more open communication on a number of indicators. Qualitative interviews suggest that shifts operated through broader improvements in relationships, including increased trust and cooperation, participants' greater awareness of the connections between HIV and IPV and their resultant desire to improve their relationships. Barriers to change include partial uptake of SASA!, partner resistance, fear and entrenched previous beliefs.\n\nABSTRACT.CONCLUSIONS: SASA! impacted positively on reported HIV-related risk behaviours and relationship dynamics at a community level, especially among men. Social change programmes focusing on IPV and gender equity could play an important role in HIV prevention efforts.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"SASA! community mobilization intervention program\", \"no intervention\", \"women\\u0027s ability to refuse sex\", \"significantly increased\", \". Women in intervention communities were significantly more likely to report being able to refuse sex with their partners (aRR 1.16, 95% CI 1.00\\u20131.35),\"], [\"SASA! community mobilization intervention program\", \"no intervention\", \"male reports of undergoing HIV testing\", \"significantly increased\", \"Men in intervention communities were significantly more likely than controls to report a broad range of HIV-protective behaviours, including higher levels of condom use (aRR 2.03, 95% CI 1.22\\u20133.39), HIV testing (aRR 1.50, 95% CI 1.13\\u20132.00) and fewer concurrent partners (aRR 0.60, 95% CI 0.37\\u20130.97).\"], [\"SASA! community mobilization intervention program\", \"no intervention\", \"women making decisions jointly with their partner\", \"significantly increased\", \"Women in intervention communities were significantly more likely to report being able to refuse sex with their partners (aRR 1.16, 95% CI 1.00\\u20131.35), joint decision-making (aRR 1.37, 95% CI 1.06\\u20131.78) and more open communication on a number of indicators.\"]]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.INTRODUCTION: In a previous pooled analysis of 19 double-blind clinical studies conducted by Merck, which included data available as of July 2009 on 10,246 patients with type 2 diabetes (T2DM), treatment with sitagliptin was shown to be generally well tolerated compared with treatment with control agents. As the sitagliptin clinical development program continues, additional studies with sitagliptin have been completed. The present analysis updates the safety and tolerability assessment of sitagliptin by examining pooled data from 25 double-blind clinical studies.
ABSTRACT.METHODS: The present analysis included data from 14,611 patients in 25 studies with T2DM who received either sitagliptin 100 mg/day (n = 7,726; sitagliptin group) or a comparator agent (n = 6,885; non-exposed group). These studies represent all randomized, double-blind trials conducted by Merck that included patients treated with the usual clinical dose of sitagliptin (100 mg/day) for between 12 weeks and 2 years, and for which results were available as of December 2011. These studies assessed sitagliptin, versus comparator agents, taken as monotherapy, initial combination therapy with metformin or pioglitazone, or as add-on combination therapy with other antihyperglycemic agents (metformin, pioglitazone, a sulfonylurea ± metformin, insulin ± metformin, or metformin + pioglitazone or rosiglitazone). Patient-level data from each study were used to evaluate between-group differences in the exposure-adjusted incidence rates of adverse events (AEs).
ABSTRACT.RESULTS: Overall incidence rates of AEs and drug-related AEs were higher in the non-exposed group compared with the sitagliptin group. Incidence rates of specific AEs were generally similar between the two groups, except for higher incidence rates of hypoglycemia related to the greater use of a sulfonylurea and diarrhea related to the greater use of metformin in the non-exposed group, and of constipation in the sitagliptin group. Treatment with sitagliptin was not associated with an increased risk of major adverse cardiovascular events, malignancy, or pancreatitis.
ABSTRACT.CONCLUSION: In this updated pooled safety analysis of data from 14,611 patients with T2DM, sitagliptin 100 mg/day was generally well tolerated in clinical trials of up to 2 years in duration.
Extracted Data: | [["sitagliptin 100 mg/day", "comparator agent", "The incidence rates of malignancy", "no significant difference", "Treatment with sitagliptin was not associated with an increased risk of major adverse cardiovascular events, malignancy, or pancreatitis."]] | science.evidence_inference.242 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.INTRODUCTION: In a previous pooled analysis of 19 double-blind clinical studies conducted by Merck, which included data available as of July 2009 on 10,246 patients with type 2 diabetes (T2DM), treatment with sitagliptin was shown to be generally well tolerated compared with treatment with control agents. As the sitagliptin clinical development program continues, additional studies with sitagliptin have been completed. The present analysis updates the safety and tolerability assessment of sitagliptin by examining pooled data from 25 double-blind clinical studies.\n\nABSTRACT.METHODS: The present analysis included data from 14,611 patients in 25 studies with T2DM who received either sitagliptin 100 mg/day (n = 7,726; sitagliptin group) or a comparator agent (n = 6,885; non-exposed group). These studies represent all randomized, double-blind trials conducted by Merck that included patients treated with the usual clinical dose of sitagliptin (100 mg/day) for between 12 weeks and 2 years, and for which results were available as of December 2011. These studies assessed sitagliptin, versus comparator agents, taken as monotherapy, initial combination therapy with metformin or pioglitazone, or as add-on combination therapy with other antihyperglycemic agents (metformin, pioglitazone, a sulfonylurea ± metformin, insulin ± metformin, or metformin + pioglitazone or rosiglitazone). Patient-level data from each study were used to evaluate between-group differences in the exposure-adjusted incidence rates of adverse events (AEs).\n\nABSTRACT.RESULTS: Overall incidence rates of AEs and drug-related AEs were higher in the non-exposed group compared with the sitagliptin group. Incidence rates of specific AEs were generally similar between the two groups, except for higher incidence rates of hypoglycemia related to the greater use of a sulfonylurea and diarrhea related to the greater use of metformin in the non-exposed group, and of constipation in the sitagliptin group. Treatment with sitagliptin was not associated with an increased risk of major adverse cardiovascular events, malignancy, or pancreatitis.\n\nABSTRACT.CONCLUSION: In this updated pooled safety analysis of data from 14,611 patients with T2DM, sitagliptin 100 mg/day was generally well tolerated in clinical trials of up to 2 years in duration.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"sitagliptin 100 mg/day\", \"comparator agent\", \"The incidence rates of malignancy\", \"no significant difference\", \"Treatment with sitagliptin was not associated with an increased risk of major adverse cardiovascular events, malignancy, or pancreatitis.\"]]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Alcohol misuse in England costs around £7.3 billion (US$12.2 billion) annually from lost productivity and absenteeism. Delivering brief alcohol interventions to employees as part of a health check may be acceptable, particularly with online delivery which can provide privacy for this stigmatised behaviour. Research to support this approach is limited and methodologically weak. The aim was to determine the effectiveness of online screening and personalised feedback on alcohol consumption, delivered in a workplace as part of a health check.
ABSTRACT.METHODS AND FINDINGS: This two-group online individually randomised controlled trial recruited employees from a UK-based private sector organisation (approx. 100,000 employees). 3,375 employees completed the online health check in the three week recruitment period. Of these, 1,330 (39%) scored five or more on the AUDIT-C (indicating alcohol misuse) and were randomised to receive personalised feedback on their alcohol intake, alongside feedback on other health behaviours (n = 659), or to receive feedback on all health behaviours except alcohol intake (n = 671). Participants were mostly male (75%), with a median age of 48 years and half were in managerial positions (55%). Median Body Mass Index was 26, 12% were smokers, median time undertaking moderate/vigorous physical activity a week was 173 minutes and median fruit and vegetable consumption was three portions a day. Eighty percent (n = 1,066) of participants completed follow-up questionnaires at three months. An intention to treat analysis found no difference between experimental groups for past week drinking (primary outcome) (5.6% increase associated with the intervention (95% CI −4.7% to 16.9%; p = .30)), AUDIT (measure of alcohol-related harm) and health utility (EQ-5D).
ABSTRACT.CONCLUSIONS: There was no evidence to support the use of personalised feedback within an online health check for reducing alcohol consumption among employees in this organisation. Further research is needed on how to engage a larger proportion of employees in screening.
ABSTRACT.TRIAL REGISTRATION: International Standard Randomised Controlled Trial Number Register ISRCTN50658915
Extracted Data: | [["Personalised feedback on alcohol intake", "General feedback ", "Past week drinking", "no significant difference", "An intention to treat analysis found no difference between experimental groups for past week drinking (primary outcome) (5.6% increase associated with the intervention (95% CI \u22124.7% to 16.9%; p = .30)), AUDIT (measure of alcohol-related harm) and health utility (EQ-5D)."], ["Personalised feedback on alcohol intake", "General feedback ", "EQ-5D index", "no significant difference", "An intention to treat analysis found no difference between experimental groups for past week drinking (primary outcome) (5.6% increase associated with the intervention (95% CI \u22124.7% to 16.9%; p = .30)), AUDIT (measure of alcohol-related harm) and health utility (EQ-5D)."]] | science.evidence_inference.149 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Alcohol misuse in England costs around £7.3 billion (US$12.2 billion) annually from lost productivity and absenteeism. Delivering brief alcohol interventions to employees as part of a health check may be acceptable, particularly with online delivery which can provide privacy for this stigmatised behaviour. Research to support this approach is limited and methodologically weak. The aim was to determine the effectiveness of online screening and personalised feedback on alcohol consumption, delivered in a workplace as part of a health check.\n\nABSTRACT.METHODS AND FINDINGS: This two-group online individually randomised controlled trial recruited employees from a UK-based private sector organisation (approx. 100,000 employees). 3,375 employees completed the online health check in the three week recruitment period. Of these, 1,330 (39%) scored five or more on the AUDIT-C (indicating alcohol misuse) and were randomised to receive personalised feedback on their alcohol intake, alongside feedback on other health behaviours (n = 659), or to receive feedback on all health behaviours except alcohol intake (n = 671). Participants were mostly male (75%), with a median age of 48 years and half were in managerial positions (55%). Median Body Mass Index was 26, 12% were smokers, median time undertaking moderate/vigorous physical activity a week was 173 minutes and median fruit and vegetable consumption was three portions a day. Eighty percent (n = 1,066) of participants completed follow-up questionnaires at three months. An intention to treat analysis found no difference between experimental groups for past week drinking (primary outcome) (5.6% increase associated with the intervention (95% CI −4.7% to 16.9%; p = .30)), AUDIT (measure of alcohol-related harm) and health utility (EQ-5D).\n\nABSTRACT.CONCLUSIONS: There was no evidence to support the use of personalised feedback within an online health check for reducing alcohol consumption among employees in this organisation. Further research is needed on how to engage a larger proportion of employees in screening.\n\nABSTRACT.TRIAL REGISTRATION: International Standard Randomised Controlled Trial Number Register ISRCTN50658915\n\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Personalised feedback on alcohol intake\", \"General feedback \", \"Past week drinking\", \"no significant difference\", \"An intention to treat analysis found no difference between experimental groups for past week drinking (primary outcome) (5.6% increase associated with the intervention (95% CI \\u22124.7% to 16.9%; p = .30)), AUDIT (measure of alcohol-related harm) and health utility (EQ-5D).\"], [\"Personalised feedback on alcohol intake\", \"General feedback \", \"EQ-5D index\", \"no significant difference\", \"An intention to treat analysis found no difference between experimental groups for past week drinking (primary outcome) (5.6% increase associated with the intervention (95% CI \\u22124.7% to 16.9%; p = .30)), AUDIT (measure of alcohol-related harm) and health utility (EQ-5D).\"]]",
"role": "assistant"
}
]
|
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
A first Taiwanese Chinese family of type 2B von Willebrand disease with R1306W mutation. Clinical, laboratory and genetic defect of a Taiwanese family with type 2B von Willebrand disease (VWD) were studied. The proband was a 55-year-old woman who gave birth to two daughters and one son aged 30, 29 and 27, respectively. All had abnormal mucocutaneous bleedings since their childhood. In proband, PT, PTT and platelet count were normal; template bleeding time was 14 min; VIII:C was 51%, von Willebrand factor antigen (VWF:Ag), 42% and von Willerand factor ristocetin-cofactor (VWF:RCo, 15%); ristocetin-induced platelet aggregation (RIPA) at 0.3 and 0.6 mg/ml of ristocetin was 16% and 68%, respectively. The enhanced response to ristocetin was identified to be in plasma, not in platelet itself, by mixing studies. Analysis of von Willebrand factor (VWF) multimer of plasma but not of platelets showed absence of high-molecular weight (HMW) multimer. All three children had similar laboratory findings. Exon 28 of VWF gene was amplified using polymerase chain reaction (PCR) and sequenced. The proband and three children were all found to be heterozygous for C to T transition at nucleotide 3916 resulting in Arg 1306 Trp (R1306W) substitution. This mutation in the glycoprotein Ib (GPIb)-binding site has been found to increase the affinity of plasma VWF for platelets, and thus cause loss of HMW multimers and often thrombocytopenia. In conclusion, a first report of type 2B VWD in a Taiwanese Chinese family who show R1306W mutation in VWF gene was described. | {"CellLine": [], "Chemical": ["ristocetin"], "Disease": ["bleeding", "ristocetin-induced platelet aggregation", "RIPA", "thrombocytopenia", "genetic defect", "VWD", "bleedings", "type 2B von Willebrand disease", "type 2B VWD"], "Gene": ["GPIb", "VIII", "VWF", "von Willerand factor", "von Willebrand factor", "glycoprotein Ib"], "Species": ["woman"], "Variant": ["R1306W", "C to T transition at nucleotide 3916", "Arg 1306 Trp"]} | science.biored_ner.66 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nA first Taiwanese Chinese family of type 2B von Willebrand disease with R1306W mutation. Clinical, laboratory and genetic defect of a Taiwanese family with type 2B von Willebrand disease (VWD) were studied. The proband was a 55-year-old woman who gave birth to two daughters and one son aged 30, 29 and 27, respectively. All had abnormal mucocutaneous bleedings since their childhood. In proband, PT, PTT and platelet count were normal; template bleeding time was 14 min; VIII:C was 51%, von Willebrand factor antigen (VWF:Ag), 42% and von Willerand factor ristocetin-cofactor (VWF:RCo, 15%); ristocetin-induced platelet aggregation (RIPA) at 0.3 and 0.6 mg/ml of ristocetin was 16% and 68%, respectively. The enhanced response to ristocetin was identified to be in plasma, not in platelet itself, by mixing studies. Analysis of von Willebrand factor (VWF) multimer of plasma but not of platelets showed absence of high-molecular weight (HMW) multimer. All three children had similar laboratory findings. Exon 28 of VWF gene was amplified using polymerase chain reaction (PCR) and sequenced. The proband and three children were all found to be heterozygous for C to T transition at nucleotide 3916 resulting in Arg 1306 Trp (R1306W) substitution. This mutation in the glycoprotein Ib (GPIb)-binding site has been found to increase the affinity of plasma VWF for platelets, and thus cause loss of HMW multimers and often thrombocytopenia. In conclusion, a first report of type 2B VWD in a Taiwanese Chinese family who show R1306W mutation in VWF gene was described.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"ristocetin\"], \"Disease\": [\"bleeding\", \"ristocetin-induced platelet aggregation\", \"RIPA\", \"thrombocytopenia\", \"genetic defect\", \"VWD\", \"bleedings\", \"type 2B von Willebrand disease\", \"type 2B VWD\"], \"Gene\": [\"GPIb\", \"VIII\", \"VWF\", \"von Willerand factor\", \"von Willebrand factor\", \"glycoprotein Ib\"], \"Species\": [\"woman\"], \"Variant\": [\"R1306W\", \"C to T transition at nucleotide 3916\", \"Arg 1306 Trp\"]}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Sagae and Lavie (2005) and <cite>Wang et al. (2006)</cite> only used the first three transition actions, setting the final state as all incoming words having been processed, and the stack containing only one node. | ["Background"] | science.multicite_intent_classification.2229 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nSagae and Lavie (2005) and <cite>Wang et al. (2006)</cite> only used the first three transition actions, setting the final state as all incoming words having been processed, and the stack containing only one node.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Activation of the Rac1 homolog CED-10 kills viable cells in SRGP-1 mutant Caenorhabditis elegans.
Research Abstract:
Multicellular animals rapidly clear dying cells from their bodies.
Many of the pathways that mediate this cell removal are conserved through evolution.
Here, we identify srgp-1 as a negative regulator of cell clearance in both Caenorhabditis elegans and mammalian cells.
Loss of srgp-1 function results in improved engulfment of apoptotic cells, whereas srgp-1 overexpression inhibits apoptotic cell corpse removal.
We show that SRGP-1 functions in engulfing cells and functions as a GTPase activating protein (GAP) for CED-10 (Rac1).
Interestingly, loss of srgp-1 function promotes not only the clearance of already dead cells, but also the removal of cells that have been brought to the verge of death through sublethal apoptotic, necrotic or cytotoxic insults.
In contrast, impaired engulfment allows damaged cells to escape clearance, which results in increased long-term survival.
We propose that C. elegans uses the engulfment machinery as part of a primitive, but evolutionarily conserved, survey mechanism that identifies and removes unfit cells within a tissue. | {"verdict": "SUPPORT", "evidence": ["We show that SRGP-1 functions in engulfing cells and functions as a GTPase activating protein (GAP) for CED-10 (Rac1).", "Interestingly, loss of srgp-1 function promotes not only the clearance of already dead cells, but also the removal of cells that have been brought to the verge of death through sublethal apoptotic, necrotic or cytotoxic insults."]} | science.scifact_entailment.52 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nActivation of the Rac1 homolog CED-10 kills viable cells in SRGP-1 mutant Caenorhabditis elegans.\n\nResearch Abstract:\nMulticellular animals rapidly clear dying cells from their bodies.\nMany of the pathways that mediate this cell removal are conserved through evolution.\nHere, we identify srgp-1 as a negative regulator of cell clearance in both Caenorhabditis elegans and mammalian cells.\nLoss of srgp-1 function results in improved engulfment of apoptotic cells, whereas srgp-1 overexpression inhibits apoptotic cell corpse removal.\nWe show that SRGP-1 functions in engulfing cells and functions as a GTPase activating protein (GAP) for CED-10 (Rac1).\nInterestingly, loss of srgp-1 function promotes not only the clearance of already dead cells, but also the removal of cells that have been brought to the verge of death through sublethal apoptotic, necrotic or cytotoxic insults.\nIn contrast, impaired engulfment allows damaged cells to escape clearance, which results in increased long-term survival.\nWe propose that C. elegans uses the engulfment machinery as part of a primitive, but evolutionarily conserved, survey mechanism that identifies and removes unfit cells within a tissue.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We show that SRGP-1 functions in engulfing cells and functions as a GTPase activating protein (GAP) for CED-10 (Rac1).\", \"Interestingly, loss of srgp-1 function promotes not only the clearance of already dead cells, but also the removal of cells that have been brought to the verge of death through sublethal apoptotic, necrotic or cytotoxic insults.\"]}",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Since this happens for all models using the same pipeline (CNN-MME, CNN-MME †, CNN-MME*), these results indicate that the original architecture of <cite>Kiros et al. [1]</cite> is itself outperformed in general by more problem-specific techniques. | ["Differences"] | science.multicite_intent_classification.2223 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nSince this happens for all models using the same pipeline (CNN-MME, CNN-MME †, CNN-MME*), these results indicate that the original architecture of <cite>Kiros et al. [1]</cite> is itself outperformed in general by more problem-specific techniques.",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
A third current related work is PTransE<cite> [Lin et al., 2015a</cite> ] and the path ranking algorithm (PRA)<cite> [Lao et al., 2011]</cite> . | ["Background"] | science.multicite_intent_classification.2194 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nA third current related work is PTransE<cite> [Lin et al., 2015a</cite> ] and the path ranking algorithm (PRA)<cite> [Lao et al., 2011]</cite> .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Their parser ran around 6 times faster than <cite>Ninomiya et al. (2006)</cite> <cite>'s model 3</cite>, 9 times faster than 'our model 1' and 60 times faster than 'our model 2.' Instead, our models achieved better accuracy. | ["Background"] | science.multicite_intent_classification.1774 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nTheir parser ran around 6 times faster than <cite>Ninomiya et al. (2006)</cite> <cite>'s model 3</cite>, 9 times faster than 'our model 1' and 60 times faster than 'our model 2.' Instead, our models achieved better accuracy.",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: OBJECTIVE—The purpose of this study was to assess the efficacy of an insulin priming dose with a continuous insulin infusion versus two continuous infusions without a priming dose. <p> RESEARCH DESIGN AND METHODS—This prospective randomized protocol used three insulin therapy methods: 1) load group using a priming dose of 0.07 units of regular insulin per kg body weight followed by a dose of 0.07 unit · kg−1 · h−1 i.v. in 12 patients with diabetic ketoacidosis (DKA); 2) no load group using an infusion of regular insulin of 0.07 unit · kg body weight−1 · h−1 without a loading dose in 12 patients with DKA, and 3) twice no load group using an infusion of regular insulin of 0.14 · kg−1 · h−1 without a loading dose in 13 patients with DKA. Outcome was based on the effects of insulin therapy on biochemical and hormonal changes during treatment and recovery of DKA. <p> RESULTS—The load group reached a peak in free insulin value (460 μU/ml) within 5 min and plateaued at 88 μU/ml in 60 min. The twice no load group reached a peak (200 μU/ml) at 45 min. The no load group reached a peak (60 μU/ml) in 60–120 min. Five patients in the no load group required supplemental insulin doses to decrease initial glucose levels by 10%; patients in the twice no load and load groups did not. Except for these differences, times to reach glucose ≤250 mg/dl, pH ≥7.3, and HCO3− ≥15 mEq/l did not differ significantly among the three groups. <p> CONCLUSIONS—A priming dose in low-dose insulin therapy in patients with DKA is unnecessary if an adequate dose of regular insulin of 0.14 unit · kg body weight−1 · h−1 (about 10 units/h in a 70-kg patient) is given.
Extracted Data: | [["Insulin 0.07 IU/kg infusion", "Insulin 0.07 IU/kg load + 0.07 IU/kg infusion", "Time for reaching glucose predetermined values ", "no significant difference", ", times to reach glucose \u2264250 mg/dl, pH \u22657.3, and HCO3\u2212 \u226515 mEq/l did not differ significantly among the three groups."], ["Insulin 0.07 IU/kg infusion", "Insulin 0.07 IU/kg load + 0.07 IU/kg infusion", "Time for reaching bicarbonate predetermined values ", "no significant difference", ", times to reach glucose \u2264250 mg/dl, pH \u22657.3, and HCO3\u2212 \u226515 mEq/l did not differ significantly among the three groups."]] | science.evidence_inference.122 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: OBJECTIVE—The purpose of this study was to assess the efficacy of an insulin priming dose with a continuous insulin infusion versus two continuous infusions without a priming dose. <p> RESEARCH DESIGN AND METHODS—This prospective randomized protocol used three insulin therapy methods: 1) load group using a priming dose of 0.07 units of regular insulin per kg body weight followed by a dose of 0.07 unit · kg−1 · h−1 i.v. in 12 patients with diabetic ketoacidosis (DKA); 2) no load group using an infusion of regular insulin of 0.07 unit · kg body weight−1 · h−1 without a loading dose in 12 patients with DKA, and 3) twice no load group using an infusion of regular insulin of 0.14 · kg−1 · h−1 without a loading dose in 13 patients with DKA. Outcome was based on the effects of insulin therapy on biochemical and hormonal changes during treatment and recovery of DKA. <p> RESULTS—The load group reached a peak in free insulin value (460 μU/ml) within 5 min and plateaued at 88 μU/ml in 60 min. The twice no load group reached a peak (200 μU/ml) at 45 min. The no load group reached a peak (60 μU/ml) in 60–120 min. Five patients in the no load group required supplemental insulin doses to decrease initial glucose levels by 10%; patients in the twice no load and load groups did not. Except for these differences, times to reach glucose ≤250 mg/dl, pH ≥7.3, and HCO3− ≥15 mEq/l did not differ significantly among the three groups. <p> CONCLUSIONS—A priming dose in low-dose insulin therapy in patients with DKA is unnecessary if an adequate dose of regular insulin of 0.14 unit · kg body weight−1 · h−1 (about 10 units/h in a 70-kg patient) is given.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Insulin 0.07 IU/kg infusion\", \"Insulin 0.07 IU/kg load + 0.07 IU/kg infusion\", \"Time for reaching glucose predetermined values \", \"no significant difference\", \", times to reach glucose \\u2264250 mg/dl, pH \\u22657.3, and HCO3\\u2212 \\u226515 mEq/l did not differ significantly among the three groups.\"], [\"Insulin 0.07 IU/kg infusion\", \"Insulin 0.07 IU/kg load + 0.07 IU/kg infusion\", \"Time for reaching bicarbonate predetermined values \", \"no significant difference\", \", times to reach glucose \\u2264250 mg/dl, pH \\u22657.3, and HCO3\\u2212 \\u226515 mEq/l did not differ significantly among the three groups.\"]]",
"role": "assistant"
}
]
|
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: PER1, CRY1, CRY2, CLOCK, BMAL1, and CKlε
Expression levels of five clock genes (Rev-Erbα, Per1, Per2, Bmal1 and Cry1)
altered expression patterns of the circadian clock genes, Bmal1 and Per2.
This review will focus on the core circadian clock genes CLOCK, BMAL1, Per, and Cry.
CLOCK, RORA, and NPAS2
Rev-erbα and Bmal1
he core circadian clock genes BMAL1, PER1/2 and CRY1/2.
CLOCK/BMAL1, the core circadian clock components
Question: List core circadian clock genes. | ["CLOCK", "BMAL1", "Per", "Cry", "CKl\u03b5"] | science.bioasq_list_qa.199 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: PER1, CRY1, CRY2, CLOCK, BMAL1, and CKlε\nExpression levels of five clock genes (Rev-Erbα, Per1, Per2, Bmal1 and Cry1)\naltered expression patterns of the circadian clock genes, Bmal1 and Per2.\nThis review will focus on the core circadian clock genes CLOCK, BMAL1, Per, and Cry.\nCLOCK, RORA, and NPAS2\nRev-erbα and Bmal1\nhe core circadian clock genes BMAL1, PER1/2 and CRY1/2.\nCLOCK/BMAL1, the core circadian clock components\n\nQuestion: List core circadian clock genes.",
"role": "user"
},
{
"content": "[\"CLOCK\", \"BMAL1\", \"Per\", \"Cry\", \"CKl\\u03b5\"]",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: The physiologic stress induced by physical activity is reflected in immune system perturbations, oxidative stress, muscle injury, and inflammation. We investigated the effect of astaxanthin (Asx) supplementation on salivary IgA (sIgA) and oxidative stress status in plasma, along with changes in biochemical parameters and total/differential white cell counts. Forty trained male soccer players were randomly assigned to Asx and placebo groups. Asx group was supplemented with 4 mg of Asx. Saliva and blood samples were collected at the baseline and after 90 days of supplementation in preexercise conditions. We observed a rise of sIgA levels at rest after 90 days of Asx supplementation, which was accompanied with a decrease in prooxidant-antioxidant balance. The plasma muscle enzymes levels were reduced significantly by Asx supplementation and by regular training. The increase in neutrophil count and hs-CRP level was found only in placebo group, indicating a significant blunting of the systemic inflammatory response in the subjects taking Asx. This study indicates that Asx supplementation improves sIgA response and attenuates muscle damage, thus preventing inflammation induced by rigorous physical training. Our findings also point that Asx could show significant physiologic modulation in individuals with mucosal immunity impairment or under conditions of increased oxidative stress and inflammation.
Extracted Data: | [["placebo (P group)", "baseline", "neutrophil count", "significantly increased", "The increase in neutrophil count and hs-CRP level was found only in placebo group,"], ["4 mg of astaxanthin (Asx) ", "baseline", "neutrophil count", "no significant difference", "The increase in neutrophil count and hs-CRP level was found only in placebo group,"], ["placebo (P group)", "baseline", "high sensitivity C-reactive protein (hs-CRP)", "significantly increased", ". The increase in neutrophil count and hs-CRP level was found only in placebo group, i"]] | science.evidence_inference.100 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: The physiologic stress induced by physical activity is reflected in immune system perturbations, oxidative stress, muscle injury, and inflammation. We investigated the effect of astaxanthin (Asx) supplementation on salivary IgA (sIgA) and oxidative stress status in plasma, along with changes in biochemical parameters and total/differential white cell counts. Forty trained male soccer players were randomly assigned to Asx and placebo groups. Asx group was supplemented with 4 mg of Asx. Saliva and blood samples were collected at the baseline and after 90 days of supplementation in preexercise conditions. We observed a rise of sIgA levels at rest after 90 days of Asx supplementation, which was accompanied with a decrease in prooxidant-antioxidant balance. The plasma muscle enzymes levels were reduced significantly by Asx supplementation and by regular training. The increase in neutrophil count and hs-CRP level was found only in placebo group, indicating a significant blunting of the systemic inflammatory response in the subjects taking Asx. This study indicates that Asx supplementation improves sIgA response and attenuates muscle damage, thus preventing inflammation induced by rigorous physical training. Our findings also point that Asx could show significant physiologic modulation in individuals with mucosal immunity impairment or under conditions of increased oxidative stress and inflammation.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"placebo (P group)\", \"baseline\", \"neutrophil count\", \"significantly increased\", \"The increase in neutrophil count and hs-CRP level was found only in placebo group,\"], [\"4 mg of astaxanthin (Asx) \", \"baseline\", \"neutrophil count\", \"no significant difference\", \"The increase in neutrophil count and hs-CRP level was found only in placebo group,\"], [\"placebo (P group)\", \"baseline\", \"high sensitivity C-reactive protein (hs-CRP)\", \"significantly increased\", \". The increase in neutrophil count and hs-CRP level was found only in placebo group, i\"]]",
"role": "assistant"
}
]
|
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
During non-homologous end joining, the ligation step is not as tolerant of disrepairs and other distortions when joining 3' of strand breaks as compared to 5' strand breaks.
Research Abstract:
Nonhomologous end joining (NHEJ) can effectively resolve chromosome breaks despite diverse end structures; however, it is unclear how the steps employed for resolution are determined.
We sought to address this question by analysing cellular NHEJ of ends with systematically mispaired and damaged termini.
We show NHEJ is uniquely proficient at bypassing subtle terminal mispairs and radiomimetic damage by direct ligation.
Nevertheless, bypass ability varies widely, with increases in mispair severity gradually reducing bypass products from 85 to 6%.
End-processing by nucleases and polymerases is increased to compensate, although paths with the fewest number of steps to generate a substrate suitable for ligation are favoured.
Thus, both the frequency and nature of end processing are tailored to meet the needs of the ligation step.
We propose a model where the ligase organizes all steps during NHEJ within the stable paired-end complex to limit end processing and associated errors. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.233 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nDuring non-homologous end joining, the ligation step is not as tolerant of disrepairs and other distortions when joining 3' of strand breaks as compared to 5' strand breaks.\n\nResearch Abstract:\nNonhomologous end joining (NHEJ) can effectively resolve chromosome breaks despite diverse end structures; however, it is unclear how the steps employed for resolution are determined.\nWe sought to address this question by analysing cellular NHEJ of ends with systematically mispaired and damaged termini.\nWe show NHEJ is uniquely proficient at bypassing subtle terminal mispairs and radiomimetic damage by direct ligation.\nNevertheless, bypass ability varies widely, with increases in mispair severity gradually reducing bypass products from 85 to 6%.\nEnd-processing by nucleases and polymerases is increased to compensate, although paths with the fewest number of steps to generate a substrate suitable for ligation are favoured.\nThus, both the frequency and nature of end processing are tailored to meet the needs of the ligation step.\nWe propose a model where the ligase organizes all steps during NHEJ within the stable paired-end complex to limit end processing and associated errors.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
The features based on Markov random field (MRF) models are usually sensitive to the rotation of image textures. This paper develops an anisotropic circular Gaussian MRF (ACGMRF) model for modelling rotated image textures and retrieving rotation-invariant texture features. To overcome the singularity problem of the least squares estimate (LSE) method, an approximate least squares estimate (ALSE) method is proposed to estimate the parameters of the ACGMRF model. The rotation-invariant features can be obtained from the parameters of the ACGMRF model by the one-dimensional (1-D) discrete Fourier transform (DFT). Significantly improved accuracy can be achieved by applying the rotation-invariant features to classify SAR (synthetic aperture radar) sea ice and Brodatz imagery. | Output Entities:
{"Generic": [], "Material": [], "Method": ["Markov random field (MRF) models", "least squares estimate (LSE) method", "approximate least squares estimate (ALSE) method", "one-dimensional (1-D) discrete Fourier transform (DFT)", "anisotropic circular Gaussian MRF (ACGMRF) model"], "Metric": ["accuracy"], "OtherScientificTerm": ["features", "rotation of image textures", "SAR (synthetic aperture radar", "singularity problem", "parameters of the ACGMRF model", "rotation-invariant features"], "Task": ["modelling rotated image textures", "retrieving rotation-invariant texture features"]} | science.scierc_ner.32 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nThe features based on Markov random field (MRF) models are usually sensitive to the rotation of image textures. This paper develops an anisotropic circular Gaussian MRF (ACGMRF) model for modelling rotated image textures and retrieving rotation-invariant texture features. To overcome the singularity problem of the least squares estimate (LSE) method, an approximate least squares estimate (ALSE) method is proposed to estimate the parameters of the ACGMRF model. The rotation-invariant features can be obtained from the parameters of the ACGMRF model by the one-dimensional (1-D) discrete Fourier transform (DFT). Significantly improved accuracy can be achieved by applying the rotation-invariant features to classify SAR (synthetic aperture radar) sea ice and Brodatz imagery.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [], \"Method\": [\"Markov random field (MRF) models\", \"least squares estimate (LSE) method\", \"approximate least squares estimate (ALSE) method\", \"one-dimensional (1-D) discrete Fourier transform (DFT)\", \"anisotropic circular Gaussian MRF (ACGMRF) model\"], \"Metric\": [\"accuracy\"], \"OtherScientificTerm\": [\"features\", \"rotation of image textures\", \"SAR (synthetic aperture radar\", \"singularity problem\", \"parameters of the ACGMRF model\", \"rotation-invariant features\"], \"Task\": [\"modelling rotated image textures\", \"retrieving rotation-invariant texture features\"]}",
"role": "assistant"
}
]
|
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: This randomised, single-blind controlled pilot trial aimed to determine the effectiveness of a physiotherapy program, including exercise and manual therapy, in reducing impairments and improving physical function and health-related quality of life in people with a history of painful osteoporotic vertebral fracture.
ABSTRACT.METHODS: 20 participants were randomly allocated to an intervention (n = 11) or control (n = 9) group. The intervention group attended individual sessions with an experienced clinician once a week for 10 weeks and performed daily home exercises with adherence monitored by a self-report diary. The control group received no treatment. Blinded assessment was conducted at baseline and 11 weeks. Questionnaires assessed self-reported changes in back pain, physical function, and health-related quality of life. Objective measures of thoracic kyphosis, back and shoulder muscle endurance (Timed Loaded Standing Test), and function (Timed Up and Go test) were also taken.
ABSTRACT.RESULTS: Compared with the control group, the intervention group showed significant reductions in pain during movement (mean difference (95% CI) -1.8 (-3.5 to -0.1)) and at rest (-2.0 (-3.8 to -0.2)) and significantly greater improvements in Qualeffo physical function (-4.8 (-9.2 to -0.5)) and the Timed Loaded Standing test (46.7 (16.1 to 77.3) secs). For the perceived change in back pain over the 10 weeks, 9/11 (82%) participants in the intervention group rated their pain as 'much better' compared with only 1/9 (11%) participants in the control group.
ABSTRACT.CONCLUSION: Despite the modest sample size, these results support the benefits of exercise and manual therapy in the clinical management of patients with osteoporotic vertebral fractures, but need to be confirmed in a larger sample.
ABSTRACT.TRAIL REGISTRATION: NCT00638768
Extracted Data: | [["attended individual sessions with an experienced clinician once a week for 10 weeks and performed daily home exercises with adherence monitored by a self-report diary", "received no treatment", "pain during movement", "significantly decreased", "Compared with the control group, the intervention group showed significant reductions in pain during movement (mean difference (95% CI) -1.8 (-3.5 to -0.1))"], ["attended individual sessions with an experienced clinician once a week for 10 weeks and performed daily home exercises with adherence monitored by a self-report diary", "received no treatment", "Pain on rest (0-10)", "significantly decreased", "Compared with the control group, the intervention group showed significant reductions in pain during movement (mean difference (95% CI) -1.8 (-3.5 to -0.1)) and at rest (-2.0 (-3.8 to -0.2)) and"]] | science.evidence_inference.3 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: This randomised, single-blind controlled pilot trial aimed to determine the effectiveness of a physiotherapy program, including exercise and manual therapy, in reducing impairments and improving physical function and health-related quality of life in people with a history of painful osteoporotic vertebral fracture.\n\nABSTRACT.METHODS: 20 participants were randomly allocated to an intervention (n = 11) or control (n = 9) group. The intervention group attended individual sessions with an experienced clinician once a week for 10 weeks and performed daily home exercises with adherence monitored by a self-report diary. The control group received no treatment. Blinded assessment was conducted at baseline and 11 weeks. Questionnaires assessed self-reported changes in back pain, physical function, and health-related quality of life. Objective measures of thoracic kyphosis, back and shoulder muscle endurance (Timed Loaded Standing Test), and function (Timed Up and Go test) were also taken.\n\nABSTRACT.RESULTS: Compared with the control group, the intervention group showed significant reductions in pain during movement (mean difference (95% CI) -1.8 (-3.5 to -0.1)) and at rest (-2.0 (-3.8 to -0.2)) and significantly greater improvements in Qualeffo physical function (-4.8 (-9.2 to -0.5)) and the Timed Loaded Standing test (46.7 (16.1 to 77.3) secs). For the perceived change in back pain over the 10 weeks, 9/11 (82%) participants in the intervention group rated their pain as 'much better' compared with only 1/9 (11%) participants in the control group.\n\nABSTRACT.CONCLUSION: Despite the modest sample size, these results support the benefits of exercise and manual therapy in the clinical management of patients with osteoporotic vertebral fractures, but need to be confirmed in a larger sample.\n\nABSTRACT.TRAIL REGISTRATION: NCT00638768\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"attended individual sessions with an experienced clinician once a week for 10 weeks and performed daily home exercises with adherence monitored by a self-report diary\", \"received no treatment\", \"pain during movement\", \"significantly decreased\", \"Compared with the control group, the intervention group showed significant reductions in pain during movement (mean difference (95% CI) -1.8 (-3.5 to -0.1))\"], [\"attended individual sessions with an experienced clinician once a week for 10 weeks and performed daily home exercises with adherence monitored by a self-report diary\", \"received no treatment\", \"Pain on rest (0-10)\", \"significantly decreased\", \"Compared with the control group, the intervention group showed significant reductions in pain during movement (mean difference (95% CI) -1.8 (-3.5 to -0.1)) and at rest (-2.0 (-3.8 to -0.2)) and\"]]",
"role": "assistant"
}
]
|
science.discomat_te | | l tan (nm) | s | a s | T + | T - | Dd (nm) | m | d (nm) | n |
| 1588 | 1.491 | 0.9831 | 0.903 | 0.623 | 22.3 | 3.5 | 1155 | 2.421 |
| 1395 | 1.495 | 0.9767 | 0.899 | 0.616 | 14.5 | 4.0 | 1150 | 2.430 |
| 1243 | 1.500 | 0.9737 | 0.896 | 0.611 | 12.4 | 4.5 | 1145 | 2.436 |
| 1124 | 1.498 | 0.9710 | 0.893 | 0.607 | 11.3 | 5.0 | 1147 | 2.448 |
| 1027 | 1.503 | 0.9726 | 0.892 | 0.605 | 12.2 | 5.5 | 1145 | 2.460 |
| 947 | 1.508 | 0.9754 | 0.893 | 0.604 | 12.4 | 6.0 | 1145 | 2.475 |
| 880 | 1.514 | 0.9799 | 0.895 | 0.603 | 11.5 | 6.5 | 1143 | 2.491 |
| 822 | 1.510 | 0.9842 | 0.898 | 0.604 | 12.6 | 7.0 | 1147 | 2.506 |
| 773 | 1.509 | 0.9875 | 0.900 | 0.604 | 12.4 | 7.5 | 1151 | 2.525 |
| 730 | 1.509 | 0.9909 | 0.900 | 0.602 | 13.0 | 8.0 | 1152 | 2.544 |
| 693 | 1.508 | 0.9932 | 0.898 | 0.598 | 14.2 | 8.5 | 1153 | 2.566 |
| 661 | 1.508 | 0.9951 | 0.892 | 0.589 | 15.6 | 9.0 | 1151 | 2.591 |
| 633 | 1.508 | 0.9968 | 0.868 | 0.575 | 20.2 | 9.5 | 1153 | 2.619 |
| 608 | 1.508 | 0.9980 | 0.821 | 0.544 | 24.2 | 10.0 | 1143 | 2.648 |
| 586 | 1.509 | 0.9992 | 0.716 | 0.496 | 28.0 | 10.5 | 1153 | 2.680 |
| 568 | 1.509 | 1.0000 | 0.584 | 0.420 | 26.3 | 11.0 | 1136 | 2.721 |
| 549 | 1.510 | 1.0000 | 0.405 | 0.305 | 20.0 | 11.5 | - | 2.750 |
Caption: Calculation of the average thickness of the film, d , thickness variation, Dd, and refractive index, n, of a representative 1148+-5 nm thick amorphous As40S40Se20 film (Dd=13+-5 nm), using the optical characterisation method mentioned in the text. The reported value for the thickness variation has been calculated by averaging the underlined Dds, as explained in the text
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "1588", "type": "Other"}
{"value": "1.491", "type": "Other"}
{"value": "0.9831", "type": "Other"}
{"value": "0.903", "type": "Other"}
{"value": "0.623", "type": "Other"}
{"value": "22.3", "type": "Other"}
{"value": "3.5", "type": "Other"}
{"value": "1155", "type": "Other"}
{"value": "2.421", "type": "Other"}
{"value": "1395", "type": "Other"}
{"value": "1.495", "type": "Other"}
{"value": "0.9767", "type": "Other"}
{"value": "0.899", "type": "Other"}
{"value": "0.616", "type": "Other"}
{"value": "14.5", "type": "Other"}
{"value": "4.0", "type": "Other"}
{"value": "1150", "type": "Other"}
{"value": "2.430", "type": "Other"}
{"value": "1243", "type": "Other"}
{"value": "1.500", "type": "Other"}
{"value": "0.9737", "type": "Other"}
{"value": "0.896", "type": "Other"}
{"value": "0.611", "type": "Other"}
{"value": "12.4", "type": "Other"}
{"value": "4.5", "type": "Other"}
{"value": "1145", "type": "Other"}
{"value": "2.436", "type": "Other"}
{"value": "1124", "type": "Other"}
{"value": "1.498", "type": "Other"}
{"value": "0.9710", "type": "Other"}
{"value": "0.893", "type": "Other"}
{"value": "0.607", "type": "Other"}
{"value": "11.3", "type": "Other"}
{"value": "5.0", "type": "Other"}
{"value": "1147", "type": "Other"}
{"value": "2.448", "type": "Other"}
{"value": "1027", "type": "Other"}
{"value": "1.503", "type": "Other"}
{"value": "0.9726", "type": "Other"}
{"value": "0.892", "type": "Other"}
{"value": "0.605", "type": "Other"}
{"value": "12.2", "type": "Other"}
{"value": "5.5", "type": "Other"}
{"value": "1145", "type": "Other"}
{"value": "2.460", "type": "Other"}
{"value": "947", "type": "Other"}
{"value": "1.508", "type": "Other"}
{"value": "0.9754", "type": "Other"}
{"value": "0.893", "type": "Other"}
{"value": "0.604", "type": "Other"}
{"value": "12.4", "type": "Other"}
{"value": "6.0", "type": "Other"}
{"value": "1145", "type": "Other"}
{"value": "2.475", "type": "Other"}
{"value": "880", "type": "Other"}
{"value": "1.514", "type": "Other"}
{"value": "0.9799", "type": "Other"}
{"value": "0.895", "type": "Other"}
{"value": "0.603", "type": "Other"}
{"value": "11.5", "type": "Other"}
{"value": "6.5", "type": "Other"}
{"value": "1143", "type": "Other"}
{"value": "2.491", "type": "Other"}
{"value": "822", "type": "Other"}
{"value": "1.510", "type": "Other"}
{"value": "0.9842", "type": "Other"}
{"value": "0.898", "type": "Other"}
{"value": "0.604", "type": "Other"}
{"value": "12.6", "type": "Other"}
{"value": "7.0", "type": "Other"}
{"value": "1147", "type": "Other"}
{"value": "2.506", "type": "Other"}
{"value": "773", "type": "Other"}
{"value": "1.509", "type": "Other"}
{"value": "0.9875", "type": "Other"}
{"value": "0.900", "type": "Other"}
{"value": "0.604", "type": "Other"}
{"value": "12.4", "type": "Other"}
{"value": "7.5", "type": "Other"}
{"value": "1151", "type": "Other"}
{"value": "2.525", "type": "Other"}
{"value": "730", "type": "Other"}
{"value": "1.509", "type": "Other"}
{"value": "0.9909", "type": "Other"}
{"value": "0.900", "type": "Other"}
{"value": "0.602", "type": "Other"}
{"value": "13.0", "type": "Other"}
{"value": "8.0", "type": "Other"}
{"value": "1152", "type": "Other"}
{"value": "2.544", "type": "Other"}
{"value": "693", "type": "Other"}
{"value": "1.508", "type": "Other"}
{"value": "0.9932", "type": "Other"}
{"value": "0.898", "type": "Other"}
{"value": "0.598", "type": "Other"}
{"value": "14.2", "type": "Other"}
{"value": "8.5", "type": "Other"}
{"value": "1153", "type": "Other"}
{"value": "2.566", "type": "Other"}
{"value": "661", "type": "Other"}
{"value": "1.508", "type": "Other"}
{"value": "0.9951", "type": "Other"}
{"value": "0.892", "type": "Other"}
{"value": "0.589", "type": "Other"}
{"value": "15.6", "type": "Other"}
{"value": "9.0", "type": "Other"}
{"value": "1151", "type": "Other"}
{"value": "2.591", "type": "Other"}
{"value": "633", "type": "Other"}
{"value": "1.508", "type": "Other"}
{"value": "0.9968", "type": "Other"}
{"value": "0.868", "type": "Other"}
{"value": "0.575", "type": "Other"}
{"value": "20.2", "type": "Other"}
{"value": "9.5", "type": "Other"}
{"value": "1153", "type": "Other"}
{"value": "2.619", "type": "Other"}
{"value": "608", "type": "Other"}
{"value": "1.508", "type": "Other"}
{"value": "0.9980", "type": "Other"}
{"value": "0.821", "type": "Other"}
{"value": "0.544", "type": "Other"}
{"value": "24.2", "type": "Other"}
{"value": "10.0", "type": "Other"}
{"value": "1143", "type": "Other"}
{"value": "2.648", "type": "Other"}
{"value": "586", "type": "Other"}
{"value": "1.509", "type": "Other"}
{"value": "0.9992", "type": "Other"}
{"value": "0.716", "type": "Other"}
{"value": "0.496", "type": "Other"}
{"value": "28.0", "type": "Other"}
{"value": "10.5", "type": "Other"}
{"value": "1153", "type": "Other"}
{"value": "2.680", "type": "Other"}
{"value": "568", "type": "Other"}
{"value": "1.509", "type": "Other"}
{"value": "1.0000", "type": "Other"}
{"value": "0.584", "type": "Other"}
{"value": "0.420", "type": "Other"}
{"value": "26.3", "type": "Other"}
{"value": "11.0", "type": "Other"}
{"value": "1136", "type": "Other"}
{"value": "2.721", "type": "Other"}
{"value": "549", "type": "Other"}
{"value": "1.510", "type": "Other"}
{"value": "1.0000", "type": "Other"}
{"value": "0.405", "type": "Other"}
{"value": "0.305", "type": "Other"}
{"value": "20.0", "type": "Other"}
{"value": "11.5", "type": "Other"}
{"value": "2.750", "type": "Other"} | science.discomat_te.336 | [
{
"content": "| l tan (nm) | s | a s | T + | T - | Dd (nm) | m | d (nm) | n |\n| 1588 | 1.491 | 0.9831 | 0.903 | 0.623 | 22.3 | 3.5 | 1155 | 2.421 |\n| 1395 | 1.495 | 0.9767 | 0.899 | 0.616 | 14.5 | 4.0 | 1150 | 2.430 |\n| 1243 | 1.500 | 0.9737 | 0.896 | 0.611 | 12.4 | 4.5 | 1145 | 2.436 |\n| 1124 | 1.498 | 0.9710 | 0.893 | 0.607 | 11.3 | 5.0 | 1147 | 2.448 |\n| 1027 | 1.503 | 0.9726 | 0.892 | 0.605 | 12.2 | 5.5 | 1145 | 2.460 |\n| 947 | 1.508 | 0.9754 | 0.893 | 0.604 | 12.4 | 6.0 | 1145 | 2.475 |\n| 880 | 1.514 | 0.9799 | 0.895 | 0.603 | 11.5 | 6.5 | 1143 | 2.491 |\n| 822 | 1.510 | 0.9842 | 0.898 | 0.604 | 12.6 | 7.0 | 1147 | 2.506 |\n| 773 | 1.509 | 0.9875 | 0.900 | 0.604 | 12.4 | 7.5 | 1151 | 2.525 |\n| 730 | 1.509 | 0.9909 | 0.900 | 0.602 | 13.0 | 8.0 | 1152 | 2.544 |\n| 693 | 1.508 | 0.9932 | 0.898 | 0.598 | 14.2 | 8.5 | 1153 | 2.566 |\n| 661 | 1.508 | 0.9951 | 0.892 | 0.589 | 15.6 | 9.0 | 1151 | 2.591 |\n| 633 | 1.508 | 0.9968 | 0.868 | 0.575 | 20.2 | 9.5 | 1153 | 2.619 |\n| 608 | 1.508 | 0.9980 | 0.821 | 0.544 | 24.2 | 10.0 | 1143 | 2.648 |\n| 586 | 1.509 | 0.9992 | 0.716 | 0.496 | 28.0 | 10.5 | 1153 | 2.680 |\n| 568 | 1.509 | 1.0000 | 0.584 | 0.420 | 26.3 | 11.0 | 1136 | 2.721 |\n| 549 | 1.510 | 1.0000 | 0.405 | 0.305 | 20.0 | 11.5 | - | 2.750 |\n\nCaption: Calculation of the average thickness of the film, d , thickness variation, Dd, and refractive index, n, of a representative 1148+-5 nm thick amorphous As40S40Se20 film (Dd=13+-5 nm), using the optical characterisation method mentioned in the text. The reported value for the thickness variation has been calculated by averaging the underlined Dds, as explained in the text\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"1588\", \"type\": \"Other\"}\n{\"value\": \"1.491\", \"type\": \"Other\"}\n{\"value\": \"0.9831\", \"type\": \"Other\"}\n{\"value\": \"0.903\", \"type\": \"Other\"}\n{\"value\": \"0.623\", \"type\": \"Other\"}\n{\"value\": \"22.3\", \"type\": \"Other\"}\n{\"value\": \"3.5\", \"type\": \"Other\"}\n{\"value\": \"1155\", \"type\": \"Other\"}\n{\"value\": \"2.421\", \"type\": \"Other\"}\n{\"value\": \"1395\", \"type\": \"Other\"}\n{\"value\": \"1.495\", \"type\": \"Other\"}\n{\"value\": \"0.9767\", \"type\": \"Other\"}\n{\"value\": \"0.899\", \"type\": \"Other\"}\n{\"value\": \"0.616\", \"type\": \"Other\"}\n{\"value\": \"14.5\", \"type\": \"Other\"}\n{\"value\": \"4.0\", \"type\": \"Other\"}\n{\"value\": \"1150\", \"type\": \"Other\"}\n{\"value\": \"2.430\", \"type\": \"Other\"}\n{\"value\": \"1243\", \"type\": \"Other\"}\n{\"value\": \"1.500\", \"type\": \"Other\"}\n{\"value\": \"0.9737\", \"type\": \"Other\"}\n{\"value\": \"0.896\", \"type\": \"Other\"}\n{\"value\": \"0.611\", \"type\": \"Other\"}\n{\"value\": \"12.4\", \"type\": \"Other\"}\n{\"value\": \"4.5\", \"type\": \"Other\"}\n{\"value\": \"1145\", \"type\": \"Other\"}\n{\"value\": \"2.436\", \"type\": \"Other\"}\n{\"value\": \"1124\", \"type\": \"Other\"}\n{\"value\": \"1.498\", \"type\": \"Other\"}\n{\"value\": \"0.9710\", \"type\": \"Other\"}\n{\"value\": \"0.893\", \"type\": \"Other\"}\n{\"value\": \"0.607\", \"type\": \"Other\"}\n{\"value\": \"11.3\", \"type\": \"Other\"}\n{\"value\": \"5.0\", \"type\": \"Other\"}\n{\"value\": \"1147\", \"type\": \"Other\"}\n{\"value\": \"2.448\", \"type\": \"Other\"}\n{\"value\": \"1027\", \"type\": \"Other\"}\n{\"value\": \"1.503\", \"type\": \"Other\"}\n{\"value\": \"0.9726\", \"type\": \"Other\"}\n{\"value\": \"0.892\", \"type\": \"Other\"}\n{\"value\": \"0.605\", \"type\": \"Other\"}\n{\"value\": \"12.2\", \"type\": \"Other\"}\n{\"value\": \"5.5\", \"type\": \"Other\"}\n{\"value\": \"1145\", \"type\": \"Other\"}\n{\"value\": \"2.460\", \"type\": \"Other\"}\n{\"value\": \"947\", \"type\": \"Other\"}\n{\"value\": \"1.508\", \"type\": \"Other\"}\n{\"value\": \"0.9754\", \"type\": \"Other\"}\n{\"value\": \"0.893\", \"type\": \"Other\"}\n{\"value\": \"0.604\", \"type\": \"Other\"}\n{\"value\": \"12.4\", \"type\": \"Other\"}\n{\"value\": \"6.0\", \"type\": \"Other\"}\n{\"value\": \"1145\", \"type\": \"Other\"}\n{\"value\": \"2.475\", \"type\": \"Other\"}\n{\"value\": \"880\", \"type\": \"Other\"}\n{\"value\": \"1.514\", \"type\": \"Other\"}\n{\"value\": \"0.9799\", \"type\": \"Other\"}\n{\"value\": \"0.895\", \"type\": \"Other\"}\n{\"value\": \"0.603\", \"type\": \"Other\"}\n{\"value\": \"11.5\", \"type\": \"Other\"}\n{\"value\": \"6.5\", \"type\": \"Other\"}\n{\"value\": \"1143\", \"type\": \"Other\"}\n{\"value\": \"2.491\", \"type\": \"Other\"}\n{\"value\": \"822\", \"type\": \"Other\"}\n{\"value\": \"1.510\", \"type\": \"Other\"}\n{\"value\": \"0.9842\", \"type\": \"Other\"}\n{\"value\": \"0.898\", \"type\": \"Other\"}\n{\"value\": \"0.604\", \"type\": \"Other\"}\n{\"value\": \"12.6\", \"type\": \"Other\"}\n{\"value\": \"7.0\", \"type\": \"Other\"}\n{\"value\": \"1147\", \"type\": \"Other\"}\n{\"value\": \"2.506\", \"type\": \"Other\"}\n{\"value\": \"773\", \"type\": \"Other\"}\n{\"value\": \"1.509\", \"type\": \"Other\"}\n{\"value\": \"0.9875\", \"type\": \"Other\"}\n{\"value\": \"0.900\", \"type\": \"Other\"}\n{\"value\": \"0.604\", \"type\": \"Other\"}\n{\"value\": \"12.4\", \"type\": \"Other\"}\n{\"value\": \"7.5\", \"type\": \"Other\"}\n{\"value\": \"1151\", \"type\": \"Other\"}\n{\"value\": \"2.525\", \"type\": \"Other\"}\n{\"value\": \"730\", \"type\": \"Other\"}\n{\"value\": \"1.509\", \"type\": \"Other\"}\n{\"value\": \"0.9909\", \"type\": \"Other\"}\n{\"value\": \"0.900\", \"type\": \"Other\"}\n{\"value\": \"0.602\", \"type\": \"Other\"}\n{\"value\": \"13.0\", \"type\": \"Other\"}\n{\"value\": \"8.0\", \"type\": \"Other\"}\n{\"value\": \"1152\", \"type\": \"Other\"}\n{\"value\": \"2.544\", \"type\": \"Other\"}\n{\"value\": \"693\", \"type\": \"Other\"}\n{\"value\": \"1.508\", \"type\": \"Other\"}\n{\"value\": \"0.9932\", \"type\": \"Other\"}\n{\"value\": \"0.898\", \"type\": \"Other\"}\n{\"value\": \"0.598\", \"type\": \"Other\"}\n{\"value\": \"14.2\", \"type\": \"Other\"}\n{\"value\": \"8.5\", \"type\": \"Other\"}\n{\"value\": \"1153\", \"type\": \"Other\"}\n{\"value\": \"2.566\", \"type\": \"Other\"}\n{\"value\": \"661\", \"type\": \"Other\"}\n{\"value\": \"1.508\", \"type\": \"Other\"}\n{\"value\": \"0.9951\", \"type\": \"Other\"}\n{\"value\": \"0.892\", \"type\": \"Other\"}\n{\"value\": \"0.589\", \"type\": \"Other\"}\n{\"value\": \"15.6\", \"type\": \"Other\"}\n{\"value\": \"9.0\", \"type\": \"Other\"}\n{\"value\": \"1151\", \"type\": \"Other\"}\n{\"value\": \"2.591\", \"type\": \"Other\"}\n{\"value\": \"633\", \"type\": \"Other\"}\n{\"value\": \"1.508\", \"type\": \"Other\"}\n{\"value\": \"0.9968\", \"type\": \"Other\"}\n{\"value\": \"0.868\", \"type\": \"Other\"}\n{\"value\": \"0.575\", \"type\": \"Other\"}\n{\"value\": \"20.2\", \"type\": \"Other\"}\n{\"value\": \"9.5\", \"type\": \"Other\"}\n{\"value\": \"1153\", \"type\": \"Other\"}\n{\"value\": \"2.619\", \"type\": \"Other\"}\n{\"value\": \"608\", \"type\": \"Other\"}\n{\"value\": \"1.508\", \"type\": \"Other\"}\n{\"value\": \"0.9980\", \"type\": \"Other\"}\n{\"value\": \"0.821\", \"type\": \"Other\"}\n{\"value\": \"0.544\", \"type\": \"Other\"}\n{\"value\": \"24.2\", \"type\": \"Other\"}\n{\"value\": \"10.0\", \"type\": \"Other\"}\n{\"value\": \"1143\", \"type\": \"Other\"}\n{\"value\": \"2.648\", \"type\": \"Other\"}\n{\"value\": \"586\", \"type\": \"Other\"}\n{\"value\": \"1.509\", \"type\": \"Other\"}\n{\"value\": \"0.9992\", \"type\": \"Other\"}\n{\"value\": \"0.716\", \"type\": \"Other\"}\n{\"value\": \"0.496\", \"type\": \"Other\"}\n{\"value\": \"28.0\", \"type\": \"Other\"}\n{\"value\": \"10.5\", \"type\": \"Other\"}\n{\"value\": \"1153\", \"type\": \"Other\"}\n{\"value\": \"2.680\", \"type\": \"Other\"}\n{\"value\": \"568\", \"type\": \"Other\"}\n{\"value\": \"1.509\", \"type\": \"Other\"}\n{\"value\": \"1.0000\", \"type\": \"Other\"}\n{\"value\": \"0.584\", \"type\": \"Other\"}\n{\"value\": \"0.420\", \"type\": \"Other\"}\n{\"value\": \"26.3\", \"type\": \"Other\"}\n{\"value\": \"11.0\", \"type\": \"Other\"}\n{\"value\": \"1136\", \"type\": \"Other\"}\n{\"value\": \"2.721\", \"type\": \"Other\"}\n{\"value\": \"549\", \"type\": \"Other\"}\n{\"value\": \"1.510\", \"type\": \"Other\"}\n{\"value\": \"1.0000\", \"type\": \"Other\"}\n{\"value\": \"0.405\", \"type\": \"Other\"}\n{\"value\": \"0.305\", \"type\": \"Other\"}\n{\"value\": \"20.0\", \"type\": \"Other\"}\n{\"value\": \"11.5\", \"type\": \"Other\"}\n{\"value\": \"2.750\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
science.discomat_te | | Composition (mol%) | Composition (mol%) | Composition (mol%) | Composition (mol%) | T g | W s | W c | b | |
| ZnF2 | AlF3 | PbF2 | LiF | (K) | (eV) | (eV) | +-0.01 | |
| | | | | +-5 | +-0.01 | +-0.01 | | |
| 30 | 25 | 45 | 0 | 548 | 0.86 | 0.83 | 0.59 | 26.81 |
| 30 | 25 | 40 | 5 | 533 | 0.91 | 0.86 | 0.62 | 24.97 |
| 30 | 25 | 35 | 10 | 515 | 1.00 | 0.97 | 0.63 | 23.53 |
| 30 | 25 | 30 | 15 | 508 | 1.11 | 1.10 | 0.56 | 23.15 |
| 30 | 25 | 25 | 20 | 503 | 1.24 | 1.17 | 0.54 | 23.26 |
Caption: Compositions, activation energies for the conductivity and the relaxation time, stretching exponent and high-frequency dielectric constant for the ZnF2-AlF3-PbF2-LiF glasses
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "30", "type": "Glass_Compound_Amount", "constituent": "ZnF2", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "AlF3", "unit": "mol", "material": "xx"}
{"value": "45", "type": "Glass_Compound_Amount", "constituent": "PbF2", "unit": "mol", "material": "xx"}
{"value": "0", "type": "Other"}
{"value": "548", "type": "Other"}
{"value": "0.86", "type": "Other"}
{"value": "0.83", "type": "Other"}
{"value": "0.59", "type": "Other"}
{"value": "26.81", "type": "Other"}
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "ZnF2", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "AlF3", "unit": "mol", "material": "xx"}
{"value": "40", "type": "Glass_Compound_Amount", "constituent": "PbF2", "unit": "mol", "material": "xx"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "LiF", "unit": "mol", "material": "xx"}
{"value": "533", "type": "Other"}
{"value": "0.91", "type": "Other"}
{"value": "0.86", "type": "Other"}
{"value": "0.62", "type": "Other"}
{"value": "24.97", "type": "Other"}
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "ZnF2", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "AlF3", "unit": "mol", "material": "xx"}
{"value": "35", "type": "Glass_Compound_Amount", "constituent": "PbF2", "unit": "mol", "material": "xx"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "LiF", "unit": "mol", "material": "xx"}
{"value": "515", "type": "Other"}
{"value": "1.00", "type": "Other"}
{"value": "0.97", "type": "Other"}
{"value": "0.63", "type": "Other"}
{"value": "23.53", "type": "Other"}
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "ZnF2", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "AlF3", "unit": "mol", "material": "xx"}
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "PbF2", "unit": "mol", "material": "xx"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "LiF", "unit": "mol", "material": "xx"}
{"value": "508", "type": "Other"}
{"value": "1.11", "type": "Other"}
{"value": "1.10", "type": "Other"}
{"value": "0.56", "type": "Other"}
{"value": "23.15", "type": "Other"}
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "ZnF2", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "AlF3", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "PbF2", "unit": "mol", "material": "xx"}
{"value": "20", "type": "Glass_Compound_Amount", "constituent": "LiF", "unit": "mol", "material": "xx"}
{"value": "503", "type": "Other"}
{"value": "1.24", "type": "Other"}
{"value": "1.17", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "23.26", "type": "Other"} | science.discomat_te.215 | [
{
"content": "| Composition (mol%) | Composition (mol%) | Composition (mol%) | Composition (mol%) | T g | W s | W c | b | |\n| ZnF2 | AlF3 | PbF2 | LiF | (K) | (eV) | (eV) | +-0.01 | |\n| | | | | +-5 | +-0.01 | +-0.01 | | |\n| 30 | 25 | 45 | 0 | 548 | 0.86 | 0.83 | 0.59 | 26.81 |\n| 30 | 25 | 40 | 5 | 533 | 0.91 | 0.86 | 0.62 | 24.97 |\n| 30 | 25 | 35 | 10 | 515 | 1.00 | 0.97 | 0.63 | 23.53 |\n| 30 | 25 | 30 | 15 | 508 | 1.11 | 1.10 | 0.56 | 23.15 |\n| 30 | 25 | 25 | 20 | 503 | 1.24 | 1.17 | 0.54 | 23.26 |\n\nCaption: Compositions, activation energies for the conductivity and the relaxation time, stretching exponent and high-frequency dielectric constant for the ZnF2-AlF3-PbF2-LiF glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"AlF3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"548\", \"type\": \"Other\"}\n{\"value\": \"0.86\", \"type\": \"Other\"}\n{\"value\": \"0.83\", \"type\": \"Other\"}\n{\"value\": \"0.59\", \"type\": \"Other\"}\n{\"value\": \"26.81\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"AlF3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LiF\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"533\", \"type\": \"Other\"}\n{\"value\": \"0.91\", \"type\": \"Other\"}\n{\"value\": \"0.86\", \"type\": \"Other\"}\n{\"value\": \"0.62\", \"type\": \"Other\"}\n{\"value\": \"24.97\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"AlF3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"35\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LiF\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"515\", \"type\": \"Other\"}\n{\"value\": \"1.00\", \"type\": \"Other\"}\n{\"value\": \"0.97\", \"type\": \"Other\"}\n{\"value\": \"0.63\", \"type\": \"Other\"}\n{\"value\": \"23.53\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"AlF3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LiF\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"508\", \"type\": \"Other\"}\n{\"value\": \"1.11\", \"type\": \"Other\"}\n{\"value\": \"1.10\", \"type\": \"Other\"}\n{\"value\": \"0.56\", \"type\": \"Other\"}\n{\"value\": \"23.15\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"AlF3\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbF2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LiF\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"503\", \"type\": \"Other\"}\n{\"value\": \"1.24\", \"type\": \"Other\"}\n{\"value\": \"1.17\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"23.26\", \"type\": \"Other\"}",
"role": "assistant"
}
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.