dataset (stringclasses, 45 values) | id (stringlengths, 17-64) | messages (listlengths, 2-2) |
---|---|---|
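Each row below is one record: a dataset name, a record id, and a two-message `messages` list holding a `user` prompt and the matching `assistant` response. As a minimal sketch of how such records could be consumed, assuming they are stored as JSON Lines with the field names shown in the header (the file name `records.jsonl` is an assumption, not given on this page):

```python
import json

# Minimal sketch: iterate over records of this dataset, assuming one JSON
# object per line with "dataset", "id", and "messages" fields as previewed
# in the header above. The file name "records.jsonl" is an assumption.
with open("records.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # Every record previews with exactly two messages: user then assistant.
        prompt = record["messages"][0]["content"]   # role == "user"
        answer = record["messages"][1]["content"]   # role == "assistant"
        print(record["dataset"], record["id"], len(prompt), len(answer))
```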
science.pico_ner | science.pico_ner.1535 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nSeveral indicators of oxidative stress , immunity , and illness improved in trained men consuming an encapsulated juice powder concentrate for 28 weeks . Phytonutrients from plant foods provide numerous antioxidants . We hypothesized that supplementation for 28 wk with a commercially available encapsulated juice powder concentrate ( JPC ) could influence indicators of oxidative stress , immunity , and illness . Trained men ( n = 41 ; 34 +/- 5 y ; maximum oxygen uptake = 55 +/- 7 mL x kg ( -1 ) x min ( -1 ) ) from a homogenous police Special Forces unit were randomly assigned in a double blind manner to either JPC ( n = 21 ) or placebo ( n = 20 ) . We used multiple 7-d food records to assess dietary intake and found inadequate mean daily fruit and vegetable consumption ( 3.2 +/- 1.2 servings ) . The group physician documented all duty days lost due to illness . We collected plasma at baseline and study wk 4 , 8 , 16 , and 28 for analysis of carbonyl groups on protein ( CP ) and TNFalpha . Over the 28-wk investigation , CP was lower in the JPC group , with both a treatment and a time x treatment interaction ( P < 0.05 ) . Concentrations of both CP and TNFalpha at 16 and 28 wk were lower in the JPC than in the placebo group ( P < 0.001 ) . TNFalpha increased during the first 8 wk followed by a decrease in both groups for the following 20 wk ( P < 0.001 ) . Over the final 20 wk of the study , the placebo group tended to have more days of illness than the JPC group ( P = 0.068 ) . These data suggest beneficial JPC effects with regard to reduction of duty days lost due to illness and reduction of CP and TNFalpha concentrations in this group of trained men over 28 wk .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"commercially available encapsulated juice powder concentrate ( JPC )\", \"encapsulated juice powder concentrate\", \"Phytonutrients from plant foods\", \"placebo\", \"or placebo\", \"JPC\"], \"Outcome\": [\"TNFalpha concentrations\", \"TNFalpha\", \"protein ( CP ) and TNFalpha\", \"immunity\", \"oxidative stress\", \"carbonyl groups on\", \"indicators of oxidative stress , immunity , and illness\", \"CP\", \"illness\"], \"Participant\": []}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.13742 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nIn most recent years, zero-shot recognition (ZSR) has gained increasing attention in machine learning and image processing fields. It aims at recognizing unseen class instances with knowledge transferred from seen classes. This is typically achieved by exploiting a pre-defined semantic feature space (FS), i.e., semantic attributes or word vectors, as a bridge to transfer knowledge between seen and unseen classes. However, due to the absence of unseen classes during training, the conventional ZSR easily suffers from domain shift and hubness problems. In this paper, we propose a novel ZSR learning framework that can handle these two issues well by adaptively adjusting semantic FS. To the best of our knowledge, our work is the first to consider the adaptive adjustment of semantic FS in ZSR. Moreover, our solution can be formulated to a more efficient framework that significantly boosts the training. Extensive experiments show the remarkable performance improvement of our model compared with other existing methods.\n\nCited Abstract(s):\n[1]: Given semantic descriptions of object classes, zeroshot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.\n\n[2]: Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. 
It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.\n\n[3]: Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.\n\n[4]: Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improve the state-of-the-art results.\n\n[5]: \n\n[6]: Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. 
We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.\n\n[7]: \n\n[8]: This paper addresses the task of zero-shot image classification. The key contribution of the proposed approach is to control the semantic embedding of images – one of the main ingredients of zero-shot learning – by formulating it as a metric learning problem. The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. This results in a novel expression of zero-shot learning not requiring the notion of class in the training phase: only pairs of image attributes, augmented with a consistency indicator, are given as ground truth. At test time, the learned model can predict the consistency of a test image with a given set of attributes, allowing flexible ways to produce recognition inferences. Despite its simplicity, the proposed approach gives state-of-the-art results on four challenging datasets used for zero-shot recognition evaluation.\n\n[9]: Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .\n\n[10]: In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test-time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information (e.g. attributes) for unseen classes. Our method is based on viewing each source or target data as a mixture of seen class proportions and we postulate that the mixture patterns have to be similar if the two instances belong to the same unseen class. This perspective leads us to learning source target embedding functions that map an arbitrary source target domain data into a same semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in terms of accuracy on most benchmark datasets for zero-shot recognition.",
"role": "user"
},
{
"content": "To deal with these issues, several transductive learning based methods [2] assume that the unseen class instances (unlabelled) are available at once during training. DeViSE [6] trains a linear mapping between visual and semantic FS by an effective ranking loss formulation. ESZSL [9] utilizes the square loss to learn the bilinear compatibility and adds regularization to the objective with respect to Frobenius norm. SSE [6]0 uses the mixture of seen class parts as the intermediate FS. AMP [8] embeds the visual features into the attribute space. SynC @math [1] and CLN+KRR [4] jointly embed several kinds of textual features and visual features to ground attributes. MFMR [5] leverages the sophisticated technique of matrix tri-factorization with manifold regularizers to enhance the mapping between visual and semantic FS. With the popularity of generative adversarial networks (GANs), GANZrl [7] applies GANs to synthesize instances with specified semantics to cover a higher diversity of seen classes. Instead, GAZSL [3] leverages GANs to imagine unseen classes from text descriptions. Despite the efforts made, the domain shift and hubness problems are still open issues.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.16860 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Global Analysis of Posttranslational Protein Arginylation\nArticle:\nAbstract:\nPosttranslational arginylation is critical for embryogenesis, cardiovascular development, and angiogenesis, but its molecular effects and the identity of proteins arginylated in vivo are largely unknown. Here we report a global analysis of this modification on the protein level and identification of 43 proteins arginylated in vivo on highly specific sites. Our data demonstrate that unlike previously believed, arginylation can occur on any N-terminally exposed residue likely defined by a structural recognition motif on the protein surface, and that it preferentially affects a number of physiological systems, including cytoskeleton and primary metabolic pathways. The results of our study suggest that protein arginylation is a general mechanism for regulation of protein structure and function and outline the potential role of protein arginylation in cell metabolism and embryonic development.\nIntroduction:\nProtein arginylation is an enigmatic posttranslational modification mediated by arginyl-tRNA-protein transferase( ATE1) that transfers arginine( Arg) from tRNA onto proteins[1]. Until recently it had been believed that arginylation can only occur on the exposed N-terminus of aspartic acid( Asp), glutamic acid( Glu), or cysteine( Cys); however, a single case of the addition of Arg onto the side chain of Glu has been recently identified in vivo[2]. Despite the fact that arginylation was originally discovered more than 40 y ago[3, 4], its biological functions are still poorly understood. ATE1 enzyme is highly conserved in evolution. Eukaryotic species, from yeast to human, appear to contain only one arginyltransferase gene and no other known enzymes with a similar activity. ATE1 is nonessential in yeast, but required for embryogenesis in mammals[5]. Mouse Ate1 knockout results in embryonic lethality and severe defects in cardiovascular development and angiogenesis[5]. It has been believed that the molecular function of arginylation is to induce degradation of the target protein substrates by the ubiquitin-dependent N-end rule pathway[6]. Indeed, arginylation by ATE1 induces rapid degradation of experimentally constructed test proteins in yeast[6], and the half-life of RGS family proteins in mammals has been shown to decrease upon arginylation[7]. Recently, however, it has been found that arginylation regulates structure and intracellular assembly of beta actin in motile cells without affecting its short-term metabolic stability[8], suggesting that the function of arginylation in vivo may be more complex. The most direct way to understand the function of protein arginylation is by identifying the in vivo substrates of ATE1; however, such identification is difficult for the following reasons. First, arginylation by ATE1 is believed to involve Arg-tRNA that is also utilized during protein synthesis, and to result in the formation of a normal peptide bond. As a result, arginylated proteins are difficult to distinguish in biochemical tests or to label differentially from other intracellular proteins. 
Second, arginylation is believed to occur only on the N-terminally exposed residues other than methionine( Met) 1 found in all proteins immediately after translation initiation; therefore, arginylation requires a preceding posttranslational modification either by proteolysis or by aminopeptidation—modifications whose targets are themselves poorly characterized. Finally, the situation is further complicated by the cases where arginylation results in decreased metabolic stability, making at least some arginylated proteins less abundant in vivo. Thus, despite numerous evidence of protein arginylation in vivo[9–14] and multiple attempts to identify arginylated proteins, no systematic progress has been made beyond identification of several targets[7, 15–19]. In this study we utilize the arginyltransferase knockout( Ate1−/−) mouse model[5] and use two-dimensional( 2D) gel fractionation, metabolic labeling by Arg, immunoaffinity chromatography, and mass spectrometry( MS) to estimate the entire complexity of arginylation, identify key proteins arginylated in vivo, and gain insights into the biological functions of arginylation as a posttranslational modification. Our analysis reveals that a vast number of proteins, on the order of hundreds, are potentially arginylated in vivo. While some of these proteins are possibly metabolically destabilized upon arginylation, a much larger number of proteins are not. We find that in addition to N-terminal Asp, Glu, and Cys, proteins can be arginylated on other N-terminally exposed amino acid residues, including one identified case of arginylation on N-terminal Met. We further find that proteins arginylated in vivo fall into several distinct functional groups with roles in cytoskeleton, transcriptional regulation, and general metabolic pathways. These proteins are arginylated in vivo on a limited number of sites predicted to affect their key functional properties. Our results suggest that protein arginylation plays a general regulatory function in vivo by modulating the structure of proteins and affecting their intracellular functions.\nDiscussion:\nIn this study we report a systematic characterization of posstranslational protein arginylation and identification of 43 proteins arginylated in vivo in different mouse tissues. Our analysis shows that( 1) arginylation can occur on almost any N-terminally exposed residue without an apparent bias;( 2) arginylation occurs on a highly limited and conserved number of sites in each protein located on the surface of the assembled molecule, which suggests arginylation target recognition by the secondary or higher order structure; and( 3) arginylation is functionally associated with proteins involved in cytoskeleton and primary metabolic pathways. It has been previously suggested that arginylation can occur only on N-terminally exposed Asp, Glu, and Cys[24] and that such arginylation should result in a decrease in metabolic stability and increase in protein degradation by the ubiquitin-dependent N-end rule pathway[22]. Our study expands this view and demonstrates that arginylation can also occur on other residues and that in many, apparently the majority of cases, it does not result in protein degradation. It has been previously shown that beta actin regulation by arginylation involves structural rearrangements of actin polymers without an effect on its metabolic stability[8]. 
The current study is in agreement with this finding and suggests that N-terminal arginylation regulates structural properties of a number of other proteins. It should be noted that since our analysis by its nature was biased to proteins abundant( and therefore not degraded) in the wild-type extracts, those proteins that are metabolically destabilized upon arginylation are not a part of the current study, and the conclusions presented here do not take such proteins into account. Therefore, we must acknowledge the possibility that another subset of arginylated proteins exists in vivo that follow different rules than the proteins described here. We expect, however, that some similarities should exist that would be common to the arginylation reaction and its function in vivo. Since most of the proteins in our analysis were found to be arginylated on the residues internal to the initiator Met, it is clear that for such proteins arginylation involves preceding cleavage by proteases or multiple aminopeptidases, or occurs on the side chains, where chemically possible. It has been previously shown that Glu residue can be arginylated on the side chain[2], which suggests that Asp—which has a similar side chain—can also be arginylated by the same mechanism. Stretching this analogy further, it is also possible to suggest that Asn and Gln can be arginylated with or without deamidation, that lysine( Lys) can accept Arg onto its side chain amino group, and that serine, threonine, and Cys can be arginylated on the side chains after oxidation. The finding that a large subset of arginylation sites are either exposed on the molecule surface or located at a clear boundary between two protein domains suggests an interesting possibility of protein regulation by arginylation( Figure 7). While it is possible that in some cases arginylation occurs after degradatory proteolysis of damaged proteins( left branch of the pathway in Figure 7), it is notable that apparently only a relatively small fraction of the peptides in vivo are arginylated, suggesting that the specificity for the arginylated sites must be fairly high. In fact, as shown here, this specificity may involve structural motifs on the protein surface and amino acids within the polypeptide located on the N-terminal side of the arginylated residues( Figure S1), suggesting that the recognition of the arginylated sites occurs prior to proteolytic cleavage. It is also notable that, based on the known three-dimensional structures, the majority of the arginylated sites are accessible for arginylation on the surface of the intact protein molecules or subunits( as seen in the cases shown in Figures 5 and 6 and observed for all other analyzed structures) or after an interdomain cleavage that would be likely to produce functional protein fragments( middle branch of the pathway in Figure 7). Based on the facts that recognition of the arginylated sites appears to precede proteolytic cleavage and that such sites in many cases are found in a limited number of surface sites, we propose a new mechanism for protein regulation that would involve limited proteolysis by “snipping” the polypeptide chain without breaking down the tertiary structure of the molecule, and the subsequent arginylation( right branch of the pathway in Figure 7). It seems possible that such proteolysis could happen in vivo without breaking down the quaternary structure of the protein molecule, held together by noncovalent interactions or covalent crosslinking of the amino acid side chains[25]. 
In fact, internal arginylation sites are found in proteins( shown in Tables 1 and S3) that are excised from the SDS gel spots of their expected molecular weight, suggesting that covalent crosslinking in vivo is holding them together after internal cleavage and arginylation. This possibility constitutes an exciting direction of further study. Additional levels of regulation of protein arginylation could be imagined. First, it is possible that for those substrates that are arginylated after denaturation and cleavage, addition of N-terminal Arg causes their degradation, as has been suggested by the N-end rule studies. Such a mechanism would involve massive proteolysis in wild-type cells and accumulation of the corresponding targets in the Ate1−/− cells, which could account for a subset of arginylation-related changes observed in our analysis. Based on our data and the work of other groups, there is currently no solid evidence that this mechanism regulates a large number of arginylated proteins, but we cannot exclude the possibility that some of the arginylated targets are indeed destabilized, in a specific or nonspecific way. A second possibility of arginylation regulation concerns competition with other posttranslational modifications for the free N-termini. It has been previously shown that a majority of eukaryotic proteins are acetylated in vivo[26, 27]. It seems likely that acetylation and arginylation reactions, which can occur on the same alpha amino groups of the same residues, are in competition in vivo and that the fractions of acetylated versus arginylated forms of each protein are regulated by this competition. Such a situation most likely happens in the case of beta actin, which is found in vivo in both acetylated and arginylated forms[8]. Finally, regulation of arginylation is likely to involve de-arginylation enzymes that should cleave off N-terminal Arg to regulate the fraction of arginylated protein versus its non-arginylated form in important metabolic pathways. A likely candidate for such a role is aminopeptidase B[28], which selectively removes Lys and Arg from the N-terminus of certain regulatory peptides and could feasibly perform a similar function for other arginylated proteins. It is also possible that other de-arginylation enzymes with broad or narrow specificity exist in vivo and may be involved in regulation of arginylation. The results of this study suggest that arginylation is a global form of regulation of activity, abundance, and structure of multiple proteins in vivo, reminiscent of protein phosphorylation, and that it involves its own machinery of arginyltransferases, cofactors, and de-arginylation enzymes that is responsible for controlling vital physiological processes. Like protein phosphorylation, arginylation is likely to regulate different proteins in different ways that would modulate their assembly and structure( as seen in the case of beta actin), metabolic stability( as seen with regulators of G-protein signaling RGS4, RGS5, and RGS16), or other properties that facilitate their intracellular functions and roles in cardiovascular development, angiogenesis, and other important biological processes. Studies of this modification constitute an exciting emerging field.",
"role": "user"
},
{
"content": "A common cellular mechanism for the regulation of proteins , once they have been translated from mRNA , is the addition and removal of chemical groups via enzymatic reactions . The posttranslational addition of arginyl groups is critical for the embryonic development and survival of an organism , but the molecular effects and the identity of proteins arginylated in vivo are largely unknown . We developed a technique to screen large numbers of proteins for this modification and identified 43 proteins arginylated in vivo on highly specific sites . Arginylation can occur on any exposed residue at the N-terminus of a protein and appears to require a specific structural recognition motif on the protein surface . It preferentially affects a number of physiological systems , including cytoskeleton and primary metabolic pathways and seems to be a general mechanism for regulation of protein structure and function . Our data provide insights into the previously unknown arginylation-dependent mechanisms of the regulation of embryonic development .",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.20410 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nCoronavirus disease (COVID-19) is caused by SARS-CoV-2 and has spread rapidly and caused a global pandemic. Knowledge about clinical and laboratory manifestations in the pediatric population is necessary to guide and monitor such patients. A 3-year-old female patient diagnosed with COVID-19 presented with high fever. After defervescence, she experienced a maculopapular rash that worsened by the sixth day of the disease with self-limited evolution without relevant laboratory changes. The identification of rashes in children with COVID-19 is an unusual and important condition that must be recognized in view of the high transmissibility shown.",
"role": "user"
},
{
"content": "[\"Case Report\"]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.16857 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe novel coronavirus, SARS-CoV-2, causes much more severe disease in adults than in children. Although it is anticipated that immune compromised children and children with cancer may be at higher risk of developing severe or fatal COVID-19, there are no currently published reports of fatal disease in a child with cancer. Because of the discrepancy in disease severity between adult and pediatric patients, we report the case of an adolescent with pulmonary metastatic osteosarcoma who died of COVID-19 early in the course of the pandemic in New York City in the hope that heightening awareness that pulmonary metastatic disease may predispose to a more severe outcome will increase surveillance in this vulnerable population.",
"role": "user"
},
{
"content": "[\"Case Report\"]",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.217 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nMagnitude-based pruning is one of the simplest methods for pruning neural networks.\nDespite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures.\nBased on the observation that the magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization.\nOur experimental results demonstrate that the proposed method consistently outperforms the magnitude pruning on various networks including VGG and ResNet, particularly in the high-sparsity regime.\nThe \"magnitude-equals-saliency\" approach has been long underlooked as an overly simplistic baseline among all imaginable techniques to eliminate unnecessary weights from over-parametrized neural networks.\nSince the early works of LeCun et al. (1989) ; Hassibi & Stork (1993) which provided more theoretically grounded alternative of magnitude-based pruning (MP) based on second derivatives of loss function, a wide range of methods including Bayesian / information-theoretic approaches (Neal, 1996; Louizos et al., 2017; Molchanov et al., 2017; Dai et al., 2018) , pregularization (Wen et al., 2016; Liu et al., 2017; Louizos et al., 2018) , sharing redundant channels (Zhang et al., 2018; Ding et al., 2019) , and reinforcement learning approaches (Lin et al., 2017; Bellec et al., 2018; He et al., 2018) has been proposed as more sophisticated alternatives.\nOn the other hand, the capabilities of MP heuristics are gaining attention once more.\nCombined with minimalistic techniques including iterative pruning (Han et al., 2015) and dynamic reestablishment of connections (Zhu & Gupta, 2017) , a recent large-scale study by Gale et al. 
(2019) claims that MP can achieve a state-of-the-art trade-off of sparsity and accuracy on ResNet-50.\nThe unreasonable effectiveness of magnitude scores often extends beyond the strict domain of network pruning; a recent experiment by Frankle & Carbin (2019) suggests an existence of an automatic subnetwork discovery mechanism underlying the standard gradient-based optimization procedures of deep, overparametrized neural networks by showing that the MP algorithm finds an efficient trainable subnetwork.\nThese observations constitute a call to revisit the \"magnitude-equals-saliency\" approach for a better understanding of deep neural network itself.\nAs an attempt to better understand the nature of MP methods, we study a generalization of magnitude scores under a functional approximation framework; by viewing MP as a relaxed minimization of distortion in layerwise operators introduced by zeroing out parameters, we consider a multi-layer extension of the distortion minimization problem.\nMinimization of the newly suggested distortion measure which 'looks ahead' the impact of pruning on neighboring layers gives birth to a novel pruning strategy, coined lookahead pruning (LAP).\nIn this paper, we focus on comparison of the proposed LAP scheme to its MP counterpart.\nWe empirically demonstrate that LAP consistently outperforms the MP under various setups including linear networks, fully-connected networks, and deep convolutional and residual networks.\nIn particular, the LAP consistently enables more than ×2 gain in the compression rate of the considered models, with increasing benefits under the high-sparsity regime.\nApart from its performance, the lookahead pruning method enjoys additional attractive properties: • Easy-to-use: Like magnitude-based pruning, the proposed LAP is a simple score-based approach agnostic to model and data, which can be implemented by computationally light elementary tensor operations.\nUnlike most Hessian-based methods, LAP does not rely on an availability of training data except for the retraining phase.\nIt also has no hyper-parameter to tune, in contrast to other sophisticated training-based and optimization-based schemes.\n• Versatility: As our method simply replaces the \"magnitude-as-saliency\" criterion with a lookahead alternative, it can be deployed jointly with algorithmic tweaks developed for magnitudebased pruning, such as iterative pruning and retraining (Han et al., 2015) or joint pruning and training with dynamic reconnections (Zhu & Gupta, 2017; Gale et al., 2019) .\nThe remainder of this manuscript is structured as follows: In Section 2, we introduce a functional approximation perspective toward MP and motivate LAP and its variants as a generalization of MP for multiple layer setups; in Section 3 we explore the capabilities of LAP and its variants with simple models, then move on to apply LAP to larger-scale models.\nIn this work, we interpret magnitude-based pruning as a solution to the minimization of the Frobenius distortion of a single layer operation incurred by pruning.\nBased on this framework, we consider the minimization of the Frobenius distortion of multi-layer operation, and propose a novel lookahead pruning (LAP) scheme as a computationally efficient algorithm to solve the optimization.\nAlthough LAP was motivated from linear networks, it extends to nonlinear networks which indeed minimizes the root mean square lookahead distortion assuming i.\nτ fraction in all fully-connected layers, except for the last layer where we use (1 + q)/2 
instead.\nFor FCN, we use (p, q) = (0, 0.5).\nFor Conv-6, VGGs ResNets, and WRN, we use (0.85, 0.8).\nFor ResNet-{18, 50}, we do not prune the first convolutional layer.\nThe range of sparsity for reported figures in all tables is decided as follows: we start from τ where test error rate starts falling below that of an unpruned model and report the results at τ, τ + 1, τ + 2, . . . for FCN and Conv-6, τ, τ + 2, τ + 4, . . . for VGGs, ResNet-50, and WRN, and τ, τ + 3, τ + 6, . . . for ResNet-18.\nIn this section, we show that the optimization in Eq. (3) is NP-hard by showing the reduction from the following binary quadratic programming which is NP-hard (Murty & Kabadi, 1987) :",
"role": "user"
},
{
"content": "We study a multi-layer generalization of the magnitude-based pruning.",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.91 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nWe present a method for policy learning to navigate indoor environments.\nWe adopt a hierarchical policy approach, where two agents are trained to work in cohesion with one another to perform a complex navigation task.\nA Planner agent operates at a higher level and proposes sub-goals for an Executor agent.\nThe Executor reports an embedding summary back to the Planner as additional side information at the end of its series of operations for the Planner's next sub-goal proposal.\nThe end goal is generated by the environment and exposed to the Planner which then decides which set of sub-goals to propose to the Executor.\nWe show that this Planner-Executor setup drastically increases the sample efficiency of our method over traditional single agent approaches, effectively mitigating the difficulty accompanying long series of actions with a sparse reward signal.\nOn the challenging Habitat environment which requires navigating various realistic indoor environments, we demonstrate that our approach offers a significant improvement over prior work for navigation.\nThe ability to model and understand the world at a high-level is crucial for performing complex tasks in real world environments.\nPart of this high-level understanding involves the ability to divide and plan out tasks that are complicated and have long time horizons into more manageable subtasks.\nFor example, when navigating to a new location, we typically break the task down into a set of manageable directions (i.e. drive along a certain road until a familiar landmark before taking a turn).\nImbuing machines with this ability of creating abstractions for long and complex tasks is an active area of research known as hierarchical learning (Sutton et al., 1998; 1999) .\nResearch for navigation has recently seen a rejuvenation due to the advent of learning-based approaches Parisotto & Salakhutdinov, 2017; Henriques & Vedaldi, 2018) .\nEmbodied learning-based approaches have shown some appealing properties over classical approaches such as being able to operate in complex environments with limited sensor data (Savva et al., 2019; Mishkin et al., 2019) .\nHowever, there is a need for the ability to plan across long time horizons with sparse reward signals.\nThis in effect, causes limitations such as the inability to overcome small obstacles when navigating towards a given goal and the requirement of invoking the environment a large number of times for any meaningful learning to occur (Le et al., 2018) .\nWorks which have combined hierarchical reinforcement learning with imitation learning have shown promising results (Das et al., 2018b; Le et al., 2018) , by leveraging expert trajectories with policy sketches (Andreas et al., 2017) , which are less expensive to obtain; however these sketches still require annotation of the environment.\nIn this work, we study such hierarchical control for the task of indoor navigation, whereby an embodied agent is randomly spawned within a novel and complex environment and must learn to navigate this environment through interaction (Das et al., 2018a) .\nWe address this challenging learning problem through a hierarchical policy approach, where two agents are cooperatively trained together.\nEach agent performs a different role, where one agent acts as a Planner, learning how to propose good sub-goals to an Executor agent, which acts at the low level to achieve these sub-goals (Fig. 
1) .\nIn contrast to existing hierarchical policy learning approaches, communication between our two agents is two-way, where the Executor provides the Planner with a summary of its series of actions and recent observations.\nThis aids the Planner in deciding the next sub-goal with additional side Figure 1 : Our PLEX framework adopts a hierarchical policy approach, where a Planner proposes sub-goals for an Executor to act upon within an environment.\nThe Planner receives an egocentric, top-down view with the target location and an embedding summary provided by the Executor.\nThe Executor receives visual sensory data (i.e. colour and depth) as its input and a sub-goal provided by the Planner.\nOur method reduces the need for long-term planning and addresses the known sample inefficiency problem accompanying memory models within deep reinforcement learning approaches.\ninformation provided by the Executor.\nTo this end, we propose PLEX, a planning and executing learning framework which offers the following contributions:\n• A hierarchical reinforcement learning approach where two agents specialise on different tasks but are jointly trained by sharing information • We demonstrate both theoretically and empirically that our method benefits from significantly improved sample efficiency as the time horizon is distributed between the Planner and Executor • By extension, our approach mitigates problems prevalent in long-horizon planning, especially those adopting LSTM (Hochreiter & Schmidhuber, 1997) planning approaches\nIn this work, we present a hierarchical reinforcement learning approach for solving PointGoal navigation tasks.\nOur proposed approach uses a cooperative learning strategy in which two agents, an Executor and a Planner are jointly learned to solve this task.\nThis is enabled through a two-way communication channel established between the two agents through the use of an Executor Latent Information vector provided by the Executor and sub-goals generated by the Planner.\nWe motivate the use of this hierarchical approach both theoretically, as well as through empirical experiments which demonstrate a significant improvement in sampling efficiency of our approach, allowing our structured approach to perform significantly better on increasingly harder tasks when compared to baseline approaches.",
"role": "user"
},
{
"content": "We present a hierarchical learning framework for navigation within an embodied learning setting",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.22461 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nSparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdos--Renyi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrencies. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research.\n\nCited Abstract(s):\n[1]: We present lower bounds on the amount of communication that matrix multiplication algorithms must perform on a distributed-memory parallel computer. We denote the number of processors by P and the dimension of square matrices by n. We show that the most widely used class of algorithms, the so-called two-dimensional (2D) algorithms, are optimal, in the sense that in any algorithm that only uses O(n2 P) words of memory per processor, at least one processor must send or receive Ω(n2 P1 2) words. We also show that algorithms from another class, the so-called three-dimensional (3D) algorithms, are also optimal. These algorithms use replication to reduce communication. We show that in any algorithm that uses O(n2 P2 3) words of memory per processor, at least one processor must send or receive Ω(n2 P2 3) words. Furthermore, we show a continuous tradeoff between the size of local memories and the amount of communication that must be performed. The 2D and 3D bounds are essentially instantiations of this tradeoff. We also show that if the input is distributed across the local memories of multiple nodes without replication, then Ω(n2) words must cross any bisection cut of the machine. All our bounds apply only to conventional Θ(n3) algorithms. They do not apply to Strassen's algorithm or other o(n3) algorithms.\n\n[2]: Matrix multiplication algorithms for cube connected and perfect shuffle computers are presented. It is shown that in both these models two @math matrices can be multiplied in @math time when @math , @math , processing elements (PEs) are available. When only @math , @math , PEs are available, two @math matrices can be multiplied in @math time. It is shown that many graph problems can be solved efficiently using the matrix multiplication algorithms.\n\n[3]: Extra memory allows parallel matrix multiplication to be done with asymptotically less communication than Cannon's algorithm and be faster in practice. \"3D\" algorithms arrange the p processors in a 3D array, and store redundant copies of the matrices on each of p1 3 layers. \"2D\" algorithms such as Cannon's algorithm store a single copy of the matrices on a 2D array of processors. We generalize these 2D and 3D algorithms by introducing a new class of \"2.5D algorithms\". 
For matrix multiplication, we can take advantage of any amount of extra memory to store c copies of the data, for any c ∈ 1, 2,..., ⌊p1 3⌋ , to reduce the bandwidth cost of Cannon's algorithm by a factor of c1 2 and the latency cost by a factor c3 2. We also show that these costs reach the lower bounds, modulo polylog(p) factors. We introduce a novel algorithm for 2.5D LU decomposition. To the best of our knowledge, this LU algorithm is the first to minimize communication along the critical path of execution in the 3D case. Our 2.5D LU algorithm uses communicationavoiding pivoting, a stable alternative to partial-pivoting. We prove a novel lower bound on the latency cost of 2.5D and 3D LU factorization, showing that while c copies of the data can also reduce the bandwidth by a factor of c1 2, the latency must increase by a factor of c1 2, so that the 2D LU algorithm (c = 1) in fact minimizes latency. We provide implementations and performance results for 2D and 2.5D versions of all the new algorithms. Our results demonstrate that 2.5D matrix multiplication and LU algorithms strongly scale more efficiently than 2D algorithms. Each of our 2.5D algorithms performs over 2X faster than the corresponding 2D algorithm for certain problem sizes on 65,536 cores of a BG P supercomputer.\n\n[4]: In this paper, we give a straight forward, highly efficient, scalable implementation of common matrix multiplication operations. The algorithms are much simpler than previously published methods, yield better performance, and require less work space. MPI implementations are given, as are performance results on the Intel Paragon system.",
"role": "user"
},
{
"content": "We also mention that there has been significant research devoted to dense matrix multiplication in distributed-memory settings. In particular, the development of so-called 3D algorithms for dense matrix multiplication spans multiple decades; see [2] [1] [4] [3] and the references therein. Many aspects of our 3D algorithm for sparse matrix multiplication are derived from the dense case, though there are important differences as we detail below.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.315 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nClomipramine versus haloperidol in the treatment of autistic disorder : a double-blind , placebo-controlled , crossover study . Clomipramine , haloperidol , and placebo were compared with baseline in the treatment of autism , and overall outcome , specific symptoms , and side effects were examined . It was hypothesized that clomipramine would be better tolerated than haloperidol and prove superior on a measure of stereotypy . Individuals with a DSM-IV diagnosis of autistic disorder ( mean age , 16.3 years ; range , 10-36 years ) were randomly assigned , by using a Latin square design , to the following 7-week trials : placebo , clomipramine ( mean daily dose , 128.4 mg ; range , 100-150 mg ) , or haloperidol ( mean daily dose , 1.3 mg ; range , 1-1.5 mg ) . Data on 36 subjects were analyzed and taken together ; the results favored haloperidol . In those patients who were able to complete a full therapeutic trial , clomipramine proved comparable to haloperidol in terms of improvement compared with baseline . However , significantly fewer individuals receiving clomipramine versus haloperidol were able to complete the trial ( 37.5 % vs. 69.7 % , respectively ) for reasons related to both side effects and efficacy or behavior problems . In the intent-to-treat sample , which is perhaps more clinically relevant , only haloperidol proved superior to baseline on a global measure of autistic symptom severity , as well as specific measures for irritability and hyperactivity . Clomipramine did not seem more effective on a measure of stereotypy , nor was it better tolerated .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"haloperidol\", \"haloperidol .\", \"placebo\", \"clomipramine\", \"Clomipramine\", \"placebo-controlled\", \"Clomipramine , haloperidol\", \"Clomipramine versus haloperidol\"], \"Outcome\": [\"overall outcome\", \"improvement\", \"irritability and hyperactivity\", \"specific symptoms\", \"side effects\", \"autistic symptom severity\", \"efficacy\", \"behavior problems\"], \"Participant\": [\"36\", \"autism\", \"autistic disorder :\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.51580 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nmotivated by the algorithmic problem of searching large graphs , we study the expansion properties of real - world networks and the extent to which these properties can be exploited to better understand and facilitate decentralized search .\ncomplex networks of linked entities arise across diverse domains , from sociology ( e.g. social networks ) to biology ( e.g. neural networks , protein interactions ) to technological and information systems ( e.g. p2p networks , the web , power grids ) .\nscientists from disparate fields , especially within the past decade , have attempted to characterize both the structure and function of these networked systems .\nthe standard approach here has been to measure topological features of the graph representing the network and correlate them with functional or dynamic aspects of the network ( e.g. evolution of the network , the behavior of processes that occur over the network ) .\nfor instance , in their seminal paper on small - world networks , watts and strogatz @xcite showed that many real - world networks simultaneously exhibit short average path lengths and relatively high degrees of clustering .\nthey further showed that these features can facilitate spreading processes across a network ( e.g. the spread of a virus ) @xcite . in this work ,\nwe study a feature that has received comparatively less attention in the study of real - world - regular graphs and 2 ) the _ minimum _ ( not maximum ) expansion in these graphs @xcite . ]\nnetworks : _\nexpansion_. given a network @xmath0 where @xmath1 is the set of nodes and @xmath2 is the set of links among the nodes , the _ expansion _ of a set of nodes @xmath3 is a function of the number of nodes in @xmath4 to which @xmath5 is connected ( see section [ sec : preliminaries ] for more precise definitions ) .\nthat is , if @xmath6 is the set of nodes to which @xmath5 is connected , then the _ expansion _ of @xmath5 is @xmath7 .\ninformally , an expander graph is a graph in which any subset of nodes has good expansion ( i.e. has many neighbors ) @xcite .\nfor instance , a network is said to be a @xmath8-expander if every @xmath3 has an expansion of _ at least _\n@xmath8 , where @xmath9 @xcite .\nthus , the classical definition of an expander graph focuses on samples with the _ minimum _ expansion ( as @xmath8 is a minimum if every sample @xmath5 has an expansion of at least @xmath8 ) .\nexpander graphs have been shown to have many applications from constructing error correcting codes to routing calls in telephone switching networks @xcite .\nalthough these classical expander graphs are well - studied , there has been surprisingly less attention paid to studying the sample in a network with the _ maximum _ expansion , and it is this in which we are most interested .\nour overall aim in this work is to investigate the extent to which a vertex set with high expansion can be leveraged to both understand and facilitate efficient decentralized search in networks .\nwe now describe the problem of decentralized search .\na central algorithmic problem in the study of complex networks is how to efficiently search them in a decentralized manner .\nthis scenario arises in many practical applications from querying peer - to - peer file sharing networks to focused web crawling @xcite . 
starting from some initial source node\n, we must locate , access , or route a message to some other target node in the network . without full knowledge of the global network topology , we are unable to simply compute the shortest path or access the target node directly .\nthus , we must hop from node to node until the target node is found .\ndecentralized search , then , is related to information diffusion or dissemination , as an efficient search will involve efficiently disseminating a query message to large portions of the network . in this work\n, we study the effect of network expansion on decentralized search .\nif , using only local information , nodes are visited in such a way that their overall distance is close to many other nodes ( i.e. the set of visited nodes has high expansion ) , then the efficiency of search might be improved . moreover ,\nthe magnitude of expansion in a network may shed light on the extent to which a network can be efficiently searched by _ any _ search algorithm .\nthese are precisely the questions we investigate here .\nwe rigorously investigate decentralized search across a span of networks much wider and more diverse than previously studied in work on searching graphs .\nour main contributions include the following : * borrowing from concepts in expander graphs , we introduce the concept of _ expansion signatures _ , which concisely captures the overall expansion properties of a network .\nwe find that , in many networks , relatively small samples of nodes can exhibit significantly high expansion .\nhowever , we also find that there are a few networks for which small samples with high expansion may not exist .\n* we propose an expansion - based , decentralized search strategy , which explicitly tries to locate these samples with high expansion in order to quickly discover the most nodes in a search .\nwe evaluate a number of search algorithms such as degree - based searches , breadth - first searches , and random walks .\nwe show that an expansion - based search strategy generally outperforms others . *\nwe demonstrate that _ expansion signatures _ correctly infer the extent to which a network can be efficiently searched ( which we refer to as `` searchability '' ) .\nmoreover , we show that it is the maximum expansion in a network , rather than the minimum expansion , that contributes most to searchability and information dissemination in a network . at the same time\n, we find that standard graph - theoretic measures , such as average path length , _ fail _ to fully explain the extent to which a network is easily searched .\nthe last point is our most significant finding .\nexisting works have mostly studied the effect of _ minimum _ expansion on the ease of dissemination in a network ( e.g. @xcite ) .\nother works have focused on the effect of standard graph properties such as average path length on searchability and ease of dissemination ( e.g. @xcite ) .\nfor the first time , we show that it is the _ maximum _ expansion ( rather than minimum expansion ) that most affects efficient searchability . our results , then , offer a more comprehensive picture of decentralized search and information diffusion in networks than has previously been appreciated .\ninterestingly , one of the first experiments on decentralized search in networks was the famous chain - letter study by the social psychologist stanley milgram @xcite . 
in this experiment , participants were given the name , address , and occupation of an unknown target person and told to forward a chain letter to this person by passing the letter on to a single acquaintance meeting two main conditions : 1 ) the acquaintance must be someone with whom the individual knew on a first - name basis and 2 ) the acquaintance chosen should be the one perceived as closest to the target @xcite .\nthis study not only provided some evidence for short paths in social networks ( the median path length between sources and targets was @xmath10 among letters that reached their destination ) , but also showed that individuals were able to collectively discover these short paths _ without _ full knowledge of the network @xcite .\nkleinberg @xcite later modeled this problem algorithmically using a 2-dimensional grid with probabilistically - added long - range connections and shed light on the precise conditions that these short paths were discoverable using a decentralized search algorithm . in both the works of milgram @xcite and kleinberg @xcite , although nodes had no knowledge of global network connectivity , there was , in fact , _ external _ knowledge that aided searches . in the milgram experiment , for instance\n, individuals were instructed to forward letters to the local acquaintance that was perceived as being closest to the target - as measured by geographic or occupational similarity in many cases .\nthus , external knowledge of geographical distance and occupational similarity was employed as an aid in the search heuristic . in other words\n, there was knowledge ( or , at least , an assumption ) that individuals closer geographically or more similar occupationally are more likely to know each other .\nliben - nowell et al .\n@xcite , in fact , showed some evidence for the geographical basis of online friendships in the livejournal social network . in kleinberg s model\nalso , it is assumed that message holders have access to external knowledge and know the local contacts of _ all _ nodes in the network ( i.e. nodes are aware of the manhattan distances between all nodes in the underlying grid structure and use this information as a forwarding heuristic ) .\nboguna et al .\n@xcite have recently modeled this external information as a hidden metric space .\nunfortunately , in many real - world scenarios , such as unstructured peer - to - peer file sharing networks , these types of external information and similarity - based heuristics are unavailable as search aids , and the problem of decentralized search becomes even more challenging .\ntypical approaches here resort to variations on flooding the network ( which can be unscalable ) , random walks ( which may be less effective in finding information ) , or imposing structure on the network to improve searchability ( which requires additional overhead ) @xcite . for\na review of decentralized search both in the contexts of complex networks and specifically p2p , one may refer to @xcite .\nour focus in this work is to investigate efficient search on networks with _ arbitrary _ structure in which similarity - based heuristics ( e.g. 
geographic distance ) are unavailable .\nfor the first time , we investigate the relationship between _ expansion _ and the extent to which a network is efficiently searchable .\nwe further show that a search strategy based on expansion generally outperforms typical existing approaches such as random walks and flooding - based techniques .\nwe begin a discussion of our work with some preliminaries .\nwe now briefly describe some notations and definitions used throughout this paper .\n[ defn : network ] @xmath11 is an undirected _ network _ or _ graph _ where @xmath1 is a set of vertices ( or nodes ) and @xmath12 is a set of edges ( or links between the nodes ) .\nwe will use the terms _ network _ and _ graph _ interchangeably .\n[ defn : sample ] a _ sample _ @xmath5 is a subset of vertices , @xmath3 .\n[ defn : neighborhood ] @xmath6 is the _ neighborhood _ of @xmath5 if @xmath13\n. the _ neighborhood _ may also be referred to as the _ frontier _ of a sample @xmath5 .\n[ defn : expansion ] the _ expansion _ of a sample @xcite . ]\n@xmath5 is : @xmath14 [ defn : expanderset ] the _ maximum expander set _ of size @xmath15 is a sample @xmath5 of size @xmath15 with the maximal expansion : @xmath16 [ defn : expansionquality ] the _ expansion quality _ of a sample @xmath5 is the normalized . ]\n_ expansion _ : @xmath17 .\nnotice that , given a sample @xmath5 , the maximum possible expansion on _ any _ network of @xmath18 nodes is : @xmath19 .\nthe _ expansion quality _ @xmath20 , then , captures the extent to which a sample achieves this maximum possible expansion .\na score of @xmath21 indicates that the sample `` touches '' or is one hop away from every other node in the network .\nwe study expansion and search in a total of ten different networks : two random graph models , a neural network , a power grid , a co - authorship network , an email network , a citation network , a p2p file - sharing network , and two online social networks .\nit should be noted that not all of these networks may require efficient decentralized search ( e.g. a co - authorship network , the neural network of a worm ) .\nnevertheless , these datasets represent a rich set of diverse networks from different domains .\nthis allows us to more comprehensively study network expansion and thoroughly assess the performance of decentralized search strategies in the face of varying network topologies .\ntable [ tab : datasets ] shows characteristics of each network .\nwe now describe each dataset . * erdos - renyi model .\n* one of the first random graph models proposed was that of erdos and renyi @xcite .\nthe erdos - renyi @xmath22 model produces a random graph of @xmath23 nodes with each of the @xmath24 possible edges existing with probability @xmath25 .\nerdos - renyi graphs exhibit the short average path lengths found in many real - world networks , but lack the high clustering and skewed ( or heavy - tailed ) degree distributions found in reality . * barabasi - albert model . 
*\nthe barabasi - albert model follows a more , realistic generative process than previous models : the preferential attachment model @xcite .\na graph of @xmath23 nodes is grown in a sequential fashion .\neach subsequent node of @xmath26 edges is preferentially attached to previously added nodes with high degree ( where the `` degree '' of a node is the number of neighbors ) .\ngraphs generated by this model exhibit skewed , power law degree distributions and short average path lengths , but lack the high clustering found in real networks .\n( skewed degree distributions are ones in which there are many nodes with low connectivity and a few nodes with high connectivity that act as hubs .\na power law distribution is one such example . ) * c. elegans neural network * is the neural network of the c. elegans worm @xcite .\n* power grid .\n* this technological network represents the power grid of the western united states @xcite .\n* condmat .\n* this is a co - authorship network of scientists publishing in arxiv cond - mat ( i.e. the condensed matter physics category ) from the e - print archive , arxiv.org @xcite .\n* enron emails * is the network comprised of email communications among enron employees @xcite .\n* hepph * is a citation network between papers in arxiv hep - ph ( high energy physics phenomenology ) from the e - print archive , arxiv.org @xcite .\n* gnutella .\n* this network is an august 31st , 2002 snapshot of the gnutella peer - to - peer file - sharing network .\nnodes represent hosts and edges represent connections among the hosts @xcite .\n* epinions * is a trust - based online social network of the consumer review site , epinions.com @xcite .\n* slashdot * is an online social network of the technology news site , slashdot.com @xcite .\n[ th ] .network properties . *\nkey : * _ _ n= # of nodes , d= density , pl = characteristic path length , cc = clustering coefficient , ad = average degree . _ _ [ cols=\"<,^,^,^,^,^\",options=\"header \" , ]\nwe have introduced the concept of _ expansion signatures _ and have used them to study the effect of expansion on decentralized search in networks .\nwe have shown that it is the magnitude of maximum expansion ( rather than minimum expansion ) that corresponds to the extent to which a network is efficiently searchable . moreover , we have shown that traditional graph properties such as average path length and skewed degree distributions fail , by themselves , to fully explain the level of searchability in a network .\nfinally , we have shown that a search strategy based on maximizing expansion covers the network far better than some typical approaches to decentralized search . for future work\n, we plan to further investigate the interplay between expansion and various graph - theoretic properties and their effect on dissemination .\ns. schmid and r. wattenhofer . structuring unstructured peer - to - peer networks . in s.\naluru , m. parashar , r. badrinath , and v. k. prasanna , editors , _ hipc 2007 _ , volume 4873 of _ lecture notes in computer science _ , chapter 40 , pages 432442 .\nspringer berlin heidelberg , berlin , heidelberg , 2007 .",
"role": "user"
},
{
"content": "borrowing from concepts in expander graphs , we study the expansion properties of real - world , complex networks ( e.g. social networks , unstructured peer - to - peer or p2p networks ) and the extent to which these properties can be exploited to understand and address the problem of decentralized search . we first produce samples that concisely capture the overall expansion properties of an entire network , which we collectively refer to as the _ expansion signature_. using these signatures , we find a correspondence between the magnitude of maximum expansion and the extent to which a network can be efficiently searched . \n we further find evidence that standard graph - theoretic measures , such as average path length , fail to fully explain the level of `` searchability '' or ease of information diffusion and dissemination in a network . \n finally , we demonstrate that this high expansion can be leveraged to facilitate decentralized search in networks and show that an expansion - based search strategy outperforms typical search methods . \n = 10000 = 10000 [ retrieval models ]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.18652 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe number of people infected with SARS-CoV-2, and sadly dying from COVID-19, has exploded, and so the amount of literature on the novel coronavirus and the disease it causes has increased proportionately. The case numbers in some countries are beyond the epidemic peak, but the uncertainty about a second wave keeps politicians and societies under pressure. Appropriate decision-making and winning support from the population depends on precise scientific information rather than leaving the field to scaremongers of all proveniences. This mini-review is an update of earlier reports (Brussow, Microb Biotechnol 2020a;13:607; Brussow, Microb Biotechnol 2020b; https://doi.org/10.1111/1751-7915.13592).",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.anat_em_ner | science.anat_em_ner.497 | [
{
"content": "Given this abstract, identify all unique entities of these types: [\"AnatomicalEntity\"].\n\nOnly output the JSON object, no other text.\n\nAbstract:\nGranulocyte-macrophage colony-stimulating factor and interleukin-3 enhance the incorporation of cytosine arabinoside into the DNA of leukemic blasts and the cytotoxic effect on clonogenic cells from patients with acute myeloid leukemia. \nIn the present study the effects of the 48-hour administration of granulocyte-macrophage colony-stimulating factor (GM-CSF) (100 U/mL) or interleukin-3 (IL-3) (100 U/mL) on the proliferative activity of leukemic cells and on the intracellular metabolism and cytotoxic efficacy of a subsequent 12-hour application of cytosine arabinoside (ara-C) at doses of 0.1, 1.0, 10.0, and 100.0 mumol/L were evaluated on bone marrow cells from 17 patients with acute myeloid leukemia. After GM-CSF or IL-3, a 1.2- to 2.4-fold increase in S-phase cells was observed in nine of 14 GM-CSF and seven of 11 IL-3 cases. 3H-Cytosine arabinoside incorporation into the DNA was enhanced 1.33- to 18.3-fold over respective controls in 14 of 17 patients. While in control specimens are ara-C dose-dependent increase in 3H-ara-C uptake was accompanied by a corresponding rise in intracellular ara-C-5' triphosphate (ara-CTP) levels, ara-CTP concentrations were not increased after GM-CSF or IL-3 exposure, resulting in a higher ara-C to ara-CTP ratio over controls. This finding may be explained by a stimulatory effect of GM-CSF and IL-3 on ara-C phosphorylating enzymes and a more rapid incorporation of ara-CTP into the DNA of leukemic blasts. These effects translated into a 2.2- to 229.0-fold increase in the cytotoxic activity of ara-C against clonogenic leukemic cells after GM-CSF or IL-3 pretreatment. Hence, GM-CSF and IL-3 enhance the intracellular metabolism of ara-C and its incorporation into the DNA of leukemic cells leading to a higher antileukemic activity of ara-C on clonogenic leukemic cells (CFU-L).",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"bone marrow cells\", \"clonogenic leukemic cells\", \"CFU-L\", \"leukemic blasts\", \"clonogenic cells\", \"intracellular\", \"leukemic cells\", \"acute myeloid leukemia\", \"cells\", \"specimens\"]}",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.467.2 | [
{
"content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Large Arabic Twitter Dataset on COVID-19\n\nThe 2019 coronavirus disease (COVID-19), emerged late December 2019 in China, is now rapidly spreading across the globe. At the time of writing this paper, the number of global confirmed cases has passed one million and half with over 75,000 fatalities. Many countries have enforced strict social distancing policies to contain the spread of the virus. This has changed the daily life of tens of millions of people, and urged people to turn their discussions online, e.g., via online social media sites like Twitter. In this work, we describe the first Arabic tweets dataset on COVID-19 that we have been collecting since March 1st, 2020. The dataset would help researchers and policy makers in studying different societal issues related to the pandemic. Many other tasks related to behavioral change, information sharing, misinformation and rumors spreading can also be analyzed.\n\nIntroduction\nOn December 31, 2019, Chinese public health authorities reported several cases of a respiratory syndrome caused by an unknown disease, which subsequently became known as COVID-19 in the city of Wuhan, China. This highly contagious disease continued to spread worldwide, leading the World Health Organization (WHO) to declare a global health emergency on January 30, 2020. On March 11, 2020 the disease has been identified as pandemic by WHO, and many countries around the world including Saudi Arabia, United States, United Kingdom, Italy, Canada, and Germany have continued reporting more cases of the disease BIBREF0. As the time of writing this paper, this pandemic is affecting more than 208 countries around the globe with more than one million and half confirmed cases BIBREF1.\nSince the outbreak of COVID-19, many governments around the world enforced different measures to contain the spread of the virus. The measures include travel restrictions, curfews, ban of mass gatherings, social distancing, and probably cities lock-down. This has impacted the routine of people around the globe, and many of them have turned to social media platforms for both news and communication. Since the emergence of COVID-19, Twitter platform plays a significant role in crisis communications where millions of tweets related to the virus are posted daily. Arabic is the official language of more than 22 countries with nearly 300 million native speakers worldwide. Furthermore, there is a large daily Arabic content in Twitter as millions of Arabic users use the social media network to communicate. For instance, Saudi Arabia alone has nearly 15 million Twitter users as of January, 2020 BIBREF2. Hence, it is important to analyze the Arabic users' behavior and sentiment during this pandemic. Other Twitter COVID-19 datasets have been recently proposed BIBREF3, BIBREF4 but with no significant content for the Arabic language.\nIn this work, we provide the first dataset dedicated to Arabic tweets related to COVID-19. The dataset is available at https://github.com/SarahAlqurashi/COVID-19-Arabic-Tweets-Dataset. We have been collecting data in real-time from Twitter API since January 1, 2020, by tracking COVID-19 related keywords which resulted in more than 3,934,610 Arabic tweets so far. 
The presented dataset is believed to be helpful for both researchers and policy makers in studying the pandemic from social perspective, as well as analyzing the human behaviour and information spreading during pandemics.\nIn what follows, we describe the dataset and the collection methods, present the initial data statistics, and provide information about how to use the dataset.\n\nDataset Description\nWe collected COVID-19 related Arabic tweets from January 1, 2020 until April 15, 2020, using Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 million tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter’s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag. Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic.\nA summary over the dataset is given in Table TABREF3. While collecting data, we have observed that the number of retweets increased significantly in late March. This is likely due to the exponential increase in confirmed COVID-19 cases worldwide, including the Arabic speaking countries. A relatively small percentage of tweets were geotagged. Figure FIGREF6 presents the location of tweets observed as of 14 April 2020.\n\nDataset Access\nThe dataset is accessible on GitHub at this address: https://github.com/SarahAlqurashi/COVID-19-Arabic-Tweets-Dataset\nHowever, to comply with Twitter’s content redistribution policy, we are distributing only the IDs of the collected tweets. There are several tools (such as Hydrator) that can be used to retrieve the full tweet object. We also plan to provide more details on the pre-processing phase in the GitHub page.\n\nFuture Work\nWe are continuously updating the dataset to maintain more aspects of COVID-19 Arabic conversations and discussions happening on Twitter. We also plan to study how different groups respond to the pandemic and analyze information sharing behavior among the users.\n\nAcknowledgements\nThe authors wish to express their thanks to Batool Mohammed Hmawi for her help in data collection.\n\nTable 2: Hashtags used in collecting this dataset.\n\nTable 1: The list of keywords that we used to collect the tweets.\n\nTable 3: Summary statistics for the collected tweets\n\nFigure 1: The location of geotagged tweets\n\nQuestion: How big is the dataset?",
"role": "user"
},
{
"content": "{\"answer\": [\"more than 3,934,610 million tweets\"], \"evidence\": [\"We collected COVID-19 related Arabic tweets from January 1, 2020 until April 15, 2020, using Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 million tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter\\u2019s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag. Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic.\"]}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4676 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: one possible thing that can help is vitamin C.\n\nAbstract:\nOptimal nutrition can improve well-being and might mitigate the risk and morbidity associated with coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).\nThis review summarizes nutritional guidelines to support dietary counseling provided by dietitians and health-related professionals.\nThe majority of documents encouraged the consumption of fruits, vegetables, and whole grain foods.\nThirty-one percent of the guidelines highlighted the importance of minerals and vitamins such as zinc and vitamins C, A, and D to maintain a well-functioning immune system.\nDietary supplementation has not been linked to COVID-19 prevention.\nHowever, supplementation with vitamins C and D, as well as with zinc and selenium, was highlighted as potentially beneficial for individuals with, or at risk of, respiratory viral infections or for those in whom nutrient deficiency is detected.\nThere was no convincing evidence that food or food packaging is associated with the transmission of COVID-19, but good hygiene practices for handling and preparing foods were recommended.\nNo changes to breastfeeding recommendations have been made, even in women diagnosed with COVID-19.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"The majority of documents encouraged the consumption of fruits, vegetables, and whole grain foods.\", \"Thirty-one percent of the guidelines highlighted the importance of minerals and vitamins such as zinc and vitamins C, A, and D to maintain a well-functioning immune system.\", \"Dietary supplementation has not been linked to COVID-19 prevention.\", \"However, supplementation with vitamins C and D, as well as with zinc and selenium, was highlighted as potentially beneficial for individuals with, or at risk of, respiratory viral infections or for those in whom nutrient deficiency is detected.\"]}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.694 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Thesedatasuggestalossof80-85%ofvestibularsensoryneuronsinBDNFandtrkBnullmutants[13]andof77%ofvestibularsensoryneuronsinP4Brn3cnullmutants[3].",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4431 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: hydroxychloroquine could prevent or treat COVID-19.\n\nAbstract:\nBACKGROUND: Chloroquine and hydroxychloroquine have been found to be efficient on SARS-CoV-2, and reported to be efficient in Chinese COV-19 patients.\nWe evaluate the role of hydroxychloroquine on respiratory viral loads.\nPATIENTS AND METHODS: French Confirmed COVID-19 patients were included in a single arm protocol from early March to March 16th, to receive 600mg of hydroxychloroquine daily and their viral load in nasopharyngeal swabs was tested daily in a hospital setting.\nDepending on their clinical presentation, azithromycin was added to the treatment.\nUntreated patients from another center and cases refusing the protocol were included as negative controls.\nPresence and absence of virus at Day6-post inclusion was considered the end point.\nRESULTS: Six patients were asymptomatic, 22 had upper respiratory tract infection symptoms and eight had lower respiratory tract infection symptoms.\nTwenty cases were treated in this study and showed a significant reduction of the viral carriage at D6-post inclusion compared to controls, and much lower average carrying duration than reported of untreated patients in the literature.\nAzithromycin added to hydroxychloroquine was significantly more efficient for virus elimination.\nCONCLUSION: Despite its small sample size our survey shows that hydroxychloroquine treatment is significantly associated with viral load reduction/disappearance in COVID-19 patients and its effect is reinforced by azithromycin.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"CONCLUSION: Despite its small sample size our survey shows that hydroxychloroquine treatment is significantly associated with viral load reduction/disappearance in COVID-19 patients and its effect is reinforced by azithromycin.\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.9507 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThis paper studies the use of eye tracking in a First-Person Shooter (FPS) game as a mechanism to: (1) control the attention of the player’s avatar according to the attention deployed by the player; and (2) guide the gameplay and game’s procedural content generation, accordingly. This results in a more natural use of eye tracking in comparison to a use in which the eye tracker directly substitutes control input devices, such as gamepads. The study was conducted on a custom endless runner FPS, Zombie Runner, using an affordable eye tracker. Evaluation sessions showed that the proposed use of eye tracking provides a more challenging and immersive experience to the player, when compared to its absence. However, a strong correlation between eye tracker calibration problems and player’s overall experience was found. This means that eye tracking technology still needs to evolve but also means that once technology gets mature enough players are expected to benefit greatly from the inclusion of eye tracking in their gaming experience.\n\nCited Abstract(s):\n[1]: M. Perreira Da SilvaL3iUniversite de La Rochelle,Av Michel Crepeau, 17042La Rochelle, [email protected]. CourboulayL3iUniversite de La Rochelle,Av Michel Crepeau, 17042La Rochelle, [email protected]. PrigentL3iUniversite de La Rochelle,Av Michel Crepeau, 17042La Rochelle, [email protected]",
"role": "user"
},
{
"content": "Bearing the limitations of using eye tracking as a simple direct control input in mind, we propose to use it to control the attention of the player's avatar and the game's procedural content generation. This use of eye tracking is focused in mapping the mental state of the player and her his avatar, which we believe to be much more natural and useful than controlling a pointer with the eyes, which has no mapping to real life. Another alternative and interesting use of eye tracking, not tackled in this paper, is to know when to actively redirect the player's attention [1] .",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.11970 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nSpecific food intolerance: its place as a cause of gastrointestinal symptoms.\nAbstract 1:\nThirteen out of 49 patients suspected of having specific food intolerance after withdrawal and re introduction of specific foods , were further subjected to double blind placebo controlled food challenges . Only three of these subjects were thus shown to have proven specific food intolerance . Of the remaining 10 , nine were strong ' placebo reactors ' . The study suggests that a small number of patients with gastrointestinal symptoms have verifiable specific food intolerance but that a greater number have symptoms attributable to psychogenic causes\n\n\nTitle 2:\nPsychometric scores and persistence of irritable bowel after infectious diarrhoea.\nAbstract 2:\nBACKGROUND Although previous studies have shown that psychological disturbances are frequently associated with the irritable bowel syndrome ( IBS ) , the relation was not necessarily cause and effect . The development of chronic bowel symptoms after an episode of acute gastroenteritis has allowed us to examine prospect ively the role of psychological factors . METHODS Seventy-five patients with acute gastroenteritis completed a series of psychometric tests soon after admission to hospital . Of these , 22 had persistent symptoms compatible with IBS after the acute illness , and in 20 of these the symptoms were still present at six months . FINDINGS At the time of their initial illness , patients who subsequently developed IBS had higher scores for anxiety , depression , somatization and neurotic trait than those who returned to normal bowel function . The psychometric scores had not changed when remeasured after the acute illness . Lactose malabsorption was not an important factor . INTERPRETATION These results support the hypothesis that psychological factors are important in IBS\n\n\nTitle 3:\nFOOD HYPERSENSITIVITY IN IRRITABLE BOWEL SYNDROME\nAbstract 3:\nFood hypersensitivity as a cause of abdominal symptoms was investigated by means of exclusion diets and double-blind food provocation in patients with irritable bowel syndrome . Twenty-seven patients entered the study ; nineteen complied with dietary manipulation . Food hypersensitivity as a cause of their presenting symptoms was confirmed by double-blind food provocation in only three patients , who also had evidence of associated atopic disease and positive skin tests to common inhalant allergens . Evidence of minor psychiatric disorder was found in twelve of fourteen patients examined by an independent psychiatrist\n\n\nTitle 4:\nFOOD INTOLERANCE: A MAJOR FACTOR IN THE PATHOGENESIS OF IRRITABLE BOWEL SYNDROME\nAbstract 4:\nSpecific foods were found to provoke symptoms of irritable bowel syndrome ( IBS ) in 14 of 21 patients . In 6 patients who were challenged double blind the food intolerance was confirmed . No difference was detected in changes in plasma glucose , histamine , immune complexes , haematocrit , eosinophil count , or breath hydrogen excretion produced after challenge or control foods . Rectal prostagl and in E2 ( PGE2 ) , however , increased significantly , and in a further 5 patients rectal PGE2 correlated with wet faecal weight . Food intolerance associated with prostagl and in production is an important factor in the pathogenesis of IBS",
"role": "user"
},
{
"content": "Milk , wheat , and eggs were most frequently identified to cause symptom exacerbation ; of the foods identified the most common trait was a high salicylate content .\nFoods high in amines were also identified .\nStudies of diarrhea-predominant IBS identified a higher percentage of adverse food reactions .\nConclusion : Whether adverse reactions to foods are a key factor in exacerbating IBS symptoms or whether dietary manipulation is a valid treatment option is unclear .",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.6138 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: TheresultingMCADmonomerismissingtheC-terminaldomainalpha-helixesthatareresponsibleformakingintersubunitcontactstogeneratethefunctionalMCADhomotetramer.",
"role": "user"
},
{
"content": "[\"C\"]",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.4466 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nLong-term results regarding the use of recombinant interferon alpha-2b in the treatment of II type mixed essential cryoglobulinemia . Thirty-three patients with II type mixed essential cryoglobulinemia ( MEC ) were randomized into two groups : one to receive combined therapy including prednisone plus interferon , the other to receive prednisone therapy . Interferon was administered as induction treatment ( 3 Mu/day ) and then as maintenance therapy ( 3 Mu three times a week ) . 83 % of the combined therapy patients responded as opposed to 27 % of the prednisone treated patients . Among the patients that responded to combined therapy , nine of them had a complete response , four a partial response , and two a minor response . None of the patients treated with prednisone therapy responded completely but only two had a partial and two a minor response . Four patients ( three of combined therapy and one of prednisone therapy ) showed proteinuria before the treatment which improved at the end of the induction therapy . Ten patients showed anti-HCV positivity which remained unchanged after the treatment . Three patients showed liver involvement secondary to cryoglobulinemia and an improvement of histological pattern after the induction with combined therapy . One patient showed an improvement of peripheral neuropathy after induction with the combined therapy . These data suggest the effectiveness of interferon given as induction and as maintenance treatment in the therapy of II type mixed essential cryoglobulinemia .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"combined therapy .\", \"Interferon\", \"interferon\", \"combined therapy\", \"recombinant interferon alpha-2b\", \"combined therapy including prednisone plus interferon\", \"prednisone\"], \"Outcome\": [\"histological pattern\", \"proteinuria\", \"liver involvement\", \"peripheral neuropathy\", \"anti-HCV positivity\"], \"Participant\": [\"II type mixed essential cryoglobulinemia .\"]}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2747 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nPulmonary vein re-isolation for atrial fibrillation using duty-cycled phased radiofrequency ablation : safety and efficacy of a primary 2:1 bipolar/unipolar ablation mode . PURPOSE Pulmonary vein isolation ( PVI ) using phased radiofrequency ( RF ) energy has been shown to be effective in the treatment of paroxysmal atrial fibrillation ( AF ) . We characterize pulmonary vein ( PV ) conduction at repeat ablation in patients with AF after an initial successful PVI using phased RF technology and analyze the effects of a primary 2:1 ablation mode . METHODS AND RESULTS A primary 4:1 bipolar/unipolar mode in group A patients ( n = 22 ) was compared with a primary 2:1 mode in group B ( n = 22 ) acutely and during follow-up . Of all PVs , 81 % showed reconnection ( s ) ; 52 % of them had reconnected in all PVs . PVI was achieved in all patients without complications . Procedure and fluoroscopy times were shorter in group B ( 108 ± 15 vs. 126 ± 24 min and 17 ± 5 vs. 23 ± 7 min , respectively ) . This was attributed to a significant decrease of early PV reconnections within the first 30 min in 17 % of group B patients vs. 45 % of group A patients ( p < 0.001 ) . After 9.5 ± 4 months , recurrence of AF was detected in 5 of 22 patients ( 22.7 % ) in group A vs. 3 of 22 patients ( 13.6 % ) in group B ( p = 0.722 ) . CONCLUSIONS Phased RF energy applied by a 2:1 bipolar/unipolar mode seems safe and effective in redo-PVI procedures , resulting in a mid-term freedom from AF in 86.4 % . Significant shorter procedure and fluoroscopy times compared with a primary 4:1 ablation mode during repeat PVI are mainly attributed to a lower incidence of acutely reconnected PVs within the first 30 min .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"pulmonary vein ( PV ) conduction at repeat ablation\", \"phased radiofrequency ( RF ) energy\", \"Pulmonary vein isolation ( PVI )\", \"primary 2:1 bipolar/unipolar ablation mode .\", \"duty-cycled phased radiofrequency ablation :\"], \"Outcome\": [\"Procedure and fluoroscopy times\"], \"Participant\": []}",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.782 | [
{
"content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nCatalpol suppresses advanced glycation end-products-induced inflammatory responses through inhibition of reactive oxygen species in human monocytic THP-1 cells. Advanced glycation end-products (AGEs) play a pivotal role in the development of diabetic complications by inducing inflammation. We previously reported that the fresh roots of Rehmannia glutinosa Libosch., which have been used for the treatment of diabetes in traditional Korean medicine, also have the potential to suppress AGE-mediated inflammatory response in THP-1 cells. In the present study, we isolated catalpol from R. glutinosa, and examined whether it has anti-inflammatory effects on AGE-stimulated THP-1 cells. Catalpol reduced the expression of pro-inflammatory mediates, such as monocyte chemotactic protein-1 (MCP-1), tumor necrosis factor-α (TNF-α), inducible NO synthase (iNOS), and receptor for AGE (RAGE). Promoter and electromobility shift assays showed that transcriptional activation of NF-κB was significantly reduced by catalpol treatment, while AP-1 was not. Catalpol also suppressed AGE-induced phosphorylation of mitogen activated protein (MAP) kinases, degradation of IκBα and the nuclear localization of NF-κB. Moreover, the production of intracellular reactive oxygen species (ROS) elicited by AGE was also suppressed by catalpol treatment, through dual action of reducing ROS itself and inhibiting NADPH oxidase activity. Our findings indicate that catalpol suppresses AGE-mediated inflammation by inhibiting ROS production and NF-κB activity. We suggest that catalpol, a major constituent of the fresh roots of R. glutinosa, contributes to the prevention of AGE-mediated diabetic complications.",
"role": "user"
},
{
"content": "[[\"Catalpol\", \"INHIBITOR\", \"mitogen activated protein (MAP) kinases\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"MCP-1\"], [\"catalpol\", \"INHIBITOR\", \"NADPH oxidase\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"tumor necrosis factor-\\u03b1\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"receptor for AGE\"], [\"catalpol\", \"INHIBITOR\", \"NF-\\u03baB\"], [\"catalpol\", \"NOT\", \"AP-1\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"inducible NO synthase\"], [\"catalpol\", \"INDIRECT-DOWNREGULATOR\", \"NF-\\u03baB\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"TNF-\\u03b1\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"RAGE\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"monocyte chemotactic protein-1\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"iNOS\"], [\"Catalpol\", \"INDIRECT-DOWNREGULATOR\", \"I\\u03baB\\u03b1\"]]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.8816 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThanks to the advances in the technology of low-cost digital cameras and the popularity of the self-recording culture, the amount of visual data on the Internet is going to the opposite side of the available time and patience of the users. Thus, most of the uploaded videos are doomed to be forgotten and unwatched in a computer folder or website. In this work, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection formulated as a weighted minimum reconstruction problem, which combined with a smoothing frame transition method accelerates first-person videos emphasizing the relevant segments and avoids visual discontinuities. The experiments show that our method is able to fast-forward videos to retain as much relevant information and smoothness as the state-of-the-art techniques in less time. We also present a new 80-hour multimodal (RGB-D, IMU, and GPS) dataset of first-person videos with annotations for recorder profile, frame scene, activities, interaction, and attention1.\n\nCited Abstract(s):\n[1]: The possibility of sharing one’s point of view makes the use of wearable cameras compelling. These videos are often long, boring, and coupled with extreme shaking, as the camera is worn on a moving person. Fast-forwarding (i.e., frame sampling) is a natural choice for quick video browsing. However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast-forwarded video useless. We propose EgoSampling, an adaptive frame sampling that gives stable, fast-forwarded, hyperlapse videos. Adaptive frame sampling is formulated as an energy minimization problem, whose optimal solution can be found in polynomial time. We further turn the camera shake from a drawback into a feature, enabling the increase in field of view of the output video. This is obtained when each output frame is mosaiced from several input frames. The proposed technique also enables the generation of a single hyperlapse video from multiple egocentric videos, allowing even faster video consumption.\n\n[2]: While egocentric cameras like GoPro are gaining popularity, the videos they capture are long, boring, and difficult to watch from start to end. Fast forwarding (i.e. frame sampling) is a natural choice for faster video browsing. However, this accentuates the shake caused by natural head motion, making the fast forwarded video useless.\n\n[3]: Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data, thus can be run on videos captured on any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. 
We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.",
"role": "user"
},
{
"content": "Recent strategies focus on selecting frames using different adaptive approaches to adjust the density of frame selection according to the cognitive load. Poleg al [2] model the frame selection as a shortest path in a graph. The nodes of this graph represent the frames of the original video and, the edges weights between pairs of frames are proportional to the cost of including the pair sequentially in the output video. An extension for creating a panoramic hyperlapse of a single or multiple input videos was proposed by Halperin al [1] . They enlarge each of the input frames using neighboring frames from the videos to reduce the perception of instability. Joshi al [3] present a method based on dynamic programming to select an optimal set of frames regarding the desired target speed-up and the smoothness in frame-to-frame transitions jointly.",
"role": "assistant"
}
] |
science.qasa_abstractive_qa | science.qasa_abstractive_qa.1395 | [
{
"content": "Read this passage from a research paper and answer the question.\n\nContext: Multi-scale feature maps for detection We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat[4] and YOLO[5] that operate on a single scale feature map).\n\nQuestion: Compare the detection feature map of both of single shot detectors (SSD and YOLO) ?",
"role": "user"
},
{
"content": "SSD uses multi-scale feature map while YOLO operates on single scale feature map.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.28264 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nA key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.\n\nCited Abstract(s):\n[1]: This paper investigates link-by-link channel-coded PNC (physical layer network coding), in which a critical process at the relay is to transform the superimposed channel-coded packets received from the two end nodes (plus noise), Y3 = X1+ X2+W3, to the network-coded combination of the source packets, S1 oplus S2. This is in contrast to the traditional multiple-access problem, in which the goal is to obtain both S1 and S2 explicitly at the relay node. Trying to obtain S1 and S2 explicitly is an overkill if we are only interested in S1oplusS2. In this paper, we refer to the transformation Y3 rarr S1 oplus S2 as the channel-decoding- network-coding process (CNC) in that it involves both channel decoding and network coding operations. This paper shows that if we adopt the repeat accumulate (RA) channel code at the two end nodes, then there is a compatible decoder at the relay that can perform the transformation Y3 rarr S1oplusS2 efficiently. Specifically, we redesign the belief propagation decoding algorithm of the RA code for traditional point-to-point channel to suit the need of the PNC multiple-access channel. Simulation results show that our new scheme outperforms the previously proposed schemes significantly in terms of BER without added complexity.",
"role": "user"
},
{
"content": "For channel-coded PNC, an important issue is how to integrate the channel decoding operation and the network coding operation at the relay. Ref. ShengliJSAC09 [1] presented a scheme that works well for synchronous channel-coded PNC. The scheme is not amenable to extension for asynchronous channel-coded PNC.",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.172 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Significance of ETV6 rearrangement in acute promyelocytic leukemia with t(15;17)/ promyelocytic leukemia / retinoic acid receptor alpha Acute promyelocytic leukemia (APL) is a common subtype of acute myeloid leukemia in China. Since the application of arsenic trioxide and all-trans retinoic acid in the treatment of APL, the prognosis has greatly improved. However, ~20% of patients with APL relapse upon completing chemotherapy. Decreasing the relapse rate and incidence of early mortality may pose the greatest challenges for the future management of APL. Recently, Ets variant 6 (ETV6) was reported to be involved in a variety of translocations associated with hematological malignancies of myeloid and lymphoid origin. To date, little is known about the clinical implication of ETV6 rearrangement in APL. In the present study, ETV6 rearrangement was examined by split-signal fluorescence in situ hybridization in 258 adults with APL, and its association with the clinical features and outcomes of the patients was analyzed. The data suggested that ETV6 rearrangement may be an independent unfavorable prognostic factor for overall survival in APL patients.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"Ets variant 6\", \"ETV6\"], \"Bacterium\": [], \"BiologicFunction\": [\"APL\", \"translocations\", \"Acute promyelocytic leukemia\", \"hematological malignancies\", \"rearrangement\", \"acute myeloid leukemia\", \"promyelocytic leukemia\", \"acute promyelocytic leukemia\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"arsenic trioxide\", \"all-trans retinoic acid\", \"retinoic acid receptor alpha\"], \"ClinicalAttribute\": [\"prognostic factor\"], \"Eukaryote\": [], \"Finding\": [\"early mortality\"], \"Food\": [], \"HealthCareActivity\": [\"management\", \"challenges\", \"chemotherapy\", \"prognosis\", \"reported\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"subtype\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\", \"analyzed\", \"split-signal fluorescence in situ hybridization\"], \"SpatialConcept\": [\"China\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.nlmgene_ner | science.nlmgene_ner.255 | [
{
"content": "Find all entities of these types in the text: [\"Gene\"].\n\nOutput a JSON object.\n\nText:\nAP4 is a mediator of epithelial-mesenchymal transition and metastasis in colorectal cancer. The basic helix-loop-helix transcription factor AP4/TFAP4/AP-4 is encoded by a c-MYC target gene and displays up-regulation concomitantly with c-MYC in colorectal cancer (CRC) and numerous other tumor types. Here a genome-wide characterization of AP4 DNA binding and mRNA expression was performed using a combination of microarray, genome-wide chromatin immunoprecipitation, next-generation sequencing, and bioinformatic analyses. Thereby, hundreds of induced and repressed AP4 target genes were identified. Besides many genes involved in the control of proliferation, the AP4 target genes included markers of stemness (LGR5 and CD44) and epithelial-mesenchymal transition (EMT) such as SNAIL, E-cadherin/CDH1, OCLN, VIM, FN1, and the Claudins 1, 4, and 7. Accordingly, activation of AP4 induced EMT and enhanced migration and invasion of CRC cells. Conversely, down-regulation of AP4 resulted in mesenchymal-epithelial transition and inhibited migration and invasion. In addition, AP4 induction was required for EMT, migration, and invasion caused by ectopic expression of c-MYC. Inhibition of AP4 in CRC cells resulted in decreased lung metastasis in mice. Elevated AP4 expression in primary CRC significantly correlated with liver metastasis and poor patient survival. These findings imply AP4 as a new regulator of EMT that contributes to metastatic processes in CRC and presumably other carcinomas.",
"role": "user"
},
{
"content": "{\"Gene\": [\"c-MYC\", \"AP-4\", \"CDH1\", \"SNAIL\", \"OCLN\", \"E-cadherin\", \"AP4\", \"VIM\", \"basic helix-loop-helix transcription factor\", \"LGR5\", \"FN1\", \"Claudins 1, 4, and 7\", \"TFAP4\", \"CD44\"]}",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.824 | [
{
"content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: What is PWMScan?\nContext: PWMScan: a fast tool for scanning entire genomes with a position-specific weight matrix.\nTranscription factors regulate gene expression by binding to specific short DNA sequences of 5-20 bp to regulate the rate of transcription of genetic information from DNA to messenger RNA. We present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database.\nWe present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database.\nWe present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database.\nWe present PWMScan , a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database .\nWe present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database.\nWe present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database.",
"role": "user"
},
{
"content": "Transcription factors regulate gene expression by binding to specific short DNA sequences of 5-20 bp to regulate the rate of transcription of genetic information from DNA to messenger RNA. PWMScan is a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or transcription factor binding site model from a public database.",
"role": "assistant"
}
] |
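As a rough illustration of the scanning operation described in the entry above, the sketch below scores every window of a DNA sequence against a log-odds position weight matrix (PWM). This is a naive reference implementation, not PWMScan's actual algorithm or command-line interface; the toy matrix, threshold, and sequence are made-up values for the example.

```python
# Illustrative PWM scanning sketch: slide a log-odds matrix along a sequence
# and report windows above a score threshold. Not PWMScan's implementation.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def scan(sequence, pwm, threshold):
    """Yield (position, score) for every window of len(pwm) whose summed
    log-odds score against `pwm` (shape: motif_length x 4) >= threshold."""
    k = pwm.shape[0]
    for i in range(len(sequence) - k + 1):
        window = sequence[i:i + k]
        if any(b not in BASES for b in window):  # skip ambiguous bases, e.g. N
            continue
        score = sum(pwm[j, BASES[b]] for j, b in enumerate(window))
        if score >= threshold:
            yield i, score

# Toy 4-bp motif favoring "GATA"; rows are positions, columns are A, C, G, T.
toy_pwm = np.array([[-1.0, -1.0,  1.5, -1.0],
                    [ 1.5, -1.0, -1.0, -1.0],
                    [-1.0, -1.0, -1.0,  1.5],
                    [ 1.5, -1.0, -1.0, -1.0]])

for pos, s in scan("TTGATACCGATAN", toy_pwm, threshold=5.0):
    print(f"match at {pos}, score {s:.1f}")
```

A genome-scale tool differs mainly in engineering (indexing, parallel scanning, score-to-p-value conversion), not in this core scoring step.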
science.pico_ner | science.pico_ner.2198 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nHypoxic exercise training reduces senescent T-lymphocyte subsets in blood . The integration and control of systemic immune responses depends on the regulated trafficking of T-lymphocytes . This study elucidates how various exercises regimens with/without hypoxia affect phenotypic characteristics of T-lymphocyte subsets in blood . Fifty sedentary males were randomly divided into five groups . Each group ( n=10 ) received one of five interventions : normoxic ( 21 % O₂ ) resting ( N-C ) , hypoxic ( 15 % O₂ ) resting ( H-C ) , normoxic exercise ( 50 % W ( max ) under 21 % O₂ , N-T ) , hypoxic-relative exercise ( 50 % maximal heart rate reserve under 15 % O₂ , H-RT ) , or hypoxic-absolute exercise ( 50 % W ( max ) under 15 % O₂ , H-AT ) for 30 min/day , 5 days/week for 4 weeks . Before the intervention , strenuous exercise up to exhaustion increased the mobilization of CD3 , CD4 , CD8 , or CD8 ( bright ) lymphocytes expressing activated ( CD11a ) or senescent ( KLRG1 ) molecules into the peripheral blood compartment . The H-AT for 4 weeks up-regulated co-stimulatory molecule CD28 expression and was accompanied by depressed KLRG1 level on CD3 , CD4 , CD8 , or CD8 ( bright ) lymphocytes at rest or following exercise . Simultaneously , this intervention increased interferon-γ ( IFN-γ ) level and unchanged interleukin-4 level , as well as , decreased myeloperoxidase ( MPO ) and interleukin-6 levels in plasma . However , no significant changes in resting and exercise-induced mobilizations of various T-lymphocyte subsets and productions of cytokines and MPO occurred following the N-C , H-C , N-T , and H-RT interventions . Therefore , we conclude that 4-week H-AT intervention reduced senescent T-lymphocyte subsets with increasing IFN-γ level in blood , which responses are accompanied by depressed oxidative stress and pro-inflammatory cytokine production . These findings can help to determine an effective hypoxic exercise regimen to minimize immune dysfunction by retarding T-lymphocyte senescence .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"Hypoxic exercise training\"], \"Outcome\": [\"senescent T-lymphocyte subsets in blood\", \"phenotypic characteristics of T-lymphocyte subsets in blood\"], \"Participant\": [\"sedentary\", \"males\", \"Fifty\"]}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.3615 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Second,sporadicallydisplacednucleiwerefoundresidinginthespaceabuttingthePE.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.4005 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: Similarly, a synergistic effect of sequence differences at the resistin locus on obesity and risk of type II diabetes has been observed (Ma et al, 2002).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.12008 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: An Effector Peptide Family Required for Drosophila Toll-Mediated Immunity\nArticle:\nAbstract:\nIn Drosophila melanogaster, recognition of an invading pathogen activates the Toll or Imd signaling pathway, triggering robust upregulation of innate immune effectors. Although the mechanisms of pathogen recognition and signaling are now well understood, the functions of the immune-induced transcriptome and proteome remain much less well characterized. Through bioinformatic analysis of effector gene sequences, we have defined a family of twelve genes – the Bomanins( Boms) – that are specifically induced by Toll and that encode small, secreted peptides of unknown biochemical activity. Using targeted genome engineering, we have deleted ten of the twelve Bom genes. Remarkably, inactivating these ten genes decreases survival upon microbial infection to the same extent, and with the same specificity, as does eliminating Toll pathway function. Toll signaling, however, appears unaffected. Assaying bacterial load post-infection in wild-type and mutant flies, we provide evidence that the Boms are required for resistance to, rather than tolerance of, infection. In addition, by generating and assaying a deletion of a smaller subset of the Bom genes, we find that there is overlap in Bom activity toward particular pathogens. Together, these studies deepen our understanding of Toll-mediated immunity and provide a new in vivo model for exploration of the innate immune effector repertoire.\nIntroduction:\nConstant interaction with microbes is a fact of life, and sometimes death, for animals. Many microbes are neutral or beneficial to the host’s health. Some, however, are pathogenic and threaten the host’s viability. In vertebrates and invertebrates alike, immune responses are initiated by recognition of pathogen associated molecular patterns( PAMPs) following invasion of host tissues[1, 2]. This recognition of conserved microbial products triggers innate immune signaling pathways that are closely related in species as divergent as flies and humans[3–5]. In each case, pathway activation initiates a transcriptional program encoding an array of effector peptides and proteins. In the fruit fly Drosophila melanogaster, Toll and Imd proteins define the two major immune signaling pathways[6–11]. Fragments of fungal cell walls and bacterial peptidoglycan serve as PAMPs for these pathways. Toll signaling is triggered by the β-1, 3-glucans of fungal cell walls or by Lys-type peptidoglycan[12–16]. In contrast, the Imd pathway is activated by DAP-type peptidoglycan[17–21]. Upon activation, Toll and Imd direct expression of distinct but overlapping effector gene repertoires. These effector genes bring about the humoral immune response via factors, including antimicrobial peptides( AMPs), that circulate throughout the fly hemolymph. Effector genes also support other immune processes by, for example, upregulating genes promoting melanization and wound healing[22]. The Drosophila immune effector repertoire has been characterized by microarray, RNA-seq, and mass spectrometry experiments[23–27]. The most highly upregulated genes include most known AMPs, but also many as yet uncharacterized effector peptides. For both the characterized and novel effectors, delineation of in vivo requirements based on loss-of-function phenotypes is largely lacking. 
Here, we describe the application of recent advances in genome engineering technology to the genetic dissection of innate immune effector function. Generating a designer deletion of multiple members of an effector gene family, we demonstrate an essential role for these genes in Toll-mediated defense against microbial pathogens.\nDiscussion:\nWe report here that Toll-mediated defenses against a bacterium, yeast, or filamentous fungus require Bom gene function. Having reached this conclusion based on loss-of-function phenotypes, we note that such an approach has only rarely been applied to the role of innate immune effectors[37–40]. The paucity of such studies has several likely causes. First, many effector genes, such as the Boms and the known AMP genes, encode peptides that are sufficiently small as to be relatively refractory to random mutagenesis. Second, large-scale screens that rely on reporter genes are useful for identifying lesions that block pathogen recognition or response pathway signaling, but opaque to disruptions in more downstream processes. Perhaps the biggest obstacle, real or imagined, to loss-of-function studies of immune effectors has been the existence of families of closely related genes. One might reasonably expect significant overlap in gene function, meaning that multiple family members would need to be inactivated to uncover reliable phenotypes. Instead, researchers interested in knockout phenotypes have typically focused on those examples where paralogs are absent. Thus, for example, the loss-of-function study demonstrating that disruption of a mouse cathelicidin gene promoted invasive skin infection with Group A Streptococcus[40] relied on the fact that mice, unlike some other mammals, encode only one member of this gene family. How do Bom peptides promote infection resistance? One possibility is that the Bom peptides support cellular immune function. To explore this question, we infected BomΔ55C flies with Staphylococcus aureus. Defense against S. aureus has been shown to involve cellular immune activities to a greater extent than for some other Toll-activating bacteria, including E. faecalis[41–46]. Although BomΔ55C flies succumbed to S. aureus infection more quickly than did the wild type, BomΔ55C survival was indistinguishable from that of MyD88- and the genetic background control, IM2ΔMi( S2A Fig). We thus found no evidence that the 55C Bom gene cluster contributes to Toll-independent cellular mechanisms of resistance. In additional studies of cellular immune functions, we found neither defects in wound site melanization in BomΔ55C adults nor any deficiency in hemocyte number in BomΔ55C larvae( S2B and S2C Fig). These experiments do not, however, preclude a role for for the Boms in Toll-dependent cellular immunity. A likely alternative is that the Bom genes encode antimicrobial peptides( AMPs). Like many AMPs, Bom peptides are short, secreted, have intramolecular disulfide bonds, and undergo post-translational processing. Although short-form Bom peptides would be the shortest characterized Drosophila AMP, the mature form of Drosocin is just three amino acids longer[47]. Furthermore, both the Boms and the known AMPs populate the upper echelons of the sets of genes most highly upregulated upon activation of the Toll pathway. Specifically, at 12, 24, and 96 hours after natural infection by the fungus Beauveria bassiana, the 30 most highly upregulated genes include five or more Bom family members and five or more known AMP genes[24]. 
Additionally, mass spectrometry data indicate that a number of Bom peptides are as abundant as known AMPs, which after infection reach concentrations of 10–100 μM in the hemolymph[23, 32]. The structure and sequence of the 55C Bom cluster suggests that the Bom family arose by multiple gene duplications. Such events are enriched among loci involved in pathogen resistance and provide the opportunity for divergence in gene function driven by positive selection[48, 49]. One example is the Peptidoglycan Recognition Protein( PGRP) gene family, which consists of 13 genes, some clustered, encoding 19 proteins. While all PGRPs share a peptidoglycan-recognition domain, the functions of the proteins vary considerably[50]. Some activate the Toll or Imd signaling pathways and promote phagocytosis, autophagy, and melanization. Others suppress the Imd pathway, protecting commensal gut bacteria. A third class has direct bactericidal activity. Gene duplications need not, however, result in functional divergence. Rather, there are circumstances in which gene duplications instead lead to changes in the level, location, or timing of expression of what is essentially the same gene product. Minor sequence variation will arise, but in general the coding regions will not bear the hallmarks of positive selection[51–53]. Comparing the effects of eliminating some or all Bom genes in the Bom 55C cluster, we observe differential effects on resistance to particular pathogens. How does this observation fit with the alternative potential outcomes for gene duplication? If functional divergence has occurred, we would expect that at least some Bom peptides have a narrow-spectrum effector activity, i. e., are specific for a particular pathogen or set of pathogens. One or more of the six Bom genes deleted in the BomΔleft chromosome would be specific for E. faecalis, while one or more of the four 55C Bom genes remaining in BomΔleft would protect specifically against C. glabrata. There would need to be at least two peptides specific for F. oxysporum: one or more of the six genes deleted in the BomΔleft chromosome, as well as one or more of the four remaining 55C Bom genes. Although our data can accommodate a narrow-spectrum activity model, we favor the idea that Bom peptides have a common, broad-spectrum activity. In this scenario, resistance to different pathogens would require different total Bom peptide levels, as could be produced by variation in the numbers of Bom genes. In particular, defense against E. faecalis would require the greatest level of Bom activity, while C. glabrata would require the least. Defense against F. oxysporum would require a level intermediate to that required for E. faecalis and C. glabrata. When considered in toto, our survival data support this broad-spectrum effector model: the predicted hierarchy of Bom activity levels required for particular pathogens parallels the overall virulence levels for these pathogens( E. faecalis > F. oxysporum > C. glabrata). This holds true when virulence is measured either by the proportion of wild-type flies that succumb to infection or by the rates at which the mutants succumb after infection( compare Fig 2A, 2C and 2D). The parallel between required Bom activity level and pathogen virulence makes sense if we make the reasonable assumption that the broad-spectrum activity of Bom peptides is more efficient and rapid at higher concentrations. Future comprehensive consideration of the Bom genes will necessitate taking into account four additional loci. 
Two of these, CG5778 and CG5791, are Bom genes located outside the 55C cluster. The remaining two, IM4 and IM14, encode peptides that lack a CXXC motif, but nevertheless exhibit sequence similarity with Bom family members. Furthermore, the IM4 and IM14 peptides, like the Bomanins listed in Fig 1A, are small, secreted, specific to the Drosophila genus, and robustly induced by Toll. Lemaitre and colleagues have reported that overexpression of a UAS-Drs construct using a ubiquitous GAL4 driver restores F. oxysporum resistance to flies lacking both Toll and Imd pathway function[54]. In our studies, however, we found that BomΔ55C flies induce Drs expression upon infection( see Fig 3B), but succumb as rapidly as flies lacking Toll signaling( see Fig 2). Why was a requirement for Bom gene function not apparent in the Lemaitre study? One possibility is that loading the flies with high levels of Drosomycin prior to infection obviates the need for additional Toll-induced loci, including the Bom genes. An alternative explanation lies in the fact only inducible Bom expression was blocked in the Lemaitre study, whereas our study eliminated all Bom gene function. It might be that the synergistic activity of both Drosomycin and Boms is required to defend against F. oxysporum, but that a basal, Toll-independent level of Bom expression is sufficient for this synergy. In support of this idea, RNA-seq data from modENCODE demonstrate that expression of many Bom genes is robust even in the absence of infection[55]. Given that innate immune signaling pathways direct expression of large batteries of effector genes upon infection, including many AMPs, one might have expected that disabling a small subset of that repertoire would have only minor effects on the overall immune response. That is not what we observe. Instead, elimination of Bom activity is indistinguishable in phenotype from loss of the entire Toll-mediated immune defense for the pathogens tested, although Toll signaling is intact. We envision at least four explanations for the essential role of the Bom gene family: The Boms have a unique and central role in Toll-mediated defense. This might seem unlikely, given that the Bom family is apparently specific to the Drosophila genus and thus represents a relatively young family of effectors. However, such a model is, in fact, in keeping with recent studies on genes that are essential in the sense of being required for viability. In particular, we now know that essentiality is found in equal proportions among old and young Drosophila genes, where age is measured on the scale of divergence time between species[56]. It could therefore be that a gene family and associated function that arose fairly recently has become a dominant and essential feature of the innate immune response. The Boms are essential for Toll-mediated responses specific to certain microbial pathogens. We know from a variety of expression studies that the entire Toll effector repertoire is upregulated regardless of the source of the activating signal, PAMP or otherwise. By this model, Toll activates a large number of effectors, each attacking a subset of pathogens, rather than collectively fighting a common target pathogen. If so, there should be additional pathogens for which Toll is required and the Bom peptides are not. The Bom genes and other effectors synergize, such that loss of just a single factor disrupts defense as strongly as loss of all components. 
By this model, innate immunity involves a network of effector functions that comprise multiple hubs, each making a vital contribution to defense. The Bom family and other components of the Toll repertoire are each expressed at the minimal level required for resistance. There is good evidence that immune activation requires an energy tradeoff with metabolic processes[57, 58]. As a result, limiting the resources used by an immune response can be beneficial to overall health. By either of the last two models, knocking out other effector families should result in the same phenotype observed with the BomΔ55C deletion, i. e., inactivation of Toll defenses. Testing this prediction thus holds promise for a broader understanding of innate immune effector function in vivo.",
"role": "user"
},
{
"content": "Dedicated defense systems in the bodies of humans and other animals protect against dangerous microbes , such as bacteria and fungi . We study these processes in the fruit fly Drosophila , which can be readily grown and manipulated in the laboratory . In this animal , as in humans , protective activities are triggered when fragments of bacteria or fungi activate a system for defense gene regulation known as the Toll signaling pathway . The result is the large-scale production of defense molecules and , in many cases , clearance of the infection and survival of the animal . Although the systems for recognizing and initiating responses are well described , the role of many defense molecules is not understood . We have identified a group of closely related defense molecules in flies and used state-of-the-art genomic engineering to simultaneously eliminate most of the genes in the group . By comparing the effect of fungal or bacterial infection on the genetically altered flies and normal siblings , we find that this group of defense molecules is essential for disease resistance .",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.23365 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nMeaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of - domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as . Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-the-art domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset.\n\nCited Abstract(s):\n[1]: Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.\n\n[2]: Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.\n\n[3]: \n\n[4]: This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.",
"role": "user"
},
{
"content": "Representing the semantics of a word using some algebraic structure such as a vector (more generally a tensor) is a common first step in many NLP tasks [2] . By applying algebraic operations on the word representations, we can perform numerous tasks in NLP, such as composing representations for larger textual units beyond individual words such as phrases [4] . Moreover, word representations are found to be useful for measuring semantic similarity, and for solving proportional analogies [3] . Two main approaches for computing word representations can be identified in prior work [1] : and .",
"role": "assistant"
}
] |
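The additive and multiplicative composition functions cited as [4] in the entry above take only a few lines to state. The sketch below uses arbitrary toy vectors rather than learned embeddings; it exists only to show the two operators and one way composed phrase vectors might be compared. The vector values and the cosine comparison are assumptions made for the example.

```python
# Toy illustration of the composition operators from [4]: additive (u + v)
# versus multiplicative (element-wise u * v). Vectors are made-up values.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

old = np.array([0.2, 0.9, 0.1])
dog = np.array([0.8, 0.7, 0.3])
cat = np.array([0.7, 0.8, 0.2])

add_phrase = old + dog    # additive composition: p = u + v
mul_phrase = old * dog    # multiplicative composition: p = u * v

print(cosine(add_phrase, old + cat))  # compare "old dog" with "old cat"
print(cosine(mul_phrase, old * cat))
```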
science.chemprot_ner | science.chemprot_ner.220 | [
{
"content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nCortisol and interferon tau regulation of endometrial function and conceptus development in female sheep.\nDuring early pregnancy in sheep, the elongating conceptus secretes interferon-τ (IFNT) and the conceptus as well as endometrial epithelia produce prostaglandins (PG) via PG synthase 2 (PTGS2) and cortisol via hydroxysteroid (11-β) dehydrogenase 1 (HSD11B1). Ovarian progesterone induces and PG and IFNT stimulates endometrial HSD11B1 expression and keto-reductase activity as well as many epithelial genes that govern trophectoderm proliferation, migration, and attachment during elongation. The primary aim of these studies was to test the hypothesis that HSD11B1-derived cortisol has a biological role in endometrial function and conceptus development during early pregnancy in sheep. In study 1, cyclic ewes received vehicle, cortisol, PF 915275 (PF; a selective inhibitor of HSD11B1), cortisol and PF, meloxicam (a selective inhibitor of PTGS2), cortisol and meloxicam, recombinant ovine IFNT, or IFNT and PF into the uterus from day 10 to day14 after estrus. Cortisol and IFNT stimulated endometrial HSD11B1 expression and activity, increased endometrial PTGS2 activity and the amount of PG in the uterine lumen, and up-regulated many conceptus elongation-related genes in the endometrium. Some effects of cortisol and IFNT were mediated by PTGS2-derived PG. In study 2, bred ewes received PF 915275 or recombinant ovine IFNT and into the uterus from day 10 to day 14 after mating. Inhibition of HSD11B1 activity in utero prevented conceptus elongation, whereas IFNT rescued conceptus elongation in PF-infused ewes. These results suggest that HSD11B1-derived cortisol mediates, in part, actions of ovarian progesterone and the conceptus on endometrial function and support the hypothesis that IFNT, PG, and cortisol coordinately regulate endometrial functions important for conceptus elongation and implantation during early pregnancy in sheep.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"cortisol\", \"prostaglandins\", \"PF 915275\", \"progesterone\", \"keto\", \"Cortisol\", \"hydroxysteroid\", \"meloxicam\"], \"Protein\": [\"hydroxysteroid (11-\\u03b2) dehydrogenase 1\", \"interferon-\\u03c4\", \"keto-reductase\", \"PG synthase 2\", \"HSD11B1\", \"PTGS2\", \"IFNT\", \"ovine IFNT\"]}",
"role": "assistant"
}
] |
science.pubmedqa_qa | science.pubmedqa_qa.159 | [
{
"content": "Answer the question with YES, NO, or MAYBE based on the abstract.\n\nAbstract:\n(BACKGROUND) This paper uses a life-course approach to explore whether the timing and/or duration of urban (vs rural) exposure was associated with risk factors for NCDs.\n(METHODS) A cross-sectional survey was conducted among health care workers in two hospitals in Thailand. Two measures of urbanicity were considered: early-life urban exposure and the proportion of urban life years. We explored four behavioral NCD risk factors, two physiological risk factors and four biological risk factors.\n(RESULTS) Both measures of urbanicity were each independently associated with increases in all behavioral and physiological risk factors. For some biological risk factors, people spending their early life in an urban area may be more susceptible to the effect of increasing proportion of urban life years than those growing up in rural areas.\n\nQuestion:\nLiving in an urban environment and non-communicable disease risk in Thailand: Does timing matter?\n\nAnswer:",
"role": "user"
},
{
"content": "YES",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.19597 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nAs the SARS-CoV-2 virus wreaks havoc on the populations, health care infrastructures and economies of nations around the world, finding ways to protect health care workers and bolster immune responses in the general population while we await an effective vaccine will be the difference between life and death for many people. Recent studies show that innate immune populations may possess a form of memory, termed Trained Immunity (TRIM), where innate immune cells undergo metabolic, mitochondrial, and epigenetic reprogramming following exposure to an initial stimulus that results in a memory phenotype of enhanced immune responses when exposed to a secondary, heterologous, stimulus. Throughout the literature, it has been shown that the induction of TRIM using such inducers as the BCG vaccine and beta-glucan can provide protection through altered immune responses against a range of viral infections. Here we hypothesize a potential role for beta-glucan in decreasing worldwide morbidity and mortality due to COVID-19, and posit several ideas as to how TRIM may actually shape the observed epidemiological phenomena related to COVID-19. We also evaluate the potential effects of beta-glucan in relation to the immune dysregulation and cytokine storm observed in COVID-19. Ultimately, we hypothesize that the use of oral beta-glucan in a prophylactic setting could be an effective way to boost immune responses and abrogate symptoms in COVID-19, though clinical trials are necessary to confirm the efficacy of this treatment and to further examine differential effects of beta-glucan's from various sources.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.13 | [
{
"content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: Which pathological condition of the heart is known as hypertrophic cardiomyopathy (HCM)?\nContext: Hypertrophic cardiomyopathy (HCM) is a primary disease of the cardiac muscle that occurs mainly due to mutations (>1,400 variants) in genes encoding for the cardiac sarcomere. HCM, the most common familial form of cardiomyopathy, affecting one in every 500 people in the general population, is typically inherited in an autosomal dominant pattern, and presents variable expressivity and age-related penetrance.\nFamilial hypertrophic cardiomyopathy (HCM), due to point mutations in genes for sarcomere proteins such as myosin, occurs in 1/500 people and is the most common cause of sudden death in young individuals.\nn HCM, the modified protein function leads, over years to decades, to secondary remodeling with substantial morphological changes, such as hypertrophy, myofibrillar disarray, and extensive fibrosis associated with severe functional deterioration.\nHypertrophic cardiomyopathy (HCM) is the most common cause of sudden cardiac death (SCD) in the young, particularly among athletes.\nFamilial hypertrophic cardiomyopathy (HCM) is one the most common heart disorders, with gene mutations in the cardiac sarcomere.\nHypertrophic cardiomyopathy (HCM) has been recently recognized as the most common inherited cardiovascular disorder, affecting 1 in 500 adults worldwide. HCM is characterized by myocyte hypertrophy resulting in thickening of the ventricular wall, myocyte disarray, interstitial and/or replacement fibrosis, decreased ventricular cavity volume and diastolic dysfunction. HCM is also the most common cause of sudden death in the young. A large proportion of patients diagnosed with HCM have mutations in sarcomeric proteins.\nHypertrophic cardiomyopathy (HCM) is a genetic cardiomyopathy. The prevalence of phenotypic expression, in the absence of another systemic or cardiac disease causing increased left ventricular (LV) wall thickness, is estimated to be 1:500.\nHCM is the most prevalent genetic disorder affecting the heart, it often goes undiagnosed until midlife after patients show symptoms of myocardial remodeling. Adults with cardiomyopathy suffer SCD or adverse events such as stroke and heart failure from HCM.\nHypertrophic cardiomyopathy (HCM) is a clinically heterogeneous autosomal dominant heart disease characterised by left ventricular hypertrophy in the absence of another cardiac or systemic disease that is capable of producing significant wall thickening.\nHypertrophic cardiomyopathy (HCM) is an autosomal dominant inherited genetic disease characterized by compensatory pathological left ventricle (LV) hypertrophy due to sarcomere dysfunction.\nHypertrophic cardiomyopathy (HCM) is a clinically heterogeneous genetic heart disease characterized by left ventricular hypertrophy in the absence of another disease that could explain the wall thickening.",
"role": "user"
},
{
"content": "Hypertrophic cardiomyopathy (HCM) has been recently recognized as the most common inherited cardiovascular disorder, affecting 1 in 500 adults worldwide. HCM is characterized by myocyte hypertrophy resulting in thickening of the ventricular wall, myocyte disarray, interstitial and/or replacement fibrosis, decreased ventricular cavity volume and diastolic dysfunction. HCM is also the most common cause of sudden death in the young particularly among athletes. A large proportion of patients diagnosed with HCM have mutations in sarcomeric proteins. HCM is the most prevalent genetic disorder affecting the heart and is typically inherited in an autosomal dominant pattern. Adults with cardiomyopathy suffer SCD or adverse events such as stroke and heart failure from HCM.",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.886 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nThe value of antibiotics in minor respiratory illness in children. A controlled trial.\nAbstract 1:\nnan\n\n\nTitle 2:\nDoes amoxicillin improve outcomes in patients with purulent rhinorrhea? A pragmatic randomized double-blind controlled trial in family practice.\nAbstract 2:\nTo compare the efficacy of amoxicillin vs placebo in patients with an acute upper respiratory tract infection and purulent rhinorrhea.\n Double-blind randomized placebo-controlled trial.\n The 416 patients included from 69 family practices were 12 years or older, presenting with acute upper respiratory complaints, and having a history of purulent rhinorrhea and no signs of complications of sinusitis.\n Therapy success (disappearance of symptoms that most greatly affected the patient's health) at day 10 and duration of general illness, pain, and purulent rhinorrhea.\n Therapy was successful in 35% of patients with amoxicillin and in 29% of patients with placebo (relative risk [RR] 1.14, 95% confidence interval [CI], 0.92-1.42). There was no effect on duration of general illness or pain. Duration of purulent rhinorrhea was shortened by amoxicillin (9 days vs 14 for clearing of purulent rhinorrhea in 75% of patients; P =.007). Diarrhea was more frequent with amoxicillin (29% vs 19%, RR 1.28, 95% CI, 1.05-1.57). No complications were reported. One patient (0.5%) receiving amoxicillin and 7 (3.4%) receiving placebo discontinued trial therapy because of exacerbation of symptoms (RR 0.25, 95% CI 0.04-1.56, P =.07). All 8 patients recovered with antibiotic therapy.\n Amoxicillin has a beneficial effect on purulent rhinorrhea caused by an acute infection of the nose or sinuses but not on general recovery. The practical implication is that all such patients, whatever the suspected diagnosis, can be safely treated with symptomatic therapy and instructed to return if symptoms worsen.\n\n\nTitle 3:\nMedical management of purulent rhinitis. A double-blind comparison of vasoconstrictor agent alone with a combination of vasoconstrictor and antimicrobial drugs.\nAbstract 3:\nnan\n\n\nTitle 4:\nDouble-blind trial of early demethylchlortetracycline in minor respiratory illness in general practice.\nAbstract 4:\nnan\n\n\nTitle 5:\nAmoxycillin and co-trimoxazole in presumed viral respiratory infections of childhood: placebo-controlled trial.\nAbstract 5:\nA double-blind randomized controlled trial of amoxycillin, co-trimoxazole, and placebo was conducted on 197 children presenting with presumed viral respiratory infections. Routine throat swabs were taken to exclude streptococcal diseases. The three disease categories studied--nasopharyngitis, pharyngotonsillitis, and bronchitis (including laryngotracheobronchitis)--showed a generally similar pattern of resolution irrespective of treatment. Nevertheless, seven out of 66 children receiving placebo were withdrawn from the trial with unremitting symptoms or complications thought to require antimicrobial treatment. Only two of 56 children receiving amoxycillin and none of 75 receiving co-trimoxazole were withdrawn. Three other children receiving amoxycillin and three receiving placebo were seen during the trial but further treatment was not thought to be necessary. 
Thus the return consultation rate in children receiving placebo therapy was 15% compared with 4% for those receiving antimicrobial treatment. Antimicrobial treatment was associated with less nasal discharge on the eighth day of treatment. Placebo treatment allowed an earlier return to normal activity. There was a high incidence of possible side effects on all regimens including placebo. It is concluded that the benefits of antimicrobial treatment in presumed viral respiratory infections are marginal, and they should not be routinely prescribed for these conditions.\n\n\nTitle 6:\nEvaluation of orally administered antibiotics for treatment of upper respiratory infections in Thai children.\nAbstract 6:\nnan",
"role": "user"
},
{
"content": "There is no evidence of benefit from antibiotics for the common cold or for persisting acute purulent rhinitis in children or adults. There is evidence that antibiotics cause significant adverse effects in adults when given for the common cold and in all ages when given for acute purulent rhinitis. Routine use of antibiotics for these conditions is not recommended.",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.369 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Growth arrest of lung carcinoma cells (A549) by polyacrylate - anchored peroxovanadate by activating Rac1 - NADPH oxidase signalling axis Hydrogen peroxide is often required in sublethal, millimolar concentrations to show its oxidant effects on cells in culture as it is easily destroyed by cellular catalase. Previously, we had shown that diperoxovanadate, a physiologically stable peroxovanadium compound, can substitute H2O2 effectively in peroxidation reactions. We report here that peroxovanadate when anchored to polyacrylic acid (PAPV) becomes a highly potent inhibitor of growth of lung carcinoma cells (A549). The early events associated with PAPV treatment included cytoskeletal modifications, increase in GTPase activity of Rac1, accumulation of the reactive oxygen species, and also increase in phosphorylation of H2AX (γH2AX), a marker of DNA damage. These effects persisted even at 24 h after removal of the compound and culminated in increased levels of p53 and p21 together with growth arrest. The PAPV -mediated growth arrest was significantly abrogated in cells pre-treated with the N-acetylcysteine, Rac1 knocked down by siRNA and DPI an inhibitor of NADPH oxidase. In conclusion, our results show that polyacrylate derivative of peroxovanadate efficiently arrests growth of A549 cancerous cells by activating the axis of Rac1 - NADPH oxidase leading to oxidative stress and DNA damage.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"A549\", \"cellular\", \"(A549)\", \"cytoskeletal\", \"cells\", \"cancerous cells\"], \"Bacterium\": [], \"BiologicFunction\": [\"DNA damage\", \"Growth arrest\", \"GTPase activity\", \"oxidative stress\", \"signalling\", \"arrests growth\", \"growth\", \"growth arrest\", \"phosphorylation\", \"lung carcinoma\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"polyacrylic acid\", \"(PAPV)\", \"oxidant effects\", \"p21\", \"polyacrylate\", \"H2AX\", \"(\\u03b3H2AX)\", \"PAPV\", \"diperoxovanadate\", \"catalase\", \"siRNA\", \"peroxovanadate\", \"NADPH oxidase\", \"N-acetylcysteine\", \"Rac1\", \"Hydrogen peroxide\", \"H2O2\", \"reactive oxygen species\", \"DPI\", \"peroxovanadium compound\", \"p53\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"sublethal\", \"modifications\", \"accumulation\"], \"Food\": [], \"HealthCareActivity\": [\"anchored\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"report\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"knocked down\"], \"SpatialConcept\": [\"marker\", \"axis\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.460 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nEstimation of BDNF gene polymorphism and predisposition to dependence development for selected psychoactive compounds: genetic aspects of addiction with the selected drugs, amphetamine, tetrahydrocannabinol and opiates. The etiology of drug addiction, a central nervous system (CNS) disease, is not fully known. This complex problem is believed to be connected with concurrently affecting genetic, psychological and environmental factors. The development of addiction is connected with CNS reinforcement system and dopaminergic neurotransmission. Molecular processes are postulated to be of universal character and allow to presume a similar mechanism of dependence for both ethanol and other substances. Therefore, elements of dopaminergic transmission become excellent candidates for the examination of genetic influence on the development of addiction. A relationship between alcoholic disease and the presence of TaqIA1 and DRD2 alleles permits to initiate another investigation of gene-coding DRD2 dopamine receptor. The latest results indicate the importance of brain-derived neurotrophic factor (BDNF) in the regulation of dopaminergic route. The purpose of this research was to reveal the relationship between the Val66Met BDNF gene polymorphism and dependence of psychoactive agent. The examinations were performed with the Local Research Ethics Committee approval and patient's consent. The study group consisted of 100 patients (88 men and 12 women) aged 18-52 years, qualified for research program according to the International Classification of Diseases, Tenth Revision (ICD-10) requirements, medical examination and detailed questionnaire.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"amphetamine\", \"ethanol\", \"tetrahydrocannabinol\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.23736 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Limited Contribution of DNA Methylation Variation to Expression Regulation in Arabidopsis thaliana\nArticle:\nAbstract:\nThe extent to which epigenetic variation affects complex traits in natural populations is not known. We addressed this question using transcriptome and DNA methylation data from a sample of 135 sequenced A. thaliana accessions. Across individuals, expression was significantly associated with cis-methylation for hundreds of genes, and many of these associations remained significant after taking SNP effects into account. The pattern of correlations differed markedly between gene body methylation and transposable element methylation. The former was usually positively correlated with expression, and the latter usually negatively correlated, although exceptions were found in both cases. Finally, we developed graphical models of causality that adapt to a sample with heavy population structure, and used them to show that while methylation appears to affect gene expression more often than expression affects methylation, there is also strong support for both being independently controlled. In conclusion, although we find clear evidence for epigenetic regulation, both the number of loci affected and the magnitude of the effects appear to be small compared to the effect of SNPs.\nIntroduction:\nIt has been long speculated that epigenetic modifications, in particular DNA methylation, contribute to heritable phenotypic variation[1, 2]. That the potential exists is not in doubt, especially in plants. Modern sequencing technology allows us to investigate DNA methylation on a genomewide scale, and has revealed that spontaneous changes in DNA methylation, or epimutations, can be inherited without accompanying DNA changes[3, 4], and that induced DNA methylation changes in genetically homogeneous lines can bring about heritable phenotypic changes[5]. However, these studies tell us nothing about the importance of epigenetic inheritance relative to actual genetic variation, which is typically substantial in natural populations. Recent population studies in A. thaliana have suggested a role for DNA methylation[6, 7], but did not explicitly investigate DNA methylation effects on top of SNP effects. To further address this question, we utilized an existing data set comprising genome-, epigenome-, and transcriptome-sequencing data for a population of 135 Swedish A. thaliana accessions[7]. We consider two types of DNA methylation: C methylation( or TE-like methylation) and CG-only methylation( or gene body methylation), defined as in previous work[7]. The former is characterized by heavy methylation in all contexts( CG as well as non-CG), involves the pathways dependent on RNA-directed DNA methylation( RdDM) or CMT2[8, 9], and is associated with heterochromatin and the silencing of mobile elements[10]. The latter involves sparse CG methylation of a subset of “housekeeping” genes; its presence and level is evolutionarily conserved[11] and it is generally positively correlated with transcription. Based on type distinction and DNA context of the methylated cytosine, we divided DNA methylation variants into four non-overlapping sets: CG where no non-CG methylation is present; CG where there is non-CG methylation present; CHG, and, finally; CHH methylation. These variants are quantified by averaging methylation level of cytosines over all eligible cytosines in 200 bp windows( see Methods). 
As the four types have different baseline levels and involve different pathways, we normalized their levels and performed most analysis separately. Our study faces statistical challenges in terms of strong population structure, which not only leads to the usual difficulties for genome-wide association studies[12], but also means that DNA methylation variation will be strongly correlated with DNA variation due to linkage disequilibrium as well as direct causation[7]. In what follows we present several novel mixed-model methods that aim to solve these problems.\nDiscussion:\nThere is currently great excitement about the potential role of epigenetics in complex trait variation, both as a regulatory and as an inheritance mechanism. That an important role is in principle possible is not in doubt[2, 5], but there is almost no information on whether it actually matters in practice. This is clearly a quantitative question and the answer will not be the same for all traits and populations. In this paper, we focus on whether knowing the epigenome( in the form of DNA methylation variation) improves our understanding of expression variation in A. thaliana leaves. The general answer is: Only marginally( Fig 5). In terms of overall heritability, the genome-wide pattern of methylation polymorphism does not explain anything beyond the genome-wide pattern of SNP polymorphism, and while over a thousand expression traits have significant SNP associations, only about a hundred have associations with methylation variation, and most of these are also associated with SNPs. Indeed, no more than about sixty show evidence of a significant methylation association once SNP effects have been taken into account. Thus, although there are numerous caveats to our results( limited sample size, limited technology for measuring both expression and methylation, uncertainty about how to quantify methylation variation, etc.), our overall conclusion is that the effects of methylation variation are marginal relative to those of genetic variation. However, this does not mean that knowing the methylation variation is pointless. One interesting finding is that, for expression traits with both methylation and SNP associations, the former are often physically closer to the gene being expressed than the latter( Fig 4). This could be because the most highly associated SNPs are in fact just tagging multiple underlying causal SNPs[34, 43], and suggests methylation polymorphism could help with fine-mapping especially in a study with larger sample size. Equally importantly, we do find a small number of genes with clear evidence for epigenetic regulation, including several with no significant cis-SNP associations. These merit further investigation. The same is true for the minority of genes in which promoter methylation is positively rather than negatively correlated with expression( Fig 1). It should again be emphasized that our definition of methylation variation( average methylation in windows for different methylation contexts) is rather crude, and that it may be possible to define more biologically relevant statistics. Finally, we address the issue of causality. In particular for gene body methylation, it is debated whether the observed correlation between methylation and expression is cause, effect, both, or neither[16, 20]. 
While we find support for methylation variation being a direct cause of expression variation, and perhaps even stronger support for both types of variation being influenced by genetics independently, our main conclusion is that is a question that will require direct experimental evidence to answer.",
"role": "user"
},
{
"content": "It has been demonstrated experimentally that epigenetic variation , in particular DNA methylation , can transmit information across generations . However , it is difficult to evaluate the importance of such effects in natural populations due to complex genetic background effects , making experimental the separation of genetic and epigenetic effects challenging . Here we use quantitative genetic models to test whether epigenetic variation plays a significant role in gene expression variation once genetic variation has been taken into account . In addition , we devise and apply methods that go beyond a simple association framework in order to infer causal relationships . Our results suggest a significant but small epigenetic contribution to expression regulation .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.66882 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey of Tuning Parameter Selection for High-dimensional Regression \nChapter Title: Background \n1. Abstract of Cited Paper (BIB001): Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of ... \n2. Abstract of Cited Paper (BIB002): This paper studies oracle properties of l1-penalized least squares in nonparametric regression setting with random design. We show that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of non-zero components of the oracle vec- tor. The results are valid even when the dimension of the model is (much) larger than the sample size and the regression matrix is not positive definite. They can be applied to high-dimensional linear regression, to nonparamet- ric adaptive regression estimation and to the problem of aggregation of arbitrary estimators. AMS 2000 subject classifications: Primary 62G08; secondary 62C20, 62G05, 62G20. Keywords and phrases: sparsity, oracle inequalities, Lasso, penalized least squares, nonparametric regression, dimension reduction, aggregation, mutual coherence, adaptive estimation. \n3. Abstract of Cited Paper (BIB003): We consider high-dimensional generalized linear models with Lipschitz loss functions, and prove a nonasymptotic oracle inequality for the empirical risk minimizer with Lasso penalty. The penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm. The examples include logistic regression, density estimation and classification with hinge loss. Least squares regression is also discussed. \n4. Abstract of Cited Paper (BIB004): We consider ``one-at-a-time'' coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the $L_1$-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the ``fused lasso,'' however, so we derive a generalized algorithm that yields the solution in much less time that a standard convex optimizer. 
Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems. \n6. Abstract of Cited Paper (BIB006): The penalised least squares approach with smoothly clipped absolute deviation penalty has been consistently demonstrated to be an attractive regression shrinkage and selection method. It not only automatically and consistently selects the important variables, but also produces estimators which are as efficient as the oracle estimator. However, these attractive features depend on appropriately choosing the tuning parameter. We show that the commonly used generalised crossvalidation cannot select the tuning parameter satisfactorily, with a nonignorable overfitting effect in the resulting model. In addition, we propose a bic tuning parameter selector, which is shown to be able to identify the true model consistently. Simulation studies are presented to support theoretical findings, and an empirical example is given to illustrate its use in the Female Labor Supply data. \n7. Abstract of Cited Paper (BIB007): We study the effective degrees of freedom of the lasso in the framework of Stein’s unbiased risk estimation (SURE). We show that the number of nonzero coefficients is an unbiased estimate for the degrees of freedom of the lasso—a conclusion that requires no special assumption on the predictors. In addition, the unbiased estimator is shown to be asymptotically consistent. With these results on hand, various model selection criteria—Cp, AIC and BIC—are available, which, along with the LARS algorithm, provide a principled and efficient approach to obtaining the optimal lasso fit with the computational effort of a single ordinary least-squares fit. \n8. Abstract of Cited Paper (BIB008): Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436--1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent, even when the number of variables is of greater order than the sample size. Zhao and Yu [(2006) J. Machine Learning Research 7 2541--2567] formalized the neighborhood stability condition in the context of linear regression as a strong irrepresentable condition. That paper showed that under this condition, the LASSO selects exactly the set of nonzero regression coefficients, provided that these coefficients are bounded away from zero at a certain rate. In this paper, the regression coefficients outside an ideal model are assumed to be small, but not necessarily zero. Under a sparse Riesz condition on the correlation of design variables, we prove that the LASSO selects a model of the correct order of dimensionality, controls the bias of the selected model at a level determined by the contributions of small regression coefficients and threshold bias, and selects all coefficients of greater order than the bias of the selected model. Moreover, as a consequence of this rate consistency of the LASSO in model selection, it is proved that the sum of error squares for the mean response and the $\\ell_{\\alpha}$-loss for the regression coefficients converge at the best possible rates under the given conditions. An interesting aspect of our results is that the logarithm of the number of variables can be of the same order as the sample size for certain random dependent designs. \n9. Abstract of Cited Paper (BIB009): We exhibit an approximate equivalence between the Lasso estimator and Dantzig selector. 
For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the $\\ell_p$ estimation loss for $1\\le p\\le 2$ in the linear model when the number of variables can be much larger than the sample size. \n9. Abstract of Cited Paper (BIB010): Imposition of a lasso penalty shrinks parameter estimates toward zero and performs continuous model selection. Lasso penalized regression is capable of handling linear regression problems where the number of predictors far exceeds the number of cases. This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty. The previously known $\\ell_2$ algorithm is based on cyclic coordinate descent. Our new $\\ell_1$ algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary $\\ell_1$ regression. Each algorithm relies on a tuning constant that can be chosen by cross-validation. In some regression problems it is natural to group parameters and penalize parameters group by group rather than separately. If the group penalty is proportional to the Euclidean norm of the parameters of the group, then it is possible to majorize the norm and reduce parameter estimation to $\\ell_2$ regression with a lasso penalty. Thus, the existing algorithm can be extended to novel settings. Each of the algorithms discussed is tested via either simulated or real data or both. The Appendix proves that a greedy form of the $\\ell_2$ algorithm converges to the minimum value of the objective function. \n10. Abstract of Cited Paper (BIB011): We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias of the LASSO may prevent consistent variable selection. Subset selection is unbiased but computationally costly. The MC+ has two elements: a minimax concave penalty (MCP) and a penalized linear unbiased selection (PLUS) algorithm. The MCP provides the convexity of the penalized loss in sparse regions to the greatest extent given certain thresholds for variable selection and unbiasedness. The PLUS computes multiple exact local minimizers of a possibly nonconvex penalized loss function in a certain main branch of the graph of critical points of the penalized loss. Its output is a continuous piecewise linear path encompassing from the origin for infinite penalty to a least squares solution for zero penalty. We prove that at a universal penalty level, the MC+ has high probability of matching the signs of the unknowns, and thus correct selection, without assuming the strong irrepresentable condition required by the LASSO. This selection consistency applies to the case of $p\\gg n$, and is proved to hold for exactly the MC+ solution among possibly many local minimizers. We prove that the MC+ attains certain minimax convergence rates in probability for the estimation of regression coefficients in $\\ell_r$ balls. We use the SURE method to derive degrees of freedom and $C_p$-type risk estimates for general penalized LSE, including the LASSO and MC+ estimators, and prove their unbiasedness. Based on the estimated degrees of freedom, we propose an estimator of the noise level for proper choice of the penalty level.\n\nLiterature Review:",
"role": "user"
},
{
"content": "A simple yet successful approach for avoiding over-fitting and enforcing sparsity is to regularize the classical least-squares regression with the L 1 penalty, corresponding to adopting p λ (|β j |) = λ|β j | in (2). This choice leads to the well known Least Absolute Shrinkage and Selection Operator (Lasso, Tibshirani [1996] ), which simultaneously performs estimation and variable selection. In the field of signal processing, the Lasso is also known as basis pursuit BIB001 . Formally, the Lasso estimator β Lasso (λ) is obtained by minimizing the regularized least squares loss function, that is, where is the ith row of X, ||β|| 1 denotes the L 1 -norm of β and λ denotes the tuning parameter. By varying the value of λ and solving the above minimization problem for each λ, we obtain a solution path for Lasso. In the literature, a great deal of work has been devoted to understanding the theoretical properties of Lasso, including the theoretical guarantee on the nonasymptotic estimation error bound || β Lasso (λ)−β 0 || 2 , the prediction error bound ||X( β Lasso (λ) − β 0 )|| 2 , and the ability of recovering the support set or the active set of the model {j : β 0j = 0, j = 1, . . . , p}, see BIB003 BIB004 BIB008 BIB009 , among others. The tremendous success of L 1 -regularized regression technique is partly due to its computational convenience. Efficient algorithms such as the exact path-following LARs algorithm and the fast coordinate descent algorithm BIB005 BIB010 have greatly facilitated the use of Lasso. Lasso is known to achieve accurate prediction under rather weak conditions . However, it is also widely recognized that Lasso requires stringent conditions on the design matrix X to achieve variable selection consistency [Zou, 2006, Zhao and ]. In many scientific problems, it is of importance to identify relevant or active variables. For example, biologists are often interested in identifying the genes associated with certain disease. This problem is often referred to as support recovery, with the goal to identify S 0 = {j : β 0j = 0, j = 1, . . . , p}. To alleviate the bias of Lasso due to the over-penalization of L 1 penalty, nonconvex penalized regression has been studied in the literature as an alternative to Lasso Lv, 2010, Zhang and . Two popular choices of nonconvex penalty functions are SCAD BIB002 and MCP BIB011 . The SCAD penalty function is given by where a > 2 is a constant and BIB002 recommended the choice a = 3.7. The MCP penalty function is given by where a > 1 is a constant. Figure 2 depicts the two penalty functions. As cross-validation for Lasso aims for prediction accuracy, it tends to select a somewhat smaller tuning parameter (i.e., less regulation). The resulted model size hence is usually larger than the true model size. In the fixed p setting, BIB006 proved that with a positive probability cross-validation leads to a tuning parameter that would yield an over-fitted model. Recent research has shown that when nonconvex penalized regression is combined with some modified BIC-type criterion, the underlying model can be identified with probability approaching one under appropriate regularity conditions. Several useful results were obtained in the low-dimensional setting. For example, effective Bayesian information criterion (BIC) type criterion for tuning parameter selection for nonconvex penalized regression was investigated in BIB006 for fixed p and Wang, Li and Leng (2009) for diverging p (but p < n). 
BIB007 considered Akaike information criterion (AIC) and BIC type criteria based on the degrees of freedom of the Lasso. Also in the fixed $p$ setting, a generalized information criterion encompassing AIC and BIC has been studied. That work revealed that the BIC-type selector enables consistent identification of the true model and that the AIC-type selector is asymptotically loss efficient. In the rest of this section, we review several modified BIC-type criteria in the high-dimensional setup ($p \\gg n$) for tuning parameter selection with the goal of support recovery.",
"role": "assistant"
}
] |
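The two computational ingredients this record keeps returning to - cyclic coordinate descent with soft-thresholding for the Lasso, and a BIC-type criterion that charges each candidate tuning parameter for its degrees of freedom (the number of nonzero coefficients, per the SURE result cited in the abstracts) - can be sketched in a few lines of NumPy. This is an illustration only: the simulated data, the lambda grid, and the iteration budget are placeholder choices, not values from the survey.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Closed-form minimizer of 0.5*(b - z)**2 + gamma*|b|.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for (1/(2n))*||y - Xb||^2 + lam*||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    curv = (X ** 2).sum(axis=0) / n   # per-coordinate curvature x_j'x_j / n
    r = y.copy()                      # residual for b = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]       # drop coordinate j from the fit
            zj = X[:, j] @ r / n      # partial correlation with the residual
            b[j] = soft_threshold(zj, lam) / curv[j]
            r -= X[:, j] * b[j]       # restore residual with the updated b[j]
    return b

def bic_select(X, y, lambdas):
    # Score each lambda by n*log(RSS/n) + df*log(n), where df is the
    # number of nonzero coefficients (unbiased df estimate for the lasso).
    n = X.shape[0]
    best = None
    for lam in lambdas:
        b = lasso_cd(X, y, lam)
        rss = float(((y - X @ b) ** 2).sum())
        bic = n * np.log(rss / n) + np.count_nonzero(b) * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, lam, b)
    return best[1], best[2]

rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:5] = 2.0                        # sparse ground truth
y = X @ beta0 + rng.standard_normal(n)
lam_hat, b_hat = bic_select(X, y, np.geomspace(1.0, 0.01, 30))
print("selected lambda:", round(lam_hat, 4), "support:", np.flatnonzero(b_hat))
```

Each coordinate update is the exact minimizer of its one-dimensional subproblem, which is why no step size appears; the same skeleton extends to SCAD or MCP by swapping the thresholding rule.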
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.17725 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nfirst , a person must have symptoms severe enough for medical care ( multiplier 1 ) .\nsecond , the physician must collect patients specimens ( multiplier 2 ) and forward them for testing by bacterial culture ( multiplier 3 ) .\nthird , the sample test result must be positive ( multiplier 4 ) , and the confirmed case must be reported ( multiplier 5 ) ( 2,7,8 ) . to obtain multiplier 1\n, we conducted a 12-month population - based household survey during march 1 , 2012february 28 , 2013 ( approved by the ethics committee of guangdong cdc ) .\nrespondents were randomly selected from 4 districts in western , eastern , and central guangdong province .\nwe used a standard questionnaire to collect information about diarrhea in the previous 4 weeks .\nthe incidence rate of diarrhea was 0.1081 ( 95% ci 0.10040.1158 ) episodes / person - year ; 38.6% of the household survey respondents with diarrhea sought medical care .\nmultipliers 2 and 3 were based on data from sentinel hospitals and comprised the overall number of diarrhea cases , samples collected , and samples submitted for culture during the year .\na total of 75,583 ( 45.3% ) samples of 166,729 registered diarrhea cases in the sentinel hospitals were collected , of which 22,577 ( 29.9% ) were tested .\nlaboratories of sentinel hospitals cultured samples for salmonella in accordance with standard protocol provided by the national reference laboratory by using macconkey agar as plating medium . according to a proficiency testing program ,\nthe numbers of salmonella isolates identified and reported to nndrs as nts infectious diarrhea by all sentinel hospitals yielded the proportion of cases reported ( 648/1,061 , 61.1% ) ( multiplier 5 ) .\nbased passive surveillance system , multiplier 1 was the same as for active surveillance . according to a comparison with samples from the submission proportion in a survey of physician - diagnosed diarrhea in guangdong province during 2009 ( mann - whitney test , p = 0.246 ) ( 11 ) , and a comparison between medical institutions that charged and did not charge for testing ( kolmogorov - smirnov test , p = 0.837 ) , the proportion of samples submitted and tested from active surveillance were also used as estimates of passive surveillance .\nthe average test sensitivity of sentinel laboratories before active surveillance began was used as an estimate of all medical institutions ( i.e. , the sensitivity of passive surveillance [ 48.2% ] ) .\nusing numbers of salmonella isolates in guangdong province from laboratory data , and number of reported nts cases by all medical institutions , we determined the proportion of reported nts was 9.6% ( 991/10,360 ) .\nthus , for each reported nts case under passive surveillance , 414.8 cases actually occurred .\nmultipliers of 5 age groups also were presented ( table 1 ) . 
to generate a more robust estimate , we conducted uncertainty and sensitivity analyses ( online technical appendix , http://wwwnc.cdc.gov/eid/article/22/4/15-1372-techapp1.pdf ) on passive surveillance data using monte carlo simulation ( @risk 6.0 ; palisade , ithaca , ny , usa ) ( 12 ) .\nwe used probability distributions to describe the uncertainty of proportions and a negative binomial distribution to estimate the number of cases .\nthe uncertainty analysis model predicted a 411.9 ( 95% ci 308.4 - 592.7 ) overall multiplier and estimated that 408,499 ( 95% ci 302,899 - 591,901 ) salmonella cases occurred per year when the overall multiplier was applied to the 991 reported nts cases , resulting in 391.6 ( 95% ci 290.3 - 567.4 ) cases/100,000 persons in 2012 .\nthe rank correlation of various factors in the model showed that patients seeking medical care provided the highest uncertainty in the overall estimate ( influence rate 96% ) ( figure ) .\nfigure caption : rank correlations for the total number of nontyphoidal salmonella cases in the population ( tornado diagram ) , guangdong province , china , 2012 .\nour estimated nts incidence was lower than the incidence in china as determined from a literature review ( 626.5 cases/100,000 persons ) ( 13 ) but close to that in the united states ( 352.1 cases/100,000 persons ) ( 3 ) .\nhowever , incidences for persons < 5 years of age and 5 - 24 years of age in our study were higher than those for persons in china and the united states , highlighting that salmonella represents a major health problem in guangdong province , especially among younger persons .\nour estimated active surveillance rate ( 35.8 ) of nts infections per reported case is similar to estimates in the united states ( 38.6 and 39 ) ( 2,10 ) but different from those for england ( 3.2 ) , jordan ( 278 ) , and japan ( 63 ) ( 7,8,14 ) .\nsuch differences might be due to differences in methods used and to actual differences in salmonella infections .\nwith fewer missing cases and less underestimation , active surveillance has lower overall multipliers than passive surveillance , indicating smaller surveillance artifacts and a more accurate incidence estimate , and presents remarkable advantages over passive surveillance .\nthe estimate for active surveillance also showed that if we seek to reduce uncertainty in the overall estimate , we should first focus on encouraging patients to seek medical care .\nour study provides policymakers in china with a reference for the importance of salmonella incidence and calls for balanced surveillance of both foodborne infections and foods and for enlarging active surveillance scales .\nmore surveillance guidelines need to be developed to help physicians identify timing of sampling , tests , and performance .\nlaws requiring reporting of foodborne diseases and pathogens need to be enacted to increase the quantity and quality of reporting .\nthe result suggests that to increase care seeking and sample submission , government health insurance schemes should be further developed to cover diagnostic tests and treatments of diseases of public health significance .\nappendix : uncertainty and sensitivity analysis and study limitations for a study of nontyphoidal salmonella infection , guangdong province , china , 2012 .",
"role": "user"
},
{
"content": "we used active and passive surveillance to estimate nontyphoidal salmonella ( nts ) infection during 2012 in guangdong province , china . under passive surveillance , for every reported nts infection , an estimated 414.8 cases occurred annually . under active surveillance , an estimated 35.8 cases occurred . \n active surveillance provides remarkable advantages in incidence estimate .",
"role": "assistant"
}
] |
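The multiplier chain described in this record invites a small worked example: the overall under-reporting multiplier is the reciprocal of the product of the step proportions, and the paper's Monte Carlo uncertainty analysis can be mimicked by drawing each proportion from a beta distribution built from the counts quoted in the text. The beta parameterization and the two assumed denominators (for care seeking and test sensitivity, where only percentages are given) are our illustrative choices, not the paper's @risk model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                       # Monte Carlo draws

def prop(successes, failures):
    # Beta(s+1, f+1): uncertainty of a proportion under a uniform prior.
    return rng.beta(successes + 1, failures + 1, size=N)

# Step proportions of the passive-surveillance reporting chain, using the
# counts quoted in the text; denominators marked "assumed" are not given
# in the paper and are placeholders for illustration.
p_seek    = prop(386, 614)                  # 38.6% sought care (assumed n = 1000)
p_collect = prop(75_583, 166_729 - 75_583)  # samples collected from cases
p_culture = prop(22_577, 75_583 - 22_577)   # collected samples cultured
p_detect  = prop(482, 518)                  # 48.2% sensitivity (assumed n = 1000)
p_report  = prop(991, 10_360 - 991)         # 9.6% of isolates reported

multiplier = 1.0 / (p_seek * p_collect * p_culture * p_detect * p_report)
cases = 991 * multiplier                    # scale up the reported NTS cases

lo, hi = np.percentile(multiplier, [2.5, 97.5])
print(f"overall multiplier: {np.median(multiplier):.1f} (95% CI {lo:.1f}-{hi:.1f})")
print(f"estimated cases per year: {np.median(cases):,.0f}")
```

With these inputs the median multiplier lands near the 414.8 figure reported above, and the wide interval on care seeking dominates the spread, matching the tornado-diagram finding.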
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.7726 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nwegener 's granulomatosis is an uncommon multi - organ disease first categorized as a distinct syndrome by friedreich wegener in 1936 [ 1 - 3 ] the hallmarks of this potentially fatal disorder are necrotizing granulomatous inflammation involving the upper and lower respiratory tract , glomerulonephritis , and vasculitis [ 3 - 6 ] .\nthe limited form of wegener 's granulomatosis runs an indolent course whereas the disseminated disease has a rapid progressive course leading to life - threatening multi - organ failure [ 2,7 - 10 ] . of the 2 types of wegener 's granulomatosis ,\npatients with the generalized disease are known to have shorter life expectancy than those presenting with the limited disease .\nwegener 's granulomatosis has an insidious onset and usually develops over a period of time with the mean period from onset of symptoms to diagnosis ranging from 4.7 to 15 months . without treatment\nit is invariably fatal and most patients do not survive more than a year after diagnosis .\ndelay in the diagnosis of wegener 's granulomatosis is attributed mostly to the nonspecific presenting signs and symptoms associated with the early phase of the disease .\nthe most characteristic oral lesion is hyperplastic gingivitis , which is typically red to purple with many petechiae ( strawberry gingivitis ) .\nthese lesions may remain localized in the oral cavity for unusually long periods of time before multi - organ involvement occurs .\ntherefore , timely recognition of this often overlooked oral finding can help to establish an early diagnosis of this disease .\nmanagement with appropriate therapy produces a good response in most cases , with only occasional relapses .\na 50 year old male was referred to the oral medicine clinic at the department of oral pathology , oral medicine and peridodontology , faculty of dentistry , university of malaya for management of unusual gingival lesions .\naccordingly , the patient first attended the primary dental care unit here for a routine dental check - up . during the course of intraoral examination\n, large lobulated purplish - red swellings were found affecting the labial gingival mucosa extending from the distal of the right maxillary first premolar to the distal of the left maxillary central incisor ( figure 1a ) .\nthe patient was unaware of the onset of these swellings , and there were no associated symptoms of pain or bleeding .\nhe was afebrile at the time of examination and his medical history was otherwise unremarkable .\nthe present case consists of large lobulated , purplish - red swellings involving the labial gingival mucosa and extending from the distal of the right maxillary first premolar to the distal of the left maxillary central incisor .\na decision was made to perform an incisional biopsy on the gingival growth under local anaesthesia .\nthe specimen obtained was submitted for routine processing followed by staining with haematoxylin - eosin , periodic acid schiff , grocott methenamine silver and ziehl - neelsen .\nmicroscopic examination of the lesional tissues showed covering parakeratinized stratified squamous epithelium with an irregular lobular surface and exhibiting pseudoepitheliomatous hyperplasia ( figure 2a - b ) . 
in the underlying connective tissues , a granulomatous inflammatory response was observed .\na diffuse mixed inflammatory cell infiltrate comprising mostly neutrophils , which formed subepithelial abscesses ( figure 3 ) , and smaller numbers of eosinophils , plasma cells and lymphocytes was present .\nscattered multinucleated giant cells , some resembling langhans - type giant cells with a horse - shoe arrangement of their nuclei , were occasionally seen ( figure 4a - b ) .\nspecial stains were negative for fungi and mycobacterial bacilli .\nbased on these histological findings and in correlation with the clinical presentation of strawberry gingivitis , a diagnosis of wegener 's granulomatosis was made .\nfigure caption : incisional biopsy specimen of the present case shows pseudoepitheliomatous hyperplasia ( peh ) ( arrows ) and a mixed inflammatory infiltrate with abundant neutrophils and eosinophils in the underlying connective tissues ( a , h&e ; b , pas ; original magnification x 25 ) .\nfigure caption : scattered multinucleated giant cells ( mgc ) ( arrows ) are present in the lesional connective tissue area , occurring side by side with the mixed inflammatory cell infiltrate ( a , h&e ; b , pas ; original magnification x 200 ) .\ninvestigations yielded normal indices for haemoglobin , complete blood count , erythrocyte sedimentation rate and urine analysis .\nchest x - rays were also normal .\na decision was taken to treat the patient with prednisolone 60 mg daily and cyclophosphamide 100 mg daily .\nat one and three weeks post - treatment , a dental evaluation showed that the gingival swellings had completely resolved ( figure 5a - b ) .\nfigure caption : a - b . clinical review of the present case shows regression of the gingival lesions at one week ( a ) and complete resolution at three weeks ( b ) .\nthe american college of rheumatology ( acr ) recommended that the diagnosis of wegener 's granulomatosis can be made if two of the following criteria are fulfilled : 1 ) ulcerative lesions in oral mucosa or nasal bleeding or inflammation , 2 ) nodules , fixed infiltrates or cavities in chest radiograph , 3 ) abnormal urinary sediment and 4 ) granulomatous inflammation on biopsy .\nin the present case , strawberry gingivitis and demonstration of granulomatous inflammation in the gingival biopsy supported the diagnosis of this case as wegener 's granulomatosis .\nhowever , there was no history of epistaxis nor pulmonary infiltrates or abnormal urine findings in our patient .\nfurthermore , his routine blood analysis was also noncontributory .\nin the absence of other systemic findings , the case was regarded as the limited form of wegener 's granulomatosis .\nthe majority of gingival swellings , however , represent reactive hyperplasias due to plaque - related inflammatory gingival disease .\ndrug - induced gingival hyperplasia as encountered in patients on dilantin ( figure 6a ) , cyclosporine or nifedipine therapy forms the other clinically important group of gingival entities .\nvascular lesions that may occur in the gingiva include haemangiomas , pyogenic granulomas ( figure 6b ) , peripheral giant cell granulomas and the more ominous entities such as kaposi 's sarcoma .\ntherefore histological verification supported by clinical and other investigations is necessary to differentiate these lesions from strawberry gingivitis .\na case of strawberry gingivitis presenting as the first sign of wegener 's granulomatosis is described .\nthe clinical lesson from this case report is that early recognition of the characteristic presentation of the gingival lesions of this potentially fatal disorder led to the timely management of the disease .",
"role": "user"
},
{
"content": "wegener 's granulomatosis is a rare multi - system disease characterized by the classic triad of necrotizing granulomas affecting the upper and lower respiratory tracts , disseminated vasculitis and glomerulonephritis . \n oral lesions as a presenting feature are only encountered in 2% of these cases . \n hyperplastic gingival lesions or strawberry gingivitis , is a characteristic sign of wegener 's granulomatosis . \n the latter consists of reddish - purple exophytic gingival swellings with petechial haemorrhages thus resembling strawberries . \n recognition of this feature is of utmost importance for timely diagnosis and definitive management of this potentially fatal disease . \n a case of strawberry gingivitis as the first presenting sign of wegener 's granulomatosis affecting a 50-year - old malay male is reported here . \n the differential diagnosis of red lesions that may present in the gingiva is discussed .",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1278 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nIt has been an open research challenge for developing an end-to-end multi-domain task-oriented dialogue system, in which a human can converse with the dialogue agent to complete tasks in more than one domain.\nFirst, tracking belief states of multi-domain dialogues is difficult as the dialogue agent must obtain the complete belief states from all relevant domains, each of which can have shared slots common among domains as well as unique slots specifically for the domain only.\nSecond, the dialogue agent must also process various types of information, including contextual information from dialogue context, decoded dialogue states of current dialogue turn, and queried results from a knowledge base, to semantically shape context-aware and task-specific responses to human.\nTo address these challenges, we propose an end-to-end neural architecture for task-oriented dialogues in multiple domains.\nWe propose a novel Multi-level Neural Belief Tracker which tracks the dialogue belief states by learning signals at both slot and domain level independently.\nThe representations are combined in a Late Fusion approach to form joint feature vectors of (domain, slot) pairs.\nFollowing recent work in end-to-end dialogue systems, we incorporate the belief tracker with generation components to address end-to-end dialogue tasks.\nWe achieve state-of-the-art performance on the MultiWOZ2.1 benchmark with 50.91% joint goal accuracy and competitive measures in task-completion and response generation.\nIn a task-oriented dialogue system, the Dialogue State Tracking (DST) module is responsible for updating dialogue states (essentially, what the user wants) at each dialogue turn.\nThe DST supports the dialogue agent to steer the conversation towards task completion.\nAs defined by Henderson et al. (2014a) , a dialogue belief state consists of inform slots -information to query a given knowledge base or database (DB), and request slots -information to be returned to the users.\nTask-oriented dialogues can be categorized as either single-domain or multi-domain dialogues.\nIn single-domain dialogues, humans converse with the dialogue agent to complete tasks of one domain.\nIn contrast, in multi-domain dialogues, the tasks of interest can come from different domains.\nA dialogue state in a multi-domain dialogue should include all inform and request slots of corresponding domains up to the current turn.\nExamples of a single-domain dialogue and a multi-domain dialogue with annotated states after each turn can be seen in Figure 1 .\nDespite there being several efforts in developing task-oriented dialogue systems in a single domain (Wen et al., 2016a; Lei et al., 2018) , there have been limited contributions for multi-domain task-oriented dialogues.\nDeveloping end-to-end systems for multi-domain dialogues faces several challenges: (1) Belief states in multi-domain dialogues are usually larger and more complex than in single-domain, because of the diverse information from multiple domains.\nEach domain can have shared slots that are common among domains or unique slots that are not shared with any.\n(2) In an end-to-end system, the dialogue agent must incorporate information from source sequences, e.g. 
dialogue context and human utterances, as well as tracked belief states and extracted information from the knowledge base, to semantically shape a relevant response with accurate information for task completion.\nDirectly applying methods for single-domain dialogues to multi-domain dialogues is not straightforward because the belief states extend across multiple domains.\nA possible solution is to process a multi-domain dialogue N_D times for N_D domains, each time obtaining a belief state of one domain.\nHowever, this approach does not allow learning co-references in dialogues whereby users can switch from one domain to another turn by turn.\nWe propose an end-to-end dialogue system approach which explicitly tracks the dialogue states in multiple domains altogether.\nSpecifically, (1) we propose a Multi-level Neural Belief Tracker to process contextual information for both slot-level and domain-level signals independently.\nThe two levels are subsequently combined to learn multi-domain dialogue states.\nOur dialogue state tracker enables shared learning of slots common among domains as well as learning of unique slots in each domain.\n(2) we utilize multi-head attention layers (Vaswani et al., 2017) to comprehensively process various types of information: dialogue context, user utterances, belief states of both inform and request slots, and DB query results.\nThe multi-head structure allows the model to independently attend to the features over multiple representation sub-spaces; and (3) we combine all components to create a dialogue system from state tracking to response generation.\nThe system can be jointly learned in an end-to-end manner.\nOur end-to-end dialogue system utilizes supervision signals of dialogue states and output responses without using system action annotation.\nTo comprehensively validate our method, we compare our models with baselines in end-to-end, DST, and context-to-text generation settings.\nWe achieve the state-of-the-art performance in DST, task completion, and response generation in the MultiWOZ2.1 corpus (Eric et al., 2019) as compared to other baselines in similar settings.\nIn the context-to-text generation setting that allows supervision of dialogue acts, our models can achieve competitive measures on the Inform and BLEU metrics.\nIn this work, we proposed an end-to-end dialogue system with a novel Multi-level Neural Belief Tracker.\nOur DST module can track complex belief states of multiple domains and output more accurate dialogue states.\nThe DST is combined with an attention-based generation module to generate dialogue responses.\nEvaluated on the large-scale multi-domain dialogue benchmark MultiWOZ2.1, our models achieve the state-of-the-art performance in DST and competitive measures in task completion and response generation.\nFigure 3: Example dialogue with the input system response S_{t-1} and current user utterance U_t, and the output belief state BS_t and system response S_t. Compared with TSCP (Row 3), our dialogue state and response (Last Row) are more correct and closer to the ground truth (Row 2).\nVisualization of attention to the user utterance sequence at slot-level (lower right) and domain-level (upper right) is also included.\nMore red denotes a higher attention score between domain or slot representation and token representation.\nBest viewed in color.",
"role": "user"
},
{
"content": "We proposed an end-to-end dialogue system with a novel multi-level dialogue state tracker and achieved consistent performance on MultiWOZ2.1 in state tracking, task completion, and response generation performance.",
"role": "assistant"
}
] |
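The "multi-level" tracker in this record - slot-level and domain-level signals attended over the utterance independently, then late-fused into (domain, slot) pair vectors - can be sketched with plain NumPy. The dimensions, the single-head attention, and concatenation as the fusion rule are simplifying assumptions for illustration; the paper itself uses learned multi-head attention layers.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(query, tokens):
    # Single-head scaled dot-product attention of one query vector over
    # the encoded utterance tokens; returns the attended summary.
    w = softmax(tokens @ query / np.sqrt(query.shape[-1]))
    return w @ tokens

rng = np.random.default_rng(0)
d = 16
tokens = rng.standard_normal((12, d))    # encoded utterance, 12 tokens

domains = {name: rng.standard_normal(d) for name in ("hotel", "train")}
slots = {name: rng.standard_normal(d) for name in ("area", "price", "day")}

# Slot-level and domain-level signals are computed independently: each
# level attends over the utterance on its own, and a (domain, slot) pair
# is then formed by late fusion - here simple concatenation - to drive
# per-pair belief state decoding downstream.
pair_reps = {}
for dn, dvec in domains.items():
    d_ctx = attend(dvec, tokens)
    for sn, svec in slots.items():
        s_ctx = attend(svec, tokens)
        pair_reps[(dn, sn)] = np.concatenate([d_ctx, s_ctx])

print(len(pair_reps), "domain-slot pairs; fused dim =",
      pair_reps[("hotel", "area")].shape[0])
```

Because the domain and slot queries are separate, slots shared across domains reuse one slot representation while each domain contributes its own context, which is the shared-versus-unique-slot behavior the abstract describes.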
science.scitldr_aic | science.scitldr_aic.907 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nTraining with larger number of parameters while keeping fast iterations is an increasingly\n\nadopted strategy and trend for developing better performing Deep Neural\n\nNetwork (DNN) models.\nThis necessitates increased memory footprint and\n\ncomputational requirements for training.\nHere we introduce a novel methodology\n\nfor training deep neural networks using 8-bit floating point (FP8) numbers.\n\nReduced bit precision allows for a larger effective memory and increased computational\n\nspeed.\nWe name this method Shifted and Squeezed FP8 (S2FP8).\nWe\n\nshow that, unlike previous 8-bit precision training methods, the proposed method\n\nworks out of the box for representative models: ResNet50, Transformer and NCF.\n\nThe method can maintain model accuracy without requiring fine-tuning loss scaling\n\nparameters or keeping certain layers in single precision.\nWe introduce two\n\nlearnable statistics of the DNN tensors - shifted and squeezed factors that are used\n\nto optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in\n\ninformation due to quantization.\nDeep neural networks have achieved state-of-the-art performance on a wide variety of computer vision, audio, and natural language processing (NLP) tasks.\nThis has resulted in an explosion of interest around techniques to reduce the memory footprint and energy consumption of neural network training and inference (Guo, 2018) .\nAlthough there are a number of methods to address some of these issues for inference, the most effective method for training is using reduced precision numerical formats.\nWhile 32-bit floating point (FP32) is the most common data format for neural network training, recent hardware have leveraged techniques that allow for training with 16-bit data formats (Köster et al., 2017; Micikevicius et al., 2018) .\nHowever, 8-bit precision training remains an open challenge (Johnson, 2018; Kalamkar et al., 2019) .\nCurrent FP8 training methodologies (Wang et al., 2018; require either specialized chunk-based accumulation, stochastic rounding techniques, loss scaling or maintaining some layers of the network in higher precision.\nTuning these knobs is non-intuitive and requires significant experimentation for each individual network.\nAccelerating the adoption of 8-bit data in training DNNs requires a hardware-friendly and out-ofthe-box implementation of FP8.\nDue to the reduced number of mantissa bits, 8-bit multipliers are smaller and consume less power compared to higher bit representations.\nIn this work we describe a novel 8-bit floating point (FP8) format -shifted and squeezed FP8 (S2FP8) -which has the following advantages compared to previously proposed 8-bit training methodologies:\n• S2FP8 eliminates the need for loss scaling, which requires significant tuning of the loss scale values and schedule for individual topologies • Leveraged by the forward and backward passes of model training, S2FP8 is effective in adjusting the range of gradients and also of activations and weights • S2FP8 does not require keeping the first and last layer in FP32 precision, which is needed for other approaches , however maintains the master weights and accumulations inside the matrix multipliers in FP32\nWe demonstrate across image classification, translation, and recommendation models that S2FP8 outperforms previous 8-bit approaches, and reaches the accuracy of FP32 models without any additional hyperparameter tuning.\nWe introduce a novel 8-bit floating 
point data type (S2FP8) that gives competitive performance in comparison to state-of-the-art FP32 baselines over a range of representative networks.\nS2FP8 makes use of shifted and squeezed factors to shift and rescale the range of tensors prior to truncation.\nS2FP8 allows training of neural networks with an 8-bit format while eliminating the need for loss scaling tuning and hardware-complex rounding techniques.\nIn addition, compared to existing FP8 implementations we also eliminate the restriction of maintaining the first and last layers in FP32.\nTable: Movielens 1 million (NCF) - FP32 0.666, S2FP8 0.663, ∆ 0.003, FP8 0.633.\nFigure A1: The range and precision of FP8.\nBars indicate the number density between each power of 2.\nSince FP8 has 2 mantissa bits, the density is 4 (except in the denormals), and the associated machine epsilon is 2^{-3} = 1/8.\nThe normal representable range goes from 2^{-14} to (1 − 2^{-3}) × 2^{16}, with denormals from 2^{-16} to 2^{-14}.",
"role": "user"
},
{
"content": "We propose a novel 8-bit format that eliminates the need for loss scaling, stochastic rounding, and other low precision techniques",
"role": "assistant"
}
] |
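The shift-and-squeeze idea in this record can be illustrated with a simulated quantizer: move a tensor's log-magnitudes into the representable window of a 1-5-2 FP8 grid before rounding, then invert the transform. The affine transform below, computed from the tensor's log-magnitude mean and spread, is a simplified stand-in for the paper's learnable statistics; the grid itself (exponent range -14..15, 2 mantissa bits) follows the figure note above, with denormal and saturation handling simplified.

```python
import numpy as np

E_MIN, E_MAX, M_BITS = -14, 15, 2        # 1-5-2 style FP8, per the figure note

def fp8_round(mag):
    # Round nonnegative magnitudes to the nearest point of the FP8 grid;
    # denormal spacing falls out of clipping the exponent at E_MIN.
    out = np.zeros_like(mag)
    nz = mag > 0
    e = np.clip(np.floor(np.log2(mag[nz])), E_MIN, E_MAX)
    step = 2.0 ** (e - M_BITS)           # spacing inside each binade
    out[nz] = np.round(mag[nz] / step) * step
    return np.minimum(out, (2 - 2.0 ** -M_BITS) * 2.0 ** E_MAX)  # saturate

def s2fp8(x):
    # Shift-and-squeeze in log-magnitude space, FP8-round, then invert.
    # Centering the mean log-magnitude (shift) and compressing the spread
    # into the exponent budget (squeeze) is our simplification of the
    # paper's learnable statistics, used for illustration only.
    logs = np.log2(np.abs(x[x != 0]))
    mu = logs.mean()
    spread = np.abs(logs - mu).max() + 1e-12
    alpha = min(1.0, (E_MAX - M_BITS) / spread)      # squeeze factor
    beta = -alpha * mu                               # shift factor
    t = 2.0 ** (alpha * np.log2(np.abs(x) + 1e-45) + beta)
    q = fp8_round(t)
    back = 2.0 ** ((np.log2(q + 1e-45) - beta) / alpha)
    return np.sign(x) * np.where(q > 0, back, 0.0)

g = np.random.default_rng(0).standard_normal(10_000) * 1e-6   # tiny gradients
plain = np.sign(g) * fp8_round(np.abs(g))
print(f"plain FP8 mean |error|: {np.abs(plain - g).mean():.2e}")
print(f"S2FP8-style mean |error|: {np.abs(s2fp8(g) - g).mean():.2e}")
```

On small-magnitude gradients the plain grid flushes almost everything to zero or denormals, while the shifted version keeps most of the signal - the effect the paper exploits to avoid loss scaling.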
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.66079 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Law and logic: A review from an argumentation perspective \nChapter Title: . . . ⇒ Advances(r, v) \n1. Abstract of Cited Paper (BIB001): This paper analyses legal reasoning with precedents in the setting of a formally defined dialogue game. After giving a legal-theoretical account of judicial reasoning with precedents, a formal method is proposed for representing precedents and it is discussed how such representations can be used in a formally defined dialectical protocol for dispute. The basic ideas are to represent cases as argument structures (including pro and con arguments, and the arguments for adjudicating their conflicts) and to define certain case-based reasoning moves as strategies for introducing information into a dispute. In particular, analogizing and distinguishing are conceived as elementary theory construction moves, which produce new information on the basis of an existing stock of cases. The approach also offers the possibility of using portions of precedents and of expressing criteria for determining the outcome of precedent-based disputes. \n2. Abstract of Cited Paper (BIB002): Reasoning with cases has been a primary focus of those working in AI and law who have attempted to model legal reasoning. In this paper we put forward a formal model of reasoning with cases which captures many of the insights from that previous work. We begin by stating our view of reasoning with cases as a process of constructing, evaluating and applying a theory. Central to our model is a view of the relationship between cases, rules based on cases, and the social values which justify those rules. Having given our view of these relationships, we present our formal model of them, and explain how theories can be constructed, compared and evaluated. We then show how previous work can be described in terms of our model, and discuss extensions to the basic model to accommodate particular features of previous work. We conclude by identifying some directions for future work. \n3. Abstract of Cited Paper (BIB003): This paper takes up Berman and Hafner's (1993) challenge to model legal case-based reasoning not just in terms of factual similarities and differences but also in terms of the values that are at stake. The formal framework of Prakken and Sartor (1998) is applied to examples of case-based reasoning involving values, and a method for formalising such examples is proposed. The method makes it possible to express that a case should be decided in a certain way because that advances certain values. The method also supports the comparison of conflicting precedents in terms of values, and it supports debates on the relevance of distinctions in terms of values.\n\nLiterature Review:",
"role": "user"
},
{
"content": "The antecedent of this rule might have to be derived by further reasoning. Then BIB004 defined a way to talk in the object language about sets of values advanced by a rule r (expressed by the predicate Values(r) ), and about a preference ordering on these sets. This preference ordering then generates the rule priorities needed in BIB002 to represent cases and their outcomes, with the following rule scheme. Consider the Keeble case again. The issue is whether plaintiff was the owner of the ducks: PlOwner. The relevant factors are -Plaintiff was pursuing his livelihood: PlLiving. -Plaintiff was hunting on his own land: OwnLand. -Plaintiff had not caught the animals: ¬Caught. The first two factors are pro-plaintiff while the third is pro-defendant. Then with the method of BIB002 this yields: Keeble: We next show how the rule priority k 1 > k 2 can be derived from value considerations. The values at stake are: -Certainty and avoidance of litigation (Cval). -Economic benefit for society (Eval). -Respecting property (Pval). We have the following rules on how factors promote values: Then Values(k 1 ) = {Eval, Pval} while Values(k 2 ) = {Cval}, so the decision in Keeble can be explained by preferring {Eval, Pval} over {Cval}. Now if a new case arises that is different from Keeble in terms of factors but in which the same values are considered, then the new cases can be decided as Keeble by citing its value preference. New types of a fortiori arguments are also possible. For example, any superset of {Eval, Pval} is also preferred to {Cval}. And if it can be argued that some other value V i is at least as preferred as Eval, then {V i , Pval} is also preferred to {Cval}. We note that this body of work thus in fact formalises Perelman's BIB001 view that outside mathematics the validity of arguments depends on their potential to persuade an audience. This potential is at least in part determined by the audience's value preferences. Bench-Capon & Sartor BIB003 employ a similar way to express that factor-decision rules promote values, and a similar way to derive rule preferences from the preference ordering on the sets of values they promote. But then they embed this in a method for constructing theories that explain a given set of cases. Theory construction is modelled as an adversarial process, where both sides take turns to modify the theory so that it explains the current case in the way they want. The process starts with a set of factor-value pairs and a set of cases represented in terms of factors and an outcome. Then the theory is constructed by creating rules plus rule priorities derived from value preferences.",
"role": "assistant"
}
] |
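The value-based derivation of rule priorities walked through in this record reduces to a few lines of executable logic: encode the Keeble factors, map each factor to the value it promotes, and close the stated preference of {Eval, Pval} over {Cval} under supersets to get the a fortiori move mentioned at the end. The dictionary encoding and helper names are ours, not notation from the cited papers.

```python
# Factors present in Keeble and the value each promotes when it backs a
# rule (the mapping stated in this record's discussion).
factor_value = {
    "PlLiving": "Eval",   # pursuing a livelihood -> economic benefit
    "OwnLand": "Pval",    # hunting on own land   -> respecting property
    "¬Caught": "Cval",    # animals not caught    -> certainty
}

# Rules extracted from the case: antecedent factors and conclusion.
rules = {
    "k1": {"factors": {"PlLiving", "OwnLand"}, "conclusion": "PlOwner"},
    "k2": {"factors": {"¬Caught"}, "conclusion": "¬PlOwner"},
}

def values(rule):
    """Values(r): the set of values advanced by applying rule r."""
    return {factor_value[f] for f in rules[rule]["factors"]}

# The audience's value preference: {Eval, Pval} over {Cval}. Closing it
# under supersets of the winner (and subsets of the loser) implements
# the a fortiori argument move noted at the end of the record.
preferred = [({"Eval", "Pval"}, {"Cval"})]

def prior(r1, r2):
    """Derive the rule priority r1 > r2 from the value-set preference."""
    v1, v2 = values(r1), values(r2)
    return any(v1 >= win and v2 <= lose for win, lose in preferred)

print("Values(k1) =", values("k1"), "Values(k2) =", values("k2"))
print("k1 > k2:", prior("k1", "k2"))   # True - Keeble decided for plaintiff
```

Swapping in a different audience's preference list flips which rule wins, which is exactly the audience-relative notion of argument validity the record attributes to Perelman.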
science.covid_deepset_qa | science.covid_deepset_qa.767 | [
{
"content": "Read the text and answer the question. Only include the exact text from the passage that answers the question.\n\nText: Nevertheless some questions remain and not all lessons have been successful. The national and international response demonstrates the complex link between public health, science and politics when an outbreak threatens to impact on global economies and reputations. The unprecedented measures implemented in China are a bold attempt to control the outbreak – we need to understand their effectiveness to balance costs and benefits for similar events in the future. Text: On 29 December 2019 clinicians in a hospital in Wuhan City, China noticed a clustering of cases of unusual pneumonia (with the first case identified at that time on 12 December) with an apparent link to a market that sells live fish, poultry and animals to the public. This event was reported to the World Health Organisation (WHO) on 31 December [1]. Within 4 weeks, by 26 January 2020, the causative organism had been identified as a novel coronavirus, the genome of the virus had been sequenced and published, reverse transcription polymerase chain reaction tests had been developed, the WHO R&D Blueprint had been activated to accelerate diagnostics, therapeutics and vaccine development and a candidate vaccine was ready for initial laboratory testing. Currently Chinese health authorities are building a 1000 bed hospital in Wuhan in 10 days. By 26 January also, almost 50 million people in Wuhan and neighbouring cities had effectively been placed in quarantine while the WHO had determined that the event should not yet be declared as a Public Health Emergency of International Concern (PHEIC) [2] and had recommended no specific travel restrictions. The WHO have emphasised the importance of exit screening at ports in countries showing transmission of the novel coronavirus and have provided guidance for countries implementing entry screening at airports while acknowledging that evidence for the effectiveness of entry screening is equivocal. This response is one of the swiftest, coordinated global responses to an emerging infectious disease the world has seen in modern times, but is it the appropriate response, will it be effective and is it sustainable? According to the situation report published by the WHO on 28 January 2020 [3], a total of 2798 confirmed 2019-nCoV cases have been reported globally; of these, 2761 cases were from China, including Hong Kong (8 cases), Macau (5) and Taipei (4). Thirty-seven confirmed cases have been reported outside of China in eleven countries in Europe, North America, Australia and Asia; of these 37 exported cases, 36 had a travel history from China or an epidemiological link to a case from China. Of the confirmed cases in China, 461 have been reported as severely ill, with 80 deaths to date. This outbreak and the response to it illustrate some key issues about how global preparedness and response capacity for outbreaks have evolved over almost two decades since the severe acute respiratory syndrome (SARS) epidemic of 2002/3 and what lessons have, or have not, been learned. It also raises questions about the impact these lessons have had on the way agencies and governments respond to these events and about the role of the WHO and the International Health Regulations (IHR). 
One of the critical lessons from the SARS experience was the absolute necessity to be able to coordinate the international resources that are available in an outbreak and to get them focussed on identifying priorities and solving problems. The WHO established the means to do this for SARS and it has since been further developed and integrated into global preparedness, especially after the West Africa Ebola epidemic. Organisations such as the Global Outbreak Alert and Response Network (GOARN), the Coalition for Epidemic Preparedness Innovations (CEPI), the Global Research Collaboration For Infectious Disease Preparedness (GloPID-R) and the Global Initiative on Sharing All Influenza Data (GISAID) have been supported by the WHO Research Blueprint and its Global Coordinating Mechanism to provide a forum where those with the expertise and capacity to contribute to managing new threats can come together both between and during outbreaks to develop innovative solutions to emerging problems. This global coordination has been active in the novel coronavirus outbreak. WHO's response system includes three virtual groups based on those developed for SARS to collate real-time information to inform real-time guidelines, and a first candidate vaccine is ready for laboratory testing within 4 weeks of the virus being identified. Another key factor in successfully preventing and managing emerging threats is the rapid and transparent sharing of information between countries and agencies. There was extensive criticism of China for its perceived failure to share information about the emerging SARS infection early enough in the outbreak to allow countries to prepare and respond. There were similar concerns about information sharing as Middle East Respiratory Syndrome (MERS) emerged and evolved in the Middle East in 2012, particularly in Saudi Arabia, and about the emergence of Ebola in West Africa in 2014. On this occasion information sharing seems to have been rapid and effective (while recognising that the information available in the early stages of an outbreak is always less than the global community would like). The WHO was notified of the original clustering within days and the full genomic sequence of the new virus was published less than 2 weeks after the cluster was first detected. The WHO has expressed its satisfaction with the actions of the Chinese authorities in sharing information with the WHO. Working with journalists and the media to help them understand the science and epidemiology, particularly in a fast-moving event, will improve risk communication to the public and reduce inappropriate concerns and panic. While reporting of this outbreak shows signs of the efforts of epidemiologists, infectious disease experts, national and international public health agencies and others engaging with journalists, there are also signs that this is not yet achieving its goal. For example, the public perception is that the increase in case numbers reported daily by the Chinese authorities represents a daily escalation in the epidemic while the reality is that these numbers are also the result of active, aggressive case finding in China and some of these cases are 'old' cases newly recognised as being due to the novel coronavirus. Similarly the virus is usually described by the media as 'deadly' and although this is true in the sense that it has caused deaths, the nuances of uncertain case fatality rates in the early stages of an outbreak are not being communicated. 
The current estimated case fatality rate seems to be around 3%, which is significant but not comparable to the 10% rate for SARS or 34% reported for MERS. These misperceptions are still driving public anxiety. To supplement formal reporting mechanisms between countries and with WHO (including the IHR), the use of informal mechanisms such as media and social media reports was advocated in the light of the SARS experience. There are now globally several systems that provide collated information from informal reporting including networks of experts and scanning of media and social media. These contribute to, and amplify, epidemic intelligence and are being integrated with national and international surveillance systems. The value, and the challenges, of this additional source of information has been evident in the current outbreak. The value comes from ensuring that early indications of cases beyond the initial outbreak city have been detected and can supplement the global risk assessment and monitoring of the evolution of the outbreak. The challenges lie in the volume and diversity of the information available and the relative lack of verification mechanisms, such that one of these systems (ProMed) has commented that it was becoming increasingly difficult to assimilate the information being supplied [4] and to make meaningful interpretations. Early in the outbreak it was reported that health workers had not been infected. This was reassuring because it is health workers who many times, and inadvertently, amplify transmission. Failure to wash hands between patients, for example, can result not only in autoinfection, but also in infection of patients hospitalised for other causes when they provide care. Autoinfection is not only a risk for the health worker, but also for their families and the communities in which they live, depending on the transmissibility and means of transmission. More recently infection, and at least one death, in health workers has been confirmed. Although not unexpected, this does add to the epidemiological risk. A characteristic of the SARS outbreak was the variability of transmissibility between cases and the occurrence of 'superspreading events' where a case infected significantly more contacts than the average. This was also seen with MERS in the outbreak in the Republic of Korea (RoK). In this current novel coronavirus outbreak, such superspreading events have not been documented but the epidemiology is still not clear. Confirming whether or not this is happening must be an urgent task for the Chinese investigation. Modellers have suggested reproductive rates (R_0) of 3.8 (95% confidence interval, 3.6-4.0) [5] and 2.6 (1.5-3.5) [6]; R_0 for SARS was estimated at around 3 in the absence of control measures [7]. The economic impact of major outbreaks can be substantial for the affected country. This was seen clearly in SARS, MERS in RoK and Ebola in West Africa. One analyst estimates that the current coronavirus outbreak's likely impact will range from a 0.8% cut to real GDP if the epidemic is controlled within 3 months, to a 1.9% cost to GDP if the epidemic lasts 9 months [8]. This may increase substantially in the light of the extended restrictions on movement, and therefore trade and commerce, within China. The emergence of a significant respiratory illness linked to a novel coronavirus represents a test of the global capacity to detect and manage emerging disease threats. 
Its emergence in China adds an additional dimension in the light of previous experience with SARS. The timing of the outbreak immediately before the Chinese Lunar New Year with its attendant population movements adds extra risk and urgency to the response. The rapid sharing of information in this outbreak and the speed of the coordinated response both in the country and internationally suggest that lessons have been learned from SARS that improve global capacity. The international networks and forums that now exist have facilitated the bringing together of expertise from around the world to focus research and development efforts and maximise the impact. At this early stage in the outbreak information remains incomplete and key clinical and epidemiological questions have not yet been answered, but the deficit seems to be due more to the constraints of investigating an emerging disease than to any unwillingness to engage and share information with partners. There are some indications of areas where further improvement is necessary. The global media response to the unfolding events has been relatively balanced and informed but the nuances of the evolving situation have not been critically examined in partnership with the media and as a result the public perception of the risk may be exaggerated - although it of course remains possible that the outbreak will develop in a way that matches up to the perceived risk. The lack of appreciation of the uncertainties in determining a meaningful case fatality rate and the significance of ascertainment bias at the beginning of an outbreak, along with the impact of aggressive case finding on case numbers, are examples of where understanding could be improved. This is always a challenging process when balancing the resources focussed on analysing the situation on the ground with resources directed at interpreting the information for journalists but in SARS, the R_0 was seen to decrease in response to information reaching the public and the public then adopting risk reduction actions [6]; so accurate public risk communication is critical to success. It would be helpful to find a forum where this can be explored with the media community after the event. The increase in access to early information from diverse sources including media and social media adds an important dimension to identifying and tracking new events globally and is a key part of the overall epidemic intelligence system. However, it is also a potential source of disinformation. When, as has been seen in this outbreak, the volume of information coming in exceeds any capacity to collate and analyse it and to attempt to cross-reference and verify separate items, there is a risk that the information fuels speculation and media and public concern. Again there is a fine balance between information that encourages appropriate risk avoidance actions and information that encourages inappropriate actions; however the public health is usually better served by more information rather than less. The role of a declaration of a PHEIC in managing a serious outbreak has been questioned in the light of Ebola in West Africa and in the Democratic Republic of Congo [9] and has been challenged again with this outbreak. The binary nature of a PHEIC declaration (either an event is a PHEIC or it isn't - there are no intermediate options) and the specificity of the three defined criteria for a PHEIC have caused difficulty for Emergency Committees in considering whether a given event should be a PHEIC. 
The lack of a clear understanding of what a PHEIC declaration is meant to achieve adds to the Emergency Committee's difficulties, as does the relative paucity of clinical and epidemiological answers at this stage of the investigation. In this instance the Emergency Committee were divided in coming to a conclusion but decided on balance that the current situation, although an emergency, should not as yet be declared a PHEIC [2]. As with Ebola in the DRC, there has been criticism of the WHO for this decision but, as with Ebola, it is not immediately clear what would be different in the response if a PHEIC was declared. The WHO is working on improving the way in which Emergency Committees develop their advice for the Director General but, as recommended by this Emergency Committee and the post-Ebola IHR Review Committee in 2015, the development of an intermediate alert alongside WHO's risk assessment process may be helpful. A key function of a PHEIC declaration is that it is the (only) gateway to the WHO Temporary Recommendations on possible travel and trade restrictions to limit international spread of a disease. In this case several countries globally had already implemented entry screening at airports and China had begun closing down international travel from Wuhan before the Emergency Committee had finished their deliberations. While the WHO would not, and could not, interfere with the sovereign decisions of member states, the lack of influence on travel and trade decisions could prove problematic. Alongside the speed of the response in this outbreak, we have seen dramatic changes in the scale of the response. The imposition of very extensive quarantine measures on millions of people as an attempt to break the transmission of the virus is unprecedented. We do not know whether they will be effective; indeed we do not know how we will determine if they have been effective - what end point can we measure that will provide an answer to that question? If recent suggestions that people infected with this coronavirus may be infectious while incubating or asymptomatic, and the reports that up to 5 m people left Wuhan before the travel restrictions were imposed, are confirmed, the efficacy of these control measures will be more challenged. Given the likely impact on at least the Chinese economy and probably the global economy, it will be important to understand the role and the effectiveness of public health measures on this scale for the future. However, the imposition of these dramatic measures does also raise a wider question: if there is an impact from these measures, what other countries would (or could) implement such measures? Would other countries accept the self-imposed economic damage that China has accepted to try and contain this outbreak? Is it reasonable to consider that national governments would close down public transport into and out of London, New York or Paris in the week before Christmas even if it were shown to be an effective control measure? These decisions and questions cross the interface between public health, science and politics. The response to this outbreak in China was inevitably influenced by the historical reaction to the country's response to SARS and the world's suspicion of China's lack of cooperation at that time. The current response is therefore framed within a context of not wanting to be seen to be behaving in the same way with this event. 
This may indicate another impact of the SARS (and MERS and Ebola) experience on the response to subsequent outbreaksa tendency to look at worst case scenarios and respond accordingly and a fear of 'getting it wrong'. This can deter leaders at all levels, from outbreak teams to national governments, from making judgements when all the information they would like is not available in case those judgments turn out to be wrong when the full information becomes available. In emergency response it is generally better to over-react and then scale back if necessary rather than under-react and then act too late.\nQuestion: In what year did the first SARS epidemic occur?",
"role": "user"
},
{
"content": "rapid and transparent sharing of information between countries and agencies",
"role": "assistant"
}
] |
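The case-fatality caveat in the row above is concrete arithmetic: early in an outbreak, a naive deaths/confirmed ratio is pulled in opposite directions by cases whose outcome is still open and by infections that were never detected. A minimal sketch with invented counts (none of these figures come from the row's data):

```python
# Illustrative only: two opposite biases in an early case fatality rate (CFR).
deaths, recovered, confirmed = 80, 720, 2800   # hypothetical early-outbreak counts

naive = deaths / confirmed                     # headline CFR, ~2.9%

# Delay bias: restricting to cases with a known outcome pushes the rate up.
resolved = deaths + recovered
delay_adjusted = deaths / resolved             # ~10%; open cases are ignored

# Ascertainment bias: if only a fraction of infections is ever confirmed,
# the denominator is too small and the naive CFR overstates lethality.
ascertainment = 0.2                            # assume 1 in 5 infections detected
infection_fatality = deaths / (confirmed / ascertainment)   # ~0.6%

print(f"naive={naive:.3f}  resolved-only={delay_adjusted:.3f}  "
      f"ascertainment-adjusted={infection_fatality:.4f}")
```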
science.data_reco_mcq_mc | science.data_reco_mcq_mc.1190 | [
{
"content": "Based on the dataset descriptions below, which datasets would be suitable for the following research question? Your response should be a JSON array.\n\nQuestion: We introduce a transfer learning approach for enabling question answering systems to answer questions posed against documents in new specialized domains\n\nKeywords: question answering text\n\nDataset descriptions:\nHotpotQA: HotpotQA is a question answering dataset collected on the English Wikipedia, containing about 113K crowd-sourced questions that are constructed to require the introduction paragraphs of two Wikipedia articles to answer. Each question in the dataset comes with the two gold paragraphs, as well as a list of sentences in these paragraphs that crowdworkers identify as supporting facts necessary to answer the question.\nTechQA: TECHQA is a domain-adaptation question answering dataset for the technical support domain. The TECHQA corpus highlights two real-world issues from the automated customer support domain.\nNewsQA: The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs. Documents are CNN news articles.\nGYAFC: Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style containing a total of 110K informal / formal sentence pairs. Yahoo Answers is a question answering forum, contains a large number of informal sentences and allows redistribution of data.\nSQuAD: The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text.\nQuAC: Question Answering in Context is a large-scale dataset that consists of around 14K crowdsourced Question Answering dialogs with 98K question-answer pairs in total. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text.\n\nOptions: A: HotpotQA B: TechQA C: NewsQA D: GYAFC E: SQuAD F: QuAC",
"role": "user"
},
{
"content": "[\"NewsQA\", \"SQuAD\"]",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.819 | [
{
"content": "Extract all unique entities of these types from the following clinical trial criteria: [\"Negation\", \"Observation\", \"ReferencePoint\", \"Procedure\", \"Drug\", \"Measurement\", \"Value\", \"Person\", \"Multiplier\", \"Device\", \"Qualifier\", \"Mood\", \"Temporal\", \"Condition\", \"Visit\", \"Scope\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities are found for a type, use [].\n\nText:\n\nPatients in the cardiothoracic intensive care after cardiac surgery with cardiopulmonary bypass\nAcute kidney injury, defined as increase in S-creatinine 50% or 27 mol/L\nNormal S-creatinine before surgery",
"role": "user"
},
{
"content": "{\"Condition\": [\"Acute kidney injury\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"increase in S-creatinine\", \"S-creatinine\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [\"cardiac surgery\", \"cardiopulmonary bypass\", \"surgery\"], \"Qualifier\": [], \"ReferencePoint\": [\"cardiac surgery with cardiopulmonary bypass\", \"surgery\"], \"Scope\": [], \"Temporal\": [\"before surgery\", \"after cardiac surgery with cardiopulmonary bypass\"], \"Value\": [\"Normal\", \"50% or 27 mol/L\"], \"Visit\": [\"cardiothoracic intensive care\"]}",
"role": "assistant"
}
] |
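Rows in this NER format all share one contract: a fixed set of type keys, each mapped to a list of strings. A minimal validator for that contract (the key list is copied from the prompt above; the function itself is a generic sketch, not part of the dataset tooling):

```python
import json

# Entity types taken verbatim from the prompt in the row above.
TYPES = ["Negation", "Observation", "ReferencePoint", "Procedure", "Drug",
         "Measurement", "Value", "Person", "Multiplier", "Device",
         "Qualifier", "Mood", "Temporal", "Condition", "Visit", "Scope"]

def validate_ner_output(raw: str) -> dict:
    """Parse a model answer and check it against the expected schema."""
    obj = json.loads(raw)
    missing = [t for t in TYPES if t not in obj]
    extra = [k for k in obj if k not in TYPES]
    bad = [k for k, v in obj.items()
           if not (isinstance(v, list) and all(isinstance(x, str) for x in v))]
    if missing or extra or bad:
        raise ValueError(f"missing={missing} extra={extra} non-list={bad}")
    return obj

# An answer that omits any of the 16 keys, adds others, or maps a key to a
# non-list raises ValueError; the gold answer in the row above passes.
```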
science.healthver_entailment | science.healthver_entailment.12 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: While it has been established that the virus spreads through infected people, many people want to know if they can be exposed to the virus through the food they eat.\n\nAbstract:\nAgriculture and the food sector are critical to food and nutrition security because they not only produce food but also contribute to economic empowerment by employing a large share of female and male workers, especially in developing countries.\nFood systems at all levels―globally, domestically, locally, and in the home― are expected to be highly affected by the COVID-19 crisis.\nWomen and men work as food producers, processors, and traders and will likely be impacted differently.\nShocks or crises can exacerbate or reduce gender gaps, and so can policy responses to mitigate the impact of these crises or shocks.\nWe offer some perspectives and available country examples on how the COVID-19 crisis and responses to the crisis could be a setback or offer opportunities for gender equality in the food system.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
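One property of rows in this entailment format is machine-checkable: an NEI verdict must carry no evidence, and any cited evidence is meant to be a verbatim sentence of the abstract. A small sketch of that check (generic; not part of the dataset tooling):

```python
import json

def check_entailment_output(raw: str, abstract_sentences: list[str]) -> bool:
    """Verify the verdict/evidence contract for a single answer."""
    obj = json.loads(raw)
    if obj["verdict"] not in {"SUPPORT", "CONTRADICT", "NEI"}:
        return False
    if obj["verdict"] == "NEI":
        return obj["evidence"] == []        # NEI means: no supporting sentences
    return bool(obj["evidence"]) and all(
        sentence in abstract_sentences for sentence in obj["evidence"])

# The gold answer in the row above passes: NEI with an empty evidence list.
print(check_entailment_output('{"verdict": "NEI", "evidence": []}', []))
```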
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.64124 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nATP-competitive, marine derived natural products that target the DEAD box helicase, eIF4A\n\nPaper sections:\n\n<p>Ongoing problems with resistance to cancer chemotherapies have led researchers to investigate the possibility of targeting nodes of regulation in cancer cells.1 A number of examples of these efforts have appeared in recent years including clinically approved proteasome inhibitors2 and compounds targeting HSP90 that are or have been in clinical trials.3 Another potential target in this genre is the DEAD-box RNA helicase associated with the eukaryotic initiation factor complex (eIF4F), called eIF4A.4 Recruitment of ribosomes to mRNA to initiate translation is a carefully regulated process requiring several protein complexes. The rate limiting step in this process is the formation of the eIF4F heterotrimeric complex that then recruits the 43S pre-initiation complex, which allows for 5′ UTR scanning of the mRNA.5 In many malignant cells, increased translational rates are observed and this is often due to increased levels of eIF4F proteins.6,7 Moreover, because the length and complexity of the 5′ UTR greatly influences eIF4A activity and many oncogenes contain complex 5′ UTRs, targeting this activity has been shown through genetic and chemical biological strategies to be a viable cancer therapeutic strategy.4,8–10 However, to date, this has not been clinically validated.</p><p>The Eukaryotic Initiation Factor 4A (eIF4A) is the founding member of the DEAD-box RNA helicases, structurally related to ATP-dependent enzymes involved in RNA processing and translation. 11–13 eIF4A forms the enzymatic core of the eIF4F translation-initiation complex, which unwinds secondary structures at the 5′ ends of mRNAs, permitting scanning for start codons by 43S ribosomal pre-initiation complexes.14 Natural products that interfere with eIF4F, such as silvestrol,15,16 pateamine A,17,18 and hippuristanol (Fig. 1),9,19 have potent anti-tumor activities and act by altering eIF4A's interaction with RNA.8</p><p>In an effort to discover lead compounds that inhibit eIF4A, a simple malachite green based assay was employed.20–22 In this assay, the release of inorganic phosphate (Pi) from ATP forms a phosphomolybdenum complex that interacts with malachite green, generating a green signal, which can be converted to the amount of inorganic phosphate, Pi. Although this assay is not as sensitive as some coupled assays23,24 or radioactive assays,4 it does not require deconvolution of the signal as coupled assays do (to account for potential inhibition of a coupling enzyme) and it is higher throughput and more environmentally friendly than radioactive assays. Evaluation of the statistical parameters of the assay revealed a robust signal with excellent statistical parameters (Z-factor >0.7; using EDTA as a positive control and DMSO as a negative control) when screening with 500 nM eIF4A and 250 µM ATP, indicating this procedure should give reliable data in a high-throughput context. As such, the malachite green assay was readily adapted to 384-well plates and used to screen 500 marine and terrestrial derived natural products. Moreover, because this is a high-throughput assay, a small panel of other DEAD box helicases was screened in parallel to look for eIF4A selective compounds. Our preliminary screen produced four hits (a 0.8% hit rate). After further investigation, we proceeded with two hits, elisabatin A (1) and allolaurinterol (2) (Fig. 1). 
Also, shown in Fig. 1 is the known eIF4A inhibitor hippuristanol (3). The two hits from this work, 1 and 2, were subsequently confirmed in triplicate and then advanced to an 8-point dose-response, which confirmed both compounds to be single-digit µM inhibitors of eIF4A activity (Fig. 2A and C).</p><p>After confirming compounds 1 and 2 to be bona fide eIF4A inhibitors, the mechanism of inhibition was explored using a Lineweaver-Burk analysis. To do so, four concentrations of each of the inhibitors were screened against serial dilutions of substrate (ATP) to get a series of rates. The reciprocal rates were then plotted as a function of the reciprocal substrate concentration. In this analysis, the point of intersection of the four lines informs the mechanism of inhibition. In the present case, the lines converge at the y-axis (Fig. 2B and D), which indicates an ATP-competitive mechanism of inhibition.25,26 A replot of the slopes of these lines as a function of inhibitor concentration yields a straight line that intersects the x-axis at the negative value of the Ki (Fig. 2B and D insets). The Kis for 1 and 2 are 55.04 and 3.86 µM, respectively.</p><p>During the initial screen of the natural products collection, a panel of DEAD box helicases was evaluated, but to broaden the scope of this selectivity search, a larger panel of ATP-utilizing enzymes, including the initial DEAD box helicases, was evaluated using an 8-point serial dilution dose-response. In this analysis, standard P-loop utilizing enzymes (p97, DDX3, DDX17, and DDX39A)27 were used as well as the non-P-loop chaperones (GroEL and HSPA1A).28 As shown in Fig. 3, both compounds 1 and 2 are selective for eIF4A, with most other enzymes surveyed having IC50 values greater than 100 µM. However, there was modest inhibition of DDX39A by compound 1, although eIF4A selectivity was ~20-fold. The precise reason for compound 1 being less selective relative to DDX39A is not currently understood.</p><p>Next, the effects of compounds 1 and 2 on eIF4A mediated helicase activity were measured and compared to the known eIF4A inhibitor 3. Compound 3 is an allosteric modulator that binds to the C-terminal region of eIF4A,19 whereas 1 and 2 are ATP-competitive molecules most likely binding to the ATP-binding pocket at the interface between the N-terminal and C-terminal domains.29 It was, therefore, of interest to determine if inhibition of ATP hydrolysis blocks the RNA resolving action. The assay used for this analysis uses a fluorescently labeled strand of RNA annealed to a template next to a black-hole quencher, which blocks fluorescent signaling when both strands are annealed to the template. Upon introduction of eIF4A, eIF4B, and ATP, the fluorescent strand is released from the template, dequenching the fluorophore and providing a robust signal.30 In order to prevent rebinding to the original strand a large excess of a blank template is present. All three of the compounds were used at 100 µM. Surprisingly, even at this level, compound 1 showed only modest inhibition of eIF4A (34%) relative to a DMSO control, whereas compound 2 showed near complete inhibition of the helicase activity, at a level similar to compound 3 (Fig. 4). Presently, this difference between compounds 1 and 2 remains without an explanation.</p><p>Finally, the cytotoxic activities of compounds 1 and 2 were evaluated against a lung cancer cell line (A549) and a breast cancer cell line (MDA-MB-468).
A previously reported compound, elisabatin B, which was isolated from the same organism as 1, had been reported to have cytotoxic activity against a variety of cancers in the NCI-60 panel,31 but only modest activity was reported for compound 1 against the NCI-60 panel.33 Compound 2 has been previously shown to have antibiotic activity, but had not been reported to have cytotoxic activity against cancer cell lines.32 However, given the current discovery that compounds 1 and 2 target eIF4A, it was expected that they would have cytotoxic activity. To this end, an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay was used to determine the LD50 of both compounds using an 8-point dose response (Fig. 5). Both compounds showed modest toxicity against A549 (40.2 µM and 41.4 µM) and MDA-MB-468 (35.2 µM and 31.7 µM) cell lines. These values are approximately 10-fold higher than the IC50 values. A number of factors may contribute to this difference including limited cellular penetrance or metabolism. Although these studies do not indicate eIF4A to be the only cellular target of 1 and 2, it is likely a mode of cellular action. Improvement of the pharmacology and potency of 1 and 2 will be reported at a later date.</p>",
"role": "user"
},
{
"content": "Activation of translation initiation is a common trait of cancer cells. Formation of the heterotrimeric eukaryotic initiation factor F (eIF4F) complex is the rate-limiting step in 5\\xe2\\x80\\xb2 m7GpppN cap-dependent translation. This trimeric complex includes the eIF4E cap binding protein, the eIF4G scaffolding protein, and the DEAD box RNA helicase eIF4A. eIF4A is an ATP-dependent helicase and because it is the only enzyme in the eIF4F complex, it has been shown to be a potential therapeutic target for a variety of malignancies. To this end, we have used a simple ATPase biochemical screen to survey several hundred marine and terrestrial derived natural products. Herein, we report the discovery of two natural products from marine sources, elisabatin A (1) and allolaurinterol (2), which show low \\xc2\\xb5M inhibition of eIF4A ATPase activity. Enzymological analyses revealed 1 and 2 to be ATP-competitive, and cellular evaluations showed reasonable cytotoxicity against A549 (lung cancer) and MDA-MA-468 (breast cancer) cell lines. However, only compound 2 showed potent inhibition of helicase activity congruent with its ATPase inhibitory activity.",
"role": "assistant"
}
] |
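The Lineweaver-Burk reasoning in the row above reduces to two straight-line fits: reciprocal rate against reciprocal substrate at each inhibitor level, then slope against inhibitor concentration, whose x-intercept sits at -Ki for a competitive inhibitor. A minimal sketch with simulated rates (nothing here reproduces the paper's measurements; Km, Vmax and Ki below are invented):

```python
import numpy as np

# Competitive inhibition: 1/v = (Km/Vmax)(1 + [I]/Ki)(1/[S]) + 1/Vmax,
# so the double-reciprocal slope grows linearly with [I].
Km, Vmax, Ki_true = 50.0, 1.0, 4.0           # uM, a.u./s, uM (made up)

S = np.array([25.0, 50.0, 100.0, 200.0])     # substrate (ATP) dilutions, uM
inhibitor = np.array([0.0, 2.0, 4.0, 8.0])   # four inhibitor concentrations, uM

slopes = []
for I in inhibitor:
    v = Vmax * S / (Km * (1 + I / Ki_true) + S)         # simulated rates
    slope, _ = np.polyfit(1.0 / S, 1.0 / v, 1)          # Lineweaver-Burk fit
    slopes.append(slope)

# Replot: slope vs [I] is a line crossing the x-axis at -Ki, i.e. Ki = b/m.
m, b = np.polyfit(inhibitor, slopes, 1)
print(f"estimated Ki = {b / m:.2f} uM")                 # recovers 4.00 here
```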
science.scicite_classification | science.scicite_classification.4597 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: Identity begins to take shape during adolescence, but, with the incorporation of new life experiences, is continually modified throughout adulthood (McLean, 2008).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.18034 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: A Nucleotide Sugar Transporter Involved in Glycosylation of the Toxoplasma Tissue Cyst Wall Is Required for Efficient Persistence of Bradyzoites\nArticle:\nAbstract:\nToxoplasma gondii is an intracellular parasite that transitions from acute infection to a chronic infective state in its intermediate host via encystation, which enables the parasite to evade immune detection and clearance. It is widely accepted that the tissue cyst perimeter is highly and specifically decorated with glycan modifications; however, the role of these modifications in the establishment and persistence of chronic infection has not been investigated. Here we identify and biochemically and biologically characterize a Toxoplasma nucleotide-sugar transporter( TgNST1) that is required for cyst wall glycosylation. Toxoplasma strains deleted for the TgNST1 gene( Δnst1) form cyst-like structures in vitro but no longer interact with lectins, suggesting that Δnst1 strains are deficient in the transport and use of sugars for the biosynthesis of cyst-wall structures. In vivo infection experiments demonstrate that the lack of TgNST1 activity does not detectably impact the acute( tachyzoite) stages of an infection or tropism of the parasite for the brain but that Δnst1 parasites are severely defective in persistence during the chronic stages of the infection. These results demonstrate for the first time the critical role of parasite glycoconjugates in the persistence of Toxoplasma tissue cysts.\nIntroduction:\nThe Apicomplexa are a phylum of protozoan organisms, many of which are pathogens with human and veterinary relevance. The Apicomplexan parasites share the remarkable ability to invade and propagate within host cells. Among this group, Toxoplasma gondii is an exemplar of the ability to undergo efficient transmission and persist within immunocompetent hosts without causing significant disease[1]. Furthermore, while the complex life cycle of these parasites includes sexual and asexual stages, Toxoplasma has the unusual capacity to circumvent the sexual stage and clonally disseminate through intermediate hosts[2]. Key to this clonal proliferation is Toxoplasma's ability to interconvert between a rapidly proliferating tachyzoite and a slow-dividing form, the bradyzoite, which encysts within host cells. The bradyzoite form is critical for the establishment of a chronic infection, persistence within the host and transmission to other hosts via ingestion of raw meat or other tissue from an infected animal. Under circumstances that cause immunosuppression, Toxoplasma bradyzoites that differentiate back to tachyzoites can reinitiate extensive rounds of proliferation causing significant disease[3],[4]. Toxoplasma tissue cysts appear to be nearly quiescent and much less immunogenic than tachyzoites, occupying a safe intracellular niche for the parasite. Tissue cysts can be observed in the brain of infected animals as early as 10 days postinfection[5]. While the number of cysts can remain high throughout the life of the host, some appear to occasionally rupture[6] and release parasites that could infect and encyst within neighboring host cells. In vivo tissue cysts can be 10–100 µm in diameter with a cyst wall that is 200–850 nm thick surrounding an osmiophilic matrix composed of granular material and small vesicles[7],[8]. 
Previous studies have shown that the lectins Dolichos Biflorus Agglutinin( DBA), which binds N-acetylgalactosamine( GalNAc) moieties, and succinylated Wheat Germ Agglutinin( sWGA), which binds to N-acetylglucosamine( GlcNAc) residues, effectively stain the cyst wall[9]. These lectins are commonly used as markers to identify in vitro and in vivo Toxoplasma cysts, although the specific parasite ligands bound by these lectins have not been fully characterized[5]. Previously, a mouse monoclonal antibody( MAb 73.18) was described that reacts with a cyst wall protein, CST1, by western blot and immunofluorescence[10]. A band of about the same size in Western Blots is also recognized by DBA, by sera of chronically infected animals and by a rat MAb CC2[11]. Importantly, the MAb 73.18 no longer recognizes this protein when glycans are destroyed by sodium metaperiodate treatment suggesting that the MAb 73.18 might specifically recognize a glycoepitope on CST1[10]. The identity and the role of the CST1 protein have not been described. Recent studies have begun to link cyst wall glycans to cyst development. Craver et al. identified a gene encoding a protein with homology to a proteophosphoglycan from Leishmania that, when disrupted, impairs cyst wall formation and conversion from tachyzoites to bradyzoites[12]. As yet, however, it is not known if this protein is in fact glycosylated. Despite the mounting evidence linking the cyst wall to Toxoplasma persistence, the full repertoire of cyst wall glycoconjugates and their specific roles during infection have not yet been reported. To address the role of Toxoplasma cyst wall glycans during infection, we are interested in proteins required for the early steps in glycoconjugate formation, the nucleotide-sugar transporters( NSTs). NSTs are polytopic proteins that localize to the endoplasmic reticulum( ER) /Golgi apparatus( GA) membrane and translocate nucleotide-sugars from the cytosol to the lumen of the organelle where they are transferred to the corresponding acceptors[13]. Here we show that TgNST1 is a bona fide NST that is capable of transporting UDP-GlcNAc and UDP-GalNAc across membranes. Knockout mutants and in vivo studies reveal a critical role for this transporter in the persistence of Toxoplasma tissue cysts in animals.\nDiscussion:\nHere we have identified and characterized a Toxoplasma nucleotide-sugar transporter, TgNST1, that is required for the synthesis of cyst wall glycoconjugates and whose deletion results in lower cyst numbers during chronic infection. Our results in vitro show that Δnst1 parasites are not defective in the formation of otherwise normal cyst walls, as we demonstrated that the glycosylated cyst wall protein GRA2 and the cyst wall proteins GRA9 and BPK1 are efficiently secreted and incorporated into the cyst matrix surrounding such parasites. This suggests that the glycan moieties of Toxoplasma tissue cysts( protein or otherwise) play a particular role in parasite persistence and the maintenance of a chronic infection. Our results do not address the identities of the key glycan moieties responsible for this phenotype, a complete inventory of which has not been determined for bradyzoites or the cyst wall within which they reside. We cannot exclude, therefore, the possibility that NST1 is involved in modifications involving other glycan residues not tested here.
Based on glycoconjugates identified in tachyzoites, however, there are four types of molecules that could be involved: glycosylphosphatidylinositols( GPIs), N-glycans, O-glycans and glycosphingolipids. Toxoplasma GPIs consist of a conserved core phosphatidylinositol-lipid structure linked to a glucosamine and three mannose residues. In addition, Toxoplasma GPIs can be modified by a glucose-α1-4-N-Acetylglucosamine side chain attached to the first mannose. Although we did not specifically examine GPI biosynthesis in the Δnst1 parasites, we saw no differences in the surface staining with antibodies to abundant GPI-anchored proteins like SRS9( Figures 4 and 5), arguing against a role for TgNST1 in GPI biosynthesis. Such is also suggested by the fact that GPIs are important in the pathogenesis of the acute stage of an infection with Toxoplasma[21],[22] and we saw no such phenotype with the Δnst1 parasites. While studies of the topology of GPI biosynthesis in Toxoplasma indicate that most of the GPI intermediates, including those with a side chain modification, localize to the cytosolic phase of the ER, these results don't completely rule out their lumenal synthesis and thus the requirement of a nucleotide-sugar transporter[23]. A thorough analysis of GPI structures in Δnst1 parasites compared to wild type will be needed to definitively determine if NST1 has a role in GPI biosynthesis. Data have been obtained to suggest that, although not common, N-glycan modifications of proteins play important roles in Toxoplasma motility and invasion[24]. However, Toxoplasma only possesses enzymes for the biosynthesis of truncated Glc3Man5GlcNAc2-PP-Dol and while it seems that parasites can incorporate the complete precursor Glc3Man9GlcNAc2-PP-Dol from the host into its proteins, none of these structures are further modified in the lumen of the ER/GA[25]. Given this and the fact that NSTs are not involved in the synthesis of the core of N-linked glycans in any of the many well-studied systems, ranging from yeast to mammals, it seems highly unlikely that an impairment of NST activity will affect Toxoplasma N-glycoproteins. We did not specifically address this, however, through examination of any known N-linked glycoproteins. Our data are consistent with the prediction that TgNST1 activity is required for the synthesis of O-glycans and/or glycosphingolipids, and that one or more of these molecules are responsible for proper encystation and/or immune evasion in vivo. Toxoplasma synthesizes glycosphingolipids de novo using GalNAc as a sugar precursor[26]. Our assays in yeast showed that TgNST1 is a highly active UDP-GalNAc transporter and so the synthesis of GalNAc-modified glycosphingolipids likely also will be affected by the loss of TgNST1 activity. Whether it is this or O-glycosylation that is more important for persistence of tissue cysts cannot be determined at this point but the fact that deletion of TgNST1 completely eliminates the abundant staining with lectins like DBA indicates that an entire class of structures is lost in the Δnst1 parasites. While further studies will be needed to determine the critical constituents and precise mechanisms involved, the results reported here show that a subset of sugar modifications are dispensable for the acute stages of Toxoplasma infection but are crucially important for the chronic and untreatable form of this important parasite.",
"role": "user"
},
{
"content": "The Toxoplasma tissue cyst is essential to the persistence of the parasite during the chronic infection of an immunocompetent host . While significant efforts have been made to identify molecular factors that trigger and sustain parasite encystation , the role of the glycoconjugates that decorate the cyst wall has received little attention . Here we identify and characterize a bona fide nucleotide-sugar transporter , TgNST1 , whose activity is required for the proper assembly of cyst wall glycoconjugates . We found that deletion of TgNST1 interferes with glycosylation during both the tachyzoite and bradyzoite stages of infection , and we observed substantial defects in the ability of Δnst1 parasites to maintain chronic infection . Surprisingly , Δnst1 parasites were not significantly defective in acute infection of mice , and showed wild type levels and migration rates to the brain . These results highlight the important role of cyst-wall glycosylation in parasite persistence during chronic infection , and suggest that drugs targeting nucleotide-sugar transporters and other enzymes required for glycosylation , perhaps in combination with drugs targeting other pathways , might be useful to prevent the establishment of chronic parasite infection .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.73045 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nobservations from radio to x - rays show evidence for a tidally - induced starburst in the center of m82 ( e.g. , telesco 1988 ) in fact , there is now evidence that m82 has undergone multiple episodes of intense star formation ( cf .\nmarcum & oconnell 1996 , de grijs , oconnell & gallagher 2001 ) .\n`` m82 b , '' the fossil starburst region , has exactly the properties of an _ evolved _ starburst with a similar amplitude to the active burst ( marcum & oconnell 1996 ) .\nthus , m82 is a unique starburst galaxy , since no other galaxy offers the opportunity to study two discrete starbursts at such close range ! by analogy with the _ hst _ results from the active starburst region ( oconnell et al .\n1994 , 1995 ) , we expected m82 b to have contained a complement of luminous super star clusters .\nit is possible that most of the star formation in starbursts takes place in the form of such concentrated clusters ; in m82 , we do not observe any outside the starburst regions .\nthe combination of observations of both the active and the fossil starburst sites in m82 provides a unique physical environment for the study of the stellar and dynamical evolution of star cluster systems\nbased on _ hst / wfpc2 _ observations of two adjacent fields in m82 b ( in f439w , f555w and f814w ) , we detected @xmath2 ( slightly ) evolved star clusters .\nthe clusters brighter than @xmath3 exhibit a wide range of ages , from @xmath4 myr to over 10 gyr .\nthere is a strong peak of cluster formation at @xmath1 myr , in either representation in fig .\n1 ; very few clusters are younger than 300 myr .\nour results suggest steady , continuing cluster formation at a very modest rate at early times ( @xmath0 gyr ago ) , followed by a concentrated formation episode lasting from 4001000 myr ago and a subsequent suppression of cluster formation .\nthus , it appears that the last tidal encounter between m82 and m81 @xmath5 500800 myr ago had a major impact on what was probably an otherwise normal , quiescent , disk galaxy .\nit caused a concentrated burst of star formation activity , which decreased rapidly within a few hundred myr .",
"role": "user"
},
{
"content": "using new _ hst _ imaging , we identify a large , evolved system of super star clusters in a disk region just outside the starburst core in the prototypical starburst galaxy m82 , `` m82 b. '' this region has been suspected to be a fossil starburst site in which an intense episode of star formation occurred over 100 myr ago , which is now confirmed by our derived age distribution . \n it suggests steady , continuing cluster formation at a modest rate at early times ( @xmath0 gyr ago ) , followed by a concentrated formation episode @xmath1 myr ago and more recent suppression of cluster formation . \n the peak episode coincides with independent dynamical estimates for the last tidal encounter with m81 . \n # 1_#1 _ # 1_#1 _ = # 1 1.25 in .125 in .25 in",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.3369 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: There have been hints from lab experiments that increased temperature and humidity may reduce the viability of SARS-CoV-2.\n\nAbstract:\nThe undefendable outbreak of novel coronavirus (SARS-COV-2) lead to a global health emergency due to its higher transmission rate and longer symptomatic duration, created a health surge in a short time.\nSince Nov 2019 the outbreak in China, the virus is spreading exponentially everywhere.\nThe current study focuses on the relationship between environmental parameters and the growth rate of COVID-19.\nThe statistical analysis suggests that the temperature changes retarded the growth rate and found that -6.28{degrees}C and +14.51{degrees}C temperature is the favorable range for COVID-19 growth.\nGutenberg- Richter's relationship is used to estimate the mean daily rate of exceedance of confirmed cases concerning the change in temperature.\nTemperature is the most influential parameter that reduces the growth at the rate of 13-16 cases/day with a 1{degrees}C rise in temperature.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The statistical analysis suggests that the temperature changes retarded the growth rate and found that -6.28{degrees}C and +14.51{degrees}C temperature is the favorable range for COVID-19 growth.\", \"Temperature is the most influential parameter that reduces the growth at the rate of 13-16 cases/day with a 1{degrees}C rise in temperature.\"]}",
"role": "assistant"
}
] |
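The Gutenberg-Richter borrowing in the row above is a log-linear exceedance fit, log10 N(>=m) = a - b*m, reused with daily case counts in place of earthquake magnitudes. A minimal sketch with invented counts (the fitted a and b are placeholders, not the study's estimates):

```python
import numpy as np

thresholds = np.array([50, 100, 200, 400])   # case-count thresholds ("magnitudes")
days_exceeding = np.array([60, 32, 15, 7])   # days on which cases >= threshold

slope, intercept = np.polyfit(thresholds, np.log10(days_exceeding), 1)
print(f"fit: log10 N = {intercept:.2f} + ({slope:.4f}) * m")  # slope plays -b

# Mean daily rate of exceeding, say, 300 cases over a 90-day record:
rate = 10 ** (intercept + slope * 300) / 90
print(f"estimated exceedance rate at 300 cases/day: {rate:.3f} per day")
```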
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.3335 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Network-wide reorganization of procedural memory during NREM sleep revealed by fMRI\nArticle:\nAbstract:\nSleep is necessary for the optimal consolidation of newly acquired procedural memories. However, the mechanisms by which motor memory traces develop during sleep remain controversial in humans, as this process has been mainly investigated indirectly by comparing pre- and post-sleep conditions. Here, we used functional magnetic resonance imaging and electroencephalography during sleep following motor sequence learning to investigate how newly-formed memory traces evolve dynamically over time. We provide direct evidence for transient reactivation followed by downscaling of functional connectivity in a cortically-dominant pattern formed during learning, as well as gradual reorganization of this representation toward a subcortically-dominant consolidated trace during non-rapid eye movement( NREM) sleep. Importantly, the putamen functional connectivity within the consolidated network during NREM sleep was related to overnight behavioral gains. Our results demonstrate that NREM sleep is necessary for two complementary processes: the restoration and reorganization of newly-learned information during sleep, which underlie human motor memory consolidation.\nIntroduction:\nThere is now ample evidence that sleep plays a crucial role in the consolidation of newly-acquired procedural memory, particularly for explicitly instructed sequential motor skills( Walker et al., 2002; Korman et al., 2003; Doyon and Benali, 2005; Korman et al., 2007; Debas et al., 2010). Several mechanistic hypotheses have also been proposed regarding the contribution of sleep in this memory process( see[Frankland and Bontempi, 2005; Rasch and Born, 2007; Tononi and Cirelli, 2014] for comprehensive reviews). Yet, the dynamic neural changes that drive motor memory consolidation during sleep still remain controversial( Frankland and Bontempi, 2005; Rasch and Born, 2013; Tononi and Cirelli, 2014). One pioneering sleep-dependent consolidation model, the trace reactivation hypothesis assumes that the repeated reactivation of a recently formed memory representation during sleep leads to a gradual strengthening of the learning-related connections, and thus to long-term storage of the memory trace( Rasch and Born, 2007, 2013). There is now mounting evidence in support of this hypothesis including the replay of hippocampal place cell firing( Skaggs and McNaughton, 1996; Lee and Wilson, 2002) in rodents, as well as human studies employing targeted memory reactivation paradigms using auditory or olfactory cues( Rasch et al., 2007; Cousins et al., 2014; Laventure et al., 2016), and neuroimaging studies showing the reactivation of learning-related brain regions during sleep or awake rest( Maquet et al., 2000; Rasch et al., 2007; Deuker et al., 2013; Staresina et al., 2013; Tambini and Davachi, 2013). Another model, built in part upon the trace reactivation, the systems consolidation hypothesis( Frankland and Bontempi, 2005; et al., 2005; Rasch and Born, 2013) proposes that sleep engages an active reorganization process that stabilizes the labile neural representation of a novel skill into a consolidated memory trace. For instance, a systematic transfer in memory representations from hippocampal to neocortical areas has been reported for non-procedural forms of memories( Frankland et al., 2004; Maviel et al., 2004; Frankland and Bontempi, 2005). 
On the other hand, a systemic shift from cortical( e. g., motor, parietal cortex) to subcortical regions( e. g., striatum) has been proposed to underlie the consolidation of procedural memory, and motor sequence learning in particular( Doyon and Benali, 2005; Yin et al., 2009; Debas et al., 2010; Kawai et al., 2015). Yet in humans, the systems consolidation model has only been inferred indirectly by comparing the effect of motor practice on offline gains in behavioral performance and changes in neural activity between the initial learning and retention sessions separated by either diurnal or nocturnal sleep( Walker et al., 2002; Fischer et al., 2005; Gais et al., 2007; Takashima et al., 2009; Debas et al., 2010). Thus, direct evidence in support of this hypothesis from human neuroimaging studies is lacking. Finally, an alternative and potentially complementary model, the synaptic homeostasis hypothesis( Tononi and Cirelli, 2003, 2006, 2014) proposes that local neuronal networks are potentiated and eventually become saturated during learning. In order for new information to be encoded the following day, sleep would be involved in the restoration of these local networks by downscaling the strength of synaptic connections( Tononi and Cirelli, 2003; Huber et al., 2004; Tononi and Cirelli, 2006). However, direct experimental evidence to support the synaptic homeostasis hypothesis in humans remains limited and controversial( Frank, 2012). It is thus unclear whether and how these different sleep-dependent mechanisms of memory consolidation may be reconciled and contribute to motor skill learning in humans. Here, for the first time, we used simultaneous EEG and fMRI in order to identify the relative contributions of the trace reactivation, systems consolidation, and synaptic homeostasis hypotheses to the consolidation of procedural memory in humans. Specifically, we tested the hypothesis that the memory trace of motor sequence learning involves network-wide reactivation and further reorganization into a more stable representation during non-rapid eye movement( NREM) sleep periods.\nDiscussion:\nOur findings demonstrate a gradual shift in motor memory representations following motor sequence learning; a transiently activated cortical trace is downscaled back to baseline levels and a subcortically-dominant and more interconnected trace, emerges during NREM sleep. These findings suggest that sleep supports both a homeostatic restoration of the memory trace potentiated during learning, and also actively reorganizes the memory trace at a systems-level. Specifically, our findings reveal that the ventrolateral putamen plays a central role in the emergence of the consolidated pattern during NREM sleep.",
"role": "user"
},
{
"content": "The idea that , while you sleep , you could be honing skills such as the ability to play a musical instrument may sound like science fiction . But studies have shown that sleep , in addition to being beneficial for physical and mental health , also enhances memories laid down during the day . The process by which the brain strengthens these memories is called consolidation , but exactly how this process works is unclear . Memories are thought to persist as altered connections between neurons , often referred to as memory traces . When we practice a skill , we activate the neurons encoding that skill over and over again , strengthening the connections between them . However , if this process were to continue unchecked , eventually the connections would become saturated and no further increases in strength could occur . One possible solution to this problem is that sleep enhances skill learning by downscaling connections across the brain as a whole , thereby freeing up capacity for further learning . Alternatively , sleep may reorganize an initially unstable memory trace into a more robust form with the potential to last a lifetime . To test these possibilities , Vahdat et al . asked healthy volunteers to practice a finger-tapping task while lying inside a brain scanner , and then to sleep inside that scanner for 2–3 hours . When the volunteers returned to the scanner the next morning and attempted the task again , they performed better than they had the previous night . Their brains also showed a different pattern of activity when performing the task after a night’s sleep . So what had happened overnight ? As the volunteers lay awake inside the scanner , their brains reactivated the memory trace formed during learning . However , as they entered a stage of non-dreaming sleep called non-REM sleep , this activity became weaker . At the same time , a new pattern of activity – the one that would dominate the scan the next morning – began to emerge . Whereas the post-learning activity was mainly in the brain’s outer layer , the cortex , the new pattern included other areas that are deeper within the brain . The activity of one deeper region in particular , the putamen , predicted how well the volunteers would perform the task the next day . Non-REM sleep thus strengthens memories via two complementary processes . It suppresses the initial memory trace formed during learning , and reorganizes the newly-learned information into a more stable state . These results might explain why people who are sleep-deprived often have impaired motor skills and memories . The findings also open up the possibility of enhancing newly learned skills by manipulating brain circuits during non-REM sleep .",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.55419 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nIdentification and Characterisation of a Nuclear Localisation Signal in the SMN associated protein, Gemin4\n\nPaper sections:\nIntroduction\n<p>Gemin4 was originally identified using a biochemical approach to identify additional members of the SMN/Gemin complex [1]. The deduced 1058 amino acid protein has a calculated molecular mass of 120kD but migrates as a 97kD protein by Western immunoblot analysis [1]. Alignment of the cDNA to the human genomic sequence indicates that Gemin4 is without introns. The international Radiation Hybrid mapping Consortium has localised Gemin4 to human Chromosome 17 and a variant of Gemin4, HCAP1, has further refined the location to 17p13.3, which is a loss of heterozygosity region in human hepatocellular carcinoma [2].</p><p>Gemin4 is part of a large protein complex, termed the Gemin complex, that also contains the spinal muscular atrophy (SMA) determining protein, survival motor neuron (SMN) [1]. SMA is the leading genetic cause of infant mortality and is characterised by loss of motor neurones in the spinal cord and severe degradation of voluntary smooth muscle, [3]. The core components of the Gemin-complex are Gemin2 (formally known as SMN-interacting protein-1, or SIP-1) [4], Gemin3 (also known as dp103/ddx20) [5], Gemin5 [6], Gemin6 [7], Gemin7 [8], Gemin8 [9] and Unrip [10]. The complex is closely associated with the splicing small nuclear ribonuceloproteins (U snRNPs) in the nucleus and the cytoplasm [11; 12; 13]. The U snRNPs are the subunits of the spliceosome and consist of a U snRNA, the Sm core proteins and several proteins unique to the various RNPs [14]. In the cytoplasm, SMN and Gemin2 perform essential roles in the assembly of the Sm core proteins on the U snRNA backbones [4; 12]. SMN and Gemin4 both interact specifically with the U snRNA and the Sm core proteins [4; 12; 15], suggesting they facilitate the assembly by forming a bridge between the two components [1; 16]. The Gemin complex is also involved in the nuclear import of the assembled U snRNPs through an interaction with the import receptor, Importin β [17]. However, no specific nuclear import sequence has been identified and confirmed in any of the complex members.</p><p>Here, using a novel panel of Gemin4 cDNA expression vectors, we have analysed the Gemin4 protein to identify cellular targeting domains. Based upon mapping experiments and in silico analysis, we have identified a nuclear localisation signal (NLS) in the N-terminus of Gemin4. We demonstrate that removal of this sequence results in the cytoplasmic accumulation of Gemin4 and that this sequence alone is independently sufficient to drive nuclear import of a heterologous protein, GFP.</p>\n\ncDNA Expression Constructs\n<p>Gemin4 constructs were cloned into the pEGFP (BD Bioscience), pCI-HA (Promega), and pCMV-2 (Sigma) mammalian expression vectors as described elsewhere [18], using primers listed in Table 1 and the combinations listed in Table 2. All products were amplified by PCR using commercial Gemin4 cDNA cloned into a cloning vector (RZPD; Germany) as a template. All amplified cDNA contained 5′ EcoRI and 3′ XhoI restriction sites. 
Cloned constructs, with their respective amino acids (aa's) are as follows: full-length (aa 1-1058), F1 (aa 1-370), F2 (aa 300-708), F3 (aa 607-1058), F1-2 (aa 1-708), F2-3 (aa 300-1059), G4ΔN143 (aa 144-1058), G4ΔN193 (aa 194-1058), G4ΔN246 (aa 244-1058), G4ΔN293 (aa 294-1058), G4NLS (aa 194-243), G4ΔNLS (aa 1-193, 244-1058), G4Δ194-218 and G4Δ218-243.</p>\n\nCell culture and immunohistochemistry\n<p>Human cervical carcinoma (HeLa) cells were grown in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10 % (v/v) fetal calf serum (FCS) and 1 % (w/v) each of penicillin and streptomycin in 5 % (v/v) CO2 at 37 °C. Subconfluent cells were grown on coverslips and transfected with 1 μg cDNA using Lipofectamine 2000 (Invitrogen) according to the manufacturer's recommendations and as previously reported [19]. Post-transfection, cells were incubated for 4 hours in 5 % (v/v) CO2 at 37 °C, then washed and re-fed with DMEM containing 10 % FCS and incubated for a further 24 hours. Transfected cells were washed three times with 1 x phosphate-buffered saline (PBS) and then fixed with 50% acetone / 50% methanol. Immunohistochemistry staining was carried out as previously described [19; 20; 21]. Target proteins were visualised using the following antibodies: Gemin4 (C-18; Santa Cruz), Gemin2 (3-F8; Santa Cruz), Sm core protein (Y-12; Zymed), and Actin (C-2; Santa Cruz) and secondary antibodies conjugated to either FITC (green), or TRITC (red). Cell nuclei were counterstained with 4′, 6′-diamidino-2-phenylindole (DAPI).</p>\n\nThe N-terminus of Gemin4 contains a nuclear localisation signal (NLS)\n<p>Although Gemin4 clearly localises in nuclear Cajal bodies, the mechanisms that control its nuclear import remain unclear. To determine whether Gemin4 contained a functional nuclear localisation signal (NLS), a panel of Gemin4 sub-domains was generated and transiently expressed in HeLa cells (Fig. 1). In initial experiments, full-length Gemin4 was over expressed in HeLa cells (Fig. 2). Unlike endogenous Gemin4, which is found in the cytoplasm and the nucleus, transiently expressed HA-tagged full-length Gemin4 was predominantly nuclear (Fig. 1). Similar distribution patterns were also seen with GFP and FLAG fusion tag constructs, suggesting this altered distribution is not due to the HA-tag. As all three tags are N-terminal, it is possible that the addition of any N-terminal tag alters the distribution of Gemin 4. Although HA-tagged EWS displays a similar distribution pattern when over expressed in HeLa cells, it is possible that nuclear accumulation could be a consequence of transient over expression [19]. However, although the expressed Gemin4 does not accumulate in the cytoplasm, both the FLAG and HA-tagged Gemin4 co-localised with endogenous SMN and Sm core proteins in Cajal bodies in the nucleus, demonstrating that the tags do not prevent the correct nuclear targeting of Gemin4 (data not shown).</p><p>As Gemin4 is efficiently imported into the nucleus, a panel of deletion constructs were then used to identify potential NLS's. Constructs lacking F1 (F2, F3, and F2-3) were predominantly present within the cytoplasm with Gemin4 primarily concentrated within large cytoplasmic aggregates (Fig. 1; white arrows). The formation of these aggregates is likely a result of improper localisation of Gemin4 after the NLS has been removed. Conversely, proteins containing the amino-terminal F1 domain (F1, F1-2 and FL Gemin4; Fig. 2) localised to the nucleus (FL) or to the nucleus and the cytoplasm (F1, F1-2). 
Since Gemin4 sub-constructs lacking the F1 region were not detected within the nucleus, these results suggested that the F1 domain might contain a functional nuclear localisation signal (NLS) (Fig. 1). Similar distribution patterns were seen for constructs expressing N-terminal FLAG or GFP tags (data not shown).</p><p>To delineate the putative NLS encoded within the F1 domain, HA epitope tagged Gemin4 constructs with progressively shorter N-terminal domains were transiently transfected into HeLa cells: G4ΔN143 (aa 144-1058; data not shown), G4ΔN193 (aa 194-1058; Fig. 2A), G4ΔN246 (aa 244-1058; Fig. 3A), G4ΔN293 (aa 294-1058; Fig. 2A). G4ΔN143 and G4ΔN193 displayed similar distribution patterns to full-length Gemin4 (Fig. 2A). However, G4ΔN243 and G4ΔN293 localised predominantly in cytoplasmic aggregates (Fig. 2A; white arrows), suggesting that the region between G4ΔN193 and G4ΔN243 (aa 194-243) encoded the NLS required for targeting Gemin4 to the nucleus. This localisation pattern was observed with GFP-tagged and HA-tagged constructs (Fig. 2A and data not shown).</p><p>There are three main sub-classes of NLS's: (1) Simian Virus (SV40)-like signals, which contain a stretch of basic residues; (2) bipartite signals, which consist of two clusters of basic aminoacids separated by a spacer; and (3) Mata2-like signals, which contain a KIPIK-motif [22; 23; 24; 25; 26]. Sequence analysis of Gemin4 revealed two putative nuclear localisation signals; a strong SV40-like NLS signal (Fig. 2B; bold type) and a bipartite NLS signal consisting of the potential SV-40 signal, a 15 amino acid spacer sequence containing numerous proline residues, followed by a cluster of basic residues (Fig. 2B; underlined). Consistent with our previous studies, the region between amino acids 194-243 contained these two predicted nuclear localisation signals (Fig. 2B; bold font). To examine the requirement for the predicted Gemin4 NLS, additional Gemin4 GFP-fusion constructs were generated and transiently expressed in HeLa cells (Fig. 2C). Expression of minimal NLS sequence was independently sufficient to drive nuclear import of GFP (NLS; Fig. 2C), while its removal from Gemin4 (Gemin4ΔNLS; Fig. 2C) resulted in exclusive cytoplasmic localisation of the GFP-tagged construct. Approximately 99% of transfected cells displayed a similar distribution pattern (data not shown).</p><p>To determine whether the complete bipartite import signal was required for nuclear localisation, additional Gemin4 constructs were generated that lacked either the N-terminal (G4Δ193-218) or C-terminal (G4Δ218-243) region of the identified NLS. Both of these constructs were predominantly cytoplasmic when expressed in HeLa cells (Fig. 3). These studies suggest that both elements of the bipartite signal are required for Gemin4 nuclear localisation (Fig. 3). The N-terminal amino acids preceding the NLS did not display a localisation pattern different from GFP-alone (data not shown), confirming the specificity of the NLS sequences and further supporting the N-terminal deletion analysis that identified Gemin4 aa's 194-243 as those responsible for nuclear localisation (Fig. 2 and 3). Therefore, the Gemin4 NLS is necessary and sufficient to target Gemin4 to the nucleus.</p><p>To identify the mechanisms that control Gemin4 import, various import receptors (snurportin-1, β1-karyopherin, β2-karyopherin, and α2-karyopherin) were immunoprecipitated using commercial antibodies to see if they associated with Gemin4. 
However, endogenous Gemin4 was not precipitated in a complex with any of the isolated import receptors (data not shown).</p><p>This reflects the transient nature of the nuclear import complex. In a previous report, β2-karyopherin (β2-Importin) has been identified as the import receptor that mediates the nuclear import of SMN [17]. This report did not demonstrate a complex between endogenous β2-karyopherin and SMN, and is therefore consistent with this current study. However, in the previous study an association between SMN and β2-Importin was seen using transiently over-expressed SMN with a myc tag, or in ex vivo pull down experiments using GST-tagged SMN [17].</p><p>To conclude, a nuclear localisation signal in Gemin4, encoded by amino acids 194-243, has been identified that is necessary for efficient nuclear import of Gemin4. Furthermore, this sequence is independently sufficient to mediate the nuclear import of GFP. This is the first active NLS that has been confirmed within the Gemin complex. One of the major functions performed by the SMN complex is the cytoplasmic assembly and subsequent nuclear import of the U snRNPs [1; 27]. Therefore, these studies suggest Gemin4, through the newly characterised NLS, may play a functional role in the nuclear import of the U snRNPs. Experiments are currently being performed to confirm this, and also to identify the mechanism through which the NLS mediates import.</p>",
"role": "user"
},
{
"content": "Gemin4 is a ubiquitously expressed multifunctional protein that is involved in U snRNP assembly, apoptosis, nuclear /cytoplasmic transportation, transcription, and RNAi pathways. Gemin4 is one of the core components of the Gemin-complex, which also contains survival motor neuron (SMN), the seven Gemin proteins (Gemin2\\xe2\\x80\\x938), and Unrip. Mutations in the SMN1 gene cause the autosomal recessive disorder spinal muscular atrophy (SMA). Although the functions assigned to Gemin4 predominantly occur in the nucleus, the mechanisms that mediate the nuclear import of Gemin4 remain unclear. Here, using a novel panel of Gemin4 constructs we identify a canonical nuclear import sequence (NLS) in the N-terminus of Gemin4. The Gemin4 NLS is necessary and independently sufficient to mediate nuclear import of Gemin4. This is the first functional NLS identified within the SMN-Gemin complex.",
"role": "assistant"
}
] |
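The deletion-mapping argument in the row above is interval logic: the import signal must survive every truncation that still reaches the nucleus and be destroyed by the first truncation that does not. A toy sketch using the construct boundaries and localisation calls quoted in the row (the inference code itself is generic):

```python
# Retained residue range per N-terminal deletion construct; True = nuclear.
constructs = {
    (144, 1058): True,    # G4-deltaN143 still imports
    (194, 1058): True,    # G4-deltaN193 still imports
    (244, 1058): False,   # G4-deltaN246 stays cytoplasmic
    (294, 1058): False,   # G4-deltaN293 stays cytoplasmic
}

nuclear_starts = [start for (start, _), ok in constructs.items() if ok]
cyto_starts = [start for (start, _), ok in constructs.items() if not ok]

lo = max(nuclear_starts)      # deepest cut that keeps nuclear import
hi = min(cyto_starts) - 1     # first cut that abolishes import bounds the signal
print(f"NLS maps to aa {lo}-{hi}")   # -> aa 194-243, matching the row
```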
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.20983 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nWe propose a new approximate factorization for solving linear systems with symmetric positive definite sparse matrices. In a nutshell the algorithm is to apply hierarchically block Gaussian elimination and additionally compress the fill-in. The systems that have efficient compression of the fill-in mostly arise from discretization of partial differential equations. We show that the resulting factorization can be used as an efficient preconditioner and compare the proposed approach with state-of-art direct and iterative solvers.\n\nCited Abstract(s):\n[1]: \n\n[2]: We describe an algorithm for the direct solution of systems of linear algebraic equations associated with the discretization of boundary integral equations with non-oscillatory kernels in two dimensions. The algorithm is ''fast'' in the sense that its asymptotic complexity is O(n), where n is the number of nodes in the discretization. Unlike previous fast techniques based on iterative solvers, the present algorithm directly constructs a compressed factorization of the inverse of the matrix; thus it is suitable for problems involving relatively ill-conditioned matrices, and is particularly efficient in situations involving multiple right hand sides. The performance of the scheme is illustrated with several numerical examples. rformance of the scheme is illustrated with several numerical examples. ples.\n\n[3]: We consider an algebraic representation that is useful for matrices with off-diagonal blocks of low numerical rank. A fast and stable solver for linear systems of equations in which the coefficient matrix has this representation is presented. We also present a fast algorithm to construct the hierarchically semiseparable representation in the general case.\n\n[4]: Abstract We describe an algorithm for the rapid direct solution of linear algebraic systems arising from the discretization of boundary integral equations of potential theory in two dimensions. The algorithm is combined with a scheme that adaptively rearranges the parameterization of the boundary in order to minimize the ranks of the off-diagonal blocks in the discretized operator, thus obviating the need for the user to supply a parameterization r of the boundary for which the distance ‖ r ( s ) − r ( t ) ‖ between two points on the boundary is related to their corresponding distance | s − t | in the parameter space. The algorithm has an asymptotic complexity of O ( N log 2 N ) , where N is the number of nodes in the discretization. The performance of the algorithm is illustrated with several numerical examples.\n\n[5]: In this paper we present a fast direct solver for certain classes of dense structured linear systems that works by first converting the given dense system to a larger system of block sparse equations and then uses standard sparse direct solvers. The kind of matrix structures that we consider are induced by numerical low rank in the off-diagonal blocks of the matrix and are related to the structures exploited by the fast multipole method (FMM) of Greengard and Rokhlin. The special structure that we exploit in this paper is captured by what we term the hierarchically semiseparable (HSS) representation of a matrix. 
Numerical experiments indicate that the method is probably backward stable.\n\n[6]: \n\n[7]: This article presents a fast solver for the dense \"frontal\" matrices that arise from the multifrontal sparse elimination process of 3D elliptic PDEs. The solver relies on the fact that these matrices can be efficiently represented as a hierarchically off-diagonal low-rank (HODLR) matrix. To construct the low-rank approximation of the off-diagonal blocks, we propose a new pseudo-skeleton scheme, the boundary distance low-rank approximation, that picks rows and columns based on the location of their corresponding vertices in the sparse matrix graph. We compare this new low-rank approximation method to the adaptive cross approximation (ACA) algorithm and show that it achieves better speedup, especially for unstructured meshes. Using the HODLR direct solver as a preconditioner (with a low tolerance) to the GMRES iterative scheme, we can reach machine accuracy much faster than a conventional LU solver. Numerical benchmarks are provided for frontal matrices arising from 3D finite element problems corresponding to a wide range of applications.\n\n[8]: We present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable representations (HSS). Such matrices appear in many applications, e.g., finite element methods, boundary element methods, etc. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. This work is part of a more global effort, the STRUMPACK (STRUctured Matrices PACKage) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.",
"role": "user"
},
{
"content": "Recently, so-called have attracted a lot of attention. To name few references: [5] [3] [1] [5]5 [2] . As a subclass of such solvers, @math direct solvers [6] [7] [4] have been introduced. These solvers compute approximate sparse LU-like factorization of HSS-matrices. The simplicity of the structure in comparison to the general @math structure allows for a very efficient implementation, and despite the fact that these matrices are essentially 1D- @math matrices and optimal linear complexity is not possible for the matrices that are discretization of 2D 3D PDEs the running times can be quite impressive.",
"role": "assistant"
}
] |
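The factorization described in the row above reduces, at its core, to one primitive: replacing an off-diagonal (or fill-in) block of numerically low rank by a truncated factorization. A minimal NumPy sketch of that compression step, assuming a plain truncated SVD as the compressor with a relative singular-value cutoff (the cited solvers use cheaper rank-revealing factorizations in practice):

```python
import numpy as np

def compress_block(B, tol=1e-8):
    """Replace a numerically low-rank block B by factors U, V with B ~ U @ V.T.

    Keeps only the singular values above tol relative to the largest one.
    """
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(s > tol * s[0])) if s.size else 0
    return U[:, :r] * s[:r], Vt[:r, :].T

# Toy check: a smooth kernel on well-separated point sets yields a block
# whose numerical rank is far below min(m, n), so compression is cheap.
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.linspace(5.0, 6.0, 180)[None, :]
B = 1.0 / (1.0 + (x - y) ** 2)
U, V = compress_block(B, tol=1e-10)
print(U.shape[1], np.linalg.norm(B - U @ V.T) / np.linalg.norm(B))
```

Blocks that admit such factors with small rank r are exactly the ones for which the hierarchical elimination-plus-compression strategy pays off.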
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.18926 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nOBJECTIVE: To describe the characteristics of healthcare workers (HCWs) infected with COVID-19 and to examine their sources of exposure. METHODS: A descriptive cross-sectional study using data extracted from the centralized disease notification system comprising individuals confirmed with COVID-19 in Singapore between 23 January and 17 April 2020. Occupation of HCWs was categorized into six categories. Their job nature was classified into \"frontline\" or \"back-end\" based on the frequency of direct patient contact, and source of exposure was classified as family/household, social interaction or workplace. Chi-square and median tests were used to identify differences between categorical groups and sample medians, respectively. RESULTS: A total of 88 (1.7%) HCWs were identified from 5,050 cases. Their median age was 35 years. Chinese and Indians constituted 42.0% and 31.8%, respectively, and 43.2% were foreigners. The majority (63.6%) was serving at frontlines handling patient-facing duties, 15.9% were doctors, 11.4% were nurses and 44.3% were ancillary staff. About 81.8% acquired the infection locally, of which 40.3% did not have a clearly identifiable source of exposure. Exposure from the family/household was most common (27.8%), followed by workplace (16.7%) and social interaction (15.3%). All HCWs were discharged well with no mortality; three (3.4%) were ever admitted to intensive care unit and required increased care. CONCLUSION: Healthcare workers accounted for a small proportion of COVID-19 cases in Singapore with favourable outcomes. The possibility of transmission resulting from family/household exposure and social interactions highlights the need to maintain strict vigilance and precautionary measures at all times beyond the workplace.",
"role": "user"
},
{
"content": "[\"Prevention\", \"Transmission\"]",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.10054 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nSurface Tension Mediated Conversion of Light to Work\n\nPaper sections:\n\n<p>Sunlight provides a vast resource that has spurred the development of various methods to convert photons into work: photovoltaics for conversion to electricity, solar thermal for water heating, fast growing plants to produce bio-fuels, and solar water splitting to produce hydrogen and oxygen.1 Though useful, these disparate methods are often based on complicated, capital intensive, multistage processes:2 light is collected and converted to a high energy intermediate (e.g. electrical potential, thermal loading, or chemical fuel), which is then used to run a process, such as an engine, that performs work. This multistage approach is ubiquitous and allows for a myriad of applications, but requires production, transportation, and possible storage of intermediates. Considerable effort has been devoted to improving energy collection, storage, and utilization;2,3 however, strategic simplification through the removal of the intermediates remains under-investigated and could provide reductions in capital costs. Here we report that a simple and robust solid/liquid interfacial system can convert light directly into useful work through thermal surface tension effects. This is demonstrated by the propulsion of objects on the surface of water. The simplicity of the system allows for controlled linear motion and rotational motion.</p><p>Few strategies exist for the direct conversion of light into work. Concepts such as the solar sail4 for interstellar travel and optical trapping of small particles5 rely on weak momentum transfer from photons. Harnessing the energy of photons is a far more powerful process. A few opto-thermal methods have been developed;6 for example, the Crookes Radiometer produces rotational motion in part via light-based heating of gases, but requires stringent conditions (i.e. low pressure) to function. When applied to liquids, local heating can produce thermally induced surface tension gradients. Such gradients have been shown to induce thermocapillary convective flows in oils7 and as first realized by Brochard-Wyart et. al.,8 can move silicone oil droplets on hydrophobic surfaces under precisely defined conditions or water droplets in oil filled channels.9 Nature has shown that surface tension gradients, created from chemical gradients, can move insects on the surface of water.10 Similarly, camphor chips,11 soap boats,12 and decomposing hydrogen peroxide13 have been used to produce surface tension gradients and motion in the laboratory, but all inherently rely on the supply of exhaustible chemical intermediates to function. Our goal is to couple optical heating with the capability of surface tension gradients to move objects on the surface of liquids. With this approach, a direct means of converting light into useful work is realized, and a simple strategy for remotely powering and controlling small objects is demonstrated.</p><p>We show that with a highly absorptive material capable of locally converting light energy into heat, concentrated light can be used to selectively generate thermal surface tension gradients and move objects on the surface of a liquid, such as water (Fig. 1 a-b and in greater detail in Supplemental Fig. S1). This approach combines energy collection and utilization into one material. 
Furthermore, the interfacial system allows control to be readily enacted in two ways: (i) selectively focusing light on different regions of the absorber to affect the direction of the exerted forces (Fig. 1a-b); and (ii) through device design providing specific orientations of the light absorbing material within the device. This versatility gives access to work in the form of controlled linear motion and rotational motion.</p><p>Carbon black, multiwall carbon nanotubes (MWNT), and vertically aligned carbon nanotube forests (VANTs), as shown in Fig. 3, can be used to produce responses.14 We focused on VANTs as a highly absorptive surface material to maximize heating close to the interface with the liquid. Low density VANTs, with very high surface roughness, have recently been reported to be the blackest known material, absorbing greater than 99.9% of incident visible light.15 Thermal relaxation of excited electrons converts absorbed light into heat, thus making VANTs ideal light-activated thermal switches, which can heat the surrounding liquid and maximize surface tension gradients.16</p><p>To increase their structural integrity, VANTs were embedded in a supporting polymer, Fig. 3a. Polydimethylsiloxane (PDMS) was employed for four reasons: (i) it is optically transparent, focusing absorption on the thin nanotube surface layer; (ii) it has a density similar to that of water; (iii) it strongly adheres to the VANTs; and (iv) it provides elasticity while maintaining the nanoscale roughness and macroscopic structure of the forests. The VANT-PDMS composites (Fig. 3a,b) are stable to water and solvents, are superhydrophobic17 with contact angles of >155° (Supplemental Fig. S2) and can be created with various shapes and orientations.</p><p>When subjected to focused sunlight or a near-infrared laser18 (450 mW, 785 nm Diode Laser), the VANT-PDMS composites heat dramatically. Under static conditions, water droplets can be boiled and carbonaceous material on the VANTs can be burnt in air. When placed in a liquid, this dramatic heating of the VANTs leads to significant temperature gradients as the substrate transfers energy to the surrounding liquid.</p><p>To test the ability of light to directly produce work, VANT-PDMS composites were floated on liquids and sunlight or laser light was focused on the light absorbing material. Light induced heating directly resulted in linear propulsion (Fig. 1c-d and Supplemental Movie S1) without boiling.19 To determine the mechanism by which the objects move we performed a number of control experiments. First, no response was observed when irradiating transparent PDMS objects lacking absorbing materials, Fig. 3e. Second, substantial heating of substrates was observed when irradiated.20 Third, motion was quenched upon addition of surfactant, sodium dodecyl sulfate (Fig. 2a), which cuts surface tension effects.21 Finally, because the absorbing material is only on one face of the VANT-PDMS composites (i.e. the back of the boat), absorption and heating can occur from the front, with irradiation through the transparent PDMS material. In this configuration we still obtained forward propulsion, indicating that radiation pressure from the incident light is not a dominant force. With these facts in mind, it appears clear that optically induced thermal surface tension gradients are the cause of the laser induced motion.</p><p>Surface tension based force responses are directly related to the temperature differential and the contact length of the heated interface. 
For our test system we chose to focus on the millimeter scale, where powering and controlling motion are normally difficult due to viscous effects13,22 and machining limitations. For VANT-PDMS composites of millimeter dimensions under focused laser illumination, we obtained speeds up to 8 cm/s and forces of 29 μN.23 These results are on par with reports from chemically fueled surface tension based propulsion systems.24 This force suggests a local increase in temperature of as much as 50 °C.19 In principle, the power density of the motors can be raised by increasing the length of the heated area, perhaps through the use of a multi-fin system.</p><p>Motion was not limited to the surface of water but was also demonstrated on liquids as varied as glycerol, alcohols, DMF, fluorinated solvents, salt water, and even at the boundary between immiscible liquids.25 The light-induced propulsion of the composites was quantified for water, isopropyl alcohol (IPA), and dimethyl formamide (DMF). To compare solvent effects a nIR laser was used under collimated conditions to produce consistent irradiation. Characteristic motion curves are presented in Fig. 2b. On water the VANT-PDMS composite produced an average force of 1.19±0.36 μN (error is reported as the 95% confidence interval over four samples). The composite was significantly less responsive when floating in IPA and DMF, with forces produced on the order of 0.23±0.12 μN and 0.13±0.03 μN respectively. The obtained solvent dependence on the generated force appears complicated and may be attributed to differences in the heat capacities, buoyancy, viscosity and the temperature dependence of the surface tension. These effects are currently under investigation.</p><p>To investigate the effect of light absorption on the system, we compared VANT-PDMS composites with dispersed composites of MWNT and PDMS (MWNT-PDMS) as shown in Fig. 3a-d. Larger composites (∼250 mg) were tested so as to normalize the effective absorbing area.26 Typical motion curves are shown in Fig. 3e. Incorporation of 1-5 wt. % MWNT, Fig. 3d, produces an opaque black composite that performs similarly to the VANT-PDMS composite, with forces of 1.9±0.3 μN, 2.2±0.3 μN, 2.9±0.9 μN for the 1 wt. %, 5 wt. % and VANT composites respectively. Less absorbing 0.1 wt. % MWNT-PDMS composites were gray, Fig. 3c, and produced ∼60% less force, 1.0±0.1 μN, compared to VANT-PDMS.</p><p>The simplicity of this system allows it to be easily applied to multiple forms of work: spatially controlled illumination gives controlled \"linear\" motion, while specific device design can produce rotational motion. Controlled linear motion was obtained by focusing light on different parts of the object, dictating the loci of the relevant temperature gradient and, thus, the direction of the exerted surface tension force and the motion (Fig. 4, Laser light: Supplemental Movie S2, and Sunlight: Supplemental Movie S3). This remotely controlled and powered sub-centimeter object, which is not limited by external fields,13,22 is a clear example of the utility of the technique. Our approach provides a simple, scale-independent and robust way to produce directionally controlled linear motion. Objects ranging in size from the millimeter (milligrams) to tens of centimeters (tens of grams) were controllably moved with laser or concentrated sunlight. 
This scale independence and versatility of light sources present an advantage over other optical techniques such as optical trapping.</p><p>Rotational motion can be achieved by introducing directionality into the device design. For instance, by placing light absorbing VANTs on the clockwise face of each fin of a rotor (Fig. 5), localized heating produces surface tension forces that cause the object to rotate in the counterclockwise direction. Alternatively, flipping the object over affords clockwise rotation. As shown in Fig. 5, when placed under focused sunlight or irradiated with a near-IR laser these rotors spin at rates up to 70 rpm (Supplemental Movie S4). Built-in directionality is particularly useful for sunlight powered motion, as it is relatively difficult to focus the sun on small moving objects. In this case, Fresnel lenses can be used to focus sunlight on floating rotors, resulting in continuous rotation. Rotors can also be mounted on an axle or stiff metal wire, which may increase the utility of the rotors with potential application for simple solar powered pumps.</p><p>In summary, we have presented a simple, versatile method for the direct conversion of light energy into useful linear and rotational work expressed through the motion of objects on a liquid. Additionally, this demonstrates a solution to the challenging problem of remotely controlling the motion of small objects on water.23 It is important to note that in addition to harnessing sunlight to do work, this effect is size-independent and can be extended into the micro and macro regimes.</p>",
"role": "user"
},
{
"content": "As energy demands increase, new, more direct, energy collection and utilization processes must be explored. We present a system that intrinsically combines the absorption of sunlight with the production of useful work in the form of locomotion of objects on liquids. Focused sunlight is locally absorbed by a nanostructured composite, creating a thermal surface tension gradient and, subsequently, motion. Controlled linear motion and rotational motion are demonstrated. The system is scale independent, with remotely powered and controlled motion shown for objects in the milligram to tens of grams range.",
"role": "assistant"
}
] |
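The 29 μN force and the inferred ~50 °C local heating quoted in the sections above are mutually consistent under a simple Marangoni balance, F ≈ L · |dγ/dT| · ΔT. A back-of-envelope check, assuming a heated contact length of about 4 mm (hypothetical; the paper does not state the exact geometry) and the textbook |dγ/dT| of water:

```python
# Marangoni force balance: F = L * |dgamma/dT| * dT, solved for dT
F = 29e-6           # reported propulsion force, N
dgamma_dT = 1.5e-4  # |dgamma/dT| of water near room temperature, N/(m*K)
L = 4e-3            # assumed heated contact length, m (not given in the paper)

dT = F / (L * dgamma_dT)
print(f"implied local temperature rise: {dT:.0f} K")  # ~48 K, close to the quoted 50 degC
```

The estimate also shows why a longer heated edge (the multi-fin idea in the text) raises power: F scales linearly with the contact length L.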
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.9731 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Prediction of Drug-Target Interactions and Drug Repositioning via Network-Based Inference\nArticle:\nAbstract:\nDrug-target interaction( DTI) is the basis of drug discovery and design. It is time consuming and costly to determine DTI experimentally. Hence, it is necessary to develop computational methods for the prediction of potential DTI. Based on complex network theory, three supervised inference methods were developed here to predict DTI and used for drug repositioning, namely drug-based similarity inference( DBSI), target-based similarity inference( TBSI) and network-based inference( NBI). Among them, NBI performed best on four benchmark data sets. Then a drug-target network was created with NBI based on 12, 483 FDA-approved and experimental drug-target binary links, and some new DTIs were further predicted. In vitro assays confirmed that five old drugs, namely montelukast, diclofenac, simvastatin, ketoconazole, and itraconazole, showed polypharmacological features on estrogen receptors or dipeptidyl peptidase-IV with half maximal inhibitory or effective concentration ranged from 0. 2 to 10 µM. Moreover, simvastatin and ketoconazole showed potent antiproliferative activities on human MDA-MB-231 breast cancer cell line in MTT assays. The results indicated that these methods could be powerful tools in prediction of DTIs and drug repositioning.\nIntroduction:\nOver the past decade, the rate of new chemical entities transferred to therapeutic agents has been significantly decreased[1]. Interestingly, this phenomenon is concurrent with the dominant assumption that the goal of drug discovery is to design exquisitely selective ligands against a single target. However, this ‘one gene, one drug, one disease’ paradigm was challenged in many cases, and the concept of polypharmacology was hence proposed for those drugs acting on multiple targets rather than one target[1]. For example, serotonin and serotonergic drugs not only bind to G protein-coupled receptors( GPCRs) such as 5-hydroxytryptamine receptors 1, 2 and 4–7( 5-HT1, 2, 4–7), but also might bind to an ion channel, i. e. 5-HT3[2],[3]. Such polypharmacological features of drugs enable us to understand drug side effects or find their new uses, namely drug repositioning[4]. Some good examples are thalidomide, sildenafil, bupropion and fluoxetine[4],[5]. To date, several in silico methods have been developed to address the issues of drug-target interaction( DTI) prediction and drug repositioning[6]–[11]. The conventional methods can be either ligand-based or receptor-based. Ligand-based methods like quantitative structure-activity relationships( QSAR) and similarity search are very useful in this context. For example, Keiser et al. predicted new molecular targets for known drugs using chemical two-dimensional( 2D) structural similarity, namely similarity ensemble approach[6],[7]. Twenty-three new DTIs were confirmed and five of which were potent with Ki values<100 nM. Recently, Humberto et al. developed a multi-target QSAR( mt-QSAR) classifier and built a web server for DTI prediction[8]. Receptor-based methods like reverse docking have also been applied in drug-target( DT) binding affinity prediction, DTI prediction and drug repositioning[9]–[11]. However, those methods could not be used for targets whose three-dimensional( 3D) structures are unknown. More recently, several network-based and phenotype-based methods were developed for such purposes. Yildirim et al. 
constructed a bipartite graph composed of US Food and Drug Administration( FDA)-approved drugs and proteins linked by DT binary associations[12]. This method quantitatively showed an overabundance of ‘follow-on’ drugs. Campillos et al. identified new DTIs using side-effect similarity[13]. They tested 20 of unexpected DTIs and validated 13 ones by in vitro binding assays. Iorio et al. predicted and validated new drug modes of action and drug repositioning from transcriptional responses[14]. Recently Butte group also reported two successful examples of drug repositioning based on public gene expression data[15],[16]. Furthermore, Yamanishi et al. developed a bipartite graph learning method to predict DTI by integrating chemical and genomic spaces[17]. Though high overall predictive accuracy was obtained in Yamanishi's work, the sensitivity was anomaly low and the method was not validated experimentally. In this study, three inference methods were developed to predict new DTI: drug-based similarity inference( DBSI), target-based similarity inference( TBSI) and network-based inference( NBI), all derived from complex network theory[18]–[21]. Four benchmark data sets with known drugs targeting enzymes, ion channels, GPCRs, and nuclear receptors respectively, were used to assess the performance of the methods in comparison with literature reports. The best-performed method was then selected to create a drug-target network of FDA-approved and experimental drugs and to predict new DTIs subsequently. Some of the predictions were further validated by in vitro assays. This work would provide new powerful tools for DTI prediction and drug repositioning.\nDiscussion:\nIn this study, three supervised inference methods, i. e. DBSI, TBSI and NBI, were developed to predict new DTI. Excellent performance was obtained for these methods on four benchmark data sets, which outperformed some methods reported elsewhere[17],[28],[29]. The essential difference of the three methods is the definition of similarity. DBSI is based on chemical 2D structural similarity, and TBSI is based on genomic sequence similarity, whereas NBI is only based on DT bipartite network topology similarity( Figure 1). The worse AUC values of DBSI on the benchmark data sets indicated that the prediction based on chemical structure similarity alone was poor( Figure 2). This may be caused by the redundancy in the similarity. For example, in the enzyme data set, though chemical structure similarity can present drug similarity very accurately, similar structures without binding to enzymes should be redundant to reduce the predictive accuracy. There is a similar redundancy problem in TBSI. Although NBI is the simplest one for ignoring structural information of drugs and targets, the prediction is the most reliable( see box plot in Figure S5). And NBI only used DTI topology network similarity for inferring new potential DTI, which did not need any 3D structural information of targets and drugs. Therefore, NBI performed better than DBSI, TBSI and other reverse docking methods[10],[11]. Recently, Hansen et al. created four features from gene-drug network and built a logistic classifier for drug-gene association prediction[30]. Although high predictive performance were obtained, an inherent problem in Hansen's work is that the negative drug-gene pairs were randomly constructed( selected on the basis of unknown drug-gene associations), which easily brought noise in a logistic classifier building by the inaccurate negative sample selection. 
Yamanishi et al. predicted new DTIs by integration of chemical and genomic spaces. Reasonable AUC value was obtained, but the R values were extremely poor, only 0. 574, 0. 271, 0. 234 and 0. 148 for enzymes, ion channels, GPCRs, and nuclear receptors respectively[17], and the predicted results were not validated experimentally. Compared with those reported methods, NBI only used the simple DT association information and yielded high predictive performance( R more than 0. 9, Table S3). Chiang and Butte developed a guilt-by-association method for disease-gene association prediction and drug repositioning[31]. This method only used gene-disease linkage information. In present study, NBI takes fully advantage of the labeled and unlabeled information encoded in the full DT network topology( Figure 1), thereby simultaneously exploiting both topological and functional modularity. Usually there are two major methods for DTI prediction and drug repositioning: traditional drug discovery method, in which new drugs or hits are predicted for a certain target; and chemical biology method, where new potential targets are predicted for a given drug or chemical[17]. In this study, NBI method inherited the advantages of both methods. It can prioritize candidate drugs for a given target or prioritize candidate targets for a given drug simultaneously by personal recommendation[18],[19]. With matrix transposition, we could also prioritize new potential targets for a given drug. As shown in Figure S6, the high performance was yielded for our three methods in prediction of new candidate targets for a given drug, and NBI exhibited the highest predictive accuracy. Therefore, NBI could be a powerful tool in drug repositioning. Since NBI only utilized known DTI information, for a new drug without known target information in the training set, NBI could not predict targets for this new drug. This is a weakness of the method. However, potential targets of a new drug can be predicted by integrating DBSI, TBSI and NBI together. We are actively developing new network inference method by integrating drugs, proteins and phenotype features based on diffusion theory[32]. Our methods could also be used in prediction of other biological networks, such as protein-protein interactions, drug-gene, gene-disease, and drug-disease networks, by integrating additional similarity measures among diseases, genes, and drugs[33]–[35]. Montelukast, antagonist of cysteinyl leukotriene 1 receptor, was marketed in the US and other countries by Merck with the brand name Singulair®. Although Langlois et al. reported that montelukast regulates eosinophil protease activity through a leukotriene-independent mechanism recently[36], there is no report about its binding with DPP-IV so far. Herein, montelukast was predicted and validated as a new DPP-IV inhibitor with IC50 = 9. 79 µM. Recently, Faul et al. found that oral administration of montelukast could change the weak level of Insulin in small scale clinical experiment[37]. Therefore, it is reasonable to deduce that montelukast might have new potential indication in anti-diabetic treatment via inhibiting DPP-IV( Figure 7). Comparing the structural similarity between montelukast and sitagliptin, a classical DPP-IV inhibitor, the Tanimoto similarity based on MACCS keys[38] was only 0. 38, which confirmed that NBI could successfully predict novel structural skeleton molecules for a given target. 
Diclofenac is an acetic acid nonsteroidal antiinflammatory drug( NSAID) with analgesic and antipyretic properties, and widely used to treat pain, dysmenorrhea, ocular inflammation, and so on. In the past decades, the anti-inflammatory effects of diclofenac were thought to be linked with inhibition of both leukocyte migration and cyclooxygenase( COX-1 and COX-2), leading to the peripheral inhibition of prostaglandin synthesis[23]. Herein, we reported that diclofenac targeted ERα and ERβ with IC50 values of 7. 59 and 2. 32 µM, respectively for the first time( Figure 4). There were a few similar examples to show NSAIDs targeting nuclear receptors recently. Zhou et al. reported that sulindac could induce apoptosis by binding to retinoid X receptor α( RXRα)[39], while Lehmann et al. found that indomethacin could activate the peroxisome proliferator-activated receptors α and β[40]. There were also several reports to show that oral administration of ER ligands had neuroprotective and anti-inflammatory effects[41]. Since ERα and ERβ are widely expressed in several tissues including central nervous system, cardiovascular system, gastrointestinal system, and immune system[42], therefore the anti-inflammatory and neuroprotective effects of diclofenac might be resulted from the novel biological pathways of inhibition to ERα and ERβ( Figure 7). Simvastatin, the methylated form of lovastatin, is an antilipemic agent which inhibits HMG-CoA reductase[23]. Here we identified that simvastatin could inhibit ERβ with IC50 = 3. 12 µM. There is some evidence to support our finding. For example, Wolozin et al. reported that simvastatin was associated with a strong reduction in the incidence of dementia, Alzheimer's disease( AD) and Parkinson's disease( PD)[43],[44]; several studies proved that estrogen treatment was effective in many neurodegenerative disease models[41],[45]; and statins were also found to have inhibitory effects on the proliferation of human breast cancer cells[46]. Therefore, the strong reduction in the incidence of dementia and PD and the inhibitory effects of the proliferation of human breast cancer cells could be explained by the potential novel biological pathway of inhibition to ERβ by simvastatin in Figure 7. Ketoconazole and Itraconazole, as 14-α demethylase( CYP51A1) inhibitors, are synthetic antifungal drugs[23] and could be used to treat refractory bone pain and neurologic injury in patients with advanced metastatic prostate cancer[26],[27]. In this study, both drugs were identified to bind to ERα and ERβ with IC50 or EC50 value less than 1 µM( Figure 5). 14-α demethylase and ER did not share any common features in structures or functions, but they were deduced to have the same ligands by NBI method. The data showed that the therapeutic effect of ketoconazole in prostate cancer could be explained by the selective inhibition of ERβ by ketoconazole. In last decades, tissue- or subtype-selective ER modulators( SERM) showed great advantages in clinic due to less adverse side effects[47],[48]. As shown in Figure 4, ketoconazole selectively inhibit ERβ with IC50 = 0. 79 µM, and it did not show any antagonistic or agonistic activity to ERα. However, itraconazole was a dual-profile compound, which showed agonistic activity on ERα but a higher antagonistic activity on ERβ than the classical anti-breast cancer drug tamoxifen( Figure 5). Both ketoconazole and itraconazole could serve as leads for the discovery of novel oral SERM.",
"role": "user"
},
{
"content": "Study of drug-target interaction is an important topic toward elucidation of protein functions and understanding of molecular mechanisms inside cells . Traditional methods to predict new targets for known drugs were based on small molecules , protein targets or phenotype features . Here , we proposed a network-based inference ( NBI ) method which only used drug-target bipartite network topology similarity to infer new targets for known drugs . The performance of NBI outperformed the drug-based similarity inference and target-based similarity inference methods as well as other published methods . Via the NBI method five old drugs , namely montelukast , diclofenac , simvastatin , ketoconazole , and itraconazole , were identified to have polypharmacological effects on human estrogen receptors or dipeptidyl peptidase-IV with half maximal inhibitory or effective concentration from submicromolar to micromolar by in vitro assays . Moreover , simvastatin and ketoconazole showed potent antiproliferative activities on human MDA-MB-231 breast cancer cell line in MTT assays . The results indicated that the drug-target bipartite network-based inference method could be a useful tool for fishing novel drug-target interactions in molecular polypharmacological space .",
"role": "assistant"
}
] |
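The network-based inference (NBI) method validated in the row above admits a compact sketch: on the drug-target bipartite graph, resource placed on a drug's known targets is spread to neighboring drugs and back to targets, and the returned scores rank new candidate targets. A hypothetical NumPy version (the paper's exact normalization may differ):

```python
import numpy as np

def nbi_target_scores(A, drug):
    """Two-step resource spreading on a drug-target bipartite graph.

    A    : (n_drugs, n_targets) binary interaction matrix
    drug : row index of the query drug
    Returns a score per target; already-known targets are masked out.
    """
    kd = A.sum(axis=1)                      # drug degrees
    kt = A.sum(axis=0)                      # target degrees
    f0 = A[drug].astype(float)              # unit resource on the drug's known targets
    g = A @ (f0 / np.maximum(kt, 1))        # step 1: targets spread to drugs
    f = A.T @ (g / np.maximum(kd, 1))       # step 2: drugs spread back to targets
    f[A[drug] == 1] = -np.inf               # do not re-recommend known links
    return f

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(nbi_target_scores(A, drug=0))  # highest finite score = best new target
```

Note that, as the paper's discussion points out, a drug with no known links gets zero resource everywhere, which is exactly the weakness of pure NBI for new compounds.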
science.scitldr_aic | science.scitldr_aic.102 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nProteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.\n\nExperimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. \n\nAlternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results.\n\nGraphQA is a graph-based method to estimate the quality of protein models, that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance and computational efficiency. \n\nIn this work, we demonstrate significant improvements of the state-of-the-art for both hand-engineered and representation-learning approaches, as well as carefully evaluating the individual contributions of GraphQA.\nProtein molecules are predominantly present in biological forms, responsible for their cellular functions.\nTherefore, understanding, predicting and modifying proteins in biological processes are essential for medical, pharmaceutical and genetic research.\nSuch studies strongly depend on discovering mechanical and chemical properties of proteins through the determination of their structure.\nAt the high level, a protein molecule is a chain of hundreds of smaller molecules called amino acids.\nIdentifying a protein's amino-acid sequence is nowadays straightforward.\nHowever, the function of a protein is primarily determined by its 3D structure.\nSpatial folding can be determined experimentally, but the existing procedures are time-consuming, prohibitively expensive and not always possible.\nThus, several computational techniques were developed for protein structure prediction (Arnold et al., 2006; Wang et al., 2017; Xu, 2019) .\nSo far, no single method is always best, e.g. 
some protein families are best modeled by certain methods, also, computational methods often produce multiple outputs.\nTherefore, candidate generation is generally followed by an evaluation step.\nThis work focuses on Quality Assessment (QA) of computationally-derived models of a protein (Lundstrom et al., 2001; Won et al., 2019) .\nQA, also referred to as model accuracy estimation (MAE), estimates the quality of computational protein models in terms of divergence from their native structure.\nThe downstream goal of QA is two-fold: to find the best model in a pool of models and to refine a model based on its local quality.\nComputational protein folding and design have recently received attention from the machine learning community (Wang et al., 2017; Xu, 2019; Jones & Kandathil, 2018; Ingraham et al., 2019b; Anand & Huang, 2018; Evans et al., 2018; AlQuraishi, 2019) , while QA has yet to follow.\nThis is despite the importance of QA for structural biology and the availability of standard datasets to benchmark machine learning techniques, such as the biannual CASP event (Moult et al., 1999) .\nThe field of bioinformatics, on the other hand, has witnessed noticeable progress in QA for more than a decade: from earlier works using artificial neural networks (Wallner & Elofsson, 2006) or support vector machines (Ray et al., 2012; Uziela et al., 2016) to more recent deep learning methods based on 1D-CNNs, 3D-CNNs and LSTMs (Hurtado et al., 2018; Derevyanko et al., 2018; Pagès et al., 2018; Conover et al., 2019) .\nIn this work, we tackle Quality Assessment with Graph Convolutional Networks, which offer several desirable properties over previous methods.\nThrough extensive experiments, we show significant improvements over the state-of-the-art, and offer informative qualitative and quantitative analyses.\nGRAPHQA predicts local and global scores from a protein's graph using message passing among residues with chemical bond or spatial proximity.\nCASP QA algorithms score protein models by comparison with experimentally-determined conformations.\nFor the first time we applied graph convolutional networks to the important problem of protein quality assessment (QA).\nSince proteins are naturally represented as graphs, GCN allowed us to collect the individual benefits of the previous QA methods including representation learning, geometric invariance, explicit modeling of sequential and 3D structure, simultaneous local and global scoring, and computational efficiency.\nThanks to these benefits, and through an extensive set of experiments, we demonstrated significant improvements upon the state-of-the-art results on various metrics and datasets and further analyzed the results via thorough ablation and qualitative studies.\nFinally, we wish that Quality Assessment will gain popularity in the machine learning community, that could benefit from several curated datasets and ongoing regular challenges.\nWe believe that richer geometric representations, e.g. 
including relative rotations, and raw atomic representations could represent an interesting future direction for learning-based Quality Assessment.\nGlobal Distance Test Total Score (GDT TS) Global Distance Test Total Score (GDT TS) is a global-level score obtained by first superimposing the structure of a decoy to the experimental structure using an alignment heuristic, and then computing the fraction of residues whose position is within a certain distance from the corresponding residue in the native structure ( figure 7 ).\nThis percentage is computed at different thresholds and then averaged to produce a score in the range [0, 100], which we rescale between 0 and 1 (table 2).\nTable 2: a worked example of per-residue distances checked against increasing thresholds, with the resulting per-threshold fractions of correctly placed residues.\nLocal Distance Difference Test (LDDT) Local Distance Difference Test (LDDT) is a residue-level score that does not require alignment of the structures and compares instead the local neighborhood of every residue, in the decoy and in the native structure.\nIf we define the neighborhood of a residue as the set of its contacts, i.e. the set of other residues that lie within a certain distance from it, we can express the quality of that residue as the percentage of contacts that it shares with the corresponding residue in the native structure.\nFigure 8: Example of LDDT scoring for residue 7: the residues within a radius R 1 are { 6, 8, 10 } for the native structure (left) and { 6, 8 } for the decoy (right); at a radius R 2 we have { 3, 6, 8, 9, 10, 11 } for the native structure (left) and { 3, 6, 8, 9, 10 } for the decoy (right).",
"role": "user"
},
{
"content": "GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches",
"role": "assistant"
}
] |
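The GDT TS definition quoted in the appendix above (per-threshold fractions of superimposed residues within a cutoff, averaged over cutoffs) reduces to a few lines. A sketch assuming the standard CASP thresholds of 1, 2, 4 and 8 Å, and taking the structural superposition as already computed:

```python
import numpy as np

def gdt_ts(dist, thresholds=(1.0, 2.0, 4.0, 8.0)):
    """GDT TS from per-residue C-alpha distances (decoy vs. native, Angstrom).

    dist is assumed to come from an already superimposed pair of structures.
    Returns a score in [0, 1], i.e. the [0, 100] score rescaled as in the paper.
    """
    dist = np.asarray(dist, dtype=float)
    return float(np.mean([(dist <= t).mean() for t in thresholds]))

print(gdt_ts([0.4, 1.7, 3.2, 9.5]))  # fractions 0.25, 0.5, 0.75, 0.75 -> 0.5625
```

LDDT replaces the global superposition with per-residue contact-set comparisons, which is why the paper can report it as an alignment-free local score.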
science.craftchem_ner | science.craftchem_ner.4454 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Citation:LeoneTC,LehmanJJ,FinckBN,SchaefferPJ,WendeAR,etal.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.24128 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nConstraint solution reuse is an effective approach to save the time of constraint solving in symbolic execution. Most of the existing reuse approaches are based on syntactic or semantic equivalence of constraints; e.g. the Green framework is able to reuse constraints which have different representations but are semantically equivalent, through canonizing constraints into syntactically equivalent normal forms. However, syntactic semantic equivalence is not a necessary condition for reuse--some constraints are not syntactically or semantically equivalent, but their solutions still have potential for reuse. Existing approaches are unable to recognize and reuse such constraints. In this paper, we present GreenTrie, an extension to the Green framework, which supports constraint reuse based on the logical implication relations among constraints. GreenTrie provides a component, called L-Trie, which stores constraints and solutions into tries, indexed by an implication partial order graph of constraints. L-Trie is able to carry out logical reduction and logical subset and superset querying for given constraints, to check for reuse of previously solved constraints. We report the results of an experimental assessment of GreenTrie against the original Green framework, which shows that our extension achieves better reuse of constraint solving result and saves significant symbolic execution time.\n\nCited Abstract(s):\n[1]: In unit testing, a program is decomposed into units which are collections of functions. A part of unit can be tested by generating inputs for a single entry function. The entry function may contain pointer arguments, in which case the inputs to the unit are memory graphs. The paper addresses the problem of automating unit testing with memory graphs as inputs. The approach used builds on previous work combining symbolic and concrete execution, and more specifically, using such a combination to generate test inputs to explore all feasible execution paths. The current work develops a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graphs as inputs. Moreover, an efficient constraint solver is proposed to facilitate incremental generation of such test inputs. Finally, CUTE, a tool implementing the method is described together with the results of applying CUTE to real-world examples of C code.\n\n[2]: This paper presents EXE, an effective bug-finding tool that automatically generates inputs that crash real code. Instead of running code on manually or randomly constructed input, EXE runs it on symbolic input initially allowed to be \"anything.\" As checked code runs, EXE tracks the constraints on each symbolic (i.e., input-derived) memory location. If a statement uses a symbolic value, EXE does not run it, but instead adds it as an input-constraint; all other statements run as usual. If code conditionally checks a symbolic expression, EXE forks execution, constraining the expression to be true on the true branch and false on the other. 
Because EXE reasons about all possible values on a path, it has much more power than a traditional runtime tool: (1) it can force execution down any feasible program path and (2) at dangerous operations (e.g., a pointer dereference), it detects if the current path constraints allow any value that causes a bug. When a path terminates or hits a bug, EXE automatically generates a test case by solving the current path constraints to find concrete values using its own co-designed constraint solver, STP. Because EXE's constraints have no approximations, feeding this concrete input to an uninstrumented version of the checked code will cause it to follow the same path and hit the same bug (assuming deterministic code). EXE works well on real code, finding bugs along with inputs that trigger them in: the BSD and Linux packet filter implementations, the udhcpd DHCP server, the pcre regular expression library, and three Linux file systems.\n\n[3]: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.",
"role": "user"
},
{
"content": "Reducing the constraint into a short one is a popular optimization approach of SAT SMT solvers and symbolic executors [3] [1] [2] . For example, KLEE [3] does some constraint reductions before solving: (1): These are classical techniques used by optimizing compilers: e.g., simple arithmetic simplifications (x + 0 @math x), strength reduction ( @math , where < < is the bit shift operator), linear simplification (2 * x - x @math x). (2) : KLEE actively simplifies the constraint set when new equality constraints are added to the constraint set by substituting the value of variables into the constraints. For example, if constraint x < 10 is followed by a constraint x = 5, then the first constraint will be simplified to true and be eliminated by KLEE. (3): KLEE uses the concrete value of a variable to possibly simplify subsequent constraints by substituting the variable's concrete value. (4) . KLEE divides constraint sets into disjoint independent subsets based on the symbolic variables they reference. By explicitly tracking these subsets, KLEE can frequently eliminate irrelevant constraints prior to sending a query to the constraint solver.",
"role": "assistant"
}
] |
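The implication-based reuse rule behind GreenTrie, as summarized in the row above, can be sketched with constraint sets as frozensets of literals: a stored satisfiable set that is a superset of the query proves the query satisfiable (its model still works), and a stored unsatisfiable set that is a subset of the query proves the query unsatisfiable. A toy store using set inclusion as the implication check (GreenTrie's L-Trie indexing handles more general logical implication):

```python
class ReuseStore:
    """Toy constraint-reuse cache keyed by syntactic literal sets."""

    def __init__(self):
        self.sat = {}       # frozenset(literals) -> known model
        self.unsat = set()  # frozenset(literals) known unsatisfiable

    def record(self, constraints, model=None):
        key = frozenset(constraints)
        if model is None:
            self.unsat.add(key)
        else:
            self.sat[key] = model

    def lookup(self, constraints):
        q = frozenset(constraints)
        for s, model in self.sat.items():
            if q <= s:               # stored SAT set implies the query
                return ("sat", model)
        for u in self.unsat:
            if u <= q:               # query implies a stored UNSAT core
                return ("unsat", None)
        return None                  # cache miss: must call the solver

store = ReuseStore()
store.record({"x>5", "y<2"}, model={"x": 6, "y": 0})
store.record({"x>5", "x<3"})                 # unsatisfiable core
print(store.lookup({"x>5"}))                 # reuse: subset of a SAT set
print(store.lookup({"x>5", "x<3", "y<2"}))   # reuse: superset of an UNSAT core
```

The linear scans here are only for illustration; the point of the trie and implication partial-order graph in the paper is to make exactly these subset/superset queries fast.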
science.bioasq_factoid_qa | science.bioasq_factoid_qa.216 | [
{
"content": "Answer the following biomedical question based on the given context. Provide only the answer without additional explanation.\n\nQuestion: Which is the E3 ubiquitin ligase of Hsp90?\n\nContext: C-terminal Hsp-interacting protein (CHIP) is an HSP70 and HSP90 interacting co-chaperone and an E3 ubiquitin ligase. Previous studies have reported the role of CHIP in cancer progression by targeting protein degradation.\nThe U-box E3 ubiquitin ligase CHIP (C terminus of Hsc70-interacting protein) binds Hsp90 and/or Hsp70 via its tetratricopeptide repeat (TPR), facilitating ubiquitination of the chaperone-bound client proteins.\nIn vitro ubiquitination assays indicated that Ca(2+)/S100A2 and S100P are efficient and specific inhibitors of CHIP-mediated ubiquitination of Hsp70, Hsp90, HSF1, and Smad1.\nThe E3 ubiquitin ligase CHIP (C-terminus of Hsc70-interacting protein) is believed to be a central player in the cellular triage decision, as it links the molecular chaperones Hsp70/Hsc70 and Hsp90 to the ubiquitin proteasomal degradation pathway.\nWe found that CHIP has a sixfold higher affinity for Hsp90 compared with Hsc70.\nCarboxyl terminus of hsc70-interacting protein (CHIP) can remodel mature aryl hydrocarbon receptor (AhR) complexes and mediate ubiquitination of both the AhR and the 90 kDa heat-shock protein (hsp90) in vitro\nThe analysis of the sucrose-gradient-fractionated in vitro translated AhR complexes revealed that CHIP can mediate hsp90 ubiquitination while cooperating with unidentified factors to promote the ubiquitination of mature unliganded AhR complexes.",
"role": "user"
},
{
"content": "Carboxyl terminus of hsc70-interacting protein (CHIP)",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.21654 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe pandemic of coronavirus disease 2019 (COVID-19) highlights the importance of rapid and sensitive diagnostics of viral infection that enables the efficient tracing of cases and the implementation of public health measures for disease containment. The immediate actions from both academia and industry have led to the development of many COVID-19 diagnostic systems that have secured fast-track regulatory approvals and have been serving our healthcare frontlines since the early stage of the pandemic. On diagnostic technologies, many of these clinically validated systems have significantly benefited from the recent advances in micro- and nanotechnologies in terms of platform design, analytical method, and system integration and miniaturization. The continued development of new diagnostic platforms integrating micro- and nanocomponents will address some of the shortcomings we have witnessed in the existing COVID-19 diagnostic systems. This Perspective reviews the previous and ongoing research efforts on developing integrated micro- and nanosystems for nucleic acid-based virus detection, and highlights promising technologies that could provide better solutions for the diagnosis of COVID-19 and other viral infectious diseases. With the summary and outlook of this rapidly evolving research field, we hope to inspire more research and development activities to better prepare our society for future public health crises.",
"role": "user"
},
{
"content": "[\"Diagnosis\"]",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.6036 | [
{
"content": "Which dataset would be most relevant for this research query?\n\nQuery: I want to complete a tensor using its low Tucker-rank approximation.\n\nKeyphrases: tensor completion\n\nDataset descriptions:\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nOffice-31: The Office dataset contains 31 object categories in three domains: Amazon, DSLR and Webcam. The 31 categories in the dataset consist of objects commonly encountered in office settings, such as keyboards, file cabinets, and laptops.\nHopkins155: The Hopkins 155 dataset consists of 156 video sequences of two or three motions. Each video sequence motion corresponds to a low-dimensional subspace.\nORL: The ORL Database of Faces contains 400 images from 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses).\n\nOptions: A: SNIPS B: Office-31 C: Hopkins155 D: ORL",
"role": "user"
},
{
"content": "ORL",
"role": "assistant"
}
] |
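The query in the row above, completing a tensor from a low Tucker-rank approximation, can be illustrated with an impute-and-truncate loop: fill the missing entries, project onto a fixed-multilinear-rank Tucker approximation via HOSVD, restore the observed entries, and repeat. A hypothetical NumPy sketch, not any specific published method:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Project T onto a Tucker approximation with the given multilinear ranks."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    G = T
    for mode, U in enumerate(Us):            # core tensor: G = T x_n U_n^T
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, mode)), 0, mode)
    for mode, U in enumerate(Us):            # reconstruct: X = G x_n U_n
        G = np.moveaxis(np.tensordot(U, G, axes=(1, mode)), 0, mode)
    return G

def complete(T_obs, mask, ranks, iters=50):
    X = np.where(mask, T_obs, T_obs[mask].mean())  # naive initial fill
    for _ in range(iters):
        X = hosvd_truncate(X, ranks)
        X[mask] = T_obs[mask]                      # keep the observed entries
    return X

rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(20, 2)), rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
T = np.einsum('ir,jr,kr->ijk', A, B, C)            # rank-(2,2,2) ground truth
mask = rng.random(T.shape) < 0.3                   # 30% of entries observed
X = complete(T, mask, ranks=(2, 2, 2))
print(np.linalg.norm((X - T)[~mask]) / np.linalg.norm(T[~mask]))
```

Datasets of faces such as ORL are a natural testbed for this kind of method, since an image collection stacks into a third-order tensor with strong low-multilinear-rank structure.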
science.chemprot_ner | science.chemprot_ner.1009 | [
{
"content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nUse of the Combination Index to determine interactions between plant-derived phenolic acids on hepatotoxicity endpoints in human and rat hepatoma cells.\nThe beneficial or adverse effects of isolated phytochemicals are not always concordant with effects of the botanical dietary supplements from which they were derived. This disparity could be due to interactions between the various phytochemicals present in the whole plant. The phenolic acids, rosmarinic acid (RA), caffeic acid (CA) and ferulic acid (FA) are widely present in foods and dietary supplements, and they are assumed to exert various beneficial biological effects. However, there is little data on the potential biological interactions of these three phenolic acids which commonly occur together and are linked metabolically. In the present study, liver toxicity of the three phenolic acids was assessed on the three compounds singly and in various binary and one ternary combinations. A series of in vitro endpoints relevant to liver toxicity were evaluated in both a human (HepG2/C3A) and rat (MH1C1) hepatocyte cell line. The Combination Index (CI) was calculated for each endpoint from both the concentration responses of the single compounds and the responses of the various binary and ternary mixtures. Both synergistic and antagonistic interactions were observed for some endpoints and some combinations of test agents. Interactions were most prevalent in measures of oxidative stress and cytochrome P450 activities in both cell types. There was only a 53% concordance between the rat and human cells which may be suggestive of species differences. The data suggest an approach for better characterizing the beneficial or adverse effects of complex botanical products through evaluation of interactions between individual phytochemical components.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"rosmarinic acid\", \"phenolic acids\", \"ferulic acid\", \"caffeic acid\"], \"Protein\": [\"cytochrome P450\"]}",
"role": "assistant"
}
] |
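The Combination Index in the row above is not spelled out in the abstract; the common Chou-Talalay form is CI = Σ d_i / Dx_i, where d_i is the dose of compound i in the mixture producing effect x and Dx_i is the dose of i alone producing the same effect, with CI < 1 read as synergy and CI > 1 as antagonism. A sketch under that assumption, with the single-agent doses obtained from the median-effect equation (all numbers hypothetical):

```python
def dose_for_effect(fa, Dm, m):
    """Median-effect equation: dose giving affected fraction fa (0 < fa < 1)."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, mixture_doses, single_params):
    """Chou-Talalay CI at effect level fa.

    mixture_doses : dose of each compound in the mixture producing effect fa
    single_params : (Dm, m) median-effect parameters fitted for each compound alone
    CI < 1 synergy, CI = 1 additive, CI > 1 antagonism.
    """
    return sum(d / dose_for_effect(fa, Dm, m)
               for d, (Dm, m) in zip(mixture_doses, single_params))

# Hypothetical numbers: two phenolic acids at 10 uM each in the mixture,
# single-agent median-effect fits (Dm in uM, slope m).
ci = combination_index(0.5, [10.0, 10.0], [(25.0, 1.2), (40.0, 0.9)])
print(f"CI at 50% effect: {ci:.2f}")  # < 1 would indicate synergy
```

At fa = 0.5 the median-effect dose reduces to Dm itself, which makes the 50% effect level a convenient anchor for reporting CI values.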
science.craftchem_ner | science.craftchem_ner.5498 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: TheamountofmucusinCFmiceonthemixedbackgroundwasvariablefrommousetomouse.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.21647 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBACKGROUND AND IMPORTANCE: Extracorporeal membrane oxygenation (ECMO) represents a life-saving therapy in cases of refractory hypoxia and has been utilized in patients suffering from the most severe forms of coronavirus disease 2019 (COVID-19). A strikingly high mortality rate of 94% was described in early reports of patients with COVID-19 transitioned to ECMO. Later case reports and series demonstrating successful recovery from COVID-19 after ECMO have revived interest in this therapeutic modality, including the recent approval of ECMO for COVID-19 patients by the Food and Drug Administration (FDA). Here, we present the first reports of devastating intracranial hemorrhage as a complication of veno-venous (VV) ECMO in two COVID-19 patients. CLINICAL PRESENTATION: We performed a retrospective analysis of 2 cases of devastating intracranial hemorrhage in patients on VV-ECMO for the treatment of COVID-19. Collected data included clinical history, laboratory results, treatment, and review of all available imaging. Both patients demonstrated activated partial thromboplastin times (aPTT) within an appropriate therapeutic range. No risk factors that clearly predicted likelihood of this complication were identified. CONCLUSION: Understanding the complications of ECMO in this cohort and developing therapeutic algorithms to aid in optimal patient selection will be critical in the limited resource setting experienced as a result of global pandemic. We propose the use of head computed tomography (CT) to identify devastating neurological complications as early as possible, aiding in the resource allocation of ECMO machines to the most appropriately selected patients.",
"role": "user"
},
{
"content": "[\"Case Report\"]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.43462 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nthe present issue of critical care features a publication by min tan and colleagues from the affiliated hospital of north sichuan medical college in china on the use of probiotics in severe traumatic brain injury patients .\ncritically ill trauma patients have many reasons to develop infectious complications when admitted to the icu .\ntrauma causes a massive inflammatory response during which a first phase dominated by pro - inflammatory cytokines is follow by a second phase of immunosuppression in which anti - inflammatory cytokines prevail ( immunoparalysis ) .\ntrauma in many cases disrupts the mechanical barriers that prevent micro - organisms from entering the body . of perhaps even greater impact\nprobiotics have the potential to reduce the infectious complications of these patients because they can limit the mucosal barrier dysfunction and regulate the balance of cd4effector t lymphocytes . because probiotics are , by definition , live bacteria , there has been hesitation in using them for critically ill patients . by\nnow , many studies have demonstrated the safety of probiotics for a variety of clinical conditions maybe with the exception of acute pancreatitis complicated by multiorgan failure [ 4 - 6 ] .\nthe efficacy of probiotics for critically ill patients is debated because both positive as well as negative results have been published . with regard to the prevention of ventilator - associated pneumonia in critically ill patients admitted to the icu , morrow and colleagues reported a positive effect , while in the study of barraud and colleagues no effect\ntan and colleagues now present the results of a pilot study on the immunomodulatory as well as clinical effects of a multispecies probiotic preparation in patients with severe brain trauma .\ntheir study is important because it is the first one in this patient category that includes biomarkers of the immune system .\ntrauma itself is known to cause a depression of the t helper 1 ( th1 ) cytokines il -12 p70 , and interferon- .\nil-12 is the major th1 driving cytokine while interferon- potentiates cellular immunity as well as macrophage function . during the first week\nno differences in cytokine levels were found between the intervention and control groups , but afterwards , il-12 p70 , and interferon- returned to normal faster in the probiotics group .\nserum levels of il-6 and c - reactive protein at days 15 to 21 were lower in the probiotics group .\nthe probiotics significantly reduced the length of icu stay ( from 10.7 to 6.8 days ) .\nthey also reduced the frequency of ventilator - associated pneumonia ( from 68 to 44% ) and mortality ( from 19 to 11% ) , but with a relatively small group size the study was underpowered to detect differences in these outcome parameters .\nlarger , preferably multicenter studies are now needed to confirm these encouraging results . for those studies it will also be important to decide on the choice of probiotics .\nnot all probiotics are the same - in fact , they are all different . even between different strains of the same species considerable differences\nmay exist . 
in the study of morrow and colleagues , lactobacillus rhamnosus gg ( lgg ) was used in a dose of 2 10colony forming units ( cfu ) per day .\nbarraud and colleagues used a multispecies preparation consisting mainly also of lgg as well as lactobacillus casei , lactobacillus acidophilus , and bifidobacterium bifidum in a total dose of 2 10cfu per day .\nthe multispecies combination of tan and colleagues was composed mainly of bifidobacterium longum as well as lactobacillus bulgaricus and streptococcus thermophilus and was used at a total dose of 10cfu per day .\nmaintenance ( or even improvement ) of th1 cytokine - mediated immune functions can be of decisive importance for severe trauma patients .\nprobiotics selected to fulfill that function may therefore be optimally suited to support the clinical outcome of this category of patients .",
"role": "user"
},
{
"content": "probiotics are live micro - organisms with a health promoting effect . because of their immunomodulating capacity as well as improvement of gut barrier function , probiotics have the capacity to prevent infectious complications in a variety of clinical settings . \n now selected probiotics show potential for improving the clinical outcome of severe trauma patients .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.14709 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey on Different Feature Selection Methods for Microarray Data Analysis \nChapter Title: Clustering techniques \n1. Abstract of Cited Paper (BIB001): Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness. \n2. Abstract of Cited Paper (BIB002): Motivation: Microarray experiments generate a considerable amount of data, which analyzed properly help us gain a huge amount of biologically relevant information about the global cellular behaviour. Clustering (grouping genes with similar expression profiles) is one of the first steps in data analysis of high-throughput expression measurements. A number of clustering algorithms have proved useful to make sense of such data. These classical algorithms, though useful, suffer from several drawbacks (e.g. they require the predefinition of arbitrary parameters like the number of clusters; they force every gene into a cluster despite a low correlation with other cluster members). In the following we describe a novel adaptive quality-based clustering algorithm that tackles some of these drawbacks. Results: We propose a heuristic iterative two-step algorithm: First, we find in the high-dimensional representation of the data a sphere where the ‘density’ of expression profiles is locally maximal (based on a preliminary estimate of the radius of the cluster—quality-based approach). In a second step, we derive an optimal radius of the cluster (adaptive approach) so that only the significantly coexpressed genes are included in the cluster. This estimation is achieved by fitting a model to the data using an EM-algorithm. By inferring the radius from the data itself, the biologist is freed from finding an optimal value for this radius by trial-and-error. The computational complexity of this method is approximately linear in the number of gene expression profiles in the data set. Finally, our method is successfully validated using existing data sets. Availability: http://www.esat.kuleuven.ac.be/∼thijs/Work/ Clustering.html Contact: [email protected] Supplementary information: http://www.esat.kuleuven. ac.be/∼fdesmet/paper/adaptpaper.html \n3. 
Abstract of Cited Paper (BIB003): DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increases the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which is essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. Cluster analysis seeks to partition a given data set into groups based on specified features so that the data points within a group are more similar to each other than the points in different groups. A very rich literature on cluster analysis has developed over the past three decades. Many conventional clustering algorithms have been adapted or directly applied to gene expression data, and also new algorithms have recently been proposed specifically aiming at gene expression data. These clustering algorithms have been proven useful for identifying biologically relevant groups of genes and samples. In this paper, we first briefly introduce the concepts of microarray technology and discuss the basic elements of clustering on gene expression data. In particular, we divide cluster analysis for gene expression data into three categories. Then, we present specific challenges pertinent to each clustering category and introduce several representative approaches. We also discuss the problem of cluster validation in three aspects and review various methods to assess the quality and reliability of clustering results. Finally, we conclude this paper and suggest the promising trends in this field.\n\nLiterature Review:",
"role": "user"
},
{
"content": "AS proposed by Daxin Jiang Chun Tang Aidong Zhang BIB004 ,Proximity measurement is a method that measure the similarity (or distance) between two data objects. After taking gene expression data's genes as an object, can be formalized as numerical vectors = , where is the value of the feature for the data object and p is the number of features. The proximity between two objects and is measured by a proximity function of corresponding vectors and . By Euclidean distance the distance between two data object can be measure as: The overall shapes of gene expression patterns have a greater interest than the individual magnitudes of each feature. For shifting or scaled patterns Euclidean distance does not score well BIB002 . For this problem, standardization of each object vector is done with zero mean and variance one before calculating the distance BIB001 BIB003 . To measures the similarity between the shapes of two expression patterns an alternate measure is Pearson's correlation coefficient, given two data objects and , Pearson's correlation coefficient is defined as Pearson ( Where and are the means and , respectively. Pearson's correlation coefficient shows each object as a random variable with diagnosis and measures the similarity between two objects after calculating the linear relationship between the distributions of the two corresponding random variables.",
"role": "assistant"
}
] |
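The two proximity measures in the review above (Euclidean distance and Pearson's correlation coefficient) are standard, so a short sketch can make the reconstructed notation concrete. This is a minimal illustration; the function and variable names are chosen for readability and do not come from the cited papers.

```python
import numpy as np

def euclidean(o_i, o_j):
    # Euclidean(O_i, O_j) = sqrt( sum_d (o_{i,d} - o_{j,d})^2 )
    o_i, o_j = np.asarray(o_i, float), np.asarray(o_j, float)
    return np.sqrt(np.sum((o_i - o_j) ** 2))

def pearson(o_i, o_j):
    # Pearson(O_i, O_j) = sum_d (o_{i,d} - mu_i)(o_{j,d} - mu_j)
    #   / ( sqrt(sum_d (o_{i,d} - mu_i)^2) * sqrt(sum_d (o_{j,d} - mu_j)^2) )
    o_i, o_j = np.asarray(o_i, float), np.asarray(o_j, float)
    di, dj = o_i - o_i.mean(), o_j - o_j.mean()
    return np.sum(di * dj) / (np.sqrt(np.sum(di ** 2)) * np.sqrt(np.sum(dj ** 2)))

# Two hypothetical expression profiles: g2 is a shifted, scaled copy of g1.
g1 = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
g2 = 2.0 * g1 + 5.0

print(euclidean(g1, g2))        # large: magnitudes differ
print(pearson(g1, g2))          # 1.0: identical shape

# Standardize to zero mean, unit variance before taking Euclidean distance.
z = lambda v: (v - v.mean()) / v.std()
print(euclidean(z(g1), z(g2)))  # ~0: the shift/scale is removed
```

The raw Euclidean distance penalizes g2 even though the two profiles rise and fall together, while Pearson's coefficient, or Euclidean distance after the zero-mean, unit-variance standardization mentioned in the review, captures the shared shape.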
science.pico_ner | science.pico_ner.1718 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nThe value of transbronchial lung biopsy using jumbo forceps via rigid bronchoscope in diffuse lung disease . BACKGROUND Transbronchial lung biopsy ( TBLB ) is a valuable procedure used to obtain a parenchymal specimen in the evaluation of diffuse lung infiltrates . Large forceps are expected to result in larger specimens and improve diagnostic yield . AIM The objective of this study was to evaluate diagnostic yield of TBLB using large modified flexible gastroenterological forceps ( \" Jumbo forceps \" ) compared with 'normal ' flexible forceps via rigid bronchoscopy in patients with diffuse parenchymal lung disease ( DPLD ) . METHODS The study was a prospective analysis of 95 patients who underwent fluoroscopy guided TBLB over a two year period . Patients with a lung mass or solitary lung nodule undergoing TBLB were excluded . The larger and small forceps were used in a random sequence to avoid a reduction in diagnostic yield of the second series of biopsies related to possible bleeding by first series of biopsies . To minimize the consequence of haemorrhage , we performed every rigid bronchoscopy , placing a non inflated Fogarty balloon and a rigid aspirator ( diameter 4 mm ) in lobar bronchus near the biopsy segment . The Fogarty balloon has been inflated in case of bleeding . After the bleeding was controlled we continued to operate up to the biopsy segment . RESULTS Diagnostic yield of TBLB using Jumbo forceps was significantly higher than using normal flexible forceps via rigid bronchoscopy in patients with DPLD ( p = 0.001 ) . In 74 out of 95 patients ( 78 % ) the diagnosis was placed with Jumbo forcep while the smaller forcep was diagnostic in 62 out of 95 patients ( 65 % ) . Large forceps obtained significantly more tissue than the small forceps ; the biopsy specimen taken with normal forcep measured in average 1.4 x 1.0 mm and the larger biopsy taken with jumbo forcep measured in average 2.5 x 1.9 mm ( p < 0.005 ) . CONCLUSION The use of large biopsy forceps to perform TBLB via rigid bronchoscope can significantly increase diagnostic yield in the pathological diagnosis of diffuse infiltrative lung disease .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"placing a non inflated Fogarty balloon\", \"underwent fluoroscopy guided TBLB over a two year period .\", \"bronchoscopy\", \"rigid aspirator\", \"The Fogarty balloon has been inflated in case of bleeding\", \"larger and small forceps\"], \"Outcome\": [\"tissue\", \"value of transbronchial lung biopsy\", \"parenchymal specimen\", \"Diagnostic yield\"], \"Participant\": [\"diffuse lung disease\", \"65 %\", \"lung mass\", \"DPLD\", \"95\", \"lung nodule\", \"62\", \"diffuse parenchymal lung disease ( DPLD\", \"78 %\", \"74\", \"diffuse infiltrative lung disease\"]}",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2231 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nStructural Investigation and Biological Activity of Sesquiterpene Lactones from the Traditional Chinese Herb Inula racemosa. Five new sesquiterpene lactones, racemosalactones A-E (1-5), along with 19 known sesquiterpene latones (6-24), were isolated from the roots of Inula racemosa. Their structures were elucidated by extensive spectroscopic analysis, and the absolute configuration of 2 was deduced from X-ray diffraction analysis. Compounds 1, 6, 8, 10, 12, 14, and 17 exhibited antiproliferative activities with IC50 values ranging from 0.38 to 4.19 μg/mL against human non-small-cell lung cancer A549, hepatocellular carcinoma HepG2, and human fibrosarcoma HT1080 cells. Compounds 6 and 8 exhibited antiproliferative activities against endothelial cells with IC50 values of 2.4 and 2.5 μg/mL, respectively. Furthermore, compounds 6 and 8 both inhibited endothelial cell tube formation at 1.0 μg/mL. A method for the rapid and straightforward preparative-scale isolation of compound 6 from alantolides is described.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Sesquiterpene Lactones\", \"sesquiterpene lactones\", \"racemosalactones A-E\", \"alantolides\", \"sesquiterpene latones\"]}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.1578 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nChronology of panic and avoidance , age of onset in panic disorder , and prediction of treatment response . A report from the Cross-National Collaborative Panic Study . The relevance of the chronology between panic disorder and avoidance behavior and of an early , medium or late onset of panic disorder was tested . Groups from the sample of the cross-national collaborative panic study ( CNCPS ) were compared for differences in basic characteristics and for the ability to predict treatment response . Patients who developed avoidance behavior before the full syndrome of panic disorder had less often a full agoraphobia but were not different in their response to treatment . Patients with an early onset of panic disorder suffered more often from agoraphobia . The treatment response was similar in the groups with early , medium or late onset of panic disorder . Neither the chronology between panic disorder and avoidance behavior nor the age of onset of panic disorder predicted outcome in short-term treatment with alprazolam or imipramine .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"imipramine\", \"treatment\", \"cross-national collaborative panic study\", \"short-term treatment with alprazolam\"], \"Outcome\": [\"agoraphobia\", \"panic disorder and avoidance behavior\", \"panic disorder\", \"early onset of panic disorder\", \"early , medium or late onset of panic disorder\", \"age of onset of panic disorder\", \"avoidance behavior\", \"similar\", \"treatment response\"], \"Participant\": [\"panic disorder\", \"full syndrome\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.13622 | [
{
"content": "Which dataset would be most relevant for this research query?\n\nQuery: We present an anchor-free instance segmentation framework, CentripetalNet, which exploits corner keypoint\n\nKeyphrases: instance segmentation images\n\nDataset descriptions:\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nSVIRO: Contains bounding boxes for object detection, instance segmentation masks, keypoints for pose estimation and depth images for each synthetic scenery as well as images for each individual seat for classification.\nA2D: A2D (Actor-Action Dataset) is a dataset for simultaneously inferring actors and actions in videos. A2D has seven actor classes (adult, baby, ball, bird, car, cat, and dog) and eight action classes (climb, crawl, eat, fly, jump, roll, run, and walk) not including the no-action class, which we also consider.\nCryoNuSeg: CryoNuSeg is a fully annotated FS-derived cryosectioned and H&E-stained nuclei instance segmentation dataset. The dataset contains images from 10 human organs that were not exploited in other publicly available datasets, and is provided with three manual mark-ups to allow measuring intra-observer and inter-observer variability.\n\nOptions: A: COCO B: SVIRO C: A2D D: CryoNuSeg",
"role": "user"
},
{
"content": "COCO",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.7511 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: IthasbeensuggestedtheSocs2mayrelyonubiquitinationdependentproteasomaldegradationtoinhibitGhsignaling[23].",
"role": "user"
},
{
"content": "[\"Gh\"]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.19668 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nSevere acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the virus responsible for the coronavirus disease of 2019 (COVID-19). First identified in Wuhan (Hubei, China) in December of 2019, it has since been declared a pandemic by the World Health Organization in March of 2020. In this study, we will provide a brief review of viral origin, identification, symptoms, transmission, diagnosis, and potential treatment strategies for the newly identified SARS-CoV-2 strain.",
"role": "user"
},
{
"content": "[\"Diagnosis\", \"Treatment\", \"Transmission\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2936 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nIncidentally discovered thyroid nodules: frequency in an adult population during Doppler ultrasonographic evaluation of cervical vessels. In this study, the frequency of thyroid nodules was searched in a population of consecutive patients undergoing routine cervical Doppler evaluation with the use of high-end color Doppler ultrasound units equipped with new technologies that increase nodule conspicuity. Three hundred and fifteen subjects, who had been referred for Doppler ultrasonography of cervical vessel examination, were enrolled in the study. During the examinations, the presence or absence of thyroid nodules, if present, their number (as \"single\" or \"multiple\"), the longest diameter, and internal echo pattern of the largest nodule were noted. As a result, one hundred and sixty-one subjects (51 %) were shown to have at least one thyroid nodule. In this subgroup, 54 cases had single and 107 cases had multiple nodules. Incidental thyroid nodules were detected in similar ratios in both men and women (p = 0.34). The results of the study led to the conclusion that half of the elder adult population had at least one thyroid nodule. Use of ultrasound technologies with increasing resolution seems to help significantly in detecting and also evaluating smaller and previously unknown nodules.",
"role": "user"
},
{
"content": "{\"Chemical\": []}",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.2263 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nIsolation of yeasts and enteric bacteria in root-filled teeth with chronic apical periodontitis.\nAbstract 1:\nAIMS The aim of this study was to determine the occurrence and role of yeasts , enteric gram-negative rods and Enterococcus species in root-filled teeth with chronic apical periodontitis , and the antimicrobial effect of iodine potassium iodide ( IKI ) irrigation . METHODOLOGY Forty symptom-free root-filled teeth with chronic apical periodontitis were included in the study . The patients were divided into two groups . In group A the canals were filled with calcium hydroxide for 10 - 14 days after cleaning and shaping ; in group B the canals were irrigated with IKI for 5 min after cleaning and shaping followed by a permanent root filling . Microbiological sample s were taken from the canals before and after the chemomechanical preparation and after iodine irrigation ( group B ) . RESULTS Microbes were isolated from 33 of 40 teeth in the initial sampling . Yeasts were isolated from six teeth , three of them together with E. faecalis . Enteric rods ( Escherichia coli , Klebsiella pneumoniae and Proteus mirabilis ) were present in three teeth and E. faecalis was isolated from 21 of the 33 culture positive teeth , 11 in pure culture . Growth was detected in 10 teeth of the second sample s. Six of the 10 cases were E. faecalis , with five being a pure culture . All third sample s ( after IKI ) except one were negative . The number of microbial cells per sample did not correlate with lesion size . Two flare-ups were recorded , both in teeth with a mixed infection . CONCLUSION The high prevalence of enteric bacteria and yeasts in root-filled teeth with chronic apical periodontitis was established . IKI improved the antimicrobial effect of the treatment\n\n\nTitle 2:\nHow to write an evidence-based clinical review article.\nAbstract 2:\nTraditional clinical review articles , also known as up date s , differ from systematic review s and meta-analyses . Up date s selectively review the medical literature while discussing a topic broadly . Non-quantitative systematic review s comprehensively examine the medical literature , seeking to identify and synthesize all relevant information to formulate the best approach to diagnosis or treatment . Meta-analyses ( quantitative systematic review s ) seek to answer a focused clinical question , using rigorous statistical analysis of pooled research studies . This article presents guidelines for writing an evidence -based clinical review article for American Family Physician . First , the topic should be of common interest and relevance to family practice . Include a table of the continuing medical education objectives of the review . State how the literature search was done and include several sources of evidence -based review s , such as the Cochrane Collaboration , BMJ 's Clinical Evidence , or the InfoRetriever Web site . Where possible , use evidence based on clinical outcomes relating to morbidity , mortality , or quality of life , and studies of primary care population s. In articles su bmi tted to American Family Physician , rate the level of evidence for key recommendations according to the following scale : level A ( r and omized controlled trial [ RCT ] , meta- analysis ) ; level B ( other evidence ) ; level C ( consensus/expert opinion ) . 
Finally , provide a table of key summary points\n\n\nTitle 3:\nSystematic reviews in endodontics--examples of GIGO?\nAbstract 3:\nIn many areas of medicine the amount of published information is increasing at an exponential rate , making it difficult for the clinician to condense the data to a manageable amount of valuable and useful information . Systematic review s have been suggested as a remedy for this information overload . Such review s are regarded as the highest level of evidence . Certain statistical methods , e.g. , meta- analysis , have also been introduced for the calculation of a more comprehensive summation of the compiled results from studies with small sample sizes . These findings would then serve as an authoritarian guide for evidence -based practice . Endodontists have lately jumped on the b and wagon of systematic review s and meta-analyses , and papers on various topics are being published . Well done , such comprehensive information would be very useful and authoritative for the practitioner . Alarmingly , the quality of these recent publications is negatively correlated to the frequency . This can have serious consequences in the clinical arena if spurious information is given a seal of approval when communicated as facts in refereed journals . A systematic literature review is design ed to focus on a single question and to synthesize all high- quality research reports relevant to the question . In these studies , however , sources of bias are often not controlled . Therefore , even the best review and meta- analysis becomes useless if it is based on poorly design ed research . Furthermore , these review s are mostly based on published data which are too often biased toward studies that show significant differences in results and /or outcome . “ Uninteresting ” conclusions tend not to be published . Endodontic literature , thus far , is short on well design ed and executed r and omized clinical trials . Furthermore , clinical protocol s often have many important “ proprietary ” modifications that make comparison difficult . This makes it very hard to undertake a systematic review to answer a narrow question . Such GIGO ( garbage in , garbage out ) can easily lead to biased conclusions and reinforce already existing biased comclusions . An example of such bias can be illustrated by a meta- analysis recently published . The study focused on assessing if there were differences in outcome between 1 and 2 treatment visits when treating teeth with apical periodontitis . Relevant publications on this topic are rare , and with the narrow inclusion criteria only 3 investigations were available for the review . However , the only factors connecting these 3 studies was the words “ single ” and “ two visits ” in the title . Beyond that , the selected studies are too different and inadequate for the study . Only 1 study is somewhat demonstrably r and omized , and outcome assessment s are based on 3 vastly different criteria . Two of the studies use calcium hydroxide as intracanal dressing , whereas the third uses no intervisit medication . Complexity of initial pathology and healing time are also clearly factors of great importance but not controlled . In addition , sample sizes in the component studies as well as the final cohort are all too small for the conclusions that there was no significant difference between 1or 2-visit treatment protocol s. To undertake and publish a meta- analysis on such an incomplete sample is counterproductive . 
Although the publication noted its numerous limitations , it will still communicate erroneous conclusions with a poor scientific foundation to the less informed reader . There are other examples of recently published systematic review s that are very ambiguous and tend to communicate spurious facts . One of these studies looks at the effect of smear-layer removal on sealing ability of root canal obturation . The study is entirely limited to laboratory experiments , with the majority of evaluations using dye leakage which is generally considered to be unreliable . This is an example of review ers lacking content expertise . The authors draw the conclusion , using defective review material , that removal of smear layer enhances the seal of the root filling . We may generally believe that this is the truth",
"role": "user"
},
{
"content": "In conclusion , NaOCl or CHX showed low ability to eliminate E. faecalis when evaluated by either PCR or culture techniques",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2133 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nComparison of gonadotropin-releasing hormone agonists and antagonists in assisted reproduction cycles of polycystic ovarian syndrome patients . AIM Gonadotropin-releasing hormone ( GnRH ) antagonists have reduced the incidence of severe ovarian hyper stimulation syndrome ( OHSS ) and rate of hospitalization due to severe OHSS , especially in polycystic ovarian syndrome ( PCOS ) patients . The present study aimed to compare the outcomes of patients with PCOS undergoing controlled ovarian hyperstimulation ( COH ) with GnRH agonist versus GnRH antagonist protocols for assisted reproduction cycles . METHODS The present clinical trial compared GnRH antagonist ( cetrorelix ) and GnRH agonist ( buserelin ) protocols during COH of 112 infertile PCOS patients entering assisted reproduction cycles . The primary outcome measure was pregnancy occurrence . Basal characteristics of the participants , stimulation cycle responses , pregnancy outcomes , incidence of OHSS and types of OHSS were considered in both groups . RESULTS Regarding chemical and clinical pregnancy rates , the number of retrieved oocytes was significantly higher and OHSS was significantly lower in the antagonist group . Follicle stimulating hormone ( FSH ) , luteinizing hormone ( LH ) levels , number of follicles , number of follicles > 18 mm , relative frequency of mature oocytes , number and days of gonadotropin injections , day of human chorionic gonadotropin ( HCG ) administration , estradiol level and abortion were similar between the two groups . CONCLUSION GnRH antagonists are more effective , safe and a well tolerated alternative to agonists for assisted reproduction cycles in PCOS patients . GnRH antagonists are associated with a reduction in the incidence of OHSS in these patients .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"gonadotropin-releasing hormone agonists\", \"GnRH antagonist\", \"GnRH agonist ( buserelin )\", \"antagonists\", \"GnRH antagonist ( cetrorelix )\", \"GnRH agonist\"], \"Outcome\": [\"Follicle stimulating hormone ( FSH ) , luteinizing hormone ( LH ) levels , number of follicles , number of follicles \\u003e 18 mm , relative frequency of mature oocytes , number and days of gonadotropin injections , day of human chorionic gonadotropin ( HCG ) administration , estradiol level and abortion\", \"types of OHSS\", \"pregnancy outcomes\", \"number of retrieved oocytes\", \"chemical and clinical pregnancy rates\", \"stimulation cycle responses\", \"pregnancy occurrence .\", \"incidence of OHSS\"], \"Participant\": [\"( PCOS )\", \"112\", \"patients with PCOS undergoing controlled ovarian hyperstimulation ( COH ) with GnRH agonist\", \"polycystic ovarian syndrome\"]}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.73405 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey of Learning Causality with Data: Problems and Methods \nChapter Title: Reinforcement Learning \n1. Abstract of Cited Paper (BIB001): This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments on the ad placement system associated with the Bing search engine. \n2. Abstract of Cited Paper (BIB002): The Multi-Armed Bandit problem constitutes an archetypal setting for sequential decision-making, permeating multiple domains including engineering, business, and medicine. One of the hallmarks of a bandit setting is the agent's capacity to explore its environment through active intervention, which contrasts with the ability to collect passive data by estimating associational relationships between actions and payouts. The existence of unobserved confounders, namely unmeasured variables affecting both the action and the outcome variables, implies that these two data-collection modes will in general not coincide. In this paper, we show that formalizing this distinction has conceptual and algorithmic implications to the bandit setting. The current generation of bandit algorithms implicitly try to maximize rewards based on estimation of the experimental distribution, which we show is not always the best strategy to pursue. Indeed, to achieve low regret in certain realistic classes of bandit problems (namely, in the face of unobserved confounders), both experimental and observational quantities are required by the rational agent. After this realization, we propose an optimization metric (employing both experimental and observational distributions) that bandit agents should pursue, and illustrate its benefits over traditional algorithms. \n3. Abstract of Cited Paper (BIB003): We leverage causal inference tools to support a principled and more robust transfer of knowledge in reinforcement learning (RL) settings. In particular, we tackle the problem of transferring knowledge across bandit agents in settings where causal effects cannot be identified by Pearl's {do-calculus} nor standard off-policy learning techniques. Our new identification strategy combines two steps -- first, deriving bounds over the arm's distribution based on structural knowledge; second, incorporating these bounds in a novel bandit algorithm, B-kl-UCB. Simulations demonstrate that our strategy is consistently more efficient than the current (non-causal) state-of-the-art methods.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Reinforcement learning (RL) BIB001 is studied for solving sequential decision-making problems. The three key variables in RL are the action A, the state (of the world) Z and the reward Y . When an agent performs an action based on the current state, it will reach the next state and receive a reward. To describe the relationships between these variables, researchers adopt the Markov decision process, which means the next state Z t +1 depends only on the current state Z t and action A t and the reward of the next state Y t +1 is determined only by Z t , Z t +1 and A t . A RL model learns a strategy or policy π (a, z) = P(A t = a|Z t = z) which determines which action to take given the current state Z t . The objective is to maximize the sum of the rewards. In the running example, we can assume that the state Z t represents the location of a restaurant, the action A t can be moving to a certain place or staying at the same place and the reward is the customer flow Y . In each time step, the restaurant owner decides which action to take and then observe the customer flow. Then the owner will make decision for the next time step based on whether the customer flow increases or not. One key issue is the unobserved confounders for multi-armed bandits (MAB) BIB003 . Without knowing the underlying causal relationships, traditional MAB algorithms fail to outperform the strategy that randomly picks an action in each time step. Specifically, the Causal Thompson Sampling algorithm BIB003 is proposed to deal with unobserved confounders in MAB problems. The reward distributions of those arms that are not preferred by the current strategy can also be estimated through hypothetical interventions on the action (choice of arm). By doing this we can avoid confounding bias in estimating the causal effect of choosing an arm on the expected reward. To connect causality with RL, we view a strategy or a policy in RL as an intervention . Given rewards of an observed strategy, we can utilize causal inference methods to predict rewards for another strategy, especially for Episodic RL (ERL) problems. ERL is a subclass of RL where the state is reset after a finite number of actions. ERL helps decision-making in a wide range of real-world applications such as the card game Blackjack and the advertisement placement BIB002 . One popular approach leverages IPTW (see Section 3) for reward prediction. In this approach, a treatment refers to an action and the strategy-specific propensity score is defined as the probability to perform the observed action given the observed state. Particularly, sequences of actions, states and rewards produced by running an observed strategy π are recorded in the data as [(A 1 (1), Z 1 (1)), (A 2 (1), Z 2 (1)), ...], [(A 1 (2), Z 1 (2)), (A 2 (2), Z 2 (2)), ...], .... Then we can estimate the expected sum of rewards of a strategyπ with IPTW on the rewards observed for strategy π :ξ n, ERL := 1 n n i=1 Y (i) K j=1π (A j (i)|Z j (i)) K j=1 π (A j (i)|Z j (i)) , where n and K denote the number of repentance of the strategy π in data and the number of time steps in each episode. In BIB002 , an improved variant of this approach is also mentioned. Other recent work bridging RL and causality includes causal approaches for transfer learning in RL models BIB004 and data-fusion for online reinforcement learners .",
"role": "assistant"
}
] |
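The IPTW estimator reconstructed in the review above is simple enough to state in code. The sketch below is a hypothetical, minimal implementation: the episode container, the policy representation (a function mapping an action and a state to a probability), and all names are assumptions made for illustration rather than an API from the surveyed papers.

```python
from typing import Callable, List, Tuple

# An episode is a trajectory of (action, state) pairs plus its total reward Y(i).
Episode = Tuple[List[Tuple[int, int]], float]
Policy = Callable[[int, int], float]  # pi(a, z) = P(A_t = a | Z_t = z)

def iptw_reward_estimate(episodes: List[Episode],
                         pi_logged: Policy,
                         pi_target: Policy) -> float:
    """xi_{n,ERL} = (1/n) * sum_i Y(i)
                    * prod_j pi_target(A_j(i) | Z_j(i))
                    / prod_j pi_logged(A_j(i) | Z_j(i))"""
    total = 0.0
    for trajectory, reward in episodes:
        weight = 1.0
        for action, state in trajectory:
            # Requires pi_logged(action, state) > 0 for every logged step.
            weight *= pi_target(action, state) / pi_logged(action, state)
        total += reward * weight
    return total / len(episodes)

# Toy data: two episodes of K = 2 steps logged under a uniform policy.
episodes = [([(0, 0), (1, 1)], 1.0), ([(1, 0), (0, 1)], 0.0)]
uniform = lambda a, z: 0.5
prefer_0 = lambda a, z: 0.8 if a == 0 else 0.2

print(iptw_reward_estimate(episodes, uniform, uniform))   # 0.5, the empirical mean
print(iptw_reward_estimate(episodes, uniform, prefer_0))  # 0.32, reweighted
```

As in the review, the product of the logged policy's action probabilities plays the role of a strategy-specific propensity score; the estimate is only meaningful under the usual overlap condition, i.e. the logging strategy must assign nonzero probability to every action the target strategy can take.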
science.healthver_entailment | science.healthver_entailment.3339 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: The winter can spread the coronavirus transmission as in winter months, people may cluster together more indoors, increasing the number of folks at risk of becoming infection by someone who's contagious.\n\nAbstract:\nThe coronavirus disease 2019 (COVID-19) outbreak has become a severe public health issue.\nThe novelty of the virus prompts a search for understanding of how ecological factors affect the transmission and survival of the virus.\nSeveral studies have robustly identified a relationship between temperature and the number of cases.\nHowever, there is no specific study for a tropical climate such as Brazil.\nThis work aims to determine the relationship of temperature to COVID-19 infection for the state capital cities of Brazil.\nCumulative data with the daily number of confirmed cases was collected from February 27 to April 1, 2020, for all 27 state capital cities of Brazil affected by COVID-19.\nA generalized additive model (GAM) was applied to explore the linear and nonlinear relationship between annual average temperature compensation and confirmed cases.\nAlso, a polynomial linear regression model was proposed to represent the behavior of the growth curve of COVID-19 in the capital cities of Brazil.\nThe GAM dose-response curve suggested a negative linear relationship between temperatures and daily cumulative confirmed cases of COVID-19 in the range from 16.8 °C to 27.4 °C.\nEach 1 °C rise of temperature was associated with a -4.8951% (t = -2.29, p = 0.0226) decrease in the number of daily cumulative confirmed cases of COVID-19.\nA sensitivity analysis assessed the robustness of the results of the model.\nThe predicted R-squared of the polynomial linear regression model was 0.81053.\nIn this study, which features the tropical temperatures of Brazil, the variation in annual average temperatures ranged from 16.8 °C to 27.4 °C.\nResults indicated that temperatures had a negative linear relationship with the number of confirmed cases.\nThe curve flattened at a threshold of 25.8 °C.\nThere is no evidence supporting that the curve declined for temperatures above 25.8 °C.\nThe study had the goal of supporting governance for healthcare policymakers.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Results indicated that temperatures had a negative linear relationship with the number of confirmed cases.\"]}",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.103 | [
{
"content": "Extract all unique entities of these types from the biomedical abstract: [\"CellLine\", \"DNA\", \"Protein\", \"CellType\", \"RNA\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities found for a type, use [].\n\nAbstract:\n\nThe number of glucocorticoid receptors in peripheral human lymphocytes is elevated by a zinc containing trace element preparation. A trace element preparation (Beres Drops Plus, BDP) elevates the number of glucocorticoid receptors (gcR) in peripheral lymphocytes isolated both from healthy blood donors and rheumatoid arthritis patients. This enhancement by BDP was found either for constitutive expression of gcRs or in experiments when the lymphocytes were stimulated by interleukin (IL) -6. There was no significant effect of BDP on IL-1 and tumour necrosis factor alpha (TNF alpha) -induced changes of gcRs. The effect of BDP was greatly dependent on the presence of Zn + + ions in the preparation, since the augmenting effect was abolished if BDP did not contain zinc.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"CellType\": [\"peripheral human lymphocytes\", \"peripheral lymphocytes\", \"lymphocytes\"], \"DNA\": [], \"Protein\": [\"tumour necrosis factor alpha\", \"interleukin (IL) - 6\", \"glucocorticoid receptors (gcR)\", \"gcRs\", \"gcR\", \"IL - 1\", \"TNF alpha\", \"glucocorticoid receptors\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.21 | [
{
"content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: What is the extracellular core \"matrisome\"?\nContext: Over 300 ECM molecules have been defined as comprising the \"core matrisome\" in mammals through the analysis of whole genome sequences.\nCompletion of genome sequences for many organisms allows a reasonably complete definition of the complement of extracellular matrix (ECM) proteins. In mammals this \"core matrisome\" comprises ∼300 proteins.\nwe have developed a bioinformatic approach to predict the in silico \"matrisome\" defined as the ensemble of ECM proteins and associated factors.",
"role": "user"
},
{
"content": "The \"matrisome\" is defined as the ensemble of extracellular matrix proteins (ECM) proteins and associated factors. The core matrisome have been defined in mammals through the analysis of whole genome sequences and comprises of ~ 300 proteins.",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.290 | [
{
"content": "Based on the following biomedical documents, answer the question with either yes or no.\n\nQuestion: Does splicing occur co-transcriptionally?\n\nContext: Researchers working in multiple model organisms - notably yeast, insects and mammalian cells - have shown that pre-mRNA can be spliced during the process of transcription (i.e. co-transcriptionally), as well as after transcription termination (i.e. post-transcriptionally)\nThe consensus view, based on four organisms, is that the majority of splicing events take place co-transcriptionally in most cells and tissues.\nDeep sequencing of subcellular RNA fractions shows splicing to be predominantly co-transcriptional\nWe show that in the human genome, splicing occurs predominantly during transcription.\nConsistent with co-transcriptional spliceosome assembly and splicing, we have found significant enrichment of spliceosomal snRNAs in chromatin-associated RNA compared with other cellular RNA fractions and other nonspliceosomal snRNAs.\nThe majority of introns in higher eukaryotes are excised prior to transcript release in a manner that is dependent on transcription through pol II\ns a result of co-transcriptional splicing, variations in pol II elongation influence alternative splicing patterns, wherein a slower elongation rate is associated with increased inclusion of alternative exons within mature mRNA.\nWe show that the pattern of intronic sequence read coverage is explained by nascent transcription in combination with co-transcriptional splicing\nModelling reveals co-transcriptional splicing to be the most probable and most efficient splicing pathway for the reporter transcripts, due in part to a positive feedback mechanism for co-transcriptional second step splicing\nRNA processing events that take place on the transcribed pre-mRNA include capping, splicing, editing, 3' processing, and polyadenylation. Most of these processes occur co-transcriptionally while the RNA polymerase II (Pol II) enzyme is engaged in transcriptional elongation\nAbundant evidence indicates that splicing to excise introns occurs co-transcriptionally, prior to release of the nascent transcript from RNAP II\nTogether, our work establishes a system for co-transcriptional splicing in vitro, in which the spliceosome containing the 5' and 3' exons are tethered to RNAP II for splicing.\nCo-transcriptional splicing of constitutive and alternative exons\nCurrent evidence supports co-transcriptional spliceosomal assembly, but there is little quantitative information on how much splicing is completed during RNA synthesis\nThus, we demonstrate that the decision to include or skip an alternative exon is made during transcription and not post-transcriptionally\nHere, we demonstrated that the co-transcriptional splicing of the intron in vitro was blocked by antisense oligonucleotides (AONs) targeting the P3-P7 core of the intron\nRNA editing and alternative splicing: the importance of co-transcriptional coordination\nCo-transcriptional splicing of pre-messenger RNAs: considerations for the mechanism of alternative splicing\nThe realization that splicing occurs co-transcriptionally requires two important considerations",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.77186 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nAnticancer Peptide SVS-1: Efficacy Precedes Membrane Neutralization\n\nPaper sections:\n\n<p>Cancer is one of the leading causes of death globally, and as a result, much effort has been devoted to designing new molecules as potential therapeutics and molecular tools to further our understanding of the disease.1,2 Although most effort has centered on small molecules and biologics, such as antibodies, recent reports indicate that a distinct class of peptides show anticancer activity.3,4</p><p>Antimicrobial peptides (AMPs) are produced by a variety of organisms as a defense mechanism against pathogens5 and although structurally diverse, most fold into amphiphilic structures due to their short size (< 50 amino acid residues), positive charge and high proportion of hydrophobic residues.5,6 These peptides kill bacteria by first associating with their negativity charged cell surface and subsequently disrupt their cell membranes via mechanisms that involve membrane thinning, formation of transient pores, or disruption of their lipid matrix, which impairs the membranes' barrier function.5,7 Some AMPs are capable of passing through the lipid bilayer of the membrane to act on intracellular targets.5 Due to their membrane perturbing activity, the bactericidal and fungicidal activity of AMPs have been extensively explored. More recently, certain AMPs have been shown to be cytotoxic towards a wide spectrum of human cancer cells8,9 and some have been shown to be tumoricidal.10</p><p>Cancer cells are characterized by a loss of lipid asymmetry that results in accumulation of up to 9% of negatively charged phosphatidylserine (PS) on the outer leaflet of their membrane. 1,5,9 In addition, changes in the glycosylation pattern of glycoproteins result in a high level of sialic acid on the cell's surface.1,9 As a result, the surface of many types of cancer cells are negatively charged, a characteristic shared by bacterial cells.8,9 Because of this similarity, many researchers speculate that the mechanism(s) of action for ACPs might be similar to those of AMPs. However, only a subset of AMPs display anticancer activity and there is active controversy as to whether the mechanism of action for AMPs and ACPs are truly the same.9 Reports indicate that the killing mechanism of ACPs is preferentially a membrane-lytic effect, however other mechanisms such as apoptosis triggering and angiogenesis inhibition might be involved.5,8 Nevertheless, the rapid killing caused by ACPs imply that a non-receptor mediated membranolytic killing mechanism is dominant.11,12 Here we study the interaction of a de novo designed ACP with model membranes as well as cancer cell membranes to further our mechanistic understanding for this class of peptides. In particular, we investigate the role that surface electrostatics play in the initial engagement of the peptide to the membrane surface.</p><p>SVS-1 (KVKVKVKVDPLPTKVKVKVK-NH2) is an 18-residue cationic peptide (formal charge at neutral pH of +9) designed to adopt an ensemble of random coil conformations which are bioinactive when dissolved in solution. 
However, when SVS-1 encounters a negatively charged surface, it binds to the surface and folds into a bioactive amphiphilic β-hairpin structure, which is capable of disrupting the lipid bilayer.13 Peptide folding results from the formation of electrostatic interactions between the peptide's lysine side chains and the negatively charged components of the lipid membrane as the unfolded peptide engages the surface. Because SVS-1 only folds at negatively charged surfaces, it is capable of discriminating between cancer cells and non-cancerous cells, whose membrane outer leaflets are largely zwitterionic. This behavior makes SVS-1 a good model ACP to investigate the role that electrostatics play in their mechanism of action.</p><p>We started by analyzing the effect of the SVS-1 peptide on the zeta potential values of model biomembrane systems composed of different phospholipids that mimic non-cancerous and cancer cell membranes: 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and mixtures of POPC and 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-L-serine (POPS) 40:60 respectively. Zeta potential values report on the electrostatic potential existing at the shear plane of a particle14,15 and have been used to describe many cellular phenomena such as cell adhesion and agglutination.15 Importantly, zeta potentials can report on the surface electrostatics of intact cells as a function of environmental conditions16 and thus, can be used to assess the interactions of peptides with cell surfaces that result in a change of cell surface electropotential.17 Here, we measure changes in zeta potential for neutral and negatively charged Large Unilamellar Vesicles (LUVs) as well as A549 lung carcinoma cells as a function of added peptide. In addition to studying SVS-1, an additional control peptide (SVS-2) was studied. SVS-2 (KVKVKVKVLPLPTKVKVKVK-NH2) is similar in sequence to SVS-1 but contains a key stereochemical change. The D-proline at position 10 of SVS-1 was changed to L-proline. This change renders SVS-2 incapable of folding, even in the presence of a negatively charged surface. Unlike SVS-1, SVS-2 is unable to form a rich valine hydrophobic face and a lysine-rich face responsible for membrane engagement. SVS-2 was found to be non-cytotoxic to cancer cells. Including SVS-2 in this study allows one to assess the importance of the β-hairpin folding event to the activity of SVS-1.13</p><p>Figure 1 shows the zeta potential values of POPC and POPC:POPS 40:60 LUVs titrated with SVS-1 and SVS-2.</p><p>In these measurements, peptide aliquots in BTP buffer were added to 200 μM LUVs suspensions. The addition of SVS-1 to POPC vesicles caused no significant change on the potential values, suggesting a weak or a non-interaction with the zwitterionic phospholipid. On the contrary, the potential values of POPC:POPS LUVs are significantly dependent on added SVS-1 and become less negative as the concentration of SVS-1 increases. An overall change from −16.03±0.38 mV to −9.79±1.12 mV is realized after the addition of 200 μM SVS-1. This suggests that the positively charged SVS-1 is interacting with the negatively charged LUVs inducing a change in the membrane surface charge either by direct peptide binding and/or insertion in the membrane. However, the progressive neutralization of the LUVs suggests that the phospholipids' negatively charged polar head groups are being neutralized. This observation supports a mechanism that involves initial peptide engagement of the head group region prior to downstream lytic events. 
Concerning SVS-2, there was very little dependence of the zeta potential as a function of added peptide irrespective of the LUVs type. This is somewhat surprising given the fact that SVS-2, although incapable of folding at the electronegative surface, still contains nine positively charged functional groups in its sequence (8 lysines and an N-terminal amine). Even though not bioactive, one would expect this peptide to at least bind to the surface of the negatively charged LUVs. The fact that it does not underpins the importance of obtaining the folded state of the hairpin at the membrane surface. In the folded state, the peptide is capable of displaying its electropositive side chains at higher surface density per monomer than in the unfolded state, thus driving the association of peptide to the membrane surface. These results point to an electrostatic interaction of SVS-1 with the surface of negatively charged model membranes that is driven by peptide conformation.</p><p>The data in Figure 1 can be fit to derive partition coefficients (Kp), a quantitative measure of interaction between the peptides (SVS-1 and SVS-2) and the anionic lipid bilayer (see supporting information).14 Partition coefficients, Kp = 224.90 ± 9.92 and Kp = 31.98 ± 13.02 were obtained for SVS-1 and SVS-2, respectively. The Kp for SVS-1 is over four fold greater than SVS-2 even though their formal charge state is identical, again indicating that the manner in which the electropositive functionalities of the peptide are presented to the membrane is important. This is mirrored in other ACPs whose amphiphilic structures contain high densities of positively charged side chains.9</p><p>A similar zeta potential analysis was performed on A549 (lung carcinoma) cells. Figure 2 shows the zeta potential values obtained for the interaction of both SVS-1 and SVS-2 with the cancer cells. The measured zeta potential for the cells in the absence of any peptide is −14.76 ± 1.49 mV, indicating that their surfaces are negatively charged. This is due to both lipid composition and negatively charged cell surface macromolecules, as described earlier. As expected, the addition of SVS-1 caused an increase of the zeta potential values towards neutralization indicating that the peptide is binding to the cell surface. Interestingly, the data also show that the control peptide, SVS-2 also binds. Although it is impossible to determine exact partition coefficients with living cells, it is qualitatively clear that SVS-2 binds with significantly lower affinity.</p><p>Lastly, we assessed the ability of both SVS-1 and SVS-2 to kill A549 cells at peptide concentrations used in this study. The data in Figure 3 indicate that the IC50 for SVS-1 is around 5 μM. Correlation with the data in Figure 2, suggests that this corresponds to a zeta potential of −11 mV, an increase in potential of only about 4 mV. Thus, full neutralization of the cell's surface potential by the peptide is not necessary for its cytotoxic action. This is in direct contrast to the mode of action of AMPs, which when introduced to bacteria, fully neutralize the cell's surface potential to effect their action. Figure 3 also shows that SVS-2, even at 200 μM, is not capable of killing the cells, even though the data in Figure 2 indicate that the control peptide is associating with the cell surface in some capacity. 
It may be that SVS-2 can bind to negatively charged cell surface components, such as GAG, but not strongly to the head group region of the bilayer as evident in Figure 1.</p><p>Taken together, these results suggest that SVS-1 partitions to negatively charged surfaces, and that although peptide binding to anionic, non-lipid components can certainly occur, that lipid composition is a seminal component defining the killing mechanism of the peptide. The data also suggest that proper presentation of positively charged side chains drives SVS-1 association with the head group region of the cancer cell membrane and full cell neutralization is not a prerequisite for downstream lytic events. Recently, our group reported that the MIC for most lytic AMPs correlates with their ability to effect full cell surface charge neutralization.18 An important difference in the mechanisms of AMPs and the ACP, SVS-1, is the observation that full neutralization of the cell surface is not necessary for its killing action. In contrast to anionic lipid on the surface of bacteria, PS can segregate into patches on the surface of cancer cells. It is possible that cell death results from SVS-1 interacting with only a subset of these regions and thus, total surface neutralization is not necessary to elicit its anticancer action. For other ACPs that act through a lytic mechanism, it will be interesting to see if this is a general phenomenon.</p>",
"role": "user"
},
{
"content": "Anticancer peptides (ACPs) are polycationic amphiphiles capable of preferentially killing a wide spectrum of cancer cells relative to non-cancerous cells. Their primary mode of action is an interaction with the cell membrane and subsequent activation of lytic effects, however it remains controversial the exact mechanism responsible for this mode of action. Using zeta potential analyses we demonstrate the interaction of a small anticancer peptide with membrane model systems and cancer cells. Electrostatic interactions have a pivotal role in the cell killing process and in contrast to the AMPs action cell death occurs without achieving full neutralization of the membrane charge.",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.16264 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nWhile many infectious disorders are unknown to most neurologists, COVID-19 is very different. It has impacted neurologists and other health care workers, not only in our professional lives but also through the fear and panic within our own families, colleagues, patients and their families, and even in the wider public. COVID-19 affects all sorts of individuals, but the elderly with underlying chronic conditions are particularly at risk of severe disease, or even death. Parkinson's disease (PD) shares a common profile as an age-dependent degenerative disorder, frequently associated with comorbidities, particularly cardiovascular diseases, so PD patients will almost certainly fall into the high-risk group. Therefore, the aim of this review is to explore the risk of COVID-19 in PD based on the susceptibility to severe disease, its impact on PD disease severity, potential long-term sequelae, and difficulties of PD management during this outbreak, where neurologists face various challenges on how we can maintain effective care for PD patients without exposing them, or ourselves, to the risk of infection. It is less than six months since the identification of the original COVID-19 case on New Year's Eve 2019, so it is still too early to fully understand the natural history of COVID-19 and the evidence on COVID-19-related PD is scant. Though the possibilities presented are speculative, they are theory-based, and supported by prior evidence from other neurotrophic viruses closely related to SARS-CoV-2. Neurologists should be on high alert and vigilant for potential acute and chronic complications when encountering PD patients who are suspected of having COVID-19.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.20163 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nWe describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection.\n\nCited Abstract(s):\n[1]: When fusing visual and inertial measurements for motion estimation, each measurement's sampling time must be precisely known. This requires knowledge of the time offset that inevitably exists between the two sensors' data streams. The first contribution of this work is an online approach for estimating this time offset, by treating it as an additional state variable to be estimated along with all other variables of interest inertial measurement unit IMU pose and velocity, biases, camera-to-IMU transformation, feature positions. We show that this approach can be employed in pose-tracking with mapped features, in simultaneous localization and mapping, and in visual-inertial odometry. The second main contribution of this paper is an analysis of the identifiability of the time offset between the visual and inertial sensors. We show that the offset is locally identifiable, except in a small number of degenerate motion cases, which we characterize in detail. These degenerate cases are either i cases known to cause loss of observability even when no time offset exists, or ii cases that are unlikely to occur in practice. Our simulation and experimental results validate these theoretical findings, and demonstrate that the proposed approach yields high-precision, consistent estimates, in scenarios involving either known or unknown features, with both constant and time-varying offsets.\n\n[2]: \n\n[3]: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. 
We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.\n\n[4]: In CNN-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.\n\n[5]: Data association is one of the fundamental problems in multi-sensor systems. Most current techniques rely on pairwise data associations which can be spurious even after the employment of outlier rejection schemes. Considering multiple pairwise associations at once significantly increases accuracy and leads to consistency. In this work, we propose two fully decentralized methods for consistent global data association from pairwise data associations. The first method is a consensus algorithm on the set of doubly stochastic matrices. The second method is a decentralization of the spectral method proposed by . We demonstrate the effectiveness of both methods using theoretical analysis and experimental evaluation.\n\n[6]: \n\n[7]: In this paper, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from a standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and possibly even divergence.We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. Our analysis, along with the proposed method for reducing inconsistency, are extensively validated with simulation trials and real-world experiments.",
"role": "user"
},
{
"content": "Recent work in data association [5] aims to directly infer the association map, which is computationally prohibitive for the scale needed in our real-time system. We therefore resort to heuristics, described in Sect. . More specifically to our implementation, we leverage existing visual-inertial filters [7] [1] [2] and single image-trained CNNs [3] [6] [4] .",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.12156 | [
{
"content": "Which dataset would be most relevant for this research query?\n\nQuery: I want to train an unsupervised model for text generation.\n\nKeyphrases: text generation\n\nDataset descriptions:\nWebNLG: The WebNLG corpus comprises of sets of triplets describing facts (entities and relations between them) and the corresponding facts in form of natural language text. The corpus contains sets with up to 7 triplets each along with one or more reference texts for each set.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nLibri-Adapt: Libri-Adapt aims to support unsupervised domain adaptation research on speech recognition models.\nMLB Dataset: A new dataset on the baseball domain.\n\nOptions: A: WebNLG B: SNIPS C: Libri-Adapt D: MLB Dataset",
"role": "user"
},
{
"content": "WebNLG",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.24543 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThe dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We show that knowledge base dynamics has interesting connection with kernel change via hitting set and abduction. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of hitting set is carried out through a hyper tableaux calculus and magic set that is focussed on the goal of minimality. Many different view update algorithms have been proposed in the literature to address this problem. The present paper provides a comparative study of view update algorithms in rational approach.\n\nCited Abstract(s):\n[1]: The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We introduced the knowledge base dynamics to deal with two important points: fi rst, to handle belief states that need not be deductively closed; and the second point is the ability to declare certain parts of t he belief as immutable. In this paper, we address another, radically new approach to this problem. This approach is very close to the Hansson's dyadic representation of belief. Here, we consider the immutable part as defining a new logical system. By a logical syste m, we mean that it defines its own consequence relation and closure oper ator. Based on this, we provide an abductive framework for knowledge base dynamics.\n\n[2]: The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In this paper, we argue that to apply rationality result of belief dynamics theory to various practical problems, it should be generalized in two respects: first of all, it should allow a certain part of belief to be declared as immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented, along with the concept of a generalized revision algorithm for Horn knowledge bases. We show that Horn knowledge base dynamics has interesting connection with kernel change and abduction. Finally, we also show that both variants are rational in the sense that they satisfy certain rationality postulates stemming from philosophical works on belief dynamics.\n\n[3]: In this paper, we introduce a new concept of generalized partial meet contraction for contracting a sentence from a belief base. We show that a special case of belief dynamics, referred to as knowledge base dynamics, where certain part of the belief base is declared to be immutable, has interesting connections with abduction, thus enabling us to use abductive procedures to realize contractions. Finally, an important application of knowledge base dynamics in providing an axiomatic characterization for deleting view atoms from databases is discussed in detail.\n\n[4]: \n\n[5]: The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. 
In order to apply the rationality result of belief dynamics theory to various practical problems, it should be generalized in two respects: first it should allow a certain part of belief to be declared as immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented in this paper, along with the concept of a generalized revision algorithm for knowledge bases (Horn or Horn logic with stratified negation). We show that knowledge base dynamics has an interesting connection with kernel change via hitting set and abduction. In this paper, we show how techniques from disjunctive logic programming can be used for efficient (deductive) database updates. The key idea is to transform the given database together with the update request into a disjunctive (datalog) logic program and apply disjunctive techniques (such as minimal model reasoning) to solve the original update problem. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and magic set that is focused on the goal of minimality.\n\n[6]:",
"role": "user"
},
{
"content": "We begin by recalling previous work on view deletion. Chandrabose [3] [6] and Delhibabu [2] [1] [5] , defines a contraction and revision operator in view deletion with respect to a set of formulae or sentences using Hansson's [4] belief change. Similar to our approach, he focused on set of formulae or sentences in knowledge base revision for view update wrt. insertion and deletion and formulae are considered at the same level. Chandrabose proposed different ways to change knowledge base via only database deletion, devising particular postulate which is shown to be necessary and sufficient for such an update process.",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.635 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: COVID-19 is not the only cause of the death in Critically Ill Coronavirus Patients. COVID-19 is well known for breathing problems, but other health complications also contribute towards the death from COVID-19. \n\nAbstract:\nThe mortality rate of coronavirus disease-19 (COVID-19) has been reported as 1-6% in most studies.\nThe cause of most deaths has been acute pneumonia.\nNevertheless, it has been noted that cardiovascular failure can also lead to death.\nThree COVID-19 patients were diagnosed based on reverse transcriptase-polymerase chain reaction of a nasopharyngeal swab test and radiological examinations in our hospital.\nThe patients received medications at the discretion of the treating physician.\nIn this case series, chest computed tomography scans and electrocardiograms, along with other diagnostic tests were used to evaluate these individuals.\nSudden cardiac death in COVID-19 patients is not common, but it is a major concern.\nSo, it is recommended to monitor cardiac condition in selected patients with COVID-19.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The cause of most deaths has been acute pneumonia.\"]}",
"role": "assistant"
}
] |
science.pubmedqa_qa | science.pubmedqa_qa.126 | [
{
"content": "Answer the question with YES, NO, or MAYBE based on the abstract.\n\nAbstract:\n(OBJECTIVE) To assess the acceptability to patients of the use of patients' first names by doctors and doctors' first names by patients in general practice.\n(DESIGN) An administered questionnaire survey.\n(SETTING) 5 General practices in Lothian.\n(PATIENTS) 475 Patients consulting 30 general practitioners.\n(MAIN OUTCOME MEASURE) Response by patients to questionnaire on attitude to use of first names.\n(RESULTS) Most of the patients either liked (223) or did not mind (175) being called by their first names. Only 77 disliked it, most of whom were aged over 65. Most patients (324) did not, however, want to call the doctor by his or her first name.\n\nQuestion:\nShould general practitioners call patients by their first names?\n\nAnswer:",
"role": "user"
},
{
"content": "YES",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.102225 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nCatalyst Controlled Regiodivergent Arylboration of Dienes\n\nPaper sections:\n\n<p>Difunctionalization of alkenes is an important class of reactions because molecular complexity can be rapidly generated from simple precursors.1 Variants that involve conjugated dienes are particularly interesting as the potential for controlled formation of multiple isomeric products becomes possible. Known methods for conjugated diene difunctionalization include (but are not limited to)2 diamination,3 dihydroxylation,4 diarylation(vinylation),5 diboration,6 and cycloadditions.7</p><p>An important emerging class of reactions includes carboboration of π-bonds.8 These reactions have been shown to function with alkyne, allene, or alkene derived substrates.9,10 Only recently has the carboboration of dienes (butadiene and 2-substituted dienes) been reported.9a,d A subclass of carboboration reactions involves the arylboration of alkenyl arenes by Pd/Cu-cooperative catalysis that was independently reported by our lab11 and that of Semba and Nakao.12–14 Our lab has recently reported a method for arylboration of isoprene and its derivatives in which an additive effect of DMAP was uncovered.15 Although this method did allow for control of regioselectivity, translation to dienes of varying substitution patterns was unsuccessful. Development of diene arylboration reactions are challenging as based on the proposed mechanism, formation of isomeric Cu–16 and Pd–π-allyl17 complexes can be formed, which may lead to poorly selective reactions (Scheme 1A). Herein, we describe a process for the regio divergent arylboration of a variety of substituted dienes by Pd/Cu-cooperative catalysis (Scheme 1B).</p><p>Our initial investigations began with arylboration of 1-phenylbutadiene. Translation of the optimal reaction conditions described for arylboration of styrenes (5 mol % SIMesCuCl, 1 mol % Pd-XPhos G3,18 PhBr, (Bpin)2, NaOt-Bu, toluene, 22 °C)11 to that of 1-phenylbutadiene led to formation of 2 in good yield and 6:1 regioselectivity.19 Standard optimization of this initial result led to the conditions illustrated in Table 1, entry 1. Key to identifying a set of conditions that are highly selective for formation of 2 was the use of the sterically more demanding IPrCuCl vs SIMesCuCl. Although use of Pd-PAd2n-Bu gave rise to product in the highest yield and selectivity,20 it was found that other sterically demanding and electron-rich phosphines also allowed for highly regioselective reaction to occur (Table 1, entries 3 and 4). When PhBr was replaced with PhI or PhOTf the product was generated in good yield and selectivity (Table 1, entries 6–8). In the case of PhOTf, use of XPhos was required, as use of PAd2n-Bu did not lead to product formation.21 Use of PhCl was not tolerated as only recovered starting material was observed (Table 1, entry 9).</p><p>The scope of the reaction was evaluated with respect to the aryl bromide and diene component (Scheme 2). A variety of aryl bromides underwent highly regioselective arylboration. Notably, electron rich (products 5 and 6), electron deficient (products 3, 4, 7), and sterically hindered aryl bromides (products 6 and 7) all functioned well. With sterically demanding aryl bromides it was found that use of Pd–Pt-Bu3 allowed for product formation in slightly higher yields and regioselectivities.19 Various heterocyclic aryl bromides could also be used with little effect on the reaction yield or selectivity (products 8–11). 
1-Phenylbutadiene could also be substituted for other sterically and electronically modified 1,3-dienes with no significant change in yield or selectivity (products 13–17). It should be noted that in some of these examples (15–17), the starting dienes were used as mixtures of E- and Z-isomers, yet the product was always generated as a single alkene isomer (>20:1 E:Z). An alkyl-substituted diene could also be employed and arylboration product 18 was formed in good yield and selectivity. However, in this case the formation of a 1,1-arylboration product was also generated in ~20% yield.19 Finally, the reactions could be run on a 5 mmol scale with little change in yield or selectivity.</p><p>In the case of 1,3-substituted diene 19 the product of a 1,4-arylboration (20) was exclusively formed (Scheme 3A). In this reaction, a mixture of alkene isomers was also observed. Arylboration product 20 is likely generated because formation of the corresponding 1,2-arylboration product would require formation of a quaternary carbon. Thus, a strategy is provided such that regioselectivity can be controlled by design of substrate.</p><p>For the reaction involving 1,4-substituted diene 21, the diastereoselectivity of the process could be explored (Scheme 3B). Under standard reaction conditions, a mixture of 1,2- and 1,4-arylboration products (22 and 23, respectively) was formed with the former being slightly preferred. The (1,2)-arylboration product 22 could be easily purified after oxidation of the Bpin unit to an alcohol.19 Furthermore, 1,2-arylboration product 22 was generated in >20:1 dr with the anti-addition isomer being major.</p><p>Cyclic diene 24 was also investigated under the optimized conditions and provided 27 in 68% yield as a single regio- and diastereomer (not shown).19 Furthermore, enantioselective reactions could be carried out with chiral catalyst 25 (Scheme 4).11c,22,23 In each case, the products (26–28) were generated with high levels of enantioselectivity and diastereoselectivity. The absolute stereochemistry of 27 was established by X-ray crystallography. Attempts to use this chiral catalyst with the acyclic dienes presented in Schemes 2 and 3 resulted in formation of racemic products.</p><p>The products generated by these reactions represent useful intermediates for chemical synthesis (Scheme 5). Application of C–C24 and C–O25 cross-coupling reactions with 4 led to synthesis of 29 and 30, respectively. Zweifel-type olefination with 13 led to formation of diene 31, whereas oxidation of 14 allowed for synthesis of 32.26</p><p>The mechanism of the 1,2-arylboration process was also studied (Scheme 6A). Treatment of 1-phenylbutadiene with in situ prepared IPrCuBpin led to the rapid formation of Cu-complex 33 as determined by 1H NMR analysis. It is important to note that this complex is formed exclusively as the Z-alkene isomer and is the formal 1,4-addition product of the IPrCuBpin across the diene. The formation of the Z-alkene may either be the result of formation of a thermodynamically more stable complex27 or a kinetically favored insertion into an s-cis-diene.28 To provide evidence that this complex is generated under the catalytic reaction conditions, it could be subjected to Pd–PAd2n-Bu G3 and PhBr to yield the expected 1,2-arylboration product 2 with high regioselectivity. In addition, the intermediate could be quenched with AcOH to furnish allylborane 34 as the Z-isomer.10</p>
If transmetalation is the selectivity-determining step, two likely possibilities present themselves for the reaction between Cu-complex 33 and ArPdBrLn (Scheme 6B, 36 and 37).29 While at this time it is too early to confirm which transmetalation pathway is major, early evidence points toward model 36. The regioselectivity of the reaction is largely independent of halide (PhBr, PhI) or pseudohalide (PhOTf) employed (Table 1, entries 1, 6, 8). Especially evident is that PhOTf undergoes reaction with high selectivity.21 It is unlikely that the triflate ligand can bridge Cu and Pd as is likely necessary for model 37.</p><p>With an effective process for the 1,2-arylboration of a variety of dienes with aryl bromides, the development of a 1,4-arylboration was targeted (Scheme 7). The knowledge gained from the mechanistic studies outlined in Scheme 6 was crucial in this regard. It was reasoned that by proper choice of ligand bound to Cu and Pd, a transmetalation process might be favored at the C(1)–Cu bond rather than at C3 to generate 38. Furthermore, if the process can be tuned to achieve 1,4-selectivity, the desired arylboration product (e.g., 39) should be the Z-alkene isomer.</p><p>Two guiding principles were used to aid in the identification of a set of conditions that favor 1,4-arylboration. (1) Use of less sterically demanding ligands might open the possibility for direct transmetalation at the sterically encumbered C(1)–Cu bond. (2) Use of a more electrophilic Pd-complex might prefer to react at the C(1)–Cu bond rather than at the C(2)–C(3) double bond. Largely on the basis of these principles, it was identified that use of SIMesCuCl and PdJackiePhos G330 allowed for highly selective formation of 32 as the Z-isomer (Scheme 8). The formation of the Z-isomer is likely due to a rapid reductive elimination with the electron deficient JackiePhos ligand prior to isomerization with Pd–π-allyl complexes.</p><p>On the basis of the optimized conditions, the scope of the 1,4-arylboration reactions was briefly investigated (Scheme 8). Several points are noteworthy. (1) Electron deficient aryl bromides function well in the reaction; however, mixtures of 1,4- and 1,2-arylboration products are formed (product 41). This may be due to a slower reductive elimination that may allow for isomerization to a Pd-complex analogous 35 (Scheme 6B). (2) Use of electron rich aryl bromides as well as heterocyclic aryl bromides resulted in low yield.19 (3) Other electronically modified dienes work well (products 44 and 45). The regioselectivity of reactions with electron rich dienes was low, perhaps due to a less selective transmetalation process.</p><p>In conclusion, a method for the regioselective arylboration of dienes has been developed. This has been achieved by design of substrate and by modification of reaction conditions. Mechanistic studies have also been carried out to elucidate the details of this process and allow for further reaction development. Future efforts will be directed toward expanding the scope of this process, development of enantioselective variants, and application in synthesis.</p>",
"role": "user"
},
{
"content": "A method for the regiodivergent arylboration of dienes is presented. These reactions allow for the formation of a diverse range of synthetically versatile products from simple precursors. Through mechanistic studies, these reactions likely operate by initial addition of a Cu\\xe2\\x80\\x93Bpin complex across the diene followed by Pd-catalyzed cross coupling with an aryl halide or pseudohalide.",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.162 | [
{
"content": "Extract all unique entities of these types from the biomedical abstract: [\"CellLine\", \"DNA\", \"Protein\", \"CellType\", \"RNA\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities found for a type, use [].\n\nAbstract:\n\nB-cell proliferation and induction of early G1-regulating proteins by Epstein-Barr virus mutants conditional for EBNA2. Infection of primary B-lymphocytes by Epstein-Barr virus (EBV) leads to growth transformation of these B-cells in vitro. EBV nuclear antigen 2 (EBNA2), one of the first genes expressed after EBV infection of B-cells, is a transcriptional activator of viral and cellular genes and is essential for the transforming potential of the virus. We generated conditional EBV mutants by expressing EBNA2 as chimeric fusion protein with the hormone binding domain of the estrogen receptor on the genetic background of the virus. Growth transformation of primary normal B-cells by mutant virus resulted in estrogen-dependent lymphoblastoid cell lines expressing the chimeric EBNA2 protein. In the absence of estrogen about half of the cells enter a quiescent non-proliferative state whereas the others die by apoptosis. EBNA2 is thus required not only for initiation but also for maintenance of transformation. Growth arrest occurred at G1 and G2 stages of the cell cycle, indicating that functional EBNA2 is required at different restriction points of the cell cycle. Growth arrest is reversible for G1 / G0 cells as indicated by the sequential accumulation and modification of cell cycle regulating proteins. EBV induces the same cell cycle regulating proteins as polyclonal stimuli in primary B-cells. These data suggest that EBV is using a common pathway for B-cell activation bypassing the requirement for antigen, T-cell signals and growth factors.",
"role": "user"
},
{
"content": "{\"CellLine\": [\"estrogen - dependent lymphoblastoid cell lines\"], \"CellType\": [\"B - cells\", \"primary normal B - cells\", \"B - cell\", \"G1 / G0 cells\", \"the cells\", \"primary B - cells\", \"primary B - lymphocytes\", \"these B - cells\"], \"DNA\": [\"viral\", \"cellular genes\", \"viral and cellular genes\"], \"Protein\": [\"chimeric fusion protein\", \"one of the first genes expressed after EBV infection of B - cells\", \"the chimeric EBNA2 protein\", \"cell cycle regulating proteins\", \"EBNA2\", \"EBNA2 protein\", \"estrogen receptor\", \"the same cell cycle regulating proteins\", \"EBV nuclear antigen 2 (EBNA2)\", \"G1 - regulating proteins\", \"growth factors\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.1342 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nBradykinin-induced cough reflex markedly increases in patients with cough associated with captopril and enalapril . We studied the effects of angiotensin converting enzyme ( ACE ) inhibitors on cough responses to bradykinin ( BK ) , substance P ( SP ) and citric acid in a double blind , random study on 10 hypertensive patients receiving ACE inhibitors . Of these patients , five had reported cough with ACE inhibitors . Cough responses to citric acid were similar between patients with and without cough , and SP up to 10 ( -5 ) M did not cause cough in any of the subjects . BK caused cough at 13.4 +/- 1.2 ( -log M ) in 5 patients with cough associated with ACE inhibitors , but it did not cause cough at concentrations up to 10 ( -5 ) M in other 5 patients . One month after the withdrawal of ACE inhibitors , 5 patients were free from cough symptoms , and BK did not cause cough up to 10 ( -5 ) M in these patients , except for one who coughed at 10 ( -9 ) M , without changes in responses to citric acid . BK caused cough at 14.3 +/- 0.7 ( -log M ) although BK1-7 , a major metabolite of BK by ACE , caused cough at 5.7 +/- 0.7 ( -log M ) in another 3 patients with cough associated with ACE inhibitor . These results suggest that impaired metabolism of BK induced by ACE inhibitors may relate to the manifestation of cough in hypertensive patients receiving ACE inhibitors .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"angiotensin converting enzyme ( ACE ) inhibitors\", \"cough responses to bradykinin ( BK )\", \"citric acid\", \"substance P ( SP )\", \"ACE inhibitors\", \"BK\"], \"Outcome\": [\"cough symptoms\", \"cough\", \"cough responses\"], \"Participant\": [\"five\", \"cough\", \"cough associated with captopril and enalapril\", \"hypertensive\", \"10\", \"5\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.21494 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nChloroquine (CQ) and hydroxychloroquine (HCQ) are undergoing several clinical trials for evaluating their efficacy and safety as antiviral drugs. Yet, there is still a great debate about their efficacy in combating COVID-19. This study aimed to evaluate the feasibility of intranasal and/or pulmonary administration of CQ/HCQ for COVID-19 using Bio/chemoinformatics tools. We, hereby, hypothesize the success of the intranasal and the pulmonary routes through a gelatin matrix to overcome several challenges related to CQ and HCQ pharmacodynamics and pharmacokinetics properties and to increase their local concentrations at the sites of initial viral entry while minimizing the potential side effects. Molecular docking on the gelatin-simulated matrix demonstrated high loading values and a sustained release profile. Moreover, the docking on mucin as well as various receptors including Angiotensin-converting enzyme 2 (ACE-2), heparin sulphate proteoglycan and Phosphatidylinositol binding clathrin assembly protein (PICALM), which are expressed in the lung and intranasal tissues and represent initial sites of attachment of the viral particles to the surface of respiratory cells, has shown good binding of CQ and HCQ to these receptors. The presented data provide an insight into the use of a novel drug formulation that needs to be tested in adequately powered randomized controlled clinical trials; aiming for a sustained prophylaxis effect and/or a treatment strategy against this pandemic viral infection.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.2154 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Determining the genetic basis of anthracycline-cardiotoxicity by molecular response QTL mapping in induced cardiomyocytes\nArticle:\nAbstract:\nAnthracycline-induced cardiotoxicity( ACT) is a key limiting factor in setting optimal chemotherapy regimes, with almost half of patients expected to develop congestive heart failure given high doses. However, the genetic basis of sensitivity to anthracyclines remains unclear. We created a panel of iPSC-derived cardiomyocytes from 45 individuals and performed RNA-seq after 24 hr exposure to varying doxorubicin dosages. The transcriptomic response is substantial: the majority of genes are differentially expressed and over 6000 genes show evidence of differential splicing, the later driven by reduced splicing fidelity in the presence of doxorubicin. We show that inter-individual variation in transcriptional response is predictive of in vitro cell damage, which in turn is associated with in vivo ACT risk. We detect 447 response-expression quantitative trait loci( QTLs) and 42 response-splicing QTLs, which are enriched in lower ACT GWAS p-values, supporting the in vivo relevance of our map of genetic regulation of cellular response to anthracyclines.\nIntroduction:\nAnthracyclines, including the prototypical doxorubicin, continue to be used as chemotherapeutic agents treating a wide range of cancers, particularly leukemia, lymphoma, multiple myeloma, breast cancer, and sarcoma. A well-known side-effect of doxorubicin treatment is anthracycline-induced cardiotoxicity( ACT). For some patients ACT manifests as an asymptomatic reduction in cardiac function, as measured by left ventricular ejection fraction( LVEF), but in more extreme cases ACT can lead to congestive heart failure( CHF). The risk of CHF is dosage-dependent: an early study( Von Hoff et al., 1979) estimated 3% of patients at 400 mg/m2, 7% of patients at 550 mg/m2, and 18% of patients at 700 mg/m2 develop CHF, where a more recent study puts these numbers at 5%, 26% and 48% respectively( Swain et al., 2003). Reduced LVEF shows a similar dosage-dependent pattern, but is not fully predictive of CHF. Perhaps most daunting for patients is that CHF can occur years after treatment: out of 1807 cancer survivors followed for 7 years in a recent survey a third died of heart diseases compared to 51% of cancer recurrence( Vejpongsa and Yeh, 2014). Various candidate gene studies have attempted to find genetic determinants of ACT, but are plagued by small sample sizes and unclear endpoint definitions, resulting in limited replication between studies. Two ACT genome-wide association studies( GWAS) have been published( Aminkeng et al., 2015; Schneider et al., 2017). While neither found genome-wide significant associations using their discovery cohorts, both found one variant that they were able to replicate in independent cohorts. A nonsynonymous coding variant, rs2229774, in RARG( retinoic acid receptor γ) was found to be associated with pediatric ACT using a Canadian European discovery cohort of 280 patients( Aminkeng et al., 2015), and replicated in both a European( p=0. 004) and non-European cohort( p=1×10−4). Modest signal( p=0. 076) supporting rs2229774’s association with ACT was also reported in a recent study primarily focused on trastuzumab-related cardiotoxicity( Serie et al., 2017). 
RARG negative cell lines have reduced retinoic acid response element( RAREs) activity and reduced suppression of Top2b( Aminkeng et al., 2015), which has been proposed as a mediator of ACT. In a different study, a GWAS in 845 patients with European-ancestry from a large adjuvant breast cancer clinical trial, 51 of whom developed CHF, found no variants at genome-wide significance levels( Schneider et al., 2017). However, one of the most promising variants, rs28714259( p=9×10−6 in discovery cohort), was genotyped in two further cohorts and showed modest replication( p=0. 04, 0. 018). rs28714259 falls in a glucocorticoid receptor protein binding peak, which may play a role in cardiac development. An exciting approach to studying complex phenotypes, including disease, in human is to use induced pluripotent stem cells( iPSC) and derived differentiated cells as in vitro model systems. Work by us and others has demonstrated that iPSCs and iPSC-derived cell-types are powerful model systems for understanding cell-type specific genetic regulation of transcription( Thomas et al., 2015; Burrows et al., 2016; Banovich et al., 2018; Kilpinen et al., 2017; Alasoo et al., 2017), but it is less established whether these systems can be used to model the interplay of genetic and environmental factors in disease progression. Encouragingly, the response of iPSC-derived cardiomyocytes( ICs) to doxorubicin was recently extensively characterized( Burridge et al., 2016). ICs derived from four individuals who developed ACT after doxorubicin treatment( ‘DOXTOX’ group) and four who did not( ‘DOX’ group), showed clear differences in viability( via apoptosis), metabolism, DNA damage, oxidative stress and mitochondrial function when exposed to doxorubicin. These observations suggest that ICs recapitulate in vivo inter-individual differences in doxorubicin sensitivity. Gene expression response differences between the DOX and DOXTOX groups were found using RNA-sequencing data, but the sample size was insufficient( RNA-seq was generated for only three individuals in each group) to attempt mapping of genetic variants that might explain the observed functional differences between individuals. Here we used a panel of iPSC-derived cardiomyocytes from 45 individuals, exposed to five different drug concentrations, to map the genetic basis of inter-individual differences in doxorubicin-sensitivity. We find hundreds of genetics variants that modulate the transcriptomic response, including 42 that act on alternative splicing. We show that the IC transcriptomic response predicts cardiac troponin levels in culture( indicative of cell lysis) in these cell-lines, and that troponin level is itself predictive of ACT. Finally we demonstrate that the mapped genetic variants show significant enrichment in lower ACT GWAS p-values.\nDiscussion:\nHuman iPSC-derived somatic cells provide a powerful, renewable and reproducible tool for modeling cellular responses to external perturbation in vitro, especially for non-blood cell-types such as cardiomyocytes which are extremely challenging to collect and even then are typically only available post-mortem. We established a sufficiently large iPSC panel to effectively query the transcriptomic response of differentiated cardiomyocytes to doxorubicin. We were also able to characterize the role of genetic variation in modulating this response, both in terms of total expression and alternative splicing. 
There are, of course, caveats associated with using an in vitro system, which may not accurately represent certain aspects of cardiac response to doxorubicin in vivo. That said, the replication of GTEx heart eQTLs, association of troponin levels with predicted ACT-risk( Burridge et al., 2016), and the observed GWAS enrichment, all support the notion that the IC system recapitulates substantial elements of in vivo biology. It is challenging to quantify this agreement, and there are in vivo factors that are certainly not represented. For example, excessive fibrosis may contribute to ACT( Cascales et al., 2013; Zhan et al., 2016; Farhad et al., 2016; Heck et al., 2017), although it is unclear how substantial this contribution is, as well as whether fibroblasts are directly activated by doxorubicin exposure or simply respond indirectly to cardiomyocyte damage. While our FACS analysis shows cardiomyocytes are the dominant cell type in our cultures, heterogeneity remains and other cell types could be mediating some of the observed changes. For many diseases such as ACT which involve an environmental perturbation it is reasonable to suppose that eQTLs detected at steady-state are only tangentially relevant when attempting to interpret disease variants. Such concerns motivated us to focus on response eQTLs, that is, variants that have functional consequences under specific cellular conditions because they interact, directly or indirectly, with the treatment. We used a statistical definition of reQTLs corresponding to cases where gene expression levels are significantly better explained using a model including an interaction term between genotype and treatment( represented as a categorical variable), compared to a model with only additive effects for genotype and treatment. Our characterization of the detected reQTLs demonstrates that these variants are indeed candidate drivers of differences in individual transcriptomic response to doxorubicin. The strongest reQTL effects correspond to completely different response patterns for the major and minor alleles, while weaker effects correspond to more subtle modulation of the same response pattern. We note that it is not necessarily the case that such reQTLs are the only functionally relevant eQTLs. eSNPs with a marginal( additive) effect on expression of a gene responsive to doxorubicin( as most genes are) could still be important if the relationship between expression and ACT-risk is nonlinear, for example involving thresholding effects. We observed a statistical enrichment of expression and( to a lesser extent) splicing QTLs in ACT GWAS. However, with no reproducible genome-wide significant associations available, fine-mapping of causal variants remains fraught. We anticipate our findings will be increasingly valuable as larger-scale ACT GWAS become available. We derived ICs from healthy individuals so we do not know which individuals would develop ACT if they required anthracycline treatment. Mapping molecular response QTLs in larger panels of ICs from patients treated with anthracyclines who do or do not develop ACT symptoms would allow stronger conclusions to be drawn about the contribution of the detected( r) eQTLs to disease etiology. We used a panel of Hutterite individuals since this homogeneous population offers unique advantages for mapping genetic traits: exposure to a fairly uniform environment and less variable genetic background, despite still representing much of European diversity( Newman et al., 2004).
However, the genetic basis of ACT susceptibility is likely complex and some relevant genetic variation may not be well represented in this cohort. Finally, an interesting observation in our study is that splicing fidelity is reduced upon doxorubicin exposure. This is not completely unexpected since a key downstream side-effect of doxorubicin is increased oxidative stress, which has been previously associated with dysregulated splicing of specific genes( Disher and Skandalis, 2007; Seo et al., 2016). Our finding that this effect is prevalent across the transcriptome poses further questions about what known effects of doxorubicin might, in fact, be mediated by changes in RNA splicing.",
"role": "user"
},
{
"content": "Many cancers , including leukaemia , lymphoma and breast cancer , are treated with potent chemotherapy drugs such as anthracyclines . However , anthracyclines have strong side effects known as anthracycline cardiotoxicity , which affect the health of the heart . Almost half of the patients given high doses of anthracyclines develop chronic heart failure . While anthracycline cardiotoxicity is very common , people’s genes may contribute to how sensitive they are to these drugs but it is not understood which genes can cause this effect . Previous studies using only a small number of participants have not been able to pin down the genetic factors that make some patients respond well to anthracyclines , and others prone to developing heart failure when taking these drugs . To find out which genes affect anthracycline cardiotoxicity , Knowles , Burrows et al . transformed blood cells from 45 individuals into stem cells , which were then developed into heart muscle cells . Then , the activity of genes was analyzed by measuring the amount of RNA ( the template molecules used to make proteins ) produced by those genes . After the cells had been exposed for 24 hours to the anthracycline drug doxorubicin , hundreds of gene activity differences could be found in the heart muscle cells between individuals . Some of these differences were linked to poorer health of the cells after treatment with the drug . As a result , a number of genetic variants that could predispose patients to the side effects of doxorubicin were discovered . The experiments also revealed how doxorubicin disrupts an important process that separates ‘junk’ parts of the RNA from the parts that are used as a template for proteins . Being able to predict who is likely to be sensitive to drugs such as doxorubicin could help doctors to tailor chemotherapy treatments more effectively , minimising the risk of heart failure . In future , larger studies could lead to accurate predictions of a patient’s response to a particular chemotherapy drug to personalize their cancer treatment .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.4345 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nMilk protein quantity and quality in low-birth-weight infants . IV . Effects on tyrosine and phenylalanine in plasma and urine . Well , appropriate-for-gestational age , low-birth-weight infants were divided into three gestational age groups and assigned randomly within each age group to one of five feeding regimens : pooled human milk ( BM ) ; formula 1 ( F1 ) = 1.5 gm/dl protein , 60 parts bovine whey proteins : 40 parts bovine caseins ; F2 = 3.0 gm/dl , 60:40 ; F3 = 1.5 gm/dl , 18:82 ; F4 = 3.0 gm/dl , 18:82 . Plasma and urine concentrations of tyrosine and phenylalanine were far higher in the infants fed F1 to F4 , especially F2 and F4 , than in the infants fed BM . These findings offer further evidence for the limited capacity of the low-birth-weight infant to catabolize tyrosine . Infants fed F3 had significantly higher plasma tyrosine concentrations than infants fed F1 , and those fed F4 had higher concentrations than those fed F2 . Thus , increased plasma tyrosine concentrations in low-birth-weight infants are related directly both to the quantity and to the quality of the protein in their diets .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"protein\", \"bovine whey proteins\", \"pooled human milk\", \"bovine caseins\"], \"Outcome\": [\"tyrosine\", \"catabolize tyrosine\", \"phenylalanine\", \"plasma tyrosine concentrations\"], \"Participant\": [\"infants\", \"low-birth-weight\", \"appropriate-for-gestational age\"]}",
"role": "assistant"
}
] |