dataset (stringclasses, 45 values) | id (stringlengths, 17-63) | messages (listlengths, 2-2) |
---|---|---|
science.chemdner_ner | science.chemdner_ner.415 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nChemical profile and biological activities of Deguelia duckeana A.M.G. Azevedo (Fabaceae). Deguelia duckeana is popularly known as timbó and used by indigenous people as ictiotoxic. On account of there being no literature pertaining to the chemical profile or biological activity of this plant, the hexane, methanol and aqueous crude extracts from leaves, stems and roots were assayed that presented very high cytotoxic potential against Artemia salina, achieving 100% mortality in up to 5.0 µg mL(-1) concentration, but lower antioxidant potential on 2,2-diphenyl-1-picryl-hydrazyl and Fe(3+)/Phenanthroline assays. The phytochemical analysis of crude extracts showed the presence of flavonoids and related compounds as major constituents as well as steroids in all of them, and tannins in polar extracts. All the extracts were assayed for antibacterial activity but only the hexane extract of stems showed moderate activity on Staphylococcus aureus, which was fractionated and yielded a mixture of 3,5,4'-trimethoxy-4-prenylstilbene, lonchocarpine, 4-hydroxylonchocarpine and derricidine, reported for the first time in D. duckeana and other fraction with β-sitosterol and stigmasterol mixture.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Fe(3+)\", \"flavonoids\", \"\\u03b2-sitosterol\", \"Phenanthroline\", \"2,2-diphenyl-1-picryl-hydrazyl\", \"3,5,4\\u0027-trimethoxy-4-prenylstilbene\", \"tannins\", \"stigmasterol\", \"steroids\", \"methanol\", \"derricidine\", \"hexane\", \"4-hydroxylonchocarpine\", \"lonchocarpine\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.5373 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Systematic tissue-specific functional annotation of the human genome highlights immune-related DNA elements for late-onset Alzheimer’s disease\nContent:\nAbstract:\nContinuing efforts from large international consortia have made genome-wide epigenomic and transcriptomic annotation data publicly available for a variety of cell and tissue types. However, synthesis of these datasets into effective summary metrics to characterize the functional non-coding genome remains a challenge. Here, we present GenoSkyline-Plus, an extension of our previous work through integration of an expanded set of epigenomic and transcriptomic annotations to produce high-resolution, single tissue annotations. After validating our annotations with a catalog of tissue-specific non-coding elements previously identified in the literature, we apply our method using data from 127 different cell and tissue types to present an atlas of heritability enrichment across 45 different GWAS traits. We show that broader organ system categories( e. g. immune system) increase statistical power in identifying biologically relevant tissue types for complex diseases while annotations of individual cell types( e. g. monocytes or B-cells) provide deeper insights into disease etiology. Additionally, we use our GenoSkyline-Plus annotations in an in-depth case study of late-onset Alzheimer’s disease( LOAD). Our analyses suggest a strong connection between LOAD heritability and genetic variants contained in regions of the genome functional in monocytes. Furthermore, we show that LOAD shares a similar localization of SNPs to monocyte-functional regions with Parkinson’s disease. Overall, we demonstrate that integrated genome annotations at the single tissue level provide a valuable tool for understanding the etiology of complex human diseases. Our GenoSkyline-Plus annotations are freely available at http://genocanyon. med. yale. edu/GenoSkyline.\nIntroduction:\nLarge consortia such as ENCODE[1] and Epigenomics Roadmap Project[2] have generated a rich collection of high-throughput genomic and epigenomic data, providing unprecedented opportunities to delineate functional structures in the human genome. As complex disease research rapidly advances, evidence has emerged that disease-associated variants are enriched in regulatory DNA elements[3, 4]. Therefore, functional annotation of the non-coding genome is critical for understanding the genetic basis of human complex diseases. Unfortunately, categorizing the complex regulatory machinery of the genome requires integration of diverse types of annotation data as no single annotation captures all types of functional elements[5]. Recently, we have developed GenoSkyline[6], a principled framework to identify tissue-specific functional regions in the human genome through integrative analysis of various chromatin modifications. In this work, we introduce GenoSkyline-Plus, a comprehensive update of GenoSkyline that incorporates RNA sequencing and DNA methylation data into the framework and extends to 127 integrated annotation tracks covering a spectrum of human tissue and cell types. To demonstrate the ability of GenoSkyline-Plus to systematically provide novel insights into complex disease etiology, we jointly analyzed summary statistics from 45 genome-wide association studies( GWAS; Ntotal≈3. 
8M) and identified biologically relevant tissues for a broad spectrum of complex traits. We next performed an in-depth, annotation-driven investigation of Alzheimer’s disease( AD), a neurodegenerative disease characterized by deposition of amyloid-β( Aβ) plaques and neurofibrillary tangles in the brain. Late-onset AD( LOAD) includes patients with onset after 65 years of age and has a complex mode of inheritance[7]. Around 20 risk-associated genetic loci have been identified in LOAD GWAS[8]. However, our understanding of LOAD’s genetic architecture and disease etiology is still far from complete. Through integrative analysis of GWAS summary data and GenoSkyline-Plus annotations, we identified strong enrichment for LOAD associations in immune cell-related DNA elements, consistent with other data suggesting a crucial role for the immune system in AD etiology[9–11]. Jointly analyzing GWAS summary data for LOAD and Parkinson’s disease( PD), we identified substantial enrichment for pleiotropic associations in the monocyte functional genome. Our findings provide support for the critical involvement of the immune system in the etiology of neurodegenerative diseases, and suggest a previously unsuspected role for an immune-mediated pleiotropic effect between LOAD and PD.\nDiscussion:\nIncreasing evidence suggests that non-coding regulatory DNA elements may be the primary regions harboring risk variants in human complex diseases. In this work, we have substantially expanded our previously established GenoSkyline annotation by incorporating RNA-seq and DNA methylation into its framework, imputing incomplete epigenomic and transcriptomic annotation tracks, and extending it to more than 100 human tissue and cell types. With the help of integrative functional annotations, we identified strong enrichment for LOAD heritability in functional DNA elements related to innate immunity and liver tissue using hypothesis-free tissue-specific enrichment analysis. This enrichment was also found in immune-related DNA elements using PD data. Our analysis also clearly indicated that monocyte functional elements in particular appear to be highly relevant in explaining AD and PD heritability. Of note, we analyzed 45 complex diseases and traits in addition to AD and PD. The substantial enrichment for multiple psychiatric and neurological traits in the brain functional genome shows that the lack of brain enrichment in neurodegeneration is not due to poor quality of brain annotations. Further, the monocytes annotation track was the most significantly enriched for Crohn’s disease among the 45 GWAS, and was not ubiquitously enriched for a large number of traits. Consistent and biologically interpretable enrichment results on a large collection of complex traits demonstrate the effectiveness of our approach and increase the validity of novel findings. It is worth noting that multiple studies have highlighted the role of myeloid cells in the genetic susceptibility of neurodegenerative diseases[11]. Several genes expressed in myeloid cells( e. g. ABCA7, CD33, and TREM2) have been identified in GWAS and sequencing-based association studies for AD[8, 69, 70]. Further, AD risk alleles identified in GWASs have been shown to enrich for cis-eQTLs in monocytes[9]. 
In addition, two recent papers identified enrichment for AD heritability in active genome regions in myeloid cells[71, 72], which suggested a polygenic genetic architecture for immune-related DNA elements in AD etiology and hinted at a large number of unidentified, immune-related genes for AD. Compared to the aforementioned work, our study utilizes a better set of tissue-specific genome annotations and explicitly accounts for the similarity between different cell types through a multiple regression model. One major limitation in our analysis is lack of data for other potentially AD-relevant cell types such as microglia. Whether our findings correctly reflected the direct involvement of peripheral immune cells in neurodegenerative diseases rather than the detection of epigenomic similarities between monocytes and microglia remains to be carefully investigated in the future. Furthermore, we successfully identified enrichment for shared genetic components between AD and PD in the monocyte functional genome, which hints at a shared neuroinflammation pathway between these two neurodegenerative diseases. We note that several candidate loci with potential pleiotropic effects showed fairly marginal associations with AD and PD, which explains why they have been missed in traditional SNP-based association analysis. Importantly, SNPs in immune-related DNA elements explain a large proportion of AD and PD heritability in total. These results suggest that weak but pervasive associations related with immunity still remain unidentified. Further evaluations of these relationships using GWAS with larger sample sizes may provide insights into the shared biology of these neurodegenerative conditions. Through multi-tier enrichment analyses on 45 GWAS, an in-depth case study of neurodegenerative diseases, and validation of known non-coding tissue-specific regulatory machinery, we have demonstrated the ability of GenoSkyline-Plus to provide unbiased, genome-wide insights into the genetic basis of human complex diseases. The analyzed GWAS represent a variety of human complex diseases and traits, highlighting the effectiveness of our method in different contexts and genetic architecture. However, while our non-coding validation study demonstrated that GenoSkyline-Plus annotations indeed captured tissue-specific activity in a variety of intergenic machinery, there is a need to develop a more statistically robust framework to identify new non-coding elements rather than validate existing ones. Our approach of identifying the functionally active proportion of all elements in aggregate is only able to identify tissue specificity while considering large groups of highly specific non-coding elements. The availability of over 100 different annotation tracks introduces many multiple-testing issues that should be addressed in the case of a statistically sound analysis for tissue-specificity. We have also demonstrated how GenoSkyline-Plus and its explanatory power improve with the addition of more data. Currently, functionality in 28% of exonic regions still remains to be identified. As the quantity and quality of high-throughput epigenomic data continue to grow, GenoSkyline-Plus has the potential to further evolve and provide even more comprehensive annotations of tissue-specific functionality in the human genome. We will update our annotations when data for new tissue and cell types from the Roadmap consortium become available. 
Finally, several recent papers have introduced novel models to integrate functional annotations in tissue-specific enrichment analysis[73, 74]. Many models that do not explicitly incorporate functional annotation information have also emerged in transcriptome-wide association studies and other closely-related applications in human genetics research[75–78]. Our annotations, in conjunction with rapidly advancing statistical techniques and steadily increasing sample sizes in genetics studies, may potentially benefit a variety of human genetics applications and promise a bright future for complex disease genetics research.",
"role": "user"
},
{
"content": "After years of community efforts , many experimental and computational approaches have been developed and applied for functional annotation of the human genome , yet proper annotation still remains challenging , especially in non-coding regions . As complex disease research rapidly advances , increasing evidence suggests that non-coding regulatory DNA elements may be the primary regions harboring risk variants in human complex diseases . In this paper , we introduce GenoSkyline-Plus , a principled annotation framework to identify tissue and cell type-specific functional regions in the human genome through integration of diverse high-throughput epigenomic and transcriptomic data . Through validation of known non-coding tissue-specific regulatory regions , enrichment analyses on 45 complex traits , and an in-depth case study of neurodegenerative diseases , we demonstrate the ability of GenoSkyline-Plus to accurately identify tissue-specific functionality in the human genome and provide unbiased , genome-wide insights into the genetic basis of human complex diseases .",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.711 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nEfferentfiberstotheearshowawelldevelopedintraganglionicspiralbundle(IGSB)withnodetectabledifferencescomparedtocontrols(e).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.52 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffects of a videotape information intervention at discharge on diet and exercise compliance after coronary bypass surgery . BACKGROUND This study evaluated the relative effects on compliance with recommended lifestyle changes of two experimental videotapes that involved different approaches for preparing coronary artery bypass graft ( CABG ) patients for the posthospital recovery period . The tapes differed in the extent to which they portrayed the recovery period as a steady , forward progression versus a series of \" ups and downs . \" METHODS Two hundred sixteen male and female CABG patients were assigned randomly either to view one of the two videotapes before discharge from the hospital or to receive only the standard discharge preparation provided by the hospital . All patients completed measures of anxiety and self-efficacy at discharge , 1 month and 3 months after discharge from the hospital . Patients also completed measures of dietary fat consumption and activity level 1 and 3 months after discharge . RESULTS Relative to controls , patients who viewed either of the videotapes before hospital release reported higher self-efficacy for adhering to the recommended low-fat diet both at discharge and 1 month after surgery . Viewing either of the videotapes also resulted in significantly less dietary fat intake 1 month after hospital release compared with controls . Patients who viewed the tape that portrayed the recovery period as consisting of ups and downs also reported significantly more frequent moderate exercise at 1 month and more frequent strenuous exercise 3 months after discharge . CONCLUSIONS The experimental videotapes proved to be an effective method for increasing dietary and exercise compliance during the first 3 months after CABG .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"videotape information intervention\", \"one of the two videotapes before discharge from the hospital\", \"videotapes\", \"experimental videotapes\", \"to receive only the standard discharge preparation\"], \"Outcome\": [\"strenuous exercise\", \"activity level 1 and 3\", \"significantly less dietary fat intake 1 month after hospital release\", \"dietary fat consumption\", \"measures of anxiety and self-efficacy\", \"moderate exercise at 1 month\", \"self-efficacy for adhering to the recommended low-fat diet\", \"dietary and exercise compliance\"], \"Participant\": [\"coronary artery bypass graft ( CABG ) patients\", \"female\", \"Two hundred sixteen\", \"male\"]}",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.4093 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nLavergne et al. (2014) developed a two-step method for the enumeration of heterotrophic prokaryotes in coastal sediments by flow cytometry that enables high frequency analyses of 30 samples in less than 4 hours.",
"role": "user"
},
{
"content": "Method",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4093 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nPun, Matiram, Rachel Turner, Giacomo Strapazzon, Hermann Brugger, and Erik R. Swenson. Lower incidence of COVID-19 at high altitude: Facts and confounders. High Alt Med Biol. 21:217-222, 2020.-The rapid transmission, increased morbidity, and mortality of coronavirus disease 2019 (COVID-19) has exhausted many health care systems and the global economy. Large variations in COVID-19 prevalence and incidence have been reported across and within many countries worldwide; however, this remains poorly understood. The variability and susceptibility across the world have been mainly attributed to differing socioeconomic status, burden of chronic diseases, access to health care, strength of health care systems, and early or late adoption of control measures. Environmental factors such as pollution, ambient temperature, humidity, and seasonal weather patterns at different latitudes may influence how severe the pandemic is and the incidence of infection in any part of the world. In addition, recent epidemiological data have been used to propose that altitude of residence may not only influence those environmental features considered key to lesser viral transmission, but also susceptibility to more severe forms of COVID-19 through hypoxic-hypobaria driven genomic or nongenomic adaptations specific to high-altitude populations. In this review, we critically examine these factors and attempt to determine based upon available scientific and epidemiological data whether living in high-altitude regions might be protective against COVID-19 as recent publications have claimed.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Prevention\"]",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.1110 | [
{
"content": "Examine the given text from a research paper in the field of NLP. Your task is to identify the role of the citation within the context provided. Focus on understanding the intent behind the citation and classify it accordingly.\n\nSection Title:\nNone\n\nText Before Citation:\nresults are based on a corpus of movie subtitles (Tiedemann 2007), and are consequently shorter sentences, whereas the EnâEs results are based on a corpus of parliamentary proceedings (Koehn 2005). Horizontal axis: average number of transferred edges per sentence. Vertical axis: percentage of transferred edges that are correct.\n\nCitation Text:\nresults are based on a corpus of movie subtitles ( Tiedemann 2007 ) , and are consequently shorter sentences , whereas the En â Es results are based on a corpus of parliamentary proceedings ( Koehn 2005 ) .\n\nText After Citation:\nWe see in Figure 10 that for both domains, the models trained using posterior regularization perform better than the baseline model trained using EM. \u0000\u0000\u0000\u0000\u0000\u0000",
"role": "user"
},
{
"content": "Uses",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1554 | [
{
"content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nDelusional bias is a fundamental source of error in approximate Q-learning.\nTo date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates.\nIn this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are \"consistent\" with the underlying greedy policy class.\nWe introduce a simple penalization scheme that encourages Q-labels used across training batches to remain (jointly) consistent with the expressible policy class.\nWe also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature (implicit) policy commitments.\nExperimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically.\nQ-learning (Watkins & Dayan, 1992; Sutton & Barto, 2018) lies at the heart of many of the recent successes of deep reinforcement learning (RL) (Mnih et al., 2015; , with recent advancements (e.g., van Hasselt (2010); Bellemare et al. (2017) ; Wang et al. (2016) ; Hessel et al. (2017) ) helping to make it among the most widely used methods in applied RL.\nDespite these successes, many properties of Q-learning are poorly understood, and it is challenging to successfully apply deep Q-learning in practice.\nWhen combined with function approximation, Q-learning can become unstable (Baird, 1995; Boyan & Moore, 1995; Tsitsiklis & Roy, 1996; Sutton & Barto, 2018) .\nVarious modifications have been proposed to improve convergence or approximation error (Gordon, 1995; 1999; Szepesvári & Smart, 2004; Melo & Ribeiro, 2007; Maei et al., 2010; Munos et al., 2016) ; but it remains difficult to reliably attain both robustness and scalability.\nRecently, Lu et al. 
(2018) identified a source of error in Q-learning with function approximation known as delusional bias.\nIt arises because Q-learning updates the value of state-action pairs using estimates of (sampled) successor-state values that can be mutually inconsistent given the policy class induced by the approximator.\nThis can result in unbounded approximation error, divergence, policy cycling, and other undesirable behavior.\nTo handle delusion, the authors propose a policy-consistent backup operator that maintains multiple Q-value estimates organized into information sets.\nEach information set has its own backed-up Q-values and corresponding \"policy commitments\" responsible for inducing these values.\nSystematic management of these sets ensures that only consistent choices of maximizing actions are used to update Q-values.\nAll potential solutions are tracked to prevent premature convergence on any specific policy commitments.\nUnfortunately, the proposed algorithms use tabular representations of Q-functions, so while this establishes foundations for delusional bias, the function approximator is used neither for generalization nor to manage the size of the state/action space.\nConsequently, this approach is not scalable to RL problems of practical size.\nIn this work, we propose CONQUR (CONsistent Q-Update Regression), a general framework for integrating policy-consistent backups with regression-based function approximation for Q-learning and for managing the search through the space of possible regressors (i.e., information sets).\nWith suitable search heuristics, our framework provides a computationally effective means for minimizing the effects of delusional bias in Q-learning, while admitting scaling to practical problems.\nOur main contributions are as follows.\nFirst we define novel augmentations of standard Q-regression to increase the degree of policy consistency across training batches.\nWhile testing exact consistency is expensive, we introduce an efficient soft-consistency penalty that promotes consistency of new labels with earlier policy commitments.\nSecond, drawing on the information-set structure of Lu et al. (2018) , we define a search space over Q-regressors to allow consideration of multiple sets of policy commitments.\nThird, we introduce heuristics for guiding the search over regressors, which is critical given the combinatorial nature of information sets.\nFinally, we provide experimental results on the Atari suite (Bellemare et al., 2013) demonstrating that CONQUR can offer (sometimes dramatic) improvements over Q-learning.\nWe also show that (easy-to-implement) consistency penalization on its own (i.e., without search) can improve over both standard and double Q-learning.",
"role": "user"
},
{
"content": "We developed a search framework and consistency penalty to mitigate delusional bias.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.1679 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nMore than ever, technical inventions are the symbol of our society's advance. Patents guarantee their creators protection against infringement. For an invention being patentable, its novelty and inventiveness have to be assessed. Therefore, a search for published work that describes similar inventions to a given patent application needs to be performed. Currently, this so-called search for prior art is executed with semi-automatically composed keyword queries, which is not only time consuming, but also prone to errors. In particular, errors may systematically arise by the fact that different keywords for the same technical concepts may exist across disciplines. In this paper, a novel approach is proposed, where the full text of a given patent application is compared to existing patents using machine learning and natural language processing techniques to automatically detect inventions that are similar to the one described in the submitted document. Various state-of-the-art approaches for feature extraction and document comparison are evaluated. In addition to that, the quality of the current search process is assessed based on ratings of a domain expert. The evaluation results show that our automated approach, besides accelerating the search process, also improves the search results for prior art with respect to their quality.\n\nCited Works Summary:\n[1]: Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.\n\n[2]: \n\n[3]: \n\n[4]: Motivation: The most widely used literature search techniques, such as those offered by NCBI's PubMed system, require significant effort on the part of the searcher, and inexperienced searchers do not use these systems as effectively as experienced users. Improved literature search engines can save researchers time and effort by making it easier to locate the most important and relevant literature. Results: We have created and optimized a new, hybrid search system for Medline that takes natural text as input and then delivers results with high precision and recall. 
The combination of a fast, low-sensitivity weighted keyword-based first pass algorithm to cast a wide net to gather an initial set of literature, followed by a unique sentence-alignment based similarity algorithm to rank order those results was developed that is sensitive, fast and easy to use. Several text similarity search algorithms, both standard and novel, were implemented and tested in order to determine which obtained the best results in information retrieval exercises. Availability: Literature searching algorithms are implemented in a system called eTBLAST, freely accessible over the web at http: invention.swmed.edu. A variety of other derivative systems and visualization tools provides the user with an enhanced experience and additional capabilities. Contact: [email protected]\n\n[5]: With an exponentially growing number of scientific papers published each year, advanced tools for exploring and discovering publications of interest are becoming indispensable. To empower users beyond a simple keyword search provided e.g. by Google Scholar, we present the novel web application PubVis. Powered by a variety of machine learning techniques, it combines essential features to help researchers find the content most relevant to them. An interactive visualization of a large collection of scientific publications provides an overview of the field and encourages the user to explore articles beyond a narrow research focus. This is augmented by personalized content based article recommendations as well as an advanced full text search to discover relevant references. The open sourced implementation of the app can be easily set up and run locally on a desktop computer to provide access to content tailored to the specific needs of individual users. Additionally, a PubVis demo with access to a collection of 10,000 papers can be tested online.\n\n[6]: Since the huge database of patent documents is continuously increasing, the issue of classifying, updating and retrieving patent documents turned into an acute necessity. Therefore, we investigate the efficiency of applying Latent Semantic Indexing, an automatic indexing method of information retrieval, to some classes of patent documents from the United States Patent Classification System. We present some experiments that provide the optimal number of dimensions for the Latent Semantic Space and we compare the performance of Latent Semantic Indexing (LSI) to the Vector Space Model (VSM) technique applied to real life text documents, namely, patent documents. However, we do not strongly recommend the LSI as an improved alternative method to the VSM, since the results are not significantly better.\n\n[7]: Finding relevant publications is important for scientists who have to cope with exponentially increasing numbers of scholarly material. Algorithms can help with this task as they help for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. Design principles are to adapt to new content, provide near-real time suggestions, and be open source. We tested the library on 15K posters from the Society of Neuroscience Conference 2015. Human curated topics are used to cross validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. 
We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.\n\n[8]: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"role": "user"
},
{
"content": "Full text similarity measures have previously been used to improve search results for MEDLINE articles, where a two step approach using the cosine similarity measure between vectors in combination with a sentence alignment algorithm yielded superior results compared to the boolean search strategy used by PubMed [4] . The Science Concierge [7] computes the similarities between papers' abstracts to provide content based recommendations, however it still requires an initial keyword search to retrieve articles of interest. The PubVis web application by Horn [5] , developed for visually exploring scientific corpora, also provides recommendations for similar articles given a submitted abstract by measuring overlapping terms in the document feature vectors. While full text similarity search approaches have shown potential in domains such as scientific literature, only few studies have explored this approach for the much harder task of retrieving prior art for a new patent application [6] , where much less overlap between text documents is to be expected due to the usage of very abstract and general terms when describing new inventions. Specifically, document representations created using recently developed neural network language models such as [3] [8] [2] or [1] were not yet evaluated on patent documents.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.1945 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nAn optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss measured by the Kullback–Leibler (KL) divergence at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α -divergences, which inherit the beneficial property of providing the policy improvement step in closed form at the same time yielding a corresponding dual objective for policy evaluation. Such entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ 2 -divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α -divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on common standard reinforcement learning problems.\n\nCited Works Summary:\n[1]: In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified \"safety\" guarantees, and remains in a stable region of the parameter space (iii) how to design \"off-policy\" temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The key idea that emerges is the use of primal dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. 
This key technical innovation makes it possible to finally design \"true\" stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators.",
"role": "user"
},
{
"content": "An alternative proximal reinforcement learning scheme was introduced in [1] based on the extragradient method for solving variational inequalities and leveraging operator splitting techniques. Although the idea of exploiting proximal maps and updates in the primal and dual spaces is similar to ours, regularization in [1] is applied in the value function space to smoothen generalized TD learning algorithms, whereas we study regularization in the primal space.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.174603 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\na key prediction of cdm hierarchical formation models ( e.g. , white & frenk 1991 ) is the `` down - sizing '' of star formation sites , whereby the characteristic galaxy mass dominating the co - moving star - formation rate decreases with redshift ( cowie et al . 1996 ) . at high redshift ,\nlyman - break galaxies dominate ; we believe they are associated with today s high - mass systems ( lowenthal et al .\n1997 ; steidel et al .\ntoday , the most intensely star - forming systems are low - mass hii galaxies . between @xmath1\nthe situation is less clear .\n`` luminous compact blue galaxies '' ( lcbgs ) appear to be the most rapidly evolving class of galaxy at intermediate redshift ( lilly et al .\n1998 ; mallen - ornelas et al . 1999 ) .\nthese enigmatic galaxies , initially identified by koo & kron ( 1988 ) , are luminous ( @xmath2 ) , small ( @xmath3kpc ) , and massive engines of star formation ( up to @xmath4 ) ( koo et al .\n1995 , guzmn et al .\n1998 , hammer et al .\nthey appear to form a link in redshift , size , and luminosity between lyman - break galaxies and hii galaxies today ( lowenthal et al .\n1997 ; see our figure 1 ) .\nlcbgs are a major contributor to the observed enhancement of the star - formation density of the universe at @xmath5 , and their mass and number density decline in concert with the rapid drop in the global sfr since @xmath6 ( guzmn et al .\n1998 ) . 0.25 in 0.15 in -0.15 in however\n, a debate continues about today s descendants of lcbgs . due to their narrow emission line - widths and number density , koo et al .\n( 1995 ) proposed lcbgs are the progenitors of today s lower - mass spheroidal ( sphs ) galaxies : if their current burst of star formation terminates , lcbgs fade by 4 mag in a few gyrs , yielding luminosities , surface - brightnesses , sizes , and colors similar to , e.g. , ngc 205 ( guzmn et al .\nlcbgs have also been proposed as the period when bulges of spiral galaxies form due to their high gas phase metallicities ( hammer et al .\n1999 , kobulnicky & zaritsky 1999 ) . in this picture , the bright , blue compact regions are embedded within a lower - surface brightness , disk of larger size , ( barton & van zee 2001 ) .\nbecause of the extreme density of galaxy clusters , they offer a distinctly different , yet heretofore unexplored environment in which to study the nature of lcbgs . for our purpose , there are two salient differences in galaxy populations in rich clusters and the field : first , the morphology - density relationship describes a decreasing fraction of spirals with increasing local surface density of galaxies ( dressler 1980 ) .\nsecond , the dwarf - galaxy population is much richer in clusters .\ndwarf galaxies are the most numerous type of galaxy regardless of environment , but the ratio of dwarf to giant galaxies increases with the overall density ( e.g. , trentham and hodgkins 2002 ) . 
because the relative densities of the two proposed descendants of lcbgs ( lower - mass spheroidals and spiral bulges ) are different in galaxy clusters and the field , the number density of lcbgs in intermediate galaxy clusters make the connection between the past ( lcbgs ) and today ( either low - mass spheroidals or spirals ) .\nspecifically , if lcbgs are relatively more numerous in intermediate - redshift clusters than in the field , this provides independent evidence that lcbgs are associated with today s lower - mass spheroidal population .\nwe currently are analyzing imaging data on 10 intermediate redshift galaxy clusters from the wiyn long term variability survey ( wltv ) to search for cluster lcbgs .\neach cluster has been repeatedly imaged over 5 years in a 10 arcmin field with the wiyn 3.5 m telescope .\nwe have a total of @xmath7 hrs integration per target in each of the ubriz bands taken under excellent seeing conditions ( typically @xmath8 0.75 ) .\ndeep , archival hst images are available of each cluster core .\nwe are in the process of gathering rest - frame oii and continuum narrow - band images of the 6 deepest clusters , four of which are visible to salt .\nwe have completed the first stage of an lcbg search in cluster ms0451 , at @xmath9 .\nwe frame our analysis in the measurement of galaxy enhancement .\nthe enhancement is the number of galaxies at a given redshift divided by the expected number in the field at that redshift ( estimated from comparable deep images without rich clusters ) .\nhence , if there is no galaxy over - density , the enhancement is one .\nfigure 2 shows the enhancement of lcbgs in the field of ms0451 . this is a robust , _ differential _ measurement , which , for this cluster appears to support the hypothesis that lcbgs evolve into sphs .\n0.15 in we also have calculated the photometric and structural characteristics of the lcbgs appearing in the inner 0.75 mpc of ms0451 using f702 wfpc2 images .\ncluster lcbgs also have similar colors , sizes , and profiles ( based on image concentration ) to field lcbgs , but somewhat lower luminosities or surface - brightnesses ( see figures 1 and 3 ) .\nare these differences indicative of a more rapid pace of `` down - sizing '' in clusters , akin to the environmental dependence of star formation rates ( balogh et al .\n1998 , martin et al .\n2000 ) ? to differentiate between an accelerated downsizing scenario and environmental effects of truncated or quenched star - formation ( due , e.g. , to stripping ) , star formation rates _ and _ mass measurements of cluster lcbgs are required .\nthese measurements , well suited to pfis , will tell us what differs between cluster and field : the mass - function of star - bursts , or the star - formation rate at a given mass ?\npreliminary results from our survey raise several important questions about cluster lcbg requiring spectroscopic measurements on 10m - class telescopes to answer .\nsalt s prime focus imaging spectrograph ( pfis ) will enable the needed high throughput , medium - resolution spectroscopy . here\nwe outline how we plan to use pfis for our lcbg studies . *\n* mass uniformity : * our initial evidence indicates photometric differences in cluster and field lcbgs .\nwe can characterize this difference more meaningfully by comparing field to cluster lcbg masses . 
in making this measurement for several clusters\n, we will map how the mass of star - bursting galaxies evolves with redshift and environment .\nbecause of their small sizes , lcbg virial masses are measured by combining spatially - unresolved emission line - widths and hst or wiyn size measurements .\nthe highest spectral resolutions of pfis are ample for lcbg line - width measurements . by using pfis s bank of narrow band ( @xmath10 ) filters ,\nwe can limit the range of the dispersed spectra on the detector to @xmath1175 ( rest - frame ) around the [ oii]@xmath123727 emission line for each cluster . this allows us to increase the spatial multiplexing of our multi - object masks ( see the mms concept described by bershady et al .\n, these proceedings ) and hence also our survey efficiency by about a factor of 4 .\nthe minimum source spatial separation in the dispersion direction is @xmath111 arcmin at the highest spectral resolutions ( grating angle @xmath13 ) , and decreases linearly with resolution ( @xmath14 , i.e. , the littrow condition for the vph gratings ) . at the resolution used by guzman et al .\n( 1998 ) , the minimum separation is 10 arcseconds . * * star formation rates : * if field and cluster lcbgs have similar size and mass functions , luminosity or surface - brightness differences are caused by lower star - formation rates plausibly due to gas stripping in the dense clusters environments .\nwe can measure these rates via low - resolution pfis mos observations that capture [ oii]@xmath123727 and other blue nebular lines that will allow us also to determine metallicity and estimate extinction .\ncombined mass , size , shape , color , star - formation characteristics will allow us to carefully quantify physical differences between field and cluster lcbgs , and determine if they share similar histories of stellar processing of baryonic matter . * * origins : cluster accretion ?\n* there is ample evidence in low redshift clusters that blue , star - forming galaxies are on the periphery of these systems , and just falling in . one way to test for an accretion origin of intermediate - redshift cluster lcbgs is to compare their velocity dispersion to the red cluster population ( for which some measurements already exist , and will be augmented ) .\nlcbg velocity dispersions can be calculated efficiently from the mos measurements described above , or via fabry - perot ( fp ) scanning with the lowest - resolution pfis etalon .\nthe advantage of the latter is the gain in spatial multiplex over the full 8 arcmin pfis field , thereby providing complete kinematic characterization of the emission - line population in the cluster out to several cluster radii .\nwe have presented the initial results from the wltv survey that will provide an extensive catalog of @xmath15 cluster and field lcbgs with high quality photometric information in areas around 10 clusters between @xmath0 . with cluster lcbgs and over - densities identified from these photometric data , we plan to use pfis to answer deeper questions about lcbgs : are cluster and field lcbgs the same ? does the evolution of cluster and field lcbgs differ ? what is the origin of cluster lcbgs ? the answers to these questions tie in with parallel studies of other cluster and field populations , and will yield a cohesive picture of galaxy evolution over the past 3 - 7 gyr .\nbalogh , m. l. , et al .\n1998 , apjl , 504 , l75 barton , e. j. & van zee , l. 2001 , apjl , 550 , l35 bershady , m. a. , jangren , a. , & conselice , c. j. 
2000 , aj , 119 , 2645 cowie , l. l. , et al .\n1996 aj , 112 , 839 dressler , a. 1980 , apj , 236 , 351 guzmn , r. , et al .\n1997 , apj , 489 , 559 guzmn , r. , et al .\n1998 , apjl , 495 , l13 hammer , f. , et al .\n2001 , apj , 550 , 570 kobulnicky , h. a. & zaritsky , d. 1999 , apj , 511 , 118 koo , d. c. & kron , r. g. 1988 , apj , 325 , 92 koo , d. c. , et al .\n1995 , apjl , 440 , l49\nlilly , s. et al .\n1998 , apj , 500 , 75 lowenthal , j. d. et al .\n1997 , apj , 481 , 673 mall ' en - ornelas , et al . 1999 , apjl , 518 , l83 martin , c. l. , lotz , j. , & ferguson , h. c. 2000 , apj , 543 , 97 phillips , a. c. , et al .\n1997 , apj , 489 , 543 pisano , d. j. , et al .\n2001 , aj , 122 , 1194 steidel , c. c. , et al .\n1996 , aj , 112 , 352 trentham n. & hudgkin , s. 2002 , astro - ph 0202437 white , s. d. m. & frenk , c. s. 1991 , apj , 379 , 52",
"role": "user"
},
{
"content": "luminous compact blue galaxies ( lcbgs ) are a heterogeneous class which dominate an intermediate phase of galaxy evolution . \n these sources account for the majority of the star formation between @xmath0 , yet the identity of their present - day counterparts is an open question . \n an environmental dependence to their evolution may provide the answer . \n we have undertaken the first census of lcbgs in intermediate - redshift clusters using broad and narrow - band images from the wiyn long term variability study . \n several key clusters are in the southern sky . \n the southern african large telescope s ( salt ) prime focus imaging spectrograph ( pfis ) , with a field of view matching our wiyn data , will allow us to determine several fundamental characteristics of cluster lcbgs , including ( 1 ) star formation rates ( sfr ) and metallicities from low dispersion _ \n m_ulit-_o_bject _ s_pectroscopy ( mos ) ; ( 2 ) dynamical masses from line - widths measured via high dispersion mos ; and ( 3 ) cluster velocity dispersions of lcbgs relative to other cluster emission - line galaxies via fabry - perot imaging or mos . \n we tentatively connect these distant galaxies with their local , evolved counterparts , and discuss how pfis observations will enable us to make the physical distinction between lcbgs in clusters and the field . \n # 1_#1 _ # 1_#1 _ = # 1 1.25 in .125 in .25 in",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.765 | [
{
"content": "Evaluate the following claim concerning COVID-19 using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"verdict\": Decide if the abstract supports or contradicts the claim.\n- \"evidence\": List sentences from the abstract that justify your decision.\n\nClaim: Prior infection by seasonal coronaviruses does also prevent sars-cov-2 infection and associated multisystem inflammatory syndrome in children\n\nResearch Abstract:\nBackground: Children have a lower rate of COVID-19, potentially related to cross-protective immunity conferred by seasonal coronaviruses (HCoVs).\nWe tested if prior infections with seasonal coronaviruses impacted SARS-CoV-2 infections and related Multisystem Inflammatory Syndrome (MIS).\nMethods: This cross-sectional observational study in Paris hospitals enrolled 739 pauci or asymptomatic children (HOS group) plus 36 children with suspected MIS (MIS group).\nPrevalence, antigen specificity and neutralizing capability of SARS-CoV-2 antibodies were tested.\nAntibody frequency and titres against Nucleocapsid (N) and Spike (S) of the four seasonal coronaviruses (NL63, HKU1, 229E, OC43) were measured in a subset of seropositive patients (54 SARS-CoV-2 (HOS-P subgroup) and 15 MIS (MIS-P subgroup)), and in 118 matched SARS-CoV-2 seronegative patients (CTL subgroup).\nFindings: SARS-CoV-2 mean prevalence rate in HOSP children was 11.7% from April 1 to June 1.\nNeutralizing antibodies were found in 55.6% of seropositive children, and their relative frequency increased with time (up to 100 % by mid-May).\nA majority of MIS children (25/36) were SARS-CoV-2 seropositive, of which all tested (n=15) had neutralizing antibodies.\nOn average, seropositive MIS children had higher N and S1 SARS-CoV-2 titres as compared to HOS children.\nPatients from HOS-P, MIS-P, and CTL subgroups had a similar prevalence of antibodies against the four seasonal HCoVs (66.9-100%).\nThe level of anti-SARS-CoV-2 antibodies was not significantly different in children who had prior seasonal coronavirus infection.\nInterpretation: Prior infection with HCoVs does not prevent SARS-CoV-2 infection and related MIS in children.\nChildren develop neutralizing antibodies after SARS-CoV-2 infection.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"The level of anti-SARS-CoV-2 antibodies was not significantly different in children who had prior seasonal coronavirus infection.\"]}",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.121 | [
{
"content": "Examine the provided biomedical text and determine the answer to the question. Your response should be either \"yes\" or \"no\" based on the information given.\n\nQuestion: Can protein coding exons originate from ALU sequences?\n\nText: The Alu element has been a major source of new exons during primate evolution. Thousands of human genes contain spliced exons derived from Alu elements.\nMore than 25% of Alu exons analyzed by RNA-Seq have estimated transcript inclusion levels of at least 50% in the human cerebellum, indicating widespread establishment of Alu exons in human genes.\nhis study presents genomic evidence that a major functional consequence of Alu exonization is the lineage-specific evolution of translational regulation.\nOur data suggests that lineage-specific exonization events should be determined by the combination event of the formation of splicing sites and protection against site-specific mutation pressures. These evolutionary mechanisms could be major sources for primate diversification.\nExonization of Alu elements creates primate-specific genomic diversity\nOur data show that, once acquired, some exonizations were lost again in some lineages. In general, Alu exonization occurred at various time points over the evolutionary history of primate lineages, and protein-coding potential was acquired either relatively soon after integration or millions of years thereafter.\nOnce integrated, they have the potential to become exapted as functional modules, e.g., as protein-coding domains via alternative splicing. This particular process is also termed exonization and increases protein versatility\nalternative \"Alu-exons\" also carry the potential to greatly enhance genetic diversity by increasing the transcriptome of primates chiefly via alternative splicing.\nere, we report a 5' exon generated from one of the two alternative transcripts in human tumor necrosis factor receptor gene type 2 (p75TNFR) that contains an ancient Alu-SINE, which provides an alternative N-terminal protein-coding domain.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.2610 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nComparison of visual and computerized interpretation of nonstress test results in a randomized controlled trial.\nReport Abstract 1:\nThis study tested the null hypothesis that the number of fetal surveillance tests and perinatal outcomes would not differ statistically between pregnancies randomized to visual or computerized interpretation of antepartum nonstress test results.\n A prospective, randomized controlled trial was conducted, which required a sample size of 404 patients. By using a random-number table with assignment codes concealed in opaque envelopes, half of the patients were randomized to computerized interpretation of nonstress test results and half to standard visual interpretation of nonstress test results. The amount of antepartum testing and the perinatal outcome were measured and compared between the groups. Logistic regression analysis was used to control for maternal risk factors while morbidity differences between the 2 groups were assessed.\n The 2 randomized groups were similar at baseline, but the computerized interpretation group had significantly fewer biophysical profiles compared with the visual interpretation group (1.3 +/- 1.8 vs 1.9 +/- 2.1; P =.002). The patients in the computerized interpretation group spent less time per test than patients in the visual interpretation group (12 vs 20 minutes; P =.038). After the 5 pregnancies with congenital anomalies were excluded, the overall perinatal outcome was similar in the 2 groups. The computerized interpretation group, however, had a slightly lower proportion of infants who required >/=2 days of neonatal intensive care (7.4% vs 12.4%; P =.086; odds ratio, 0.56; 95% confidence interval, 0.29-1.09). The average number of neonatal intensive care days was also slightly lower in the computerized interpretation group (0.4 vs 0.9; P =.105). Neither of these variables was statistically significant.\n Computerized interpretation of nonstress test results is associated with fewer additional fetal surveillance examinations, less time spent in testing, and a similar length of stay in the neonatal intensive care unit compared with standard visual interpretation.\n\n\nReport Title 2:\nA randomized trial of weekly cardiotocography in high-risk obstetric patients.\nReport Abstract 2:\nA randomized clinical trial of once-weekly antenatal fetal heart rate monitoring in 539 high-risk patients could find no benefit of monitoring in terms of perinatal mortality, morbidity or Apgar score. The previously well-documented association between abnormal antenatal fetal heart rate traces and low Apgar score was confirmed. A detailed case review showed that in this population monitoring was irrelevant to almost all of the 13 perinatal deaths.\n\n\nReport Title 3:\nA randomized controlled trial of non-stress antepartum cardiotocography.\nReport Abstract 3:\nIn a prospective randomized controlled study of non-stress antepartum cardiotocography (CTG), involving 569 tracings in 300 patients, \"non-reactive' traces showed a significant association with still-births and neonatal deaths, intrauterine growth retardation, admission to special care baby unit for conditions associated with intrauterine hypoxia, and low Apgar scores at 1 and 5 min. 
The report of the CTG was made available to the clinician in 144 patients and withheld in 156 patients. With the report available, significantly more patients were allowed to continue their antenatal care as out-patients, and significantly more antenatal in-patients were allowed home. There were no other significant differences in management, or outcome in the two groups.\n\n\nReport Title 4:\nNon-stress antenatal cardiotocography--a prospective randomized clinical trial.\nReport Abstract 4:\nA randomized controlled trial examined the effects of non-stress antepartum cardiotocography on obstetric management and assessed its usefulness as a diagnostic test of fetal compromise. Daily cardiotocograph recordings were made in 396 antenatal patients at increased risk of fetal compromise but were withheld from the clinicians responsible for care in half the cases. The frequency of intrapartum fetal distress and low Apgar score was similar in the two groups. The three normally-formed perinatal deaths all occurred in the revealed group but in only one case could earlier obstetric intervention have altered the outcome. Availability of non-stress cardiotocography was not associated with an increased rate of induction of labour or caesarean section.\n\n\nReport Title 5:\nThe value of antenatal cardiotocography in the management of high-risk pregnancy: a randomized controlled trial.\nReport Abstract 5:\nThe value of routine regular antenatal cardiotocography (CTG) in the management of high-risk pregnancy was assessed in a prospective randomized controlled study of 353 patients. All patients had a weekly CTG trace during the last 6 weeks of pregnancy and according to the random allocation the tracings were concealed from, or available to, the clinicians. Other methods of assessing fetal welfare were available to both groups. There was no significant difference between the concealed and revealed groups in the timing and mode of delivery, birthweights, Apgar score and neonatal morbidity. No apparent effect from the routine use of antenatal CTG in high-risk pregnancy was shown.",
"role": "user"
},
{
"content": "There is no clear evidence that antenatal CTG improves perinatal outcome, but further studies focusing on the use of computerised CTG in specific populations of women with increased risk of complications are warranted.",
"role": "assistant"
}
] |
science.ncbi_ner | science.ncbi_ner.434 | [
{
"content": "Examine the text below and extract disease-related entities. Present your findings in a structured JSON format.\n\nYour response should be a JSON object with keys as entity types and values as lists of entities. If no entities are found, return an empty list for that type.\n\nContent:\n\nAssignment of the aspartylglucosaminidase gene (AGA) to 4q33----q35 based on decreased activity in a girl with a 46,XX,del(4)(q33) karyotype. Aspartylglucosaminuria (AGU) is a recessive autosomally inherited lysosomal storage disorder due to deficiency of the enzyme aspartylglucosaminidase (AGA). The structural gene for this human enzyme (AGA) has been assigned to the region 4q21----qter. We determined the AGA activity in cultured fibroblasts of a girl with a 46, XX, del (4) (q33) karyotype. The results indicate that the girl is a hemizygote for AGA, permitting the assignment of human AGA to the region 4q33----qter..",
"role": "user"
},
{
"content": "{\"Disease\": [\"lysosomal storage disorder\", \"AGU\", \"Aspartylglucosaminuria\", \"deficiency of the enzyme aspartylglucosaminidase\"]}",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.3425 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nFor nocodazole treatment of embryos, L4 worms were grown on perm-1(RNAi) feeding plates at 20°C for 16-18 hr, then dissected in an open imaging chamber filled with osmotic support medium (Carvalho et al., 2011;\nWueseke et al., 2016) and 20 µg/ml nocodazole (Sigma).",
"role": "user"
},
{
"content": "Method",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.196 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nIn silico drug repositioning: what we need to know. Drug repositioning, exemplified by sildenafil and thalidomide, is a promising way to explore alternative indications for existing drugs. Recent research has shown that bioinformatics-based approaches have the potential to offer systematic insights into the complex relationships among drugs, targets and diseases necessary for successful repositioning. In this article, we propose the key bioinformatics steps essential for discovering valuable repositioning methods. The proposed steps (repurposing with a purpose, repurposing with a strategy and repurposing with confidence) are aimed at providing a repurposing pipeline, with particular focus on the proposed Drugs of New Indications (DNI) database, which can be used alongside currently available resources to improve in silico drug repositioning.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"thalidomide\", \"sildenafil\"]}",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.633 | [
{
"content": "Examine the provided text and determine the interactions between chemical entities and genes. Focus on identifying any connections present.\n\nPresent your findings in a structured JSON format. Each relationship should be represented as an array entry. If no interactions are detected, return an empty array.\n\nText Content:\n\nSynthesis, crystal structure and antidiabetic activity of substituted (E) -3- (Benzo [d]thiazol-2-ylamino) phenylprop-2-en-1-one. A novel series of substituted (E) -3- (Benzo [d]thiazol-2-ylamino) phenylprop-2-en-1-onewere synthesized starting from 2-aminobenzothiazole and 1-aryl-3,3-bis- (methylsulfanyl) -2-propen-1-onesin the presence of a catalytic amount of sodium hydride in THF. The synthesised compounds ' structures were confirmed by IR, Mass spectrometry, (1) H NMR, (13) C NMR and HRMS spectral data. These compounds were evaluated for their antidiabetic activity, and most of the derivatives of (E) -3- (Benzo [d]thiazol-2-ylamino) phenylprop-2-en-1-one displayed significant antidiabetic activity.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3408 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nIn this work we propose a novel approach to remove undesired objects from RGB-D sequences captured with freely moving cameras, which enables static 3D reconstruction. Our method jointly uses existing information from multiple frames as well as generates new one via inpainting techniques. We use balanced rules to select source frames; local homography based image warping method for alignment and Markov random field (MRF) based approach for combining existing information. For the left holes, we employ exemplar based multi-view inpainting method to deal with the color image and coherently use it as guidance to complete the depth correspondence. Experiments show that our approach is qualified for removing the undesired objects and inpainting the holes.\n\nCited Works Summary:\n[1]: \n\n[2]: Image-Based Rendering (IBR) allows high-fidelity free-viewpoint navigation using only a set of photographs and 3D reconstruction as input. It is often necessary or convenient to remove objects from the captured scenes, allowing a form of scene editing for IBR. This requires multi-view inpainting of the input images. Previous methods suffer from several major limitations: they lack true multi-view coherence, resulting in artifacts such as blur, they do not preserve perspective during inpainting, provide inaccurate depth completion and can only handle scenes with a few tens of images. Our approach addresses these limitations by introducing a new multi-view method that performs inpainting in intermediate, locally common planes. Use of these planes results in correct perspective and multi-view coherence of inpainting results. For efficient treatment of large scenes, we present a fast planar region extraction method operating on small image clusters. We adapt the resolution of inpainting to that required in each input image of the multi-view dataset, and carefully handle image resampling between the input images and rectified planes. We show results on large indoors and outdoors environments.\n\n[3]: We present a multiview image completion method that provides geometric consistency among different views by propagating space structures. Since a user specifies the region to be completed in one of multiview photographs casually taken in a scene, the proposed method enables us to complete the set of photographs with geometric consistency by creating or removing structures on the specified region. The proposed method incorporates photographs to estimate dense depth maps. We initially complete color as well as depth from a view, and then facilitate two stages of structure propagation and structure-guided completion. Structure propagation optimizes space topology in the scene across photographs, while structure-guide completion enhances, and completes local image structure of both depth and color in multiple photographs with structural coherence by searching nearest neighbor fields in relevant views. We demonstrate the effectiveness of the proposed method in completing multiview images.",
"role": "user"
},
{
"content": "Recent research begins to show interests on using the geometric connections among different views. Baek [3] present a multi-view based method to complete the user-defined region by jointly inpaint the color and depth image, which takes advantages from SfM to achieve geometric registration among different views. Similarly, Thonat [1] enable free-viewpoint image based rendering with reprojected information from neighboring views. Also a refined method is proposed in the following work [2] that performs inpainting on intermediate, local planes in order to preserve perspective as well as to ensure multi-view coherence. In contrast, we use local homography for achieving pixel-wise correspondences, with which the information loss caused by SfM could be effectively avoided. Also they assume that the input images are of high quality, while in contrast, our approach aims at dealing with more common scenarios, such as those taken with moving cameras and hence glutted with blurriness.",
"role": "assistant"
}
] |
science.covid_deepset_qa | science.covid_deepset_qa.412 | [
{
"content": "Analyze the following excerpt from a biomedical text. Identify the relevant information needed to respond to the inquiry. Provide only the necessary details in your answer.\n\nContext: Cumulative incidence rates of infection (i.e. seroincidence) and seroconversion rates were standardized according to the age structure of the community (French National Institute for Statistics and Economical Studies (INSEE) source). Baseline-proxy seroprevalence, cumulative incidence rates of infection, as well as seroconversion and seronegation rates, were expressed as percentages. Cumulative reverse distribution curves were used to show the distribution of antibody titers. In all tests, a P value,0.05 was considered significant. We estimated 95% confidence intervals (CIs) of proportions by using a cluster bootstrap technique with 1000 re-samples [19] . After bootstraping, we used an ANOVA model to compare mean cumulative incidence proportions between pandemic phases, within each age group. We used an alternating logistic regression model (ALR) with an exchangeable log Odds Ratio (OR) to test the intra-household correlation-adjusted association between factors and the seroconversion outcome. Data were analysed with respect to subject age. Initially, four age groups were considered: the children and adolescents (,20 yrs), young adults (20-39 yrs), middle-age adults (40-59 yrs), and elderly adults ($60 yrs). As the cumulative incidence of infection of the second and third groups were very close, both groups were merged into one adults group (20-59 yrs). Therefore we refer further in our study to three age groups: children and adolescents (,20 yrs), adults (20-59 yrs), elderly ($60 yrs). A total of 2,164 individuals from 772 households were enrolled between weeks 30 and 44 in the CoPanFlu-RUN cohort, allowing the collection of 1,932 sera at inclusion (sample 1). During this period, 136 households (17.7% of households) containing 464 individuals (21.4% of individuals) reported at least one case of ILI. Sixty subjects among the 464 individuals (12.9%, belonging to 33 households [24.3%]) were qRT-PCR positive, which documented the pH1N1/2009v infection. No positive qRT-PCR could be detected after week 37 and no ILI was reported after week 40, the end of the epidemic wave. The second follow up serum sample (sample 2) was obtained for 1,759 subjects at least five weeks after the end of the epidemic wave (weeks 45-52) which allowed the constitution of a serobank of 1,687 paired-sera. The profile of the cohort and the major outcomes are displayed in Figure 1 . Details on inclusions and serum sample timing with respect to the circulation of pH1N1/2009v over the island are provided in figure 2 . The socio-demographic and space-time characteristics of the cohort are detailed in Table 1 . Compared to the community of Reunion Island, the sample of 1,687 individuals for whom pairedsera were available, was older (,20 yrs: 27% vs 35%, and $60 yrs: 17,9% vs 11,3%) and composed of a slight excess of females (54.1% vs 51.5%). The imbalance was due to a deficit in subjects aged under 40 years, reflecting men at work and the fact that parents declined the second serum for children younger than five. Baseline-proxy (,pre-epidemic) HIA titers to the pH1N1/ 2009v were measured on sample 1 ( Table 2) , obtained from 249 subjects (103 households) recruited at the very beginning of the investigation during weeks 30 and 31 (phase A, Figure 2 ), when the epidemic activity in the cohort was still very low. 
Age distribution in this group was similar to that of the whole cohort (data not shown). The overall, the baseline-proxy seroprevalence rate (HIA $1/40), over all ages, was 43.4% (95%CI: 37.4%-49.6%). However the majority of positive sera had low antibody titers, at the cut off value for confirmation (i.e. = 1/40). The proportions of sera with HIA titer .1/40 were 0%, 3.0% and 24.6% in the young, middle-aged and older age groups respectively. These results indicate that pre-epidemic baseline antibody cross reactivity was stronger in the elderly ($60 yrs) and weaker in children and adolescents (,20 yrs) and adults (20-59 yrs), with highly significant differences between age groups (P,0.0001). The reverse cumulative distribution curves of HIA titers are displayed for each age group and for the whole cohort on Figure 3 . The proportion of seropositive sera (HI $1/40) steadily increased during the epidemic unfolding (phase B, W32-39) and in immediate post epidemic period (phase C, W40-44) when it reached its maximum level, then declined in the late post epidemic period (phase D, W45-52). This decline was significant enough to return the reverse cumulative distribution curve to baseline levels in the elderly. The cumulative incidence rates, obtained after subtraction of the age-specific baseline-proxy seroprevalence from the raw seroprevalence at each phase of the epidemic are shown in Table 2 (note that the cumulative incidence rates of infection represented for the group ''all ages'' were standardized according to age structure of the community). The cumulative incidence rates were much higher in children and adolescents (,20 yrs), indicating very active transmission of infection within this age group. As mentioned earlier, cumulative incidence rates peaked in phase C (W40-44), and then declined indicating some lability of the humoral immune response against the pH1N1/2009v. The age-related difference observed in the incidence rates was highly statistically significant (P,0.0001). To estimate more appropriately the decline of antibody titers occurring after the peak of the humoral response to the pH1N1/ 2009v, we considered paired-sera from the group of 264 subjects for whom the first serum sample (sample 1) was obtained just after the epidemic wave (phase C, W40-44), and the corresponding second sample was collected at the end of the survey (phase D, W45-52). Seronegation rates were 27.0% (61/226) for all age groups, 17.4% (12/69) in children and adolescents (,20 yrs), 32.3% (41/127) in adults (20-59 yrs) and 26.7% (8/30) in the elderly ($60 yrs). Differences between the seronegation rates according to age were statistically weakly significant (P = 0.0671). We then considered the 1687 individuals for whom paired sera were available and we measured the seroconversion rates according to age and to the time of first serum sample collection (phase A, B or C). Criteria of seroconversion were defined in the method section. As shown in table 3, there was a sharp decline in seroconversion rates across all the age groups, depending on whether participants were enrolled during phase A, phase B, or phase C (P,0.0001). To interpret these data, one should remember that antibodies at seroprotective levels (HIA $1/40), in serum samples 1 collected during the per epidemic phase B or early post epidemic phase C could represent either base line cross reactive antibodies or rising pH1N1/2009 specific antibodies due to a recent or ongoing infection. 
This ambiguity could lead to underestimation of the seroconversion rate for subjects enrolled in phases B and C. In order to solve this ambiguity, we specifically considered the group of 249 subjects in whom cross reactive antibodies were detected at the time of phase A (W30-31). The seroconversion rate of this group is the most indicative of the exposure of individuals to the whole epidemic wave. It was the highest (63,2%, P,0.0001) in children and adolescents (,20 yrs), and still significantly high in adults (39.4%, P,0.0001). We then tested in this particular group, the impact of (baseline) pre-epidemic cross reactive antibodies on the rate of seroconversion to pH1N1/2009 (Table 4) . No subject with HIA titer superior to 1/40 had evidence of seroconversion to pH1N1/2009. The seroconversion rate in individuals with a HIA titer equal to 1/40 was linked with age, being more important in children and adolescents (,20 yrs). The highest seroconversion rate (.56%) was registered in subjects with HIA titers inferior to 1/40, particularly for the under 20 years where it reached 85%. Hence, the risk of seroconversion decreased when pre-epidemic HIA titer was high after controlling for age (P,0.0001) (Figure 4) . The multivariate adjusted odds ratio for seroconversion were 0.15 (95%CI: 0.06-0.37, P,0.0001) per two-fold increase in baseline titer, 1.79 (95%CI: 1.23-2.59, P,0.003) per other household members who seroconverted, 5.33 (95%CI: 1.56-19.27, P,0.008) Figure 1 . The cohort profile and major outcomes. Figure 1 details the three phases of the protocol: i) inclusion (weeks 30-44) and serum samples S1 collection; ii) follow up for detection of ILI in households, qRT-PCR on nasal swabs and estimation of cumulative seroincidence rates; iii) end of the study (weeks 45-52) and samples S2 collection. HIA on paired sera (S1+S2) allowed estimating seroconversion rates. doi:10.1371/journal.pone.0025738.g001 Bp (baseline-proxy) seroprevalence rates were estimated on weeks 30-31 in each age group. b Cumulative incidence rates measured the raise between raw seroprevalence rates and age-specific baseline-proxy seroprevalence rate. In the group ''All ages'', cumulative incidence rates were standardized according to age structure of the community. doi:10.1371/journal.pone.0025738.t002 Data are numbers, percentages (95% confidence intervals) and ALR parameter test P value for comparison of seroconversion proportions according to time of first sample (S1) collection at inclusion, in each age group, after controlling for household selection. In the group ''All ages'', rates of seroconversion were standardized according to age structure of the community. NA: not assessed. Seroconversion was defined as a shift from seronegative at inclusion (i.e. HIA titer ,1/40) to seropositive on follow-up sample, or as a 4-fold increase of reciprocal HIA titer between first and second paired samples for sera tested seropositive on inclusion (i.e. HIA titer $1/40). for age ,20 years (vs age $60 years) and 11.35 (95%CI: 0.41-4.47, P = 0.62) for age 20-60 years (vs age $60 years). The observed and predicted seroconversion rates according to age and baseline HIA titer are displayed Figure 4 . Finally, we considered the 46 subjects who had been infected by the pandemic virus over the course of the study, verified by a positive qRT-PCR nasal swab, and for whom paired sera were available. 
Initial HIA antibody titers in this group were ,1/40, \n\nThe CoPanFlu-RUN cohort was set up to conduct a prospective population-based study investigating the herd immunity induced by the 2009 pandemic influenza virus and identifying risk factors for pH1N1/2009v infection from paired sera collected in an entire community. Most works published to date have used either extensive cross-sectional serosurveys on pre-and post-epidemic independent serum samples, the baseline immunity being assessed from stored frozen samples [5, 7, 8] , or non representative adult cohorts (military, health care workers, long-stay patients). Antibody titers were measured by HIA using a cut-off value set at 1/40 as classically recommended. This HIA titer at 1/40 is considered protective, i.e. conferring 50% protection against a viral challenge [20] . Our assay has introduced some changes in the experimental protocol compared to the classic one. The use of a non-inactivated viral antigen, i.e. a native virus, with nondenatured epitopes probably allows detection of antibodies to epitopes of the hemagglutinin not detected in the classic HIA test. This can induce slight differences in the sensitivity of detection of cross-reacting antibodies, but this does not modify the kinetics of Ab and the epidemiological evolution of seroprevalence and does not jeopardize the global comparability of serological results. This is confirmed by the fact that our HI assay detected seroprotective antibody titers in 93.5% and gave evidence seroconversion in 73.9% of qRT-PCR confirmed pH1N1/2009 influenza, all figures close to those reported in the literature [5, 21] . We considered that titers of .1/40, in sera collected from individuals enrolled during weeks 30 and 31 were cross reactive antibodies and not de novo antibodies triggered by the pandemic virus and hence used them as a proxy for baseline pre epidemic immunity. Several arguments support this assumption: i) the first case indicating autochthonous transmission in Reunion Island was reported by the epidemiological surveillance department of La Réunion on 21st July (week 30), i.e. the same day when inclusion started in our study cohort; ii) 7 to 15 days are required to develop an antibody response after viral infection; iii) On weeks 30 and 31, the epidemic activity due to the pandemic virus was very low in our study cohort and it became significant only after week 32. Hence, during weeks 30-31, 103 households were recruited and only 2 households reported ILI cases. Nasal swabs collected from these 2 individuals were tested qRT-PCR negative to the pandemic virus whereas one had evidence of coronavirus and rhinovirus using a multiplex RT-PCR to respiratory viruses (H. Pascalis, manuscript in preparation). In contrast, during weeks 32 to 39, 199 individuals belonging to 99 households reported ILI, among whom 60 individuals had documented infection by the pandemic virus. Our study shows that a substantial proportion of Reunion Island's population had pre-existing immunity to 2009 pandemic influenza virus\n\nInquiry: What are the results of the study?",
"role": "user"
},
{
"content": "a substantial proportion of Reunion Island's population had pre-existing immunity to 2009 pandemic influenza virus",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.9934 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nfd may occur in one bone ( monostotic ) or multiple bones ( polyostotic ) and may be associated with other conditions like mccune albright syndrome .\nclinical presentation may occur at any age , with the majority of lesions being detected by the age of 30 years .\ncommon sites of skeletal involvement are the long bones , ribs , craniofacial bones , and the pelvis .\na 32-year - old female presented with pathological fracture of the shaft of the right femur . radiological evaluation revealed features of fd ( with fracture of the right femoral shaft ) , with similar lesions in the opposite femur .\nthe right femoral fracture was treated with intramedullary nailing , and intraoperative biopsy confirmed the diagnosis of fd . in order to document the extent of bony involvement in the rest of the skeleton , whole body bone scintigraphy was performed 3 hours after intravenous injection of 99mtc - methylene diphosphonate ( mdp ) .\nmultiple sites of increased skeletal tracer uptake were detected in the skull , left mandible , multiple ribs , pelvis and both lower limbs [ figure 1 ] .\nsingle - photon emission computed tomography / computerized tomography ( spect / ct ) of the skull was performed for evaluation of the extent of skull bone involvement and showed involvement of the right sphenoid , ethmoid and frontal bones and left mandible , with evidence of bone expansion [ figure 2 ] .\nct component of spect - ct [ figure 3 ] shows bony expansion with ground glass appearance involving right sided ribs .\ninvolvement of the sphenoid wing ( commonly described as the pirate sign ) was also better depicted with spect / ct .\ntc-99 m mdp whole body images show multiple sites of increased tracer uptake , including the skull and facial bones , multiple ribs , pelvis and bones of both lower limbs .\nthe pirate sign indicates involvement of the right sphenoid wing hybrid spect / ct fusion to determine the extent of skull and facial bone involvement shows intense tracer uptake in the sphenoid , ethmoid and frontal bones non - contrast ct scan of the chest shows bony expansion and deformity of the right ribs , where intense osteoblastic activity was detected\nfd is postulated to occur as a result of a developmental failure in the remodeling of primitive bone to mature lamellar bone and a failure of the bone to realign in response to mechanical stress .\nfailure of maturation leaves a mass of immature isolated trabeculae enmeshed in dysplastic fibrous tissue that constantly turns over but never ( or very , very slowly ) completes the remodeling process . the combination of a lack of stress alignment and insufficient mineralization results in substantial loss of mechanical strength , leading to the development of pain , deformity , and pathologic fractures .\nthe etiology has been linked with a mutation in the gs alpha gene that occurs after fertilization in somatic cells and is located at chromosome 20q13.2 - 13.3 .\nthe monostotic presentation of fd is more frequent than polyostotic presentation , and lesions enlarge in proportion to skeletal growth .\npolyostotic lesions often continue to enlarge after skeletal maturity , with progressive deformity and an increase in pathologic fractures .\npatients with polyostotic disease and large , painful lesions in weight - bearing long bones are at the greatest risk for pathologic fracture\n. 
local expansion of fd in the maxilla , zygomatic , or ethmoid bones of the face can produce substantial functional and cosmetic deformity . at the initial presentation ,\nradionuclide bone scintigraphy is useful to demonstrate the extent of the disease . actively forming lesions in adolescents\nhave greatly increased isotope uptake that corresponds closely to the radiographic extent of the lesion .\npolyostotic fd has been demonstrated using f18-fluoride positron emission tomography / computed tomography ( pet / ct ) .\nthe utility of volume rendering has also been demonstrated in evaluation of the bony deformities .\nthe pirate sign has been previously described on planar bone scintigraphy in fd .\nrecent studies have described the advantage of performing bone spect in addition to ct to determine the extent of fd in the skull base and of hybrid spect / ct imaging in fd of the lumbar vertebra .\nthe present case illustrates the utility of hybrid spect / ct in documenting the complete extent of bony involvement of the skull base .",
"role": "user"
},
{
"content": "polyostotic fibrous dysplasia is a rare progressive benign disorder of the bone . \n bone scintigraphy is extremely useful in the initial evaluation for identifying the extent of disease . \n we report a case presenting with pathological fracture of the shaft of the right femur . \n after treatment of the fracture , bone scintigraphy revealed involvement of multiple bones including the skull and facial bones . \n the utility of single - photon emission computed tomography / computerized tomography in the evaluation of the extent of skull base involvement is highlighted .",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3448 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nDetecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions is challenging because pixel-wise annotation of acoustic shadows is subjective and time consuming. In this paper we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions, which is able to generate a dense shadow-focused confidence map. During training, a multi-task module for shadow segmentation is built to learn general shadow features according based image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is then established to extend the binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This confidence estimation network is able to predict shadow confidence maps directly from input images during inference. We evaluate DICE, soft DICE, recall, precision, mean squared error and inter-class correlation to verify the effectiveness of our method. Our method outperforms the state-of-the-art qualitatively and quantitatively. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.\n\nCited Works Summary:\n[1]: Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.\n\n[2]: Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. 
Our method: (1) initializes potential shadow areas based on a classification task. (2) extends potential shadow areas using a GAN model. (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fet al screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.\n\n[3]: Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients the method produces compellingly realistic disease effect maps which are very close to the observed effects.\n\n[4]: In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut[1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fet al magnetic resonance dataset and obtain encouraging results in terms of accuracy.\n\n[5]: \n\n[6]: In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them",
"role": "user"
},
{
"content": "* Weakly supervised image segmentation Weakly supervised automatic detection of class differences has been explored in other imaging domains (e.g. MRI). For example, [3] proposed to use generative adversarial network (GAN) to highlight class differences only from image-level labels (Alzheimer's disease or healthy). We used a similar idea in [2] and initialized potential shadow areas based on saliency maps [1] from a classification task between images containing shadow and those without shadow. Inspired by recent weakly supervised deep learning methods that have drastically improved semantic image analysis [5] [6] [4] and to overcome the limitations of [2] , we develop a confidence estimation algorithm that takes advantages of both types of weak labels, including image-level labels and a sparse set of coarse pixel-wise labels. Our method is able to predict dense, shadow-focused confidence maps directly from input US images in real-time.",
"role": "assistant"
}
] |
science.chemprot_ner | science.chemprot_ner.1010 | [
{
"content": "Analyze the following text to identify specific entities. Focus on the categories provided: [\"Chemical\", \"Protein\"].\n\nDeliver your findings as a JSON structure: {\"Chemical\": [\"ExampleChemical\", ...], \"Protein\": [\"ExampleProtein\", ...]}. Each key represents an entity type, and the associated list contains the entities found. \n\nEnsure the response is strictly the JSON format without any extra text.\n\nSource:\n\nHeterogeneous expression and polymorphic genotype of glutathione S-transferases in human lung.\nBACKGROUND: Glutathione S-transferases (GSTs) are involved in the detoxification of xenobiotics by conjugation with glutathione. One of the mu class genes of this superfamily of enzymes, GSTM1, is polymorphic because of a partial gene deletion. This results in a failure to express GSTM1 in approximately 50% of individuals. Several studies have linked GSTM1 null status to an increased risk of lung carcinoma. This study investigated the expression and distribution of GST isoenzymes in human lung, and developed a polymerase chain reaction (PCR) assay which would allow genotyping of archival, paraffin embedded lung tissue. METHODS: Distribution was examined using a panel of polyclonal anti-GST antibodies for immunohistochemistry in normal tissue of 21 tumour-bearing lungs. DNA for PCR was extracted from paraffin blocks and a control group of 350 blood lysates. As a positive control each assay amplified part of GSTM4, a mu class gene which is not polymorphic but which shows strong sequence homology to GSTM1. The presence of GST in bronchoalveolar lavage fluid was sought by Western analysis. RESULTS: Proximal airways contained pi class GST, alpha class GST, and mu class GST with expression concentrated in the brush border. In distal airspaces no alpha GST was expressed but pi GST and mu GST were present in alveolar cells and also alveolar macrophages. Pi class GST was present in bronchoalveolar lavage fluid. The PCR assay enabled genotypic determination using DNA extracted from archival material. Of the control group 56% were null at the GSTM1 locus. CONCLUSIONS: The distribution of GST isoenzymes in the lung is heterogeneous with an apparent decrease in GST in distal lung. Since GSTM1 status has already been associated with susceptibility to disease, the PCR assay developed will allow further studies of the relation between genotype and structural disorders in the lung using archival pathological material.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"glutathione\", \"Glutathione\"], \"Protein\": [\"glutathione S-transferases\", \"mu class GST\", \"Pi class GST\", \"pi class GST\", \"GST\", \"mu GST\", \"GSTM4\", \"alpha GST\", \"Glutathione S-transferases\", \"alpha class GST\", \"GSTM1\", \"GSTs\", \"pi GST\"]}",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.392.3 | [
{
"content": "Analyze the given research paper excerpts and address the question provided. Extract the necessary information directly from the text. Your answer should be formatted as a JSON object with the following structure:\n\n\"answer\": A list of text segments from the paper that directly respond to the question.\n\"evidence\": A list of text segments from the paper where the answer can be found.\n\nIf the question cannot be answered with the given text, return \"null\".\n\nDocument: Build Fast and Accurate Lemmatization for Arabic\n\nIn this paper we describe the complexity of building a lemmatizer for Arabic which has a rich and complex derivational morphology, and we discuss the need for a fast and accurate lammatization to enhance Arabic Information Retrieval (IR) results. We also introduce a new data set that can be used to test lemmatization accuracy, and an efficient lemmatization algorithm that outperforms state-of-the-art Arabic lemmatization in terms of accuracy and speed. We share the data set and the code for public.\n\nIntroduction\nLemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. Lemma is also called dictionary form, or citation form, and it refers to all words having the same meaning.\nLemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and researches in Arabic Information Retrieval (IR) systems show the need for representing Arabic words at lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1 . In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2 .\nWord stem is that core part of the word that never changes even with morphological inflections; the part that remains after prefix and suffix removal. Sometimes the stem of the word is different than its lemma, for example the words: believe, believed, believing, and unbelievable share the stem (believ-), and have the normalized word form (believe) standing for the infinitive of the verb (believe).\nWhile stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace word suffixes with (typically) different suffix to get its lemma.\nThis extended abstract is organized as follows: Section SECREF2 shows some complexities in building Arabic lemmatization, and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built and report results and error analysis in section SECREF5 ; and Section SECREF6 discusses the results and concludes the abstract.\n\nBackground\nArabic is the largest Semitic language spoken by more than 400 million people. It's one of the six official languages in the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow some templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. 
For instance, the word ÙØ³ÙÙØªØÙÙ> (wsyftHwn) “and they will open” has the triliteral root ÙØªØ> (ftH), which has the basic meaning of opening, has prefixes ÙØ³> (ws) “and will”, suffixes ÙÙ> (wn) “they”, stem ÙÙØªØ> (yftH) “open”, and lemma ÙØªØ> (ftH) “the concept of opening”.\nIR systems typically cluster words together into groups according to three main levels: root, stem, or lemma. The root level is considered by many researchers in the IR field which leads to high recall but low precision due to language complexity, for example words ÙØªØ¨Ø Ù ÙØªØ¨Ø©Ø ÙØªØ§Ø¨> (ktb, mktbp, ktAb) “wrote, library, book” have the same root ÙØªØ¨> (ktb) with the basic meaning of writing, so searching for any of these words by root, yields getting the other words which may not be desirable for many users.\nOther researchers show the importance of using stem level for improving retrieval precision and recall as they capture semantic similarity between inflected words. However, in Arabic, stem patterns may not capture similar words having the same semantic meaning. For example, stem patterns for broken plurals are different from their singular patterns, e.g. the plural Ø£ÙÙØ§Ù > (AqlAm) “pens” will not match the stem of its singular form ÙÙÙ > (qlm) “pen”. The same applies to many imperfect verbs that have different stem patterns than their perfect verbs, e.g. the verbs Ø§Ø³ØªØ·Ø§Ø¹Ø ÙØ³ØªØ·Ùع> (AstTAE, ystTyE) “he could, he can” will not match because they have different stems. Indexing using lemmatization can enhance the performance of Arabic IR systems.\nA lot of work has been done in word stemming and lemmatization in different languages, for example the famous Porter stemmer for English, but for Arabic, there are few work has been done especially in lemmatization, and there is no open-source code and new testing data that can be used by other researchers for word lemmatization. Xerox Arabic Morphological Analysis and Generation BIBREF3 is one of the early Arabic stemmers, and it uses morphological rules to obtain stems for nouns and verbs by looking into a table of thousands of roots.\nKhoja's stemmer BIBREF4 and Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from Penn Arabic Tree bank (PATB)), and the reported accuracy was 96.2% as the percentage of words where the chosen analysis (provided by SAMA morphological analyzer BIBREF7 ) has the correct lemma.\nIn this paper, we present an open-source Java code to extract Arabic word lemmas, and a new publicly available testset for lemmatization allowing researches to evaluate using the same dataset that we used, and reproduce same experiments.\n\nData Description\nTo make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.\nWord are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. 
Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 .\nAs MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, so another column for undiacritized lemma is added and it's used for evaluating our lemmatizer and comparing with state-of-the-art system for lemmatization; MADAMIRA.\n\nsystem Description\nWe were inspired by the work done by BIBREF8 for segmenting Arabic words out of context. They achieved an accuracy of almost 99%; slightly better than state-of-the-art system for segmentation (MADAMIRA) which considers surrounding context and many linguistic features. This system shows enhancements in both Machine Translation, and Information Retrieval tasks BIBREF9 . This work can be considered as an extension to word segmentation.\nFrom a large diacritized corpus, we constructed a dictionary of words and their possible diacritizations ordered by number of occurrences of each diacritized form. This diacritized corpus was created by a commercial vendor and contains 9.7 million words with almost 200K unique surface words. About 73% of the corpus is in MSA and covers variety of genres like politics, economy, sports, society, etc. and the remaining part is mostly religious texts written in classical Arabic (CA). The effectiveness of using this corpus in building state-of-the-art diacritizer was proven in BIBREF10 .For example, the word ÙØ¨ÙÙØ¯> (wbnwd) “and items” is found 4 times in this corpus with two full diacritization forms ÙÙØ¨ÙÙÙÙØ¯ÙØ ÙÙØ¨ÙÙÙÙØ¯Ù> (wabunudi, wabunudK) “items, with different grammatical case endings” which appeared 3 times and once respectively. All unique undiacritized words in this corpus were analyzed using Buckwalter morphological analyzer which gives all possible word diacritizations, and their segmentation, POS tag and lemma as shown in Figure FIGREF3 .\nThe idea is to take the most frequent diacritized form for words appear in this corpus, and find the morphological analysis with highest matching score between its diacritized form and the corpus word. This means that we search for the most common diacritization of the word regardless of its surrounding context. In the above example, the first solution is preferred and hence its lemma Ø¨ÙØ¯> (banod, bnd after diacritics removal) “item”.\nWhile comparing two diacritized forms from the corpus and Buckwalter analysis, special cases were applied to solve inconsistencies between the two diacritization schemas, for example while words are fully diacritized in the corpus, Buckwalter analysis gives diacritics without case ending (i.e. without context), and removes short vowels in some cases, for example before long vowels, and after the definite article اÙ> (Al) “the”, etc.\nIt worths mentioning that there are many cases in Buckwalter analysis where for the input word, there are two or more identical diacritizations with different lemmas, and the analyses of such words are provided without any meaningful order. For example the word Ø³ÙØ§Ø±Ø©> (syArp) “car” has two morphological analyses with different lemmas, namely Ø³ÙØ§Ø±> (syAr) “walker”, and Ø³ÙØ§Ø±Ø©> (syArp) “car” in this order while the second lemma is the most common one. 
To solve tis problem, all these words are reported and the top frequent words are revised and order of lemmas were changed according to actual usage in modern language.\nThe lemmatization algorithm can be summarized in Figure FIGREF4 , and the online system can be tested through the site http://alt.qcri.org/farasa/segmenter.html\n\nEvaluation\nData was formatted in a plain text format where sentences are written in separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. For accurate results, all differences were revised manually to accept cases that should not be counted as errors (different writings of foreign names entities for example as in ÙÙÙØº ÙÙÙØºØ ÙÙÙØ¬ ÙÙÙØ¬> (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words, e.g the lemmas ÙÙØ ÙÙ٠ا> (fy, fymA) are both valid for the function word ÙÙ٠ا> (fymA) “while”).\nTable TABREF5 shows results of testing our system and MADAMIRA on the WikiNews testset (for undiacritized lemmas). Our approach gives +7% relative gain above MADAMIRA in lemmatization task.\nIn terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependency which makes its integration in other systems quite simple.\n\nError Analysis\nMost of the lemmatization errors in our system are due to fact that we use the most common diacritization of words without considering their contexts which cannot solve the ambiguity in cases like nouns and adjectives that share the same diacritization forms, for example the word Ø£ÙØ§Ø¯ÙÙ ÙØ©> (AkAdymyp) can be either a noun and its lemma is Ø£ÙØ§Ø¯ÙÙ ÙØ©> (AkAdymyp) “academy”, or an adjective and its lemma is Ø£ÙØ§Ø¯ÙÙ Ù> (AkAdymy) “academic”. Also for MADAMIRA, errors in selecting the correct Part-of-Speech (POS) for ambiguous words, and foreign named entities.\nIn the full paper, we will quantify error cases in our lemmatizer and MADAMIRA and give examples for each case which can help in enhancing both systems.\n\nDiscussion\nIn this paper, we introduce a new dataset for Arabic lemmatization and a very fast and accurate lemmatization algorithm that performs better than state-of-the art system; MADAMIRA. Both the dataset and the code will be publicly available. We show that to build an effective IR system for complex derivational languages like Arabic, there is a a big need for very fast and accurate lemmatization algorithms, and we show that this can be achieved by considering only the most frequent diacritized form for words and matching this form with the morphological analysis with highest similarity score. We plan to study the performance if the algorithm was modified to provide diacritized lemmas which can be useful for other applications.\n\nTable 1: Examples of complex verb lemmatization cases\n\nTable 2: Examples of complex noun lemmatization cases\n\nFigure 2: Buckwalter analysis (diacritization forms and lemmas are highlighted)\n\nFigure 1: Lemmatization of WikiNews corpus\n\nTable 3: Lemmatization accuracy using WikiNews testset\n\nFigure 4: Lemmatization online demo (part of Farasa Arabic NLP tools)\n\nInquiry: Where did they collect their dataset from?",
"role": "user"
},
{
"content": "{\"answer\": [\"from Arabic WikiNews site https://ar.wikinews.org/wiki\"], \"evidence\": [\"To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.\"]}",
"role": "assistant"
}
] |
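Editor's note: the lemmatization record above describes a purely dictionary-based algorithm: take the most frequent diacritized form of a surface word from the corpus, then pick the morphological analysis whose diacritization best matches it. The Python sketch below illustrates that lookup under stated assumptions; the `corpus_counts` and `analyses` tables are toy stand-ins (in Buckwalter transliteration) for the diacritized corpus and the Buckwalter analyzer, and the `similarity` scorer is a placeholder for the paper's matching rules, not the Farasa implementation.

```python
from collections import Counter
from difflib import SequenceMatcher

# Toy stand-ins for the paper's resources: counts of diacritized forms per
# surface word, and candidate (diacritization, lemma) analyses per surface word.
corpus_counts = {"wbnwd": Counter({"wabunudi": 3, "wabunudK": 1})}
analyses = {"wbnwd": [("wabunudi", "banod"), ("wabunudK", "banod")]}

def similarity(a: str, b: str) -> float:
    """Crude string-overlap score standing in for the matching heuristics."""
    return SequenceMatcher(None, a, b).ratio()

def lemmatize(word: str) -> str | None:
    """Lemma of the analysis closest to the word's most frequent diacritization."""
    forms = corpus_counts.get(word)
    candidates = analyses.get(word)
    if not forms or not candidates:
        return None  # out of vocabulary: a real system would back off here
    top_form, _ = forms.most_common(1)[0]  # most frequent diacritized form
    _, lemma = max(candidates, key=lambda c: similarity(c[0], top_form))
    return lemma

print(lemmatize("wbnwd"))  # -> "banod" ("item" after removing diacritics)
```

Note that, exactly as the record's error analysis warns, a lookup like this ignores sentence context, so noun/adjective ambiguities remain unresolved.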
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.1435 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\n[A 2-year evaluation of different methods of dental caries prophylaxis in a sample school-aged population].\nReport Abstract 1:\nCaries is currently considered to be the most widespread disease affecting mankind; it is calculated in fact that around 90% of the world population suffers from caries. Carious pathology therefore plays a key role, not least due to the high social costs which it entails, and, for this reason, research into appropriate methods of prevention which can to some extent limit the incidence of caries are of particular interest. MATERIALS AND METHODS. The study was performed in a group of 1475 selected primary-school children. After having subdivided the population in three groups, an initial dental examination was carried out and the results obtained were quantified by calculating DMFT. The first group (492 children) was instructed to follow the rules of oral and dietary hygiene. The second group (501 children), in addition to the instructions given to the first group, also received fluoro-prophylactic treatment. Lastly, the third group (482 children) were used as a reference sample and did not receive any form of instruction or treatment. RESULTS. After two years DMFT values were increased in all three groups; in detail, values rose 0.41 in the first group, 0.38 in the second group, and 0.81 in the third group. DISCUSSION AND CONCLUSIONS. Scrupulous oral hygiene and a correct diet represent a prophylactic measure of outstanding importance in terms of carious pathology. The minimum difference observed between the first and second groups should probably be attributed to the fact that fluoro-prophylactic treatment requires a period of more than 24 months before appreciable results can be seen.\n\n\nReport Title 2:\nEvaluation of an educational program for children with high risk of caries.\nReport Abstract 2:\nThe goal of this study was to evaluate a 15-month educational program designed to children. The sample consisted of 60 six-year olds, randomly assigned into control and experimental group. The control consisted of tooth brushing training, once a year. The experimental group received intensive individual tooth brushing training every three months and guidance on oral health. Initially, both groups were assessed using plaque, gingival, dmfs and DMF-S indexes every three months. In the control, no statistically significant difference was observed for plaque and gingival indexes. The experimental group showed a statistically significant reduction in mean values for two indexes. The caries indexes showed no statistically significant difference. The proposed educational program developed was efficient in reducing gingival and plaque indexes as well caries incidence.\n\n\nReport Title 3:\nA cluster randomized controlled trial of a dental health education program for 10-year-old children.\nReport Abstract 3:\nUsing a cluster randomized trial, this study tested the effectiveness of a dental health education program designed to improve the oral hygiene and dental knowledge of 10-year-old children.\n Thirty-two primary schools in the northwest of England participated. 
After a baseline assessment of plaque and the completion of a dental knowledge questionnaire by the children, the schools were allocated randomly to active or control groups. Children in schools allocated to the active group received the dental health program, which consisted of four one-hour lessons. After four months the children were examined clinically and scored for plaque, and a second questionnaire was administered. The schools in the control group were then allocated randomly to receive the program or not over the following three months, the program being withdrawn from the schools who initially received it. A further assessment of plaque was made and a questionnaire administered seven months after the baseline of the study.\n The active groups had 20 percent and 17 percent lower mean plaque scores than the control group at four and seven months (P < .001). The children's knowledge of which type of toothbrush should be used and the role of disclosing tablets improved in the initial test group when compared with the control group and this was retained over the second part of the study.\n The children receiving the program had significantly lower mean plaque scores and greater knowledge about toothbrushes and disclosing tablets than the control children who had not received the program.\n\n\nReport Title 4:\nSchool-based intervention to promote preadolescents' gingival health: a community trial.\nReport Abstract 4:\nEvaluation of the effectiveness of a school-based oral health promotion intervention on preadolescents' gingival health.\n A community trial designed for a 3-month intervention study in a representative sample of 9-year-olds (n = 457) in 16 schools in Tehran, Iran. The schools were randomly assigned to three intervention groups and one control group, each group comprising two boys' and two girls' schools. The first group of children (n = 115) received intervention via class work, solving a set of puzzles containing oral health messages, under supervision of their health counsellor. The second group (n = 114), intervention via parents, included an oral health education leaflet and a brushing diary for supervising the child's tooth-brushing; the third group (n = 111) received a combination of both these interventions. The control group (n = 117) had no intervention. Effects of the intervention were assessed as changes in dental plaque and gingival bleeding. Improvements in gingival health were recorded when half of the index teeth with plaque at baseline became clean (acceptable oral hygiene) or when all index teeth with bleeding at baseline became healthy (healthy gingiva). Statistical analysis included chi square, anova, t-test, Number Needed to Treat (NNT) and generalized estimating equations (GEE).\n At baseline, none of the children were free of plaque and all except for three boys had bleeding. After the trial, acceptable oral hygiene was more frequent in the parental-aid (P < 0.001) and the combined groups (P < 0.05), and healthy gingiva in both groups (P < 0.001) in comparison with the control group. Outcomes in the class-work group did not differ from those in the control group. The GEE models confirmed a strong intervention effect on healthy gingiva in both groups where parents were involved: parental-aid group (OR = 7.7, 95% CI: 2.2-27.7) and combined group (OR = 6.6, 95% CI: 2.0-22.1). In all intervention groups more girls than boys achieved healthy gingiva (OR = 2.5-2.6). 
Parents' education showed no impact on the outcome.\n When a school-based oral health intervention involves parents it may result in a significant improvement in the gingival health of preadolescents with poor gingival health at baseline.",
"role": "user"
},
{
"content": "Currently, there is insufficient evidence for the efficacy of primary school-based behavioural interventions for reducing caries. There is limited evidence for the effectiveness of these interventions on plaque outcomes and on children's oral health knowledge acquisition. None of the included interventions were reported as being based on or derived from behavioural theory. There is a need for further high quality research to utilise theory in the design and evaluation of interventions for changing oral health related behaviours in children and their parents.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2991 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nWhy does lag affect the durability of memory-based automaticity : loss of memory strength or interference ? In Rickard , Lau , and Pashler 's ( 2008 ) investigation of the lag effect on memory-based automaticity , response times were faster and proportion of trials retrieved was higher at the end of practice for short lag items than for long lag items . However , during testing after a delay , response times were slower and proportion of trials retrieved was lower for short lag items than for long lag items . The current study investigated the extent to which the lag effect on the durability of memory-based automaticity is due to interference or to the loss of memory strength with time . Participants repeatedly practiced alphabet subtraction items in short lag and long lag conditions . After practice , half of the participants were immediately tested and the other half were tested after a 7-day delay . Results indicate that the lag effect on the durability of memory-based automaticity is primarily due to interference . We discuss potential modification of current memory-based processing theories to account for these effects .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"trials\", \"tested after a 7-day delay .\", \"testing after a delay\", \"lag\", \"practice\", \"alphabet subtraction items in short lag and long lag conditions .\", \"immediately tested\"], \"Outcome\": [\"memory-based automaticity\", \"response times\", \"durability of memory-based automaticity\"], \"Participant\": [\"Participants\", \"memory-based automaticity :\"]}",
"role": "assistant"
}
] |
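Editor's note: the PICO records in this file all expect the same three-key JSON object. The short helper below makes that output contract explicit; the function name and the sample payload are invented for illustration and are not part of any source dataset.

```python
import json

PICO_KEYS = ("Participant", "Intervention", "Outcome")

def validate_pico(payload: str) -> dict[str, list[str]]:
    """Parse an answer string and check it matches the expected PICO schema."""
    data = json.loads(payload)
    if set(data) != set(PICO_KEYS):
        raise ValueError(f"expected exactly the keys {PICO_KEYS}, got {sorted(data)}")
    for key, entities in data.items():
        if not isinstance(entities, list) or not all(isinstance(e, str) for e in entities):
            raise ValueError(f"'{key}' must map to a list of strings")
    return data

sample = '{"Participant": ["adolescents"], "Intervention": ["clonidine"], "Outcome": []}'
print(validate_pico(sample))  # empty lists are allowed when a category is absent
```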
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4175 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nThere is no data on patients with severe obesity who developed coronavirus disease 2019 (COVID-19) after bariatric surgery. Four gastric bypass operations, performed in a 2-week period between Feb 24 and March 4, 2020, in Tehran, Iran, were complicated with COVID-19. The mean age and body mass index were 46 +/- 12 years and 49 +/- 3 kg/m(2). Patients developed their symptoms (fever, cough, dyspnea, and fatigue) 1, 2, 4, and 14 days after surgery. One patient had unnoticed anosmia 2 days before surgery. Three patients were readmitted in hospital. All 4 patients were treated with hydroxychloroquine. In two patients who required admission in intensive care unit, other off-label therapies including antiretroviral and immunosuppressive agents were also administered. All patients survived. In conclusion, COVID-19 can complicate the postoperative course of patients after bariatric surgery. Correct diagnosis and management in the postoperative setting would be challenging. Timing of infection after surgery in our series would raise the possibility of hospital transmission of COVID-19: from asymptomatic patients at the time of bariatric surgery to the healthcare workers versus acquiring the COVID-19 infection by non-infected patients in the perioperative period.",
"role": "user"
},
{
"content": "[\"Treatment\"]",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.3737 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Peer-to-Peer Information Retrieval: An Overview \nChapter Title: Architectures \n1. Abstract of Cited Paper (BIB001): In a Peer-To-Peer (P2P) system, autonomous computers pool their resources (e.g., files, storage, compute cycles) in order to inexpensively handle tasks that would normally require large costly servers. The scale of these systems, their \"open nature,\" and the lack of centralized control pose difficult performance and security challenges. Much research has recently focused on tackling some of these challenges; in this paper, we propose future directions for research in P2P systems, and highlight problems that have not yet been studied in great depth. We focus on two particular aspects of P2P systems - search and security - and suggest several open and important research problems for the community to address.\n\nReview Chapter:",
"role": "user"
},
{
"content": "There are multiple possible architectures for a peer-to-peer network. The choice for one of these affects how the network can be searched. To be able to search, one requires an index and a way to match queries against entries in this index. Although we will use a number of examples, it is important to realize that what the index is used for is application-specific. This could be mapping filenames to concrete locations in the case of file sharing, user identifiers to machine addresses for instant messaging networks, or terms to documents in the case of information retrieval. In all cases the crucial challenge is that of keeping the latency low whilst retaining the beneficial properties of peer-to-peer networks like self-organization and load balancing BIB002 ]. Based on this there are a number of subtasks for searching that all affect the latency. These search subtasks are relevant to tasks performed in all peer-to-peer networks. In the following paragraphs we discuss how these subtasks are performed in four commonly used peer-to-peer architectures using file sharing as example, since many techniques used in peer-to-peer information retrieval are adapted from innovations seen in file sharing networks. 2.5.1. Centralized Global Index. Early file sharing systems used a centralized global index located at a dedicated party, usually a server farm, which kept track of what file was located at which peer in the network. When peers joined the network they sent a list of metadata on files they wanted to share containing, for example, filenames, to the central party which would then include them in its central index. All queries that originated from the peers were directly routed to and processed by that central party. Hence, indexing and searching itself was completely centralized and followed the client-server paradigm. Actually obtaining files, or parts of files, was decentralized by downloading from peers directly. This is sometimes referred to as a brokered architecture, since the central party acts as a mediator between peers. The most famous example of this type of network is Napster. This approach avoids many problems of other peer-to-peer systems regarding query routing and index placement. However, it has at least two significant drawbacks. Firstly, a central party limits the scalability of the system. Secondly, and more importantly, this central party forms a single point of technical, and legal, failure BIB001 .",
"role": "assistant"
}
] |
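Editor's note: the review chapter above explains the centralized-global-index architecture in prose. The toy broker below makes its two operations concrete: peers register file metadata with a central party, and queries are resolved entirely at that party before downloads proceed peer-to-peer. Class and method names are invented for illustration and are not taken from Napster or any cited system.

```python
from collections import defaultdict

class CentralIndexBroker:
    """Napster-style broker: indexing and search are centralized,
    while the actual file transfer stays peer-to-peer."""

    def __init__(self) -> None:
        # filename -> set of peer addresses that share it
        self._index: dict[str, set[str]] = defaultdict(set)

    def register(self, peer: str, filenames: list[str]) -> None:
        """Called when a peer joins and announces the files it shares."""
        for name in filenames:
            self._index[name].add(peer)

    def unregister(self, peer: str) -> None:
        """Called when a peer leaves; the central party must stay consistent."""
        for holders in self._index.values():
            holders.discard(peer)

    def search(self, filename: str) -> set[str]:
        """Queries are routed to and answered by the central party alone."""
        return set(self._index.get(filename, set()))

broker = CentralIndexBroker()
broker.register("peer-a:6699", ["song.mp3", "paper.pdf"])
broker.register("peer-b:6699", ["song.mp3"])
print(broker.search("song.mp3"))  # requester then downloads directly from a peer
```

The single `broker` object is also where the chapter's two drawbacks become visible: every query hits it (a scalability bottleneck), and removing it removes search for the whole network (a single point of failure).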
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.1773 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Ebola virus glycoprotein directly triggers T lymphocyte death despite of the lack of infection\nContent:\nAbstract:\nFatal outcomes of Ebola virus( EBOV) infections are typically preceded by a ‘sepsis-like’ syndrome and lymphopenia despite T cells being resistant to Ebola infection. The mechanisms that lead to T lymphocytes death remain largely unknown; however, the degree of lymphopenia is highly correlative with fatalities. Here we investigated whether the addition of EBOV or its envelope glycoprotein( GP) to isolated primary human CD4+ T cells induced cell death. We observed a significant decrease in cell viability in a GP-dependent manner, which is suggestive of a direct role of GP in T cell death. Using immunoprecipitation assays and flow cytometry, we demonstrate that EBOV directly binds to CD4+ T cells through interaction of GP with TLR4. Transcriptome analysis revealed that the addition of EBOV to CD4+ T cells results in the significant upregulation of pathways associated with interferon signaling, pattern recognition receptors and intracellular activation of NFκB signaling pathway. Both transcriptome analysis and specific inhibitors allowed identification of apoptosis and necrosis as mechanisms associated with the observed T cell death following exposure to EBOV. The addition of the TLR4 inhibitor CLI-095 significantly reduced CD4+ T cell death induced by GP. EBOV stimulation of primary CD4+ T cells resulted in a significant increase in secreted TNFα; inhibition of TNFα-mediated signaling events significantly reduced T cell death while inhibitors of both necrosis and apoptosis similarly reduced EBOV-induced T cell death. Lastly, we show that stimulation with EBOV or GP augments monocyte maturation as determined by an overall increase in expression levels of markers of differentiation. Subsequently, the increased rates of cellular differentiation resulted in higher rates of infection further contributing to T cell death. These results demonstrate that GP directly subverts the host’s immune response by increasing the susceptibility of monocytes to EBOV infection and triggering lymphopenia through direct and indirect mechanisms.\nIntroduction:\nEbola virus( EBOV) is one of the deadliest pathogens known to exist as evidenced by the latest outbreak in West Africa that resulted in more than 28, 000 confirmed and suspected infections including more than 11, 000 fatalities[1]. Currently, experimental EBOV candidate vaccines and monoclonal antibody-based therapies are being tested in clinical trials[2, 3]; however, none have yet to be approved for treatment of infected patients. Gaining an in depth understanding of the mechanisms of EBOV’s unparalleled ability to counteract and disrupt the immune response is critical to developing targeted approaches aimed at reducing the pathogenesis directly or indirectly caused by the virus. A characteristic feature of EBOV infection is the rapid onset of lymphopenia, which is observed in both humans and experimentally infected non-human primates( NHP)[4–11]. Development of lymphopenia is typically observed in EBOV patients that succumb to disease, whereas survivors have been shown to maintain CD3+ T lymphocyte populations throughout the course of disease[12, 13]. Strikingly, lymphopenia occurs despite the inability of EBOV to infect lymphocytes[4]. 
On the other hand, dendritic cells( DCs) and cells derived from monocyte-macrophage lineages are among the primary targets of EBOV infection in vivo[11, 14]. EBOV infection of these cells results in their aberrant activation[15–18] and induction of Fas and tumor necrosis factor related cell death inducing factor( TRAIL) pathways[19, 20]. We and others recently demonstrated that the lack of proper maturation of these critical antigen presenting cells( APCs) results in a limited activation of antigen-specific T lymphocytes[21, 22] further contributing to the deficient adaptive immune response. In addition, infection of monocyte-macrophage lineage may also contribute to EBOV-associated pathogenesis by several mechanisms including the release of inflammatory mediators, which may contribute to their apoptosis and necrosis[23]. The only EBOV envelope glycoprotein( GP) was shown to bind and activate the TLR4 signaling pathway[24]. TLR4 is known to trigger both apoptotic and necrotic pathways via direct and/or indirect activation of infected cells or bystander cells, which may contribute to these inflammatory mechanisms[25]. Lastly, TLR4 stimulation of cells of the monocyte/macrophage lineage and DCs results in their activation and/or differentiation[26, 27]. Cellular differentiation may further contribute to the release of inflammatory mediators associated with the onset of a cytokine storm, which is a characteristic feature of EBOV infection[28–30]. Since lymphocytes are resistant to EBOV infection, the mechanisms causing lymphopenia during EBOV infection remain largely unknown. Hence, the primary goal of this study was to examine whether EBOV directly stimulates T lymphocytes and determine the direct and indirect role of TLR4 in mediating T cell death in the pathogenesis of EBOV infection.\nDiscussion:\nThis study demonstrates, for the first time, that despite the lack of infection of T lymphocytes, EBOV directly binds and induces T cell death. In addition, this study demonstrates that interaction of EBOV GP with TLR4 stimulates differentiation of monocytes, which results in an increased susceptibility to EBOV infection. We show that following EBOV infection, monocytes undergo activation, which is known to lead to secretion of TNFα[49]. TNFα can contribute to activation and bystander death of T lymphocytes, which is consistent with the previously demonstrated effects of TNFα on T cells[50]. Furthermore, we show a direct binding of EBOV to T cells partially involving TLR4 and presumably involving additional ligands, as previous studies have identified the lectins DC-SIGN and L-SIGN[51–53], folate receptor-α[54], Tyro3 receptor tyrosine kinases[55] as attachment factors for EBOV. We demonstrate that the binding triggers the activation of numerous inflammatory signaling pathways including interferon, TLR and cell death signaling pathways as evidenced by the transcriptional profile of EBOV-stimulated T cells. The observed effects of TLR4 inhibitors conclusively demonstrated the role of TLR4 in lymphocyte cell death. Finally, using vectored and bead delivery of GP we demonstrate the direct role of the protein in the induction of both MyD88-independent( TRAM1) and MyD88-dependent activation of the TLR4 signaling pathway and subsequent initiation of cell death pathways. Previous reports demonstrated the involvement of TNFα in apoptosis[56] as well as in necrosis following accumulation of reactive oxygen species[57]. 
Necrotic and apoptotic pathways are closely related, and following TLR4 activation, the MyD88-dependent signaling pathway has also been reported to trigger necroptosis[58], which can also occur in EBOV-cultured T cells, as we demonstrated MyD88-dependent activity( Fig 4A). The fact that multiple cell death mechanisms occur in the same EBOV-cultured T cell environment, combined with the fact that different cell death mechanisms are interconnected with each other[59], could explain the drastic diminution of cell death observed with known specific inhibitors of apoptosis, necrosis or TNFα in EBOV-cultured T cells. Of note, HPIV3 also induced some TLR4 signaling( Fig 4A) but only low-level apoptosis( Fig 2), which can be explained by some differences in signaling pathways induced by direct engagement of TLR4 by HPIV3 versus EBOV or by different strengths of TLR4 signaling( S4A Fig). A recent study demonstrated widespread T lymphocyte activation in EBOV-infected patients receiving experimental antibody-based therapies; however, the percentage of CD4+ and CD8+ T cell responders was limited in comparison to the total percentage of activated T lymphocytes[60]. It was suggested that the activation may result from stimulation with immune complexes formed by the administered EBOV therapeutic monoclonal antibodies or convalescent plasma. However, based on the present data, activation may also be attributed to the activator role of GP. In parallel with these data, the activation of the TLR4 signaling pathway, sensitivity to TLR4 inhibitors and production of TNFα are routinely associated with LPS-induced bacterial sepsis[61]. Activation of innate immune pathways via pattern recognition receptors( PRR) including TLR4 has been shown to lead to systemic inflammation[62]. Previous studies indicated that successfully blocking TLR4 signaling significantly reduces the pathogenesis associated with bacterial sepsis and lethal influenza infection, indicating the critical role of this innate immune-signaling pathway in exacerbating immunological responses[63, 64]. Our findings suggest that TLR4 inhibitors may be of therapeutic value for the treatment of EBOV patients. For example, our recent study demonstrates a high level of protection against EBOV and the closely related Marburg virus in vivo by the TLR4 receptor antagonist Eritoran[65]. Lymphopenia is consistently observed in human and nonhuman primate models following infection with viruses that cause viral hemorrhagic fever( VHF)[4, 66, 67] and therefore, this phenomenon is not restricted to EBOV infection. Furthermore, lymphopenia has been observed during fatalities from several non-VHF-related pathogens or pathologies, including those caused by highly pathogenic influenza virus, West Nile virus and bacterial sepsis[68–70]. Overall, remarkable similarities exist between EBOV-associated symptoms and the pathological features associated with the diseases caused by these non-VHF-related etiological agents, including immune suppression, toxic effects due to inflammatory mediators and high viremia in the case of viral infections[66]. As the onset of lymphopenia is highly correlative with fatal outcomes following infection with the aforementioned pathogens, identification of factors contributing to T lymphocyte cell death may enable the development of broad-acting therapeutics. Lastly, we note that all current EBOV vaccine candidates rely on GP as a sole antigen[71], with all requiring extremely high doses to achieve protection. 
For example, a recent clinical study utilizing the VSV-vectored EBOV vaccine VSVΔG/ZEBOVGP, which is known to replicate at high levels, required a vaccination dose of 2x10(7) PFU[72]. The current study demonstrated induction of cell death by VSVΔG/ZEBOVGP( S2A and S2B Fig). It is possible that the high vaccine doses required for the induction of protective immune responses are necessary to compensate for the reduced immunogenicity associated with the immune-modulating effects of GP presented in this study. Furthermore, another clinical trial of this vaccine demonstrated induction of a transient arthritis and dense CD4+ T lymphocytic vasculitis, suggesting a pathophysiological role of vaccine-induced T lymphocytes. Both effects were related to the EBOV GP component of the vaccine, but not the VSV vector[73]. In addition, HPIV3/ΔF-HN/EboGP, which demonstrated toxic effects associated with GP in this work( Fig 2A and 2B), was also developed as a vaccine candidate[32]. Thus, GP exerts multiple immune-modulating effects on immune cells, which may adversely affect the quality of the T cell response, although it is also possible that the magnitude of lymphocyte death following vaccination is modest and does not significantly affect vaccine efficacy. The combination of our transcriptome analysis performed on CD4+ T cells and the perturbation of signaling cascades associated with cell death following CLI-095 treatment suggest an important role of the GP-TLR4 interaction in the pathogenesis of EBOV infection. Our study highlights diverse strategies used by EBOV to perpetrate lymphopenia through direct and indirect mechanisms, which result in both apoptotic and necrotic T cell death despite the lack of infection. We expanded these studies to include monocytes, which are permissive to EBOV. We found that the increase in activation/differentiation following EBOV-mediated stimulation of TLR4 resulted in a significantly increased rate of infection of monocytes. We note, however, that the effects of TLR4 on differentiation are likely to be only partial, and other mechanisms involving GP may also contribute to it. As mentioned above, several attachment factors have been identified for EBOV; their engagement, as well as the release of cytokines( e. g. TNFα), is likely to contribute to cellular differentiation in an autocrine and/or paracrine manner. These data indicate that infection of monocytes and other cells amplifies lymphocyte death by producing more viral particles and also by secreting TNFα and other proteins contributing to the death of lymphocytes. As TLR4 is expressed by multiple cell types, we suggest that the interactions of GP with TLR4 may have profound effects in vivo, and therefore, examination of TLR4 inhibitors as therapeutics for EBOV infection is warranted. Overall, these data contribute to the understanding of the ‘immune paralysis’ during EBOV infections.",
"role": "user"
},
{
"content": "The latest outbreak of Ebola virus ( EBOV ) in West Africa resulted in more than 28 , 000 human infections including more than 11 , 000 deaths thus highlighting the necessity for the development of countermeasures . Monocytes and dendritic cells are among the primary targets of EBOV infection; infection of these critical antigen presenting cells contributes to the immune deficiency observed in Ebola virus disease ( EVD ) . In contrast , lymphocytes are resistant to EBOV infection; however , in fatal EVD , pronounced lymphopenia is uniformly observed . Here we report that T lymphocyte cell death in the absence of detectable infection was observed in an EBOV glycoprotein ( GP ) -dependent manner . Using transcriptome analysis of EBOV-stimulated CD4+ T cells we show upregulation of both toll-like receptor 4 ( TLR4 ) and cell death associated pathways . Furthermore , we demonstrate that EBOV increases susceptibility of monocytes to infection by promoting cellular differentiation . Both EBOV-induced monocyte differentiation and cell death of T lymphocytes result from a direct interaction between GP and TLR4 . Blocking of TLR4 signaling significantly reduced both EBOV-induced T cell death and infection of monocytes . These data contribute to understanding of the ‘immune paralysis’ during EBOV infections and provide evidence for the development of targeted therapies for the treatment of EVD .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.72 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nContinued improvement in pressure-flow parameters in men receiving finasteride for 2 years . Finasteride Urodynamics Study Group . OBJECTIVES To assess the long-term effects of finasteride on pressure-flow parameters in men with urodynamically documented bladder outflow obstruction ( BOO ) . METHODS One hundred twenty-one men with benign prostatic enlargement ( BPE ) and lower urinary tract symptoms ( LUTS ) underwent a pressure-flow study ( PFS ) at 1 of 11 clinical centers . The PFS technique was standardized , and all tracings were read by a single reader unaware of the treatment group . Patients who were obstructed according to a modified Abrams-Griffiths nomogram were randomized to 5 mg finasteride ( n = 81 ) or placebo ( n = 40 ) for 12 months ; all patients continuing into an open extension received finasteride during the second 12 months of therapy . Results of the initial 12-month study demonstrated the benefit of finasteride treatment on PFS parameters . To examine the continuing effects over time , an analysis of the data from 54 patients who completed 24 months of treatment with finasteride is provided . RESULTS Detrusor pressure at maximum flow ( PdetQmax ) continued to decrease during the second 12 months of therapy ( decreases of 5.3 and 11.7 cm H2O at months 12 and 24 , respectively ) . The percentage of patients obstructed by Abrams-Griffiths classification decreased from 76.2 % at baseline to 66.7 % at month 12 and 59.6 % at month 24 . An intention-to-treat analysis yielded similar results . CONCLUSIONS Finasteride improves urodynamic measures of obstruction in men with BPE and LUTS , with continued improvement during the second 12 months of therapy .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"5 mg finasteride\", \"placebo\", \"finasteride\"], \"Outcome\": [\"pressure-flow parameters\", \"Detrusor pressure\", \"modified Abrams-Griffiths nomogram\", \"PFS parameters\", \"PdetQmax\", \"Abrams-Griffiths classification\"], \"Participant\": [\"men\", \"benign prostatic enlargement\", \"receiving finasteride for 2 years .\", \"with urodynamically documented bladder outflow obstruction ( BOO )\", \"lower urinary tract symptoms\", \"One hundred twenty-one\"]}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3993 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffects of antihypertensive drugs on cognitive function in adolescents . The effects of clonidine and hydrochlorothiazide on cognitive function were studied in hypertensive adolescents requiring pharmacologic therapy for blood pressure control . Twenty-four adolescents with persistent blood pressure elevation ( greater than 95th % ) on placebo were randomized double blind to clonidine or hydrochlorothiazide treatment . A battery of cognitive tests were performed during the placebo phase and after 16 weeks of active therapy . Antihypertensive therapy resulted in significant blood pressure reduction ( p less than .01 ) . A slight interactive effect was observed in arithmetic performance ( p less than .05 ) . All other parameters of cognitive function were not affected by either treatment .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"hydrochlorothiazide\", \"Antihypertensive therapy\", \"antihypertensive\", \"clonidine\"], \"Outcome\": [\"blood pressure reduction\", \"blood pressure\", \"cognitive tests\", \"cognitive function\", \"arithmetic performance\"], \"Participant\": [\"hypertensive\", \"persistent blood pressure elevation ( greater than 95th % )\", \"Twenty-four\", \"adolescents\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.7305 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nbonding to enamel is been considered satisfactory , but adhesion to dentin is much more complex due to the nature of the substrate . the best way to test the effectiveness of a biomaterial , as well as composite or luting cement or a bonding system , is in vivo experimentation in controlled trials .\nhowever , it is quite impossible to distinguish by clinical evaluation and adhesion potential of a dentin - bonding system from many other parameters that influence either the quality or the longevity of a restoration .\ntheoretically , in vitro bonding tests would allow for measurement of the true strength of bonding of a given system to a dentinal substrate .\nunfortunately , in vitro dentin adhesion testing has not been very successful in predicting in vivo effectiveness , probably because of the number of factors influencing test results and clinical relevance.[13 ] several authors have already studied the influence of some of these factors , such as the origin of the substrate , substrate storage , dentin depth , type of composite , testing mode , etc .\nsome previous reviews have listed most of the variables influencing dentin bond strength , but none has quantified their importance or defined their statistical significance . the purpose of this study is to evaluate the true factor that affects the bond strength of one - step and two - step self - etch adhesive .\nthis quantitative overview could be considered as a meta - analysis study performed under non - experimental conditions to assess the effects of certain clinical parameters on bond strength of adhesives .\npotential papers were selected from articles published in 13 peer - reviewed journals using pubmed data base .\namerican journal of dentistry , journal of adhesive dentistry , british dental journal , dental materials , european journal of oral science , international dental journal , journal of dentistry , journal of dental research , journal of oral rehabilitation , journal of prosthetic dentistry , operative dentistry , quintessence international and journal of conservative dentistry . \n\nstudy with original data.paper published in english or translated to english.only studies measuring the bond strength of one - step and two - step self - etch dentin - bonding agents to be included.the bonding agents that are commercially available and tested by shear or tensile mode on sound dentin from permanent teeth.dentin-bonding agents involved in the analysis should have been tested in at least five different studies.the outcome measure had to be the dentin bond strength expressed in mpa / psi.light curing unit with intensity of 400 - 500 mw / cm.among the selected studies , some exclusion criteria applied.paper displaying only graphic data.dentin-bonding agents not used according to manufacturer 's instructions.dentin samples , which were contaminated or not treated under clinical conditions before application of the dentin - bonding agents , were excluded from the analysisstudy without data analysis . \n \npaper published in english or translated to english . 
only studies measuring the bond strength of one - step and two - step\nthe bonding agents that are commercially available and tested by shear or tensile mode on sound dentin from permanent teeth .\ndentin - bonding agents involved in the analysis should have been tested in at least five different studies .\nthe outcome measure had to be the dentin bond strength expressed in mpa / psi .\nlight curing unit with intensity of 400 - 500 mw / cm . among the selected studies , some exclusion criteria applied .\ndentin samples , which were contaminated or not treated under clinical conditions before application of the dentin - bonding agents , were excluded from the analysis study without data analysis .\nwe classified the parameters into four groups : group a - factors related to the dentin substrate ; group b - type of composite and dentin - bonding agents ; group c - bonding area and ; group d - thermocycling and testing mode .\nselected studies were included in the meta - analysis the problem of bond - strength testing is that the materials were seldom compared with a standard .\nrather , different materials were tested under various conditions and bond strengths are reported as point estimates and expressed in mpa .\nit is difficult to compute a standard effect size for each study , as required in a classic meta - analysis .\nthus , we decided to analyze directly the measures of bond strength from reports published in literature .\nthis quantitative overview was performed in non - experimental conditions since materials and conditions are not allocated randomly . from each report\nmeans and standard deviations of bond strengths were extracted and tabulated with corresponding experimental conditions . the statistical analysis software system mix 2.0 professional ( biostat , englewood , nj ) was used to perform the statistical analysis .\npotential papers were selected from articles published in 13 peer - reviewed journals using pubmed data base .\namerican journal of dentistry , journal of adhesive dentistry , british dental journal , dental materials , european journal of oral science , international dental journal , journal of dentistry , journal of dental research , journal of oral rehabilitation , journal of prosthetic dentistry , operative dentistry , quintessence international and journal of conservative dentistry .\n\n study with original data.paper published in english or translated to english.only studies measuring the bond strength of one - step and two - step self - etch dentin - bonding agents to be included.the bonding agents that are commercially available and tested by shear or tensile mode on sound dentin from permanent teeth.dentin-bonding agents involved in the analysis should have been tested in at least five different studies.the outcome measure had to be the dentin bond strength expressed in mpa / psi.light curing unit with intensity of 400 - 500 mw / cm.among the selected studies , some exclusion criteria applied.paper displaying only graphic data.dentin-bonding agents not used according to manufacturer 's instructions.dentin samples , which were contaminated or not treated under clinical conditions before application of the dentin - bonding agents , were excluded from the analysisstudy without data analysis . \n \npaper published in english or translated to english . 
only studies measuring the bond strength of one - step and two - step\nthe bonding agents that are commercially available and tested by shear or tensile mode on sound dentin from permanent teeth .\ndentin - bonding agents involved in the analysis should have been tested in at least five different studies .\nthe outcome measure had to be the dentin bond strength expressed in mpa / psi .\nlight curing unit with intensity of 400 - 500 mw / cm . among the selected studies , some exclusion criteria applied .\ndentin samples , which were contaminated or not treated under clinical conditions before application of the dentin - bonding agents , were excluded from the analysis study without data analysis .\nwe classified the parameters into four groups : group a - factors related to the dentin substrate ; group b - type of composite and dentin - bonding agents ; group c - bonding area and ; group d - thermocycling and testing mode .\nthe problem of bond - strength testing is that the materials were seldom compared with a standard .\nrather , different materials were tested under various conditions and bond strengths are reported as point estimates and expressed in mpa .\nit is difficult to compute a standard effect size for each study , as required in a classic meta - analysis .\nthus , we decided to analyze directly the measures of bond strength from reports published in literature .\nthis quantitative overview was performed in non - experimental conditions since materials and conditions are not allocated randomly . from each report means and standard deviations of bond strengths were extracted and tabulated with corresponding experimental conditions .\nthe statistical analysis software system mix 2.0 professional ( biostat , englewood , nj ) was used to perform the statistical analysis . differences between means were computed with analysis of variance .\ntables 2 and 3 show descriptive statistics of different parameters tabulated from the selected studies according to the four groups described in materials and methods section . some parameters , such as tooth storage medium and dentin surface topography , were not included in the statistical analysis , due to their greater variability or lack of information .\nit was noted that about one - third of the selected articles provided such information and that the distinction between the different failure modes was not always available and clearly identified .\none - way anova comparison between one - step and two - step self - etch adhesives is given in table 4 the p - values indicate the potential relation between each parameter and the bond strength .\ntheir respective influences were given by the p - value at the significance level of 0.05( ) .\nall the studied parameters showed no significant difference , except for dentin origin / site and bonding area .\nin addition , statistical analysis done with anova showed statistical significance between the one - step and two - step self - etch adhesives .\ndescriptive statistics of the different parameters studied using one - step self - etch adhesives descriptive statistics of the different parameters studied using two - step self - etch adhesives comparison between one - step and two - step self - etch adhesives using anova\nas quoted by dersimonian and laird ( 1986 ) meta - analysis is the statistical analysis of a collection of analytic results for the purpose of integrating the findings . 
\nthe advantage of meta - analysis over the traditional literature review is the possibility of combining results from many studies and analyzing the factors that affect the bond strength of self - etch adhesives . in the studies selected , the tensile or shear testing was done using a constant crosshead speed and the samples were continuously loaded so that the crack propagates at increasing speed until separation . in a clinical situation , if these loads are high enough , small extensions of cracks ensure sub - critical crack growth in the bonded area , as explained by a. a. griffith .\naccording to griffith 's theory of brittle solids , the low fracture strength observed in experiments , as well as the size - dependence of strength , is due to the presence of microscopic flaws in the bulk material . in this practical situation , the crack speed is much lower than during the tensile or shear loading that is used to determine the adhesion potential of a dentin - bonding system .\nthe location of the initial flaw is unknown , and the stress concentration at the bonded interface is non - uniform and unpredictable .\nmoreover , the stress distribution is highly dependent on the test geometry , on the loading mode and on the material properties.[2729 ] our analysis revealed that the origin of dentin , site of bonding and bonding area had a significant influence on the bond strength of one - step and two - step self - etch adhesives .\nthe studies have also shown that there is a difference in bond strength between human teeth and bovine teeth .\nthis difference is explained by the fact that dentinal tubules are large in the coronal portion of bovine dentin , but one study by nakamichi et al showed that there is no difference in bond strength between bovine and human dentin when only the superficial layers were used .\noilo and olsson showed that the tensile strength is higher on the buccal surface compared to the occlusal surface , which could be explained by the fact that the tubules are lower in number with a lower area percentage on the buccal surface than on the occlusal surface .\nthe analysis also showed a definite difference between enamel and dentin bonding , which might be explained by various reasons .\ndentin has a heterogeneous structure and a different orientation of tubules , with root dentin having smaller and more numerous branching of tubules when compared to crown dentin .\nacidity of the monomer also causes changes in the surface chemistry and morphology of dentin , which in turn can influence bonding .\na significantly thicker hybrid layer was noted in areas with perpendicular tubule orientation than in areas with parallel tubule orientation .\nsurface area significantly affects the bond strength of one - step and two - step self - etch adhesives .\nbond strength is calculated as the load at failure divided by the cross - sectional area of the bonded surface .\nan inverse relationship between bond strength and bonded surface area has recently been shown , confirming previous studies .\nthese authors explain the lower strengths of large bonded samples by the greater number of defects within the adhesive joint .\nburrow mf et al stated that clearfil se bond showed no statistical difference to g - bond .\ndo amaral rc et al suggested application of bonding agents with agitation on the dentin surface to improve the resin - dentin bond strength of one - step self - etch adhesives .
when bonded to dentin , the adhesives with simplified application procedures ( in particular , one - step self - etch adhesives ) still underperform as compared to conventional three - step adhesives .\nmild two - step self - etch adhesives that provide additional chemical bonding appear to most optimally combine bonding effectiveness with a simplified application protocol .\nour analysis has shown that the two - step self - etch adhesive system showed a superior in vitro performance in comparison to the one - step self - etch system .\nthese results are in accordance with those of other studies . the bonding ability of the all - in - one adhesives depends on their specific composition . in light of the low in vitro bond strengths and high rate of spontaneous failures of some all - in - one adhesives compared to those of the two - step adhesives , the newest adhesives should be screened more strictly before they are recommended for clinical use .\none - step self - etch adhesives showed higher bond strengths on ground enamel , and no reductions in resin - enamel bonds were observed after 12 months of water storage .\nmarginal adaptation was significantly better for clearfil s3 bond than for adper prompt l - pop .\nserious limitations of all - in - one adhesives are as follows : incomplete polymerization and continued demineralization of the adjacent dentin structure in the tubules . for all - in - one adhesives to be acidic , they must contain water ; studies have shown that this water acts as a major interfering factor in polymerization , which allows unpolymerized acidic and aggressive monomers to continue etching the dentin , thereby leading to a detrimental impact on the bond .\nother factors leading to low bond strength and failure of resin - based adhesives in vivo are the harsh conditions of the oral environment , such as intraoral temperature , moisture contact , fatigue of the bond due to tooth flexure and bacterial enzymes . the shelf life of one - component self - etching adhesives is determined by their chemical composition . in conventional methacrylate - based adhesives ( adhese ) , the inherently acidic environment of such formulations leads to monomer degradation due to hydrolysis .\nin contrast , methacrylamide - based adhesives are stable to aqueous acid and exhibit much superior storage stability without monomer degradation - related losses in adhesion performance .\nintermediary strong self - etch adhesives ( adhese and xeno iii ) exhibited significantly superior shear bond strength values compared to both strong ( prompt - l - pop ) and weak ( clearfil se bond ) self - etch systems , which implies that the ph of the adhesive has a significant effect on the bond strength .\nsuzuki and finger found a linear relationship between the remaining dentin thickness to the pulp and the shear bond strengths of three different adhesives , an observation subsequently confirmed by other authors .\nthe analysis surprisingly showed that the type of composite and thermocycling did not significantly affect bond strength .
nevertheless , three studies reported a decrease in adherence after thermocycling , whereas other studies reported an increase , depending on the adhesive system tested .\nfrom this meta - analysis study , we conclude that two - step self - etch adhesives showed better bonding ability than one - step self - etch adhesives .\nthereby , several clinical parameters , among which dentin origin , site and area of bonding , have shown a significant influence on the mean bond strength of adhesive systems .\nthus , our study suggests that practitioners can easily control these parameters by using a standardized protocol for the clinical success of self - etch adhesive systems .",
"role": "user"
},
{
"content": "aim : the purpose of this study is to critically evaluate the factors that affect the bond strength of one - step and two - step self - etch adhesives by using meta-analysis.materials and methods : potential papers that were selected according to inclusion and exclusion criteria from articles were published in 13 peer - reviewed journals using \n pubmed data base . from each report , means and standard deviations of bond strengths were extracted and tabulated with corresponding experimental conditions.results:all the studied parameters showed no significant difference , except for dentin origin / site and bonding area . \n in addition , statistical analysis done with anova showed statistical significance between the one - step and two - step self - etch adhesives.conclusions:our analysis has showed that two - step self - etch adhesive system showed a superior in vitro performance in comparison to one - step self - etch system . \n nevertheless , certain factors such as dentin origin , site and area of bonding affect the bond strength of adhesives .",
"role": "assistant"
}
] |
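Editor's note: the record above compares adhesive groups with one-way ANOVA over bond strengths that were extracted only as means and standard deviations. The F-statistic can be computed directly from such summaries; the sketch below shows the standard decomposition with invented numbers, since no per-study values are reported in the record.

```python
from scipy import stats

# Hypothetical (n, mean in MPa, SD) summaries for two adhesive groups;
# the record above does not report raw per-study values.
groups = [
    (40, 18.2, 5.1),  # one-step self-etch
    (40, 24.7, 6.0),  # two-step self-etch
]

def anova_from_summaries(summaries):
    """One-way ANOVA F and p computed from per-group (n, mean, sd)."""
    total_n = sum(n for n, _, _ in summaries)
    grand_mean = sum(n * m for n, m, _ in summaries) / total_n
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in summaries)
    ss_within = sum((n - 1) * sd ** 2 for n, _, sd in summaries)
    df_between = len(summaries) - 1
    df_within = total_n - len(summaries)
    f = (ss_between / df_between) / (ss_within / df_within)
    p = stats.f.sf(f, df_between, df_within)  # upper-tail probability
    return f, p

f, p = anova_from_summaries(groups)
print(f"F = {f:.2f}, p = {p:.4g}")  # p < 0.05 would mirror the record's finding
```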
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.4844 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results \nChapter Title: F. Network Programming \n1. Abstract of Cited Paper (BIB001): Network operators seek more flexibility to provide added-value services to their customers. We propose to leverage the IPv6 Segment Routing architecture with a example usecase. We also provide an implementation of SR-IPv6 and we evaluate its performance on broadband routers. \n2. Abstract of Cited Paper (BIB002): This paper presents an architecture to support Vir- tual Network Functions (VNFs) chaining using the IPv6 Segment Routing (SR) network programming model. Two classes of VNFs are considered: SR-aware and SR-unaware. The operations to support both SR-aware and SR-unaware VNFs are described at an architectural level and we propose a solution for SR-unaware VNFs hosted in a NFV node. An Open Source implementation of the proposed solution for a Linux based NFV host is available and a set of performance measurements have been carried out in a testbed. \n3. Abstract of Cited Paper (BIB003): Network load-balancers generally either do not take application state into account, or do so at the cost of a centralized monitoring system. This paper introduces a load-balancer running exclusively within the IP forwarding plane, i.e. in an application protocol agnostic fashion - yet which still provides application-awareness and makes real-time, decentralized decisions. To that end, IPv6 Segment Routing is used to direct data packets from a new flow through a chain of candidate servers, until one decides to accept the connection, based on its local state. This way, applications themselves naturally decide on how to share incoming connections, while incurring minimal network overhead, and no out-of-band signaling. Tests on different workloads - including realistic workloads such as replaying actual Wikipedia access traffic towards a set of replica Wikipedia instances - show significant performance benefits, in terms of shorter response times, when compared to a traditional random load-balancer. \n4. Abstract of Cited Paper (BIB004): IPv6 Segment Routing is a major IPv6 extension that provides a modern version of source routing that is currently being developed within the Internet Engineering Task Force (IETF). We propose the first open-source implementation of IPv6 Segment Routing in the Linux kernel. We first describe it in details and explain how it can be used on both endhosts and routers. We then evaluate and compare its performance with plain IPv6 packet forwarding in a lab environment. Our measurements indicate that the performance penalty of inserting IPv6 Segment Routing Headers or encapsulating packets is limited to less than 15%. On the other hand, the optional HMAC security feature of IPv6 Segment Routing is costly in a pure software implementation. Since our implementation has been included in the official Linux 4.10 kernel, we expect that it will be extended by other researchers for new use cases. \n5. Abstract of Cited Paper (BIB005): IPv6 Segment Routing is a recent IPv6 extension that is generating a lot of interest among researchers and in industry. 
Thanks to IPv6 SR, network operators can better control the paths followed by packets inside their networks. This provides enhanced traffic engineering capabilities and is key to support Service Function Chaining (SFC). With SFC, an end-to-end service is the composition of a series of in-network services. Simple services such as NAT, accounting or stateless firewalls can be implemented on a per-packet basis. However, more interesting services like transparent proxies, transparent compression or encryption, transcoding, etc. require functions that operate on the bytestream. In this paper, we extend the IPv6 implementation of Segment Routing in the Linux kernel to enable network functions that operate on the bytestream and not on a per-packet basis. Our SRv6Pipes enable network architects to design end-to-end services as a series of in-network functions. We evaluate the performance of our implementation with different microbenchmarks. \n6. Abstract of Cited Paper (BIB007): In this paper we consider the use of IPv6 Segment Routing (SRv6) for Service Function Chaining (SFC) in an NFV infrastructure. We first analyze the issues of deploying Virtual Network Functions (VNFs) based on SR-unaware applications, which require the introduction of SR proxies in the NFV infrastructure, leading to high complexity in the configuration and in the packet processing. Then we consider the advantages of SR-aware applications, focusing on a firewall application. We present the design and implementation of the SERA (SEgment Routing Aware) firewall, which extends the Linux iptables firewall. In its basic mode the SERA firewall works like the legacy iptables firewall (it can reuse an identical set of rules), but with the great advantage that it can operate on the SR encapsulated packets with no need of an SR proxy. Moreover we define an advanced mode, in which the SERA firewall can inspect all the fields of an SR encapsulated packet and can perform SR-specific actions. In the advanced mode the SERA firewall can fully exploit the features of the IPv6 Segment Routing network programming model. A performance evaluation of the SERA firewall is discussed; based on its results, a further optimized prototype has been implemented and evaluated. \n7. Abstract of Cited Paper (BIB008): With the development of large-scale data centers, Virtual Machine (VM) migration is a key component for resource optimization, cost reduction, and maintenance. From a network perspective, traditional VM migration mechanisms rely on the hypervisor running at the destination host advertising the new location of the VM once migration is complete. However, this creates a period of time during which the VM is not reachable, yielding packet loss. This paper introduces a method to perform zero-loss VM migration by using IPv6 Segment Routing (SR). Rather than letting the hypervisor update a locator mapping after VM migration is complete, a logical path consisting of the source and destination hosts is pre-provisioned. Packets destined to the migrating VM are sent through this path using SR, shortly before, during, and shortly after migration - the virtual router on the source host being in charge of forwarding packets locally if the VM migration has not completed yet, or to the destination host otherwise. The proposed mechanism is implemented as a VPP plugin, and feasibility of zero-loss VM migration is demonstrated with various workloads. Evaluation shows that this yields benefits in terms of session opening latency and TCP throughput. \n8. 
Abstract of Cited Paper (BIB009): We consider the IPv6 Segment Routing (SRv6) technology for Service Function Chaining of Virtual Network Functions (VNFs). Most of the VNFs are legacy VNFs (not aware of the SRv6 technology) and expect to process traditional IP packets. An SR proxy is needed to support them. We have extended the implementation of SRv6 in the Linux kernel, realizing an open source SR-proxy, referred to as SRNK (SR-Proxy Native Kernel). The performance of the proposed solution (SRNKv1) has been evaluated, identifying a poor scalability with respect to the number of VNFs to be supported in a node. Therefore we provided a second design (SRNKv2), enhancing the Linux Policy Routing framework. The performance of SRNKv2 is independent from the number of supported VNFs in a node. We compared the performance of SRNKv2 with a reference scenario not performing the encapsulation and decapsulation operation and demonstrated that the overhead of SRNKv2 is very small, on the order of 3.5%.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Despite the fact that Network Programming is the most attractive and innovative feature of SR, we have found only eight papers related to this topic. As shown in table IX, these works can be classified in Service Function Chaining related ones, or operational function. The latter aim at implementing operational functions, such as a load balancer and a zero-loss VM migration tool, by exploiting the network programming feature of SR. In the following we provide a brief overview of the references classified as Network Programming related works. In BIB001 the Linux SRv6 implementation described in BIB002 is enhanced introducing the support for Service Function Chaining. More in detail, the Linux kernel provides an API to map a service segment, i.e. the identifier of a network service running on a Virtual Machine. The experimental evaluation show that the impact of SR operations and service segment processing on the packet forwarding capabilities is limited, i.e. with a reduction lower than 10%. The architecture of a network domain supporting Service Function Chaining (SFC) through SRv6 is investigated in BIB003 . The authors mainly focus on the implementation of a VNF node able to host multiple VNF instances. The main components of the VNF nodes are the SR/VNF connector, in charge of logically connecting the SR routing with local VNFs, and the VNFs, supporting a specific network functions. The VNFs can be SR-aware or SR-unaware: i) SR-aware VNFs can process the information contained in the SR header (SRH) of incoming packets; ii) SR-unaware VNFs are not able to handle the SRH thus, in order to correctly apply the VNF to the original packet, the SR/VNF connector must preprocess the packet by removing the SR encapsulation, and re-apply it when the packet is returned by the VNF. The authors propose a Linux-based implementation of a VNF node supporting both SR-aware and SR-unaware VNFs. More in detail, using the netfilter framework, a new kernel module called srext (Segment Routing EXTensions) is implemented to act as a SR/VNF connector and support SR-unaware VNFs. A virtualized testbed based on Virtualbox ang Vagrant is realized to evaluate the processing overhead introduced by the proposed implementation with respect to a classical IPv6 forwarding solution. SRV6Pipes BIB006 is an extension of the IPv6 implementation of Segment Routing in the Linux kernel which enables chaining and operation of in-network functions operating on streams. SRv6 is used to enforce an end-to-end path between the client and the server passing through the equipment hosting the networking functions. The rationale behind SRv6Pipes stands in the fact that some network functions need to include a TCP implementation to work on the streams. SRv6Pipes leverages \"the TCP stack that is already present in the Linux kernel\" and implements a transparent TCP proxy to offload to it the TCP functionalities and terminates the TCP connections where a network function is deployed. In this way, it is possible to expose to the network functions the bytestreams they need to process. Special addressing is used to specify network functions and their parameters. In particular each proxy exposes an 80 prefix. The first 80 bits are used to traverse the proxy, the following 16 bits are used to identify the network function and the remaining ones are used to specify the parameters of the function. [78] defines the concept of SR Aware VNF, as an application that is able to process the SR information in the packet. 
Moreover, BIB007 also proposes the implementation an SR Aware (SERA) Firewall application. The SERA Firewall is able to work both as a legacy firewall (basic mode), or define filtering rules that also include condition on the SR fields (advanced mode). In the basic mode, it can apply the normal firewall processing to the original packets even if they have an SR based SFC encapsulation. It means that the filtering rules are applied to the original values of the packet header (the SR related fields are transparent). In the advanced mode, SERA offers new matching capabilities and new SR specific actions. Some examples are the seg6-go-next, which sends the packet toward the next segment in the SL, the seg6-skip-next which allows to skip the next SID in the SL, or the seg6-go-last which sends the packet directly to the last SID of the SL. Segment Routing Load Balancing (SRLB) BIB004 is an Application-aware load-balancer that avoids the cost due to the monitoring tasks. It is thought to work in the contest of a Data Center network, where several instances of the same application are instantiated in different hosts machines. Each host machine is equipped with a VPP based virtual router which dispatches packets between physical NICs and application-bound virtual interfaces. The Load Balancer (LB) is located at the edge of the Data Center network. A request for an application is represented by the first packet sent from the client to the application server (generally a syn message). When a new request arrives at the LB, the Service Hunting function, which consists in finding a server that can serve the current request, is executed. LB exploits SR to query a subset of servers that host the requested application. Specifically, LB encodes the set of randomly selected potential servers in the segment list, then encapsulates the first packet of the request. When the first server receives the packet, it can decide whether to deliver it to the application, and consequently assign to it the processing of this request, or can forward the packet to the next server in the segment list. The decision about accept/refuse a request is taken according to a connection acceptance policy, that takes into account the internal state of the application (CPU usage, memory usage, etc.). [80] proposes a new architecture for the Content Delivery Networks (CDNs) building upon the results of BIB004 . In this new paradigm, CDN decisions (cache vs origin servers) are offloaded to the dataplane. This architecture founds on two fundamental pieces: i) chunk-level content addressing and ii) in-network server selection. The first is realized assigning a unique and globally routable IPv6 address to each chunk. Instead, the in-network server selection leverages these identifiers exposed as IP addresses to make in-band forwarding decisions which are later binded to a SRv6 steering policy. In particular, upon arrival of a request at a CDN proxy, the IPv6 identifier is used by the prediction engine to perform cache admission by estimating the popularity of requests with a Least-Recently-Used (LRU) filter. If not available (cache miss), 6LB BIB004 is used to forward requests directly to the origin servers instead of proxying them at the cache. According to the authors, this mechanism allows to reduce the load on the edge cache and avoid the negative effects on the Quality of Experience. BIB008 proposes an SR based live migration technique which achieves zero packet loss. The starting point is an SRv6 enabled Data center network. 
Two new SR functions are defined and used in the process of VM live migration. The first one is forward to local if present, which is a conditional version of the END function. Specifically, in case the last SID in the SL is locally available, then the packet is directly processed, thus ignoring possible intermediate SIDs. The second one is buffer and forward to local, that forces the node to inspect the last SID and, in case it is not locally available, the packet is buffered until the SID becomes available. Now, assuming that a VM is migrated from host A to host B, then during the migration process all the requests received by the Data center gateway are tunneled using an SL where: i) the first SID points at the forward to local if present function implemented at A, ii) the second SID points at the buffer and forward to local function implemented at B, and iii) the last SID is referred to the VM. In this way, until the VM is available at A, the packets are directly delivered to the VM thanks to the forward to local if present function. Then, during the downtime, the packets directed to the VM are buffered at B thanks to the buffer and forward to local. Finally, when the VM becomes available at B, the buffered packets, as well as new arrivals, are directly delivered to the VM at B. In this way it is possible to implement VM live migration having no packet loss. SRNK BIB009 is a SR-proxy for legacy VNFs which are not aware of the SRv6 technology and expect to process traditional IP packets. SRNK extends the implementation of SRv6 in the Linux kernel BIB005 adding the support for the End.AS and End.AD behaviors. The performance of the proposed solution has been evaluated by the authors which identified a poor scalability with respect to the number of VNFs to be supported within an NFV node. With an enhancement of the Linux Policy Routing framework, they provided a second design SRNKv2, which does not depend on the number of supported VNFs in a node. They compared the performance with a reference scenario not performing the encapsulation and decapsulation operation and demonstrated that the overhead of SRNKv2 is very small, on the order of 3.5%.",
"role": "assistant"
}
] |
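
The two migration behaviors described in the chapter above (forward to local if present and buffer and forward to local) are easy to see in miniature. The sketch below is a toy Python model written against the survey's prose description, not code from the cited VPP plugin or the Linux kernel; the node names, SID values, and the packet layout are invented for illustration.

```python
# Toy model of the SRv6 behaviors used for zero-loss VM migration (BIB008).
# Illustrative only: SIDs, host names and the packet structure are invented;
# real implementations operate on the IPv6 Segment Routing Header.

class SRv6Node:
    def __init__(self, name, local_sids):
        self.name = name
        self.local_sids = set(local_sids)
        self.buffer = []  # packets held by 'buffer and forward to local'

    def fwd_to_local_if_present(self, pkt):
        """Conditional End: if the last SID (the VM) is local, deliver the
        packet directly, skipping intermediate SIDs; else plain End behavior."""
        if pkt["sl"][-1] in self.local_sids:
            return ("delivered", self.name, pkt)
        pkt["idx"] += 1  # End: advance to the next segment in the list
        return ("forwarded", pkt["sl"][pkt["idx"]], pkt)

    def buffer_and_fwd_to_local(self, pkt):
        """Hold packets until the last SID becomes locally available,
        then release them in order together with the new arrival."""
        if pkt["sl"][-1] in self.local_sids:
            released, self.buffer = self.buffer + [pkt], []
            return [("delivered", self.name, p) for p in released]
        self.buffer.append(pkt)
        return []

def packet(sid_list, idx=0):
    return {"sl": list(sid_list), "idx": idx}

# VM 'fc00::vm' migrates from host A to host B; the gateway steers all
# requests through the segment list <A, B, VM> for the whole window.
sl = ["fc00::a", "fc00::b", "fc00::vm"]
a, b = SRv6Node("A", {"fc00::a", "fc00::vm"}), SRv6Node("B", {"fc00::b"})

print(a.fwd_to_local_if_present(packet(sl)))     # VM still on A: delivered at A
a.local_sids.discard("fc00::vm")                 # downtime begins
p = packet(sl)
print(a.fwd_to_local_if_present(p))              # forwarded towards B
print(b.buffer_and_fwd_to_local(p))              # [] -> buffered at B
b.local_sids.add("fc00::vm")                     # VM resumes on B
print(b.buffer_and_fwd_to_local(packet(sl, 1)))  # buffered + new pkt delivered
```

Running the example shows exactly the zero-loss property claimed by the paper: no packet is dropped during the downtime window, only delayed at B.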
science.chemdner_ner | science.chemdner_ner.1785 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nHepatotumorigenicity of ethyl tertiary-butyl ether with 2-year inhalation exposure in F344 rats. Carcinogenicity of ethyl tertiary-butyl ether (ETBE) was examined with inhalation exposure using F344/DuCrlCrlj rats. Groups of 50 male and 50 female rats, 6 week old at commencement, were exposed to ETBE at 0, 500, 1,500 or 5,000 ppm (v/v) in whole-body inhalation chambers for 6 h/day, 5 days/week for 104 weeks. A significant increase in the incidence of hepatocellular adenomas was indicated in males exposed at 5,000 ppm, but not in females at any concentration. In addition, significantly increased incidences of eosinophilic and basophilic cell foci were observed in male rats at 5,000 ppm. Regarding non-neoplastic lesions, rat-specific changes were observed in kidney, with an increase in the severity of chronic progressive nephropathy in both sexes at 5,000 ppm. Increased incidences of urothelial hyperplasia of the pelvis were observed at 1,500 ppm and above, and mineral deposition was apparent in the renal papilla at 5,000 ppm in males. There were no treatment-related histopathological changes observed in any other organs or tissues in either sex. The present 2-year inhalation study demonstrated hepatotumorigenicity of ETBE in male, but not in female rats.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"ethyl tertiary-butyl ether\", \"ETBE\"]}",
"role": "assistant"
}
] |
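
Records like the one above ask for a dict-of-lists JSON object, one key per entity category; the same shape recurs in several records below. A minimal sanity check for predictions of this shape is sketched here; it is written for this dump rather than taken from any official scorer, and the span-containment check assumes mentions are copied verbatim from the text.

```python
import json

def check_ner_output(raw, expected_keys, source_text):
    """Validate {"Category": [mention, ...]} predictions: every expected
    category present, every value a list of non-empty strings, and every
    mention found verbatim in the source text."""
    pred = json.loads(raw)
    assert set(pred) == set(expected_keys), f"bad keys: {sorted(pred)}"
    for category, mentions in pred.items():
        assert isinstance(mentions, list), f"{category} is not a list"
        for m in mentions:
            assert isinstance(m, str) and m, "empty or non-string mention"
            assert m in source_text, f"{m!r} not found in the text"
    return pred

text = "Carcinogenicity of ethyl tertiary-butyl ether (ETBE) was examined ..."
print(check_ner_output('{"Chemical": ["ethyl tertiary-butyl ether", "ETBE"]}',
                       ["Chemical"], text))
```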
science.pico_ner | science.pico_ner.595 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nThe utility of laparoscopy in the diagnosis of acute appendicitis in women of reproductive age . AIM To see whether laparoscopy improves the accuracy of a clinical diagnosis of acute appendicitis in women of reproductive age , and to determine what the long-term sequelae are of not removing an appendix deemed at laparoscopy to be normal . METHOD The initial part of the study was undertaken during 1991-1992 . Female patients between 16 and 45 years were eligible for inclusion once a clinical decision had been made to perform an appendicectomy for suspected acute appendicitis . Following consent , patients were randomized into two groups . One group had open appendicectomy , as planned . The other group had laparoscopy , followed by open appendicectomy only if the appendix was seen to be inflamed or was not visualized . The end points for the study were the clinical outcomes of all patients , and the results of histology , where appropriate . An attempt was made to contact all patients at 10 years to determine whether they had had a subsequent appendicectomy , or had been diagnosed with another abdominal condition that might be relevant to the initial presentation in 1991-1992 . RESULTS Laparoscopic assessment was correct in all cases in which the appendix was visualized . Diagnostic accuracy was improved from 75 % to 97 % . Laparoscopy was associated with no added complications , no increase in hospital stay in patients who went on to appendicectomy , and a reduction in hospital stay for those who underwent laparoscopy alone . No patients developed a problem over the 10-year follow-up period from having a normal-looking appendix not removed at laparoscopy . CONCLUSION Laparoscopic assessment of the appendix is reliable , and to leave a normal-looking appendix at laparoscopy does not appear to cause any long-term problems .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"open appendicectomy\", \"laparoscopy\"], \"Outcome\": [\"Diagnostic accuracy\", \"histology\", \"reduction in hospital stay\", \"no added complications\", \"no increase in hospital stay\"], \"Participant\": []}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.949 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a dependency parser for Norwegian.\n\nKeywords: dependency parsing text\n\nDataset Information:\nSPLASH: A dataset of utterances, incorrect SQL interpretations and the corresponding natural language feedback.\nUniversal Dependencies: The Universal Dependencies (UD) project seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for multiple languages. The first version of the dataset was released in 2015 and consisted of 10 treebanks over 10 languages.\nSherLIiC: SherLIiC is a testbed for lexical inference in context (LIiC), consisting of 3985 manually annotated inference rule candidates (InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus ClueWeb09. Each InfCand consists of one of these relations, expressed as a lemmatized dependency path, and two argument placeholders, each linked to one or more Freebase types.\nNoReC_fine: NoReC_fine is a dataset for fine-grained sentiment analysis in Norwegian, annotated with respect to polar expressions, targets and holders of opinion.\n\nAvailable Datasets: A: SPLASH B: Universal Dependencies C: SherLIiC D: NoReC_fine",
"role": "user"
},
{
"content": "Universal Dependencies",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.1009 | [
{
"content": "Examine the provided biomedical text and determine the answer to the question. Your response should be either \"yes\" or \"no\" based on the information given.\n\nQuestion: Thymoquinone is ineffective against radiation induced enteritis, yes or no?\n\nText: n this study, we found that thymoquinone (TQ) could mitigate intestinal damages induced by irradiation.\nIn this study, we found that thymoquinone (TQ) could mitigate intestinal damages induced by irradiation.\nIn this study, we found that thymoquinone (TQ) could mitigate intestinal damages induced by irradiation.\nTQ might be used for radiation enteritis treatment.",
"role": "user"
},
{
"content": "no",
"role": "assistant"
}
] |
science.drug_combo_extraction_re | science.drug_combo_extraction_re.64 | [
{
"content": "Analyze the given paragraph from a biomedical study. Focus on identifying drug pairings within a specified sentence. Your goal is to determine the nature of these pairings.\n\nPresent your findings as a JSON array. Each element should be an array containing the drug names and their interaction type. Use the exact names from the sentence.\n\nOnly consider the specified sentence for your analysis. The paragraph is for context only.\n\nReturn solely the JSON array. If no pairings are found, return an empty array.\n\nContext Paragraph: Lipoprotein-associated phospholipase A2 mass is significantly reduced in dyslipidemic patients treated with lifestyle modification and combination lipid-modifying drug therapy. Lipoprotein-associated phospholipase A2 (Lp-PLA(2)) mass is a novel inflammatory biomarker. In human blood, Lp-PLA(2) is predominately associated with low-density lipoprotein (LDL). This study examines the ability of lifestyle modification (diet and exercise) and combination lipid therapy to reduce Lp-PLA(2) levels while also determining the relationship between changes in LDL cholesterol and Lp-PLA(2). Thirty dyslipidemic patients who received lifestyle intervention and combination lipid therapy for an average of 6 months were included in these analyses (mean age, 60.9 years); 40% had stable angiographically established coronary artery disease, 40% had the metabolic syndrome, and 70% were men. Drug therapy included omega-3 fish oil , extended-release niacin , colesevelam hydrochloride , and a fixed combination of 10-mg ezetimibe and 40-mg simvastatin . The study revealed a 33% reduction in mean Lp-PLA(2) values (baseline 224.9+/-47.5 vs posttreatment 149.5+/-35.5 ng/mL; P<.001). Significant changes in mean LDL cholesterol from baseline (127.9+/-49.3 vs posttreatment 65.2+/-32.1 mg/dL; P<.001) were also observed. However, regression analysis revealed only a weak positive relationship between changes in LDL cholesterol and Lp-PLA(2) mass (R(2)=0.29; P<.01). Thus, Lp-PLA(2) mass is significantly reduced with lifestyle and combination lipid therapy. Changes in Lp-PLA(2) were only partially explained by the changes observed for LDL cholesterol.\n\nTarget Sentence: Drug therapy included omega-3 fish oil , extended-release niacin , colesevelam hydrochloride , and a fixed combination of 10-mg ezetimibe and 40-mg simvastatin .",
"role": "user"
},
{
"content": "[[\"omega-3\", \"niacin\", \"colesevelam hydrochloride\", \"ezetimibe\", \"simvastatin\", \"POS\"]]",
"role": "assistant"
}
] |
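
The record above uses a different output shape: a list of variable-length arrays, each holding the participating drug names followed by a single interaction label. A small structural check is sketched below; the label set {POS, NEG, COMB} is taken from this task's usual guidelines and only POS is attested in the example, so treat the inventory as an assumption.

```python
ALLOWED = {"POS", "NEG", "COMB"}  # assumed label inventory; only POS appears above

def check_combo_entry(entry, sentence):
    """entry = [drug_1, ..., drug_n, label]: at least two drugs, all copied
    verbatim from the target sentence, plus one interaction label."""
    *drugs, label = entry
    assert label in ALLOWED, f"unknown label {label!r}"
    assert len(drugs) >= 2, "a combination needs at least two drugs"
    missing = [d for d in drugs if d not in sentence]
    assert not missing, f"not verbatim in sentence: {missing}"
    return True

sentence = ("Drug therapy included omega-3 fish oil , extended-release niacin , "
            "colesevelam hydrochloride , and a fixed combination of 10-mg "
            "ezetimibe and 40-mg simvastatin .")
print(check_combo_entry(["omega-3", "niacin", "colesevelam hydrochloride",
                         "ezetimibe", "simvastatin", "POS"], sentence))
```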
science.chemprot_re | science.chemprot_re.87 | [
{
"content": "Examine the provided text and determine the interactions between chemical entities and genes. Focus on identifying any connections present.\n\nPresent your findings in a structured JSON format. Each relationship should be represented as an array entry. If no interactions are detected, return an empty array.\n\nText Content:\n\nInhibition of gelatinase A (MMP-2) by batimastat and captopril reduces tumor growth and lung metastases in mice bearing Lewis lung carcinoma. We have examined the effects of the synthetic matrix metalloproteinase inhibitor, batimastat (BB-94) and the angiotensin-converting enzyme inhibitor, captopril, on metalloproteinase activity of murine Lewis-lung-carcinoma cells (3LL) in vitro, and on local growth and lung metastasis of the same tumor implanted intramuscularly in syngeneic C57BL/6 mice. The effect of BB-94 and captopril on the survival of the 3LL-tumor-bearing mice was also examined. Here we report that captopril treatment resulted in decreased transcription and protein levels of gelatinase A by 3LL cells. Both BB-94 and captopril also prevented substrate degradation by gelatinase A and B released in conditioned medium by cultured cells. Treatment of tumor-bearing animals with BB-94 (i.p.) or captopril (in drinking water) resulted in significant inhibition of the mean tumor volume (25 and 33% respectively) and of the mean lung metastasis number (26 and 29% respectively). When both agents were given, they acted in synergy, resulting in 51 and 80% inhibition of tumor growth and metastasis. The survival time of the mice treated with both BB-94 and captopril was also significantly longer compared with the groups treated with each agent alone or with the vehicle. Our data support the hypothesis of an essential role of metalloproteinase (s) in the metastatic process. Moreover, blockade of invasion, angiogenesis and other processes mediated by metalloproteinases may underlie the anti-tumor and anti-metastatic effect of BB-94 and captopril and their combination. It is conceivable that this combination could be tested in selected clinical conditions as an adjuvant modality to cytotoxic therapy.",
"role": "user"
},
{
"content": "[[\"batimastat\", \"INHIBITOR\", \"MMP-2\"], [\"batimastat\", \"INHIBITOR\", \"matrix metalloproteinase\"], [\"batimastat\", \"REGULATOR\", \"metalloproteinase\"], [\"captopril\", \"INHIBITOR\", \"MMP-2\"], [\"captopril\", \"INHIBITOR\", \"angiotensin-converting enzyme\"], [\"BB-94\", \"INDIRECT-DOWNREGULATOR\", \"gelatinase A and B\"], [\"BB-94\", \"INHIBITOR\", \"matrix metalloproteinase\"], [\"captopril\", \"INHIBITOR\", \"gelatinase A\"], [\"batimastat\", \"INHIBITOR\", \"gelatinase A\"], [\"captopril\", \"REGULATOR\", \"metalloproteinase\"], [\"captopril\", \"INDIRECT-DOWNREGULATOR\", \"gelatinase A\"], [\"BB-94\", \"REGULATOR\", \"metalloproteinase\"]]",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.4329 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nCatalytic Deuterium Incorporation within Metabolically Stable \\xce\\xb2-Amino C\\xe2\\x94\\x80H Bonds of Drug Molecules\n\nContent Extracts:\n\n<p>Deuterium-labeled pharmaceuticals are pivotal diagnostic tools in research aimed at determination of the corresponding biological outcomes and metabolites.1-6 Drugs containing C─D bonds have been prepared through multistep synthesis involving the reduction of unsaturated or halogenated intermediates.3 However, innovations in organometal-catalyzed C─H activation have enabled direct hydrogen isotope exchange (HIE) at C─H bonds for deuterium.4-6 In particular, HIE reaction targeting C(sp3)─H bonds of pharmaceuticals that contain an N-alkylamine unit is in high demand because these entities constitute over 50% of the top-selling commercial drugs.7 The state-of-the-art processes include Beller's α- and β-amino C─H deuteration of metoclopramide (A1) and two other structurally related drug molecules promoted by Ru-based Shvo catalyst (Figure 1A).5 MacMillan's photoredox-mediated α-deuteration and α-tritiation represents a notable strategy for isotopic labelling of a range of N-alkylamine-based drugs (Figure 1B).6 Still, development of methods for regioselective deuteration of poorly reactive β-amino C(sp3)─H bonds of drugs containing Lewis acid- and base-sensitive functional groups with an inexpensive deuterium source and promoted by non-precious metal-based catalysts is a significant challenge.8,9 Regioselective deuteration of metabolically stable β-amino C─H bonds (vs more labile α-amino C─H bonds) is particularly attractive as it minimizes the loss of the label due to exchange.1-3</p><p>We began by contemplating a possible way to design a method for deuteration of biologically active compounds that contain an N-alkylamine unit (1) with readily available acetone-d6\n2 as a source of deuterium (Figure 1C). We considered utilizing a combination of Lewis acid and Brønsted base catalysts that would function cooperatively.10-12 We envisioned that B(C6F5)3 could receive a hydride from an amine (1), generating a borohydride and an iminium ion (I).13-19 Subsequently, a Brønsted basic amine catalyst would deprotonate I, furnishing enamine II.13-16 Concurrently, the N-alkylamine could dedeuterate B(C6F5)3-activated acetone-d6\n2, generating an enolate and a deuterated ammonium ion (III).20 Ensuing deuteration (IV) of the enamine II by III gives an iminium ion; subsequent borohydride reduction would afford β-deuterated product 3. Here, we report the development of a catalyst system for β-amino C─H deuteration of bioactive amines.</p><p>We first set out to identify a desirable combination of catalysts. We probed the ability of B(C6F5)3 and various Brønsted bases to catalyze the reaction between verapamil 1a and acetone-d6\n2 (6.8 equivalent), generating 3a (Table 1). Treatment of 1a and 2 with 5.0 mol% B(C6F5)3 and 10 mol% NEt3, NBn3, or 1,2,2, 6,6-pentamethylpiperidine (PMP) afforded 3a in >90% yield (toluene, 125 °C, 1 h); 16-34% of β-amino C─H bonds were converted to C─D bonds (entries 1–3). With more Brønsted basic 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU), no labeling was observed (entry 4). 
When the transformation was performed without a Brønsted base co-catalyst, 3a was generated with 35% and 21% deuterium incorporation (entry 5), suggesting that N-alkylamines 1a and/or 3a can promote deprotonation of the iminium ion (I, Figure 1C; NR3 = 1a and/or 3a). Deuterium incorporation diminished to <10% with 5.0 mol% of B(C6F5)3 and reaction temperature of 100 °C (entry 6), but when the reaction mixture was heated at 150 °C, 3a was obtained with >80% labeling (entry 7). With 10 mol% B(C6F5)3 there was only minor improvement (entry 8, 88% and 92%). However, by reacting 1a with two batches of 5.0 mol% of B(C6F5)3 and 2 (6.8 equivalent), we were able to obtain 3a with >95% deuterium incorporation (entry 9). There was no labeling without B(C6F5)3 or when the less hindered BF3 or the less Lewis acidic BPh3 were used (entries 10–12). These findings support the notion that strongly acidic B(C6F5)3 together with sterically demanding and electron-rich N-alkylamine constitute the most effective combination.10</p><p>Acyclic β-amino C─H bonds in a number of pharmaceuticals (1a–1j) underwent efficient deuteration (Table 2). This protocol was found to be compatible with compounds that contain an array of Lewis acid-sensitive functional groups. In addition to the N-alkylamine units of 1a–1j, cyano (1a), ester (1b), amide (1d, 1e, 1f) and ketone (1j) were tolerated to give the deuteration products 3a–3j in 77 to >95% yield after purification by silica gel chromatography. Labeling took place with high regioselectivity for β-amino C─H bonds. In addition, drug molecules that possess acidic α-carbonyl C─H bonds also underwent efficient deuteration (3d, 3j) based on analysis of 1H NMR spectra of unpurified mixtures. Nonetheless, in case of 3d, α-carbonyl C─D bonds of indolin-2-one underwent H─D exchange during purification.</p><p>For substrates that possess electronically and sterically disparate β-amino C─H bonds (1a, 1b, 1c, 1d, 1g, 1j), deuterium labeling occurred at varying levels. With verapamil 1a, benzylic C2─H and non-benzylic C4─H bonds were converted to C─D bonds in >95%, but deuteration of non-benzylic C4─H bonds was more efficient (Table 1, entries 5-9). With dicyclomine 1b, while 90% of C2'─H bonds of N-ethyl groups was deuterated, only 23% of C1─H bonds adjacent to an ester group were converted to C─D bonds. Similar reactivity was observed with clomiphene 1c: 90% of C2'─H bonds were converted into C2'─D bonds, but 15% of α-aryloxy C1─H bonds were deuterated. Ropinirole 1d was labeled at benzylic C1─H bonds (63%), and C2'─H bonds of N-propyl group (86%).</p><p>Although the catalytic protocol tolerates an array of functional groups, isotopic labeling was more efficient with substrates bearing a protecting group. For instance, 80% of β-C─H bonds of lidocaine 1e, which possesses acidic amide N─H bonds, was converted to C─D bonds to give 3e. However, deuteration of N-benzyl-protected lidocaine 1f proceeded more efficiently to afford 3f with 96% d-incorporation; in addition, deuteration of α-carbonyl C─H bonds was also observed (9%). Cinacalcet 1g containing a secondary amine moiety was a compatible substrate to provide 3g (63% [C2] and 8% [C2']), and N-benzyl-protected cinacalcet 3h was obtained with >98% of β-amino C2─H bonds selectively converted into C─D bonds. With less sterically hindered secondary amines nortriptyline 1i and propafenone 1j, their reaction with acetone-d6 may inhibit labeling. However, with an N-benzhydryl group installed, 3i and 3j could be readily generated. 
Silyl protection of the secondary alcohol proved to be effective in the case of propafenone 1j, giving 3j (76% [C2] and 0% at more sterically hindered [C2']).</p><p>Next, we investigated possible labeling of various pharmaceuticals that contain cyclic β-amino C─H bonds (Table 3; 1k–1s). A variety of Lewis acid-sensitive heterocycles such as piperidine (1k–1q), 1,4-diazepane (1r), piperazine (1s), thiophene (1k, 1l), indanone (1m), benzodioxole (1o, 1p), benzothiophene (1q), as well as benzoimidazole (1r) were tolerated to give the corresponding deuteration products in 85 to >95% yield. With clopidogrel 1k, prasugrel 1l, and donepezil 1m, both β-amino C─H bonds and enolizable α-carbonyl C─H bonds underwent efficient deuteration to give 3k–3m, but acidic α-keto C─D bond of 3l was converted to C─H bond during purification. Less acidic α-amide C─H bond of bupivacaine 1n was not deuterated. With bupivacaine 1n and raloxifene 1q that contain acyclic and cyclic β-amino C─H bonds, labeling of the cyclic C─H bond was more efficient (>90% vs ≤29% for the acyclic C─H). N-Benzyl (1o) and N-benzhydryl (1p)-protected paroxetine gave 3o and 3p, respectively. The level of labeling for the more hindered 3p (94%) was superior to 3o (76%). Furthermore, deuteration of the C5─H bond occurred selectively, while the tertiary C3─H bond remained intact. Deuteration of emedastine 1r was found to take place at C2─H (33%) and C6─H (60%) bonds. All eight C─H bonds of piperazine ring of O-TBS-protected dropropizine 1s underwent deuteration to afford 3s (>86%). Using this protocol, α-amino C─H deuteration occurred only when these bonds were also alpha to a carbonyl group (3f) or beta to a N atom (3s).</p><p>The method is scalable. Treatment of 1.4 g (3.0 mmol) of verapamil 1a with 5.0 mol% B(C6F5)3, 20 mmol of acetone-d6 (toluene, 12 h, 150 °C), followed by filtration through a pad of silica gel and repeating the aforementioned procedure afforded 3a in 95% yield (2.9 mmol, 1.3 g) and >93% deuterium incorporation (Scheme 1).</p><p>To summarize, we have designed an efficient and regioselective deuterium labeling of β-amino C─H bonds in various bioactive molecules, provided that sufficient steric congestion is present around the reacting amine. By implementing the cooperative action of B(C6F5)3 and N-alkylamine catalyst system, we show that it is possible to convert an N-alkylamine-based pharmaceutical compound to the corresponding enamine, and that the same catalyst system can generate a labeling agent from acetone-d6. The principles outlined herein, entailing conversion of amine containing drugs into enamines and its reaction with in situ generated electrophilic partner, provide a new rational framework for late-stage modification of a drug candidate. Studies along these lines are in progress.</p>",
"role": "user"
},
{
"content": "An efficient deuteration process of \\xce\\xb2-amino C\\xe2\\x94\\x80H bonds in various N-alkylamine-based pharmaceutical compounds has been developed. Catalytic reactions begin with the action of Lewis acidic B(C6F5)3 and Br\\xc3\\xb8nsted basic N-alkylamine, converting a drug molecule into the corresponding enamine. The acid/base catalysts also promote the dedeuteration of acetone-d6 to afford a deuterated ammonium ion. Ensuing deuteration of the enamine then leads to the formation of \\xce\\xb2-deuterated bioactive amines with up to 99% deuterium incorporation.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3235 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nQuantitative measurements of HER2 and phospho-HER2 expression : correlation with pathologic response to neoadjuvant chemotherapy and trastuzumab . BACKGROUND Preoperative therapy with chemotherapy and the HER2-targeted monoclonal antibody trastuzumab is valuable for patients with large or locally advanced HER2-positive ( HER2+ ) breast cancers but traditional methods of measuring HER2 expression do not accurately stratify patients for likelihood of response . Quantitative immunofluorescent approaches have the potential to provide a mathematically continuous measure of HER2 . Here we seek to determine whether quantitative measurement of HER2 or phospho-HER2 correlates with likelihood of response to trastuzumab- containing neoadjuvant therapy . METHODS We evaluated core biopsy samples from 27 HER2+ breast cancer patients enrolled in a preoperative clinical trial using trastuzumab , nab-paclitaxel and carboplatin combination therapy ( BrUOG BR-211B ( NCT00617942 ) ) . Tumor core biopsies were taken before initiation of treatment and 9-13 days after patients received \" run-in \" doses of either single agent trastuzumab or nab-paclitaxel . The AQUA method of quantitative immunofluorescence was used for analysis of in situ protein expression . Patients then received 18 weeks of treatment , followed by surgery to assess pathologic response to the neoadjuvant regimen . RESULTS A HER2 score of 2111 by AQUA analysis has been shown to be equivalent to HER2 3+ by immunohistochemical staining in previous studies . Of 20 evaluable patients , 10 cases who achieved a pathologic complete response ( pathCR ) with neoadjuvant treatment had a mean HER2 level of 10251 compared with 4766 in the patients without pathCR ( p = 0.0021 ) . Measurement of phospho-HER2 showed no difference in pathCR vs non-pathCR groups . In 9 patients who had HER2 levels repeated after a single treatment with trastuzumab there was no evidence of a reduction in the HER2 or phospho-HER2 levels following that exposure . CONCLUSIONS High levels of HER2 are associated with achievement of a pathCR in the preoperative setting , while levels of Phospho-HER2 were not predictive of response . This data suggests that accurate measurement of HER2 may help determine the likelihood of response in the pre-surgical setting . Further validation in larger cohorts is required , but this pilot data shows the feasibility of this approach .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"trastuzumab , nab-paclitaxel and carboplatin combination therapy\", \"trastuzumab\", \"nab-paclitaxel\", \"neoadjuvant chemotherapy and trastuzumab\", \"HER2-targeted monoclonal antibody trastuzumab\", \"Preoperative therapy with chemotherapy\"], \"Outcome\": [\"High levels of\", \"pathologic complete response ( pathCR )\", \"in the HER2 or phospho-HER2 levels\", \"levels of Phospho-HER2\", \"Measurement of phospho-HER2\", \"pathologic response\"], \"Participant\": [\"core biopsy samples from 27 HER2+ breast cancer patients enrolled in a preoperative clinical trial using trastuzumab , nab-paclitaxel and carboplatin combination therapy ( BrUOG BR-211B ( NCT00617942 ) )\", \"patients with large or locally advanced HER2-positive ( HER2+ ) breast cancers\"]}",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.366 | [
{
"content": "Analyze the provided clinical trial criteria. Identify and extract distinct entities from the text.\n\nReturn the results as a JSON structure. Ensure the output is strictly in JSON format without any additional commentary.\n\nCriteria Text:\n\nPrevious treatment with brentuximab vedotin.\nPreviously received an allogeneic transplant.\nPatients with current diagnosis of primary cutaneous ALCL (patients whose ALCL has transformed to sALCL are eligible).\nKnown cerebral/meningeal disease including signs or symptoms of progressive multifocal leukoencephalopathy (PML)\nFemale patients who are lactating and breastfeeding or pregnant\nKnown human immunodeficiency virus (HIV) positive\nKnown hepatitis B surface antigen-positive, or known or suspected active hepatitis C infection",
"role": "user"
},
{
"content": "{\"Condition\": [\"meningeal disease\", \"ebral/me ease in\", \"sALCL\", \"progressive multifocal leukoencephalopathy\", \"primary cutaneous ALCL\", \"PML\", \"hepatitis C infection\"], \"Device\": [], \"Drug\": [\"brentuximab\"], \"Measurement\": [\"HIV\", \"hepatitis B surface antigen\", \"human immunodeficiency virus\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [\"allogeneic transplant\"], \"Qualifier\": [\"active\"], \"ReferencePoint\": [], \"Scope\": [\"cerebral/meningeal disease\"], \"Temporal\": [], \"Value\": [\"positive\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.bioasq_factoid_qa | science.bioasq_factoid_qa.129 | [
{
"content": "Given the following biomedical passages, answer the question succinctly. Ensure your response is precise and limited to the answer only.\n\nQuestion: Is Rheumatoid Arthritis more common in men or women?\n\nPassages: Our results show a high prevalence of RA in LAC women with a ratio of 5.2 women per man\nRA in LAC women is not only more common but presents with some clinical characteristics that differ from RA presentation in men. Some of those characteristics could explain the high rates of disability and worse prognosis observed in women with RA in LAC\nIntense anti-CCP2 reaction was 19.8-fold higher in females vs. males,\nmen (n = 67) and women (n = 225)\nResponses to treatment over time were better among men in this prebiologic era; women had worse progression despite similar treatment.\nBMI appears to be associated with RA disease activity in women, but not in men.\nA total of 5,161 RA patients (4,082 women and 1,079 men)\nIn women the DAS28 was significantly higher than in men due to higher scores for general health and tender joints. Likewise, HAQ and VAS pain were rated significantly higher in women.\n432 females, 125 males\nESR significantly increased with age, independent of other variables of disease activity. This increase was more pronounced in male than in female patients\nDisease patterns in RA vary between the sexes; the condition is more commonly seen in women, who exhibit a more aggressive disease and a poorer long-term outcome.\nThe female to male ratio was 2.5:1 and the mean age at diagnosis was 49.4 +/- 14.9 years for women and 55.3 +/-15.6 years for men (P < 0.0003)\nin 244 female and 91 male patients with rheumatoid arthritis.",
"role": "user"
},
{
"content": "Women",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.719 | [
{
"content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: I want to train a multi-path refinement CNN for object contour detection.\n\nKey Terms: object contour detection images\n\nDataset Summaries:\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nGQA: The GQA dataset is a large-scale visual question answering dataset with real images from the Visual Genome dataset and balanced question-answer pairs. Each training and validation image is also associated with scene graph annotations describing the classes and attributes of those objects in the scene, and their pairwise relations.\nBSDS500: Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries.\nCOCO-Tasks: Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\n\nDataset Options: A: COCO B: GQA C: BSDS500 D: COCO-Tasks E: SNIPS",
"role": "user"
},
{
"content": "[\"COCO\", \"BSDS500\"]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.5082 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nwe describe one patient who was affected by an and presented high level increase of serum liver enzymes , along with sever thrombocytopenia .\na 14-year - old boy with an was admitted in the pediatric psychiatric emergency department of alzahra hospital with impaired electrolyte levels , bradycardia , hypotension , liver dysfunction , and thrombocytopenia .\na ten - time increase in liver enzymes and thrombocytopenia were observed on admission . after two months of treatment ,\nimprovement of initial clinical symptoms and recovery of liver enzymes and thrombocytopenia after the treatment suggested that liver dysfunction and thrombocytopenia may be observed in an patients and should be taken care of by physicians .\na 14-year - old boy with an 8-month history of an referred to our outpatient clinic ( department of pediatrics , alzahra hospital ) . because of his physical condition ( weight loss , bradycardia , imbalanced electrolytes , hypothermia , hypotension , and lung and liver dysfunction ) we decided to hospitalize him ( figure 1 ) . a male patient with anorexia nervosa . on admission to the psychiatric unit ,\nhis body weight and body mass index ( bmi ) were 31 kg and 13.2 , respectively .\nin addition , his liver enzymes were high , and his platelet decreased as shown in table 1 .\nother causes of acute liver dysfunction such as hepatotoxic drugs , alcohol , cocaine and viral hepatotoxic diseases were evaluated and then ruled out .\nwe started to treat the patient with conservative management including stabilizing electrolytes by serum therapy and nourishment using a high calorie diet . a marked decrease in serum transaminase and an increase in platelet count\nwere observed with a gradual normalization of all the biochemical parameters within one months of hospitalization .\nhe started eating and his electrolytes got balanced and his family became happy about his recovery .\ntherefore , he was discharged after one months of hospitalization and successfully continued his rehabilitation program on an outpatient basis .\nthis was one of the uncommon cases of an with a high level increase ( 10 times more ) of liver enzymes and sever thrombocytopenia .\nminor degrees of liver injury have been reported in up to 40% of patients with an.13 although the mechanism of liver injury in an has thought to be due to protein - calorie malnutrition of the kwashiorkor type accompanied with fatty changes , this has not been rigorously demonstrated and the mechanism is still unknown.1314 in an , malnutrition has been reported to cause thrombocytopenia along with liver dysfunction.2 one report has described temporal changes in liver enzyme levels , platelet count , and thyroid peroxidase ( tpo ) levels in a patient with an.3 acute liver failure with a massive increase of serum liver enzyme may be a sever , though relatively uncommon , complication caused by severe malnutrition such as that secondary to an.46 as reported in the literature , serum transaminases do not reach such high values during the refeeding syndrome.7 to avoid complications of refeeding syndromes,7 nutritional rehabilitation is started with a low - energy oral and parental nutrition ( 25 kcal / kg body weight ) . 
therefore , follow up of serum liver enzymes should be part of our evaluation to prevent the uncommon but serious risk of acute liver failure .\nour case report showed an an patient with increased level of liver enzymes about 10 times more than normal range , as well as severe thrombocytopenia , which is uncommon .\ntherefore , it is concluded that physicians should be aware of high level increase of liver enzymes and severe thrombocytopenia which may occur in an .",
"role": "user"
},
{
"content": "background : anorexia nervosa ( an ) is a difficult - to - treat psychosomatic disease . \n very few cases of acute liver failure associated with an have been described . \n we describe one patient who was affected by an and presented high level increase of serum liver enzymes , along with sever thrombocytopenia . \n then , we discuss the possible etiopathogenic factors.methods:a 14-year - old boy with an was admitted in the pediatric psychiatric emergency department of alzahra hospital with impaired electrolyte levels , bradycardia , hypotension , liver dysfunction , and thrombocytopenia.results:a ten - time increase in liver enzymes and thrombocytopenia were observed on admission . after two months of treatment , \n the levels were within the normal range.conclusions:improvement of initial clinical symptoms and recovery of liver enzymes and thrombocytopenia after the treatment suggested that liver dysfunction and thrombocytopenia may be observed in an patients and should be taken care of by physicians .",
"role": "assistant"
}
] |
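
For summarization records such as the two above, a generated abstract is usually scored against the reference with ROUGE. A minimal sketch, assuming the rouge-score package is installed (it is not bundled with these records), with an invented candidate summary purely for the example:

```python
# pip install rouge-score
from rouge_score import rouge_scorer

reference = ("improvement of initial clinical symptoms and recovery of liver "
             "enzymes and thrombocytopenia after the treatment suggested that "
             "liver dysfunction and thrombocytopenia may be observed in an patients .")
prediction = ("liver dysfunction and severe thrombocytopenia can occur in "
              "anorexia nervosa and resolve with conservative treatment .")

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference, prediction).items():
    # each value is a named tuple with precision / recall / fmeasure fields
    print(f"{name}: P={score.precision:.2f} R={score.recall:.2f} F={score.fmeasure:.2f}")
```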
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.3137 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nHydrogenation of unactivated enamines to tertiary amines: rhodium complexes of fluorinated phosphines give marked improvements in catalytic activity\n\nContent Extracts:\nIntroduction\n<p>A potentially very direct method to produce tertiary amines is by the hydrogenation of enamines. While the hydrogenations of enamides, bearing coordinating acyl substituents is probably the most developed and studied of all hydrogenation processes, studies on the hydrogenation of unactivated enamines are scarce and several important problems need to be solved. Some time ago, the enantioselective variant was highlighted by several pharma companies as one of the more important aspirational transformations for production of pharmaceuticals [1][2][3]. Several examples of highly enantioselective and quite reactive processes have appeared for enamines that are activated by a chelating group, or can potentially isomerise to an NH imine during catalysis [4,5]. A few papers have appeared with good enantioselectivity for some quite specific enamines, but despite the importance of these contributions, catalyst loadings around 1 mol % are used [6][7][8][9][10][11][12][13]. Commercial applications generally require catalyst loading below 0.05 mol %. We are not aware of any achiral or chiral homogeneous catalysts that promote these reactions at this substrate/catalyst ratio, so the intrinsic lower reactivity of these substrates needs to be addressed with new catalysts. In a recent study on hydroaminomethylation, i.e., domino hydroformylation-enamine formation-enamine hydrogenation, we noted that the enamine hydrogenation was the slowest reaction in the process, and use of an electron-deficient phosphine sped up the reduction step significantly [14]. DFT calculations revealed that in the hydrogenation of these aldehyde-derived enamines, the final stage of hydrogenation, reductive elimination was the rate-determining step. This is in contrast to nearly all studies on homogeneous hydrogenation of alkenes where oxidative addition, and probably more often migratory insertions are rate-determining and accelerated by electron-rich phosphine ligands. Prior to embarking on a quest for highly active Rh catalysts for enantioselective enamine hydrogenation, we investigated if more commonly encountered enamine substrates are also reduced much faster using Rh complexes of electron-withdrawing phosphines. In this paper, we report how a range of enamines can be successfully hydrogenated in high yield using low levels of rhodium, including some very deactivated enamines that do not hydrogenate using conventional catalysts.</p>\n\nResults and Discussion\n<p>The majority of the enamines produced in this study were synthesised from the parent ketones and secondary amines by adapting literature procedures (Scheme 1) [15]. Since isolation of pure enamines is not a completely trivial task, Supporting Information File 1 gives full details for the synthesis and purification of enamines 1a-i. One of the main modifications made to the synthetic procedure is at the end of the reaction, wet diethyl ether was added in order to precipitate all titanium salts (this strategy was previously used after formation of imine bonds using TiCl 4 ) [16]. Enamine 1g was more stable than all other enamines with disubstituted double bond studied here; no hydrolysis was observed in wet chloroform even after 6 hours. 
It is also worth mentioning that tetrasubstituted enamines are very stable towards hydrolysis. Consequently, enamines 1b and 1c were isolated by acid-base work-up with purities of over 99% (see Supporting Information File 1 for details).</p><p>Enamine 1j cannot be prepared using this strategy, and therefore we developed a new branched-selective hydroaminovinylation procedure [17][18][19][20]. Some time ago, this enamine was detected in a product mixture with up to 39% selectivity [20]. A key aspect that prevents better selectivity is that, in general, Rh catalysed hydroformylations of 'alkyl' alkenes of type RCH2CH=CH2 give mainly the linear product [21,22]. Since we had recently discovered that Rh complexes of the 'BOBPHOS' ligand unexpectedly give unprecedented branched regioselectivity in enantioselective hydroformylation of alkyl- and arylalkenes [23,24], we reconsidered this cyclisation reaction using the new catalyst (Scheme 2). We were pleased to find that the selectivity is increased to 78%. Since the desired product is achiral, there is no need to use enantiopure BOBPHOS for this synthesis. When the reaction was performed on 8 mmol scale, using a BOBPHOS sample made from racemic biphenol derivative 3, the enamine 1j was isolated in an overall yield of 60%.</p><p>We initially wanted to establish the generality of the previous observation that electron-withdrawing ligands enhance the rate of Rh catalysed hydrogenation relative to more electron-donating ligands such as triphenylphosphine. The hydrogenation of enamine 1e at an S/Rh ratio of 250 at 65 °C proceeded at a suitable rate, such that simply measuring conversion at the times given provides a meaningful measure of the relative rates of hydrogenation for Rh catalysts derived from electron-donating and electron-withdrawing ligands. A screen of monodentate ligands was performed for hydrogenation of 1e with catalysts derived from ligands 4-9 (Scheme 3). Compared to triphenylphosphine, more electron-poor ligands, particularly commercially available 7, 8 [25] or also commercial product 9, show faster rates of hydrogenation of 1e. It can be envisaged that other less electron-donating phosphines could also be used to good effect, providing they are stable under the reaction conditions. It is possible that stability is an issue with the strong π-acceptor ligand triphenylphosphite. We note here that an earlier attempt by some of us using chiral phosphites in this type of reaction gave very low conversions to product under these conditions. The ligand electronic effect clearly supports our earlier proposal of reductive elimination as the rate-determining step in this process [14]. Using readily available simple ligand 8, combined with [Rh(COD)Cl]2, we also studied the hydrogenation of a range of other enamines with [Rh(COD)Cl]2/PPh3 as a control. Table 1 shows very clearly the improved performance of the less strongly donating phosphine ligand for this process. For enamines 1g, 1h and 1i, experiments with much lower catalyst loadings were performed in order to prove that the rate of hydrogenation is faster when 8 is used instead of 4. Of particular note is the hydrogenation of the deactivated enamines 1b and 1c. It is well known that, even without deactivating nitrogen substituents, the hydrogenation of tetrasubstituted alkenes is not generally achieved with Rh catalysts [26,27]; Crabtree's catalyst is often used to accomplish this type of task [26,27]. 
The ability of this catalyst combination to conduct this type of transformation, as shown in Table 1, entries 6 and 8, is of synthetic value.</p><p>Trisubstituted enamine 1d (Table 1, entry 9) is slower to reduce than all disubstituted enamines (except entry 1). Enamine 1j (Table 1, entry 31) shows a much faster rate of hydrogenation than 1d (entry 9), presumably due to the fact that there is ring strain due to the double bond in a 6-membered ring, which is released after the double bond is hydrogenated. 1a does not get hydrogenated with the catalysts studied. It is likely that this is due to the substrate binding to the catalyst via the pyridine nitrogen and deactivating the catalyst. In order to provide support for this, the normally high-yielding hydrogenation of 1d was carried out in the presence of 30 equivalents of pyridine relative to Rh, and the conversion dropped to 10%. Comparing Table 1, entries 12, 14 and 16, it is clear that electron-poor enamines get hydrogenated faster. A possible reason for the enamine 1f being reduced more slowly than 1g may come from the fact that 1f is a much more stable enamine (see enamine synthesis section, Supporting Information File 1).</p><p>It is well known that the reductive elimination is sped up with more bulky ligands - i.e., when the bulk around the transition state is larger, this step occurs more readily. Enamines 1h and 1i are more bulky due to their N-benzyl substituents, and therefore are hydrogenated at faster rates. Table 1, entry 29 represents a TON of 4550 mol mol −1 , which is, to the best of our knowledge, the highest TON in an enamine hydrogenation reported to date. We found it convenient to carry out these reactions at 30-60 bar of H 2 gas (in order to compare reactivities of enamines with triphenylphosphine as a ligand). Full conversion is also possible at 5 bar pressure (Table 1, entry 30), but we did not observe product using a balloon of hydrogen (~1 bar). The latter observation contrasts somewhat with the results of reference [7], where, using 1 mol % of a [Rh(diphosphine)(COD)] cation on a disubstituted enamine, complete conversion can be realised in 2-18 hours. It can be assumed that the catalysts used here are less efficient at activating hydrogen relative to more electron-rich metal systems, meaning that below a certain pressure threshold, hydrogen activation does not proceed at a sufficient rate.</p><p>In order to prove that toluene is not the only solvent where an electronic effect holds, a polar protic solvent (MeOH) was chosen (Table 2). The electronic effect still holds in MeOH as a solvent, although it is less pronounced, and the best rate of conversion is found in toluene. Another solvent explored in this study was (R)-limonene. This solvent is now being used as a green alternative to hexane in the cleaning industry and extraction [28][29][30], but has barely been exploited in synthetic chemistry so far [29]. While being an environmentally benign, fairly cheap, waste-derived chemical, it might seem counter-intuitive to use it in hydrogenation since it contains 2 double bonds itself. However, as was shown above, enamine hydrogenation benefits from electron-poor ligands, so the hope was that the enamine hydrogenation would outcompete limonene hydrogenation. In addition, this solvent enables us to study the relative reactivities of these C=C bonds, as well as possibly giving a greener procedure. 
Examples of hydrogenation of 1h in limonene as a solvent are shown in Table 3.</p><p>The results shown in Table 3 suggest that limonene is a promising solvent for this process. As expected, triphenylphosphine shows higher selectivity in hydrogenation of limonene's disubstituted double bond, and low conversion to amine. Ligand 8 allows good conversion to amine with relatively low amounts of limonene hydrogenated at the least substituted double bond. While this solvent is not completely inert, it is envisaged that the mixture of limonene and dihydrolimonene (10) would be a perfectly suitable solvent mixture to recycle and reuse. (R)-Limonene is a high boiling solvent, creating a disadvantage for processes where the solvents are removed by evaporation. However, in amine synthesis in general, amines are isolated by extraction into acid, and this was demonstrated here (see Supporting Information File 1). We suggest that (R)-limonene is worth considering as a sustainable, benign solvent for amine synthesis in the future.</p>\n\nConclusion\n<p>Overall, the primary outcome from this study is to demonstrate that highly active Rh catalysts for enamine hydrogenation are a possibility, but they require quite different ligands to those needed for enamide hydrogenation. From a synthetic perspective, large scale reduction processes generally prefer the use of hydrogen gas to any other reductant, since it potentially saves on cost, waste, atom-economy, solvent and water use; the catalysts identified here could be useful in this regard. While it is possible some heterogeneous hydrogenation catalysts could accomplish enamine reductions, the issues with functional group tolerance would be problematic in many cases. From a more general synthetic viewpoint, the use of reagents such as sodium triacetoxyborohydride or sodium cyanoborohydride can be appealing at small scale where the practical issues noted above are not so important. However, the formation of tertiary amines from aryl ketones using hydride reagents has been reported to be problematic [31]. In addition, the hydride reductions, whether carried out as reductive amination or reduction of enamines, need stoichiometric acetic acid to promote the formation of the iminium ion that is the substrate reduced in hydride reductions, which might not be compatible with other functional groups. To the best of our knowledge, the hydrogenation of tetrasubstituted enamines has not been carried out before. The use of the green, non-toxic and renewable solvent (R)-limonene is introduced here as a potentially promising solvent for amine synthesis. This solvent could prove a particularly useful green solvent for any reaction that involves an aqueous/organic work-up as a purification step, particularly if catalysts could be recycled, although that is likely to be challenging in moisture-sensitive catalytic hydrogenation chemistry.</p>
Research on the creation of enantioselective enamine hydrogenation catalysts that can operate at industrially acceptable catalyst loadings may well benefit from chiral π-acceptor phosphines as ligands, and this is being actively researched in our laboratory.</p>",
"role": "user"
},
{
"content": "In the hydrogenation of sluggish unactivated enamine substrates, Rh complexes of electron-deficient phosphines are demonstrated to be far more reactive catalysts than those derived from triphenylphosphine. These operate at low catalyst loadings (down to 0.01 mol %) and are able to reduce tetrasubstituted enamines. The use of the sustainable and environmentally benign solvent (R)-limonene for the reaction is also reported with the amine isolated by acid extraction.",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.141 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nInfluence of low ultrasound intensity on the degradation of dextran catalyzed by dextranase. In our current research work, the effect of ultrasound irradiation on the enzymatic activity and enzymatic hydrolysis kinetic parameters of dextran catalysis by dextranase were investigated. Furthermore, the effects of ultrasound irradiation on the structure of dextranase were investigated with the aid of fluorescence spectroscopy and circular dichroism (CD) spectroscopy. The maximum activity of dextranase was observed when the sample was treated with ultrasound at 25 kHz, 40 W for 15 min, under which the enzyme activity increased by 13.43% compared the routine thermal incubation at 50 °C. Experimental Kinetics results, demonstrated that, both the V(max) and K(M) values of dextranase increased with ultrasound-treated compared with the incubation at 50 °C. Likewise, both the catalytic and specificity constants were higher under the effects of an ultrasonic field, indicating that, the substrate is converted into the product at an increased rate when compared with the incubation at 50 °C. On the other hand, fluorescence and CD spectra reflected that the ultrasound irradiation had increased the number of tryptophan on dextranase surface with increased α-helix by 15.74% and reduced random coil by 5.41% upon ultrasound-treated dextranase protein compared to the control, which were helpful for the improvement of its activity.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"tryptophan\"]}",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.2548 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nKongkam et al. 2009 (40) Case series report Morphine or other opioid and\nnonopioid drugs Meningitis in 2 of 13 patients (15.4%); paraspinous abscess with\nbacterial meningitis during IT pump use in one patient required pump removal; subclavian venous catheter sepsis and intracardiac fungal ball…",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.3749 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: **The proposed AM algorithm learns a generative model of real images using a prior distribution learned by a D\n\nKeywords: image synthesis images\n\nDataset Information:\nSynbols: Synbols is a dataset generator designed for probing the behavior of learning algorithms. By defining the distribution over latent factors one can craft a dataset specifically tailored to answer specific questions about a given algorithm.\nRL Unplugged: RL Unplugged is suite of benchmarks for offline reinforcement learning. The RL Unplugged is designed around the following considerations: to facilitate ease of use, the datasets are provided with a unified API which makes it easy for the practitioner to work with all data in the suite once a general pipeline has been established.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nScan2CAD: Scan2CAD is an alignment dataset based on 1506 ScanNet scans with 97607 annotated keypoints pairs between 14225 (3049 unique) CAD models from ShapeNet and their counterpart objects in the scans. The top 3 annotated model classes are chairs, tables and cabinets which arises due to the nature of indoor scenes in ScanNet.\n\nAvailable Datasets: A: Synbols B: RL Unplugged C: ImageNet D: Scan2CAD",
"role": "user"
},
{
"content": "ImageNet",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.4220 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nManagement of the thyroid isthmus in tracheostomy : a prospective and retrospective study . The thyroid isthmus is often encountered while a tracheostomy is being performed . This study details retrospective and prospective comparison of electrocautery division of the isthmus with older techniques . In this study , electrocautery division of the thyroid isthmus during tracheostomy is faster and as safe as other techniques with respect to blood loss , perioperative complications , and airway outcome .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"electrocautery division of the thyroid isthmus\", \"electrocautery division of the isthmus\", \"tracheostomy\"], \"Outcome\": [\"thyroid isthmus\", \"blood loss , perioperative complications , and airway outcome\", \"faster and as safe\"], \"Participant\": [\"Management of the thyroid isthmus in tracheostomy : a prospective and retrospective study .\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.1700 | [
{
"content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: I want to jointly solve three problems (i.e. saliency estimation, action recognition and\n\nKey Terms: spatio-temporal problems video\n\nDataset Summaries:\nHMDB51: The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips.\nSumMe: The SumMe dataset is a video summarization dataset consisting of 25 videos, each annotated with at least 15 human summaries (390 in total).\nV-COCO: Verbs in COCO (V-COCO) is a dataset that builds off COCO for human-object interaction detection. V-COCO provides 10,346 images (2,533 for training, 2,867 for validating and 4,946 for testing) and 16,199 person instances.\nNTU RGB+D: NTU RGB+D is a large-scale dataset for RGB-D human action recognition. It involves 56,880 samples of 60 action classes collected from 40 subjects.\nDISFA: The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4844 frames each, with 130,788 images in total. Action unit annotations are on different levels of intensity, which are ignored in the following experiments and action units are either set or unset.\nPKU-MMD: The PKU-MMD dataset is a large skeleton-based action detection dataset. It contains 1076 long untrimmed video sequences performed by 66 subjects in three camera views.\n\nDataset Options: A: HMDB51 B: SumMe C: V-COCO D: NTU RGB+D E: DISFA F: PKU-MMD",
"role": "user"
},
{
"content": "[\"HMDB51\", \"SumMe\"]",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.1861 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Supervised Classification: Quite a Brief Overview \nChapter Title: Feature Extraction and Selection \n1. Abstract of Cited Paper (BIB001): Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods. \n2. Abstract of Cited Paper (BIB002): A conventional way to discriminate between objects represented by dissimilarities is the nearest neighbor method. A more efficient and sometimes a more accurate solution is offered by other dissimilarity-based classifiers. They construct a decision rule based on the entire training set, but they need just a small set of prototypes, the so-called representation set, as a reference for classifying new objects. Such alternative approaches may be especially advantageous for non-Euclidean or even non-metric dissimilarities. The choice of a proper representation set for dissimilarity-based classifiers is not yet fully investigated. It appears that a random selection may work well. In this paper, a number of experiments has been conducted on various metric and non-metric dissimilarity representations and prototype selection methods. Several procedures, like traditional feature selection methods (here effectively searching for prototypes), mode seeking and linear programming are compared to the random selection. In general, we find out that systematic approaches lead to better results than the random selection, especially for a small number of prototypes. Although there is no single winner as it depends on data characteristics, the k-centres works well, in general. For two-class problems, an important observation is that our dissimilarity-based discrimination functions relying on significantly reduced prototype sets (3-10% of the training objects) offer a similar or much better classification accuracy than the best k-NN rule on the entire training set. This may be reached for multi-class data as well, however such problems are more difficult.\n\nReview Chapter:",
"role": "user"
},
{
"content": "The curse of dimensionality indicates that in particular cases it can be beneficial for the performance of our classifier to lower the feature dimensionality. This may be applicable, for instance, if one has little insight in the classification problem at hand, in which case one tends to define lots of potentially useful features and/or dissimilarities in the hope that at least some of them pick up what is important to discriminate between the two classes. Carrying out a more or less systematic reduction of the dimensionality after defining such large class of features can lead to acceptable classification results. Roughly speaking, there are two main approaches BIB002 BIB001 . The first one is feature selection and the second one is feature extraction. The former reduces the dimensionality by picking a subset from the original feature set, while the latter allows the combination of two or more features into fewer new features. This combination is often restricted to linear transformations, i.e., weighted sums, of original features, meaning that one considers linear subspaces of the original feature space. In principle, however, feature extraction also encompasses nonlinear dimensionality reductions. Feature selection is, by construction, linear, where the possible subspace is even further limited to linear spaces that are parallel to the feature space axes. Lowering the feature dimensionality by feature selection can also aid in interpreting classification results. At least it can shed some light on what features seem to matter mostly and possibly we can gain some insight into their interdependencies, for instance, by studying the coefficients of a trained linear classifier. Aiming for a more interpretable classifier, we might even sacrifice some of the performance for the sake of a really limited feature set size. Feature selection can also be used to select the right prototypes when employing the dissimilarity approach BIB003 , as in this case every feature basically corresponds to all distances to a particular prototype.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.612 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEvaluation of asthma knowledge and quality of life in Hungarian asthmatics . BACKGROUND The objective was to develop an educational instrument , to assess its impact as an intervention instrument and to examine quality of life ( QoL ) . METHODS 119 asthmatics were randomized ( 64 in the intervention and 55 in the reference group ) . The education instrument was developed based on the EuroPharm-Forum Guidelines and its impact assessed by a self-developed questionnaire . Patients ' QoL , asthma knowledge was assessed twice , once before and after the education seminar , education was only provided for the intervention group . QoL was measured with the St George 's Respiratory Questionnaire ( SGRQ ) and a visual analogue scale ( VAS ) . RESULTS We found significant differences in answers to the asthma questions , by 40 % improvement , but no changes in the control group . In inhaler-use technique , we could not find significant changes neither in the intervention nor in the control group . There were no significant differences between the results of the two visits neither with the VAS nor with the SGRQ on the QoL data . CONCLUSION The results indicate that asthmatics experience lower QoL . As the subjects were regularly controlled asthmatics they had better general knowledge and inhaler-use technique was expected . The results suggest that it is necessary to regularly refresh asthma knowledge , to assess patients ' self-management plans to achieve long-term effectiveness of asthma management .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"reference\", \"educational instrument\", \"education instrument\", \"education seminar\"], \"Outcome\": [\"QoL data\", \"St George \\u0027s Respiratory Questionnaire ( SGRQ )\", \"SGRQ\", \"visual analogue scale ( VAS )\", \"QoL\", \"VAS\"], \"Participant\": [\"119\", \"asthmatics\"]}",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.2582 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nTwo-part implants inserted in a one-stage or a two-stage procedure. A prospective comparative study.\nReport Abstract 1:\nThe aim of this study was to evaluate the feasibility of using a two-part implant system in a one-stage procedure and to monitor the microflora in the peri-implant area in relation to clinical and radiographic outcome.\n After randomisation, 40 edentulous patients (Cawood & Howell class V-VI) received two IMZ implants in the anterior mandible inserted by either a one-stage (n = 20) or a two-stage (n = 20) surgical procedure for overdenture treatment. A standardised clinical and radiographic evaluation was performed after denture insertion as well as 6 and 12 months thereafter. Twelve months after loading, peri-implant samples were collected and analysed for the presence of putative periodontal pathogens using culture technique.\n No striking differences were found between the two groups with regard to the clinical parameters during the evaluation period. The mean bone loss in the first year of functioning was 0.6 mm in both groups. With regard to the gingiva score, plaque score, bleeding score or bone loss between T0 and T12, no associations were found with the presence of the cultured microorganisms. An association was present between pockets >or= 4 mm and the presence of Peptostreptococcus micros in the two-stage group.\n The short-term results indicate that two-part implants inserted in a one-stage procedure may be as predictable as inserted in the common two-stage procedure. The peri-implant sulcus can and does harbour potential periodontal pathogens without significant signs of tissue breakdown.\n\n\nReport Title 2:\nSubmerged or non-submerged healing of endosseous implants to be used in the rehabilitation of partially dentate patients.\nReport Abstract 2:\nTo evaluate bone-level alterations that occurred at implants of the Astra Tech(R) System that were placed in the load carrying, posterior parts of the dentition using either a submerged (two-stage) or a non-submerged (one-stage) installation protocol.\n Eighty-four patients that required 115 fixed partial dentures (FPDs or cases) entered the prospective study. All subjects were assigned one patient and > or =one case numbers. For the randomization of cases, a custom-made program based on balanced random permuted blocks was utilized. The cases were assigned to two treatment groups, namely one-stage installation procedure, non-submerged technique (group A) and two-stage installation procedure, submerged technique (group B). Several subjects contributed with cases to both groups A and B. Periodontal, endodontal and open caries lesions were treated prior to implant installation. All patients received careful oral hygiene instruction and training in self-performed plaque control measures. The surgical technique used for fixture installation followed the outline described in the manual for the Astra Tech System. The FPDs were placed 3 months (mandible) and 6 months (maxilla) following implant installation. Immediately following FPD placement, a baseline examination was performed that included assessment of plaque, soft-tissue inflammation and bone level. 
Clinicians who were otherwise not involved in the study performed the radiographic measurements. Clinical and radiographical examinations were repeated once a year after the baseline examination.\n The primary outcome variable was the change in the bone level at the implants from the time of placement of the bridge (FPD) to the 1- and 2-year reexaminations. Fisher's permutation test was used to test if differences existed between groups A and B, and between patients (men/women, smokers/non-smokers, age), sites (maxilla/mandible) and implants (length, diameter). Pitman's test was used to study correlations between bone shape and quality data and different radiographic bone-level data.\n It was demonstrated that tissue healing following implant installation appeared to be independent of the surgical protocol, i.e. whether the marginal portions of the implants during surgery were fully or only partly submerged under the ridge mucosa. Thus, (i) in both treatment groups the number of implants that failed to osseointegrate (early failures) was small (<2%); (ii) at the end of the recommended periods of bone healing prior to loading, - in both groups, maxilla=6 months and mandible=3 months - the level of the marginal bone was close to the coronal rim of the fixture; group A: 1.54+/-0.92 mm, group B: 1.31+/-0.77 mm. The current study also demonstrated that irrespective of surgical protocol (two-stage, one-stage), implants supporting the FPDs exhibited only a small amount of radiographic bone loss during the first year of function (group A: 0.02+/-0.38 mm, group B: 0.17+/-0.64 mm). Moreover, during the second year of function, the amount of additional bone loss that occurred in the two treatment groups was close to zero.\n Periimplant bone-level change during function seemed to be unrelated to whether initial soft- and hard-tissue healing following implant installation had occurred under submerged or non-submerged conditions.\n\n\nReport Title 3:\nComparison of soft tissue healing and osseointegration of IMZ implants placed in one-stage and two-stage techniques: a pilot study.\nReport Abstract 3:\nThe osseointegration of endosseous dental implants is well documented. Implants have proven to be a viable treatment option for the replacement of missing teeth. The surgical phase of implant dentistry for most implant systems involves two stages--the placement of the implant and its subsequent uncovering. The IMZ (Interpore International) implant system is classically a two-stage system. No significant differences were demonstrated in osseointegration and soft tissue healing for IMZ implants placed in one-stage and two-stage procedures in this pilot study.\n\n\nReport Title 4:\nA prospective multicenter study using two different surgical approaches in the mandible with turned Brånemark implants: conventional loading using fixed prostheses.\nReport Abstract 4:\nThe use of a submerged implant system in a nonsubmerged surgical procedure has been reported to have promising results. At the time this study was initiated, no prospective, comparative studies with randomization between submerged and nonsubmerged surgical techniques had been published.\n To evaluate the submerged and nonsubmerged surgical techniques when treating mandibular edentulism using a submerged implant system, with regard to implant survival and complications.\n A total of 77 patients were included and treated at nine clinics in Sweden and Norway. 
In total, 404 Brånemark System implants (standard and MkII implants) were inserted in the edentulous mandible; 198 implants according to the nonsubmerged protocol and 206 implants according to the traditional submerged procedure. The follow-up period was up to 36 months after prosthesis insertion.\n In the nonsubmerged group, 17 implants out of 198 implants (8.6%) were lost and in the submerged group, 5 out of 206 implants (2.4%) were lost. All implant failures occurred before the delivery of the final prosthesis. No major complications were reported during the implant surgery. However, at the clinical check-up postoperatively and at the abutment connection surgery, 6 patients in the nonsubmerged group complained of pain at the implant sites, whereas there were no complaints of pain in the submerged group.\n The results of this study suggest that a turned Brånemark implant designed for a submerged implant placement procedure can be used in a nonsubmerged procedure and may be as predictable as the conventional submerged approach.",
"role": "user"
},
{
"content": "The number of patients included in the trials was too small to draw definitive conclusions. The 1-stage approach might be preferable in partially edentulous patients since it avoids one surgical intervention and shortens treatment times, while a 2-stage submerged approach could be indicated when an implant has not obtained an optimal primary stability or when barriers are used for guided tissue regeneration, or when it is expected that removable temporary prostheses could transmit excessive forces on the penetrating abutments especially in fully edentulous patients.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.506 | [
{
"content": "Transform the provided sections of a biomedical research paper into a digestible summary for a general audience. Focus on simplifying complex ideas while retaining key scientific insights.\n\nResearch Title: Dynamics of venom composition across a complex life cycle\nContent Overview:\nAbstract:\nLittle is known about venom in young developmental stages of animals. The appearance of toxins and stinging cells during early embryonic stages in the sea anemone Nematostella vectensis suggests that venom is already expressed in eggs and larvae of this species. Here, we harness transcriptomic, biochemical and transgenic tools to study venom production dynamics in Nematostella. We find that venom composition and arsenal of toxin-producing cells change dramatically between developmental stages of this species. These findings can be explained by the vastly different interspecific interactions of each life stage, as individuals develop from a miniature non-feeding mobile planula to a larger sessile polyp that predates on other animals and interact differently with predators. Indeed, behavioral assays involving prey, predators and Nematostella are consistent with this hypothesis. Further, the results of this work suggest a much wider and dynamic venom landscape than initially appreciated in animals with a complex life cycle.\nIntroduction:\nVenoms and the toxins they include are mostly used by animals for antagonistic interactions, such as prey capture and defense from predators( Fry et al., 2009; Casewell et al., 2013). Pharmacological research has been focused almost exclusively on the venoms of the adult stages despite the fact that many animals display remarkable transformations in body architectures and ecology during their development( Ruppert et al., 2004). Such vast differences may dictate different interspecific interactions for distinct life stages( Wilbur, 1980). As venom is hypothesized to be metabolically expensive, and in many cases highly specific( Nisani et al., 2012; Casewell et al., 2013), it is plausible that its composition might change between different life stages. Indeed, some ontogenetic variation was reported in the venoms of snakes( Gibbs et al., 2011), spiders( Santana et al., 2017) and cone snails( Safavi-Hemami et al., 2011), but until now this phenomenon was not studied thoroughly in an animal with a complex life cycle throughout its development. The oldest extant group of venomous animals is the marine phylum Cnidaria, which includes sea anemones, corals, jellyfish and hydroids. Cnidarians are typified by their stinging cell, the nematocyte, that harbors a unique and highly complex organelle, the nematocyst( Kass-Simon and Scappaticci, 2002; David et al., 2008). This proteinaceous organelle is utilized as a miniature venom delivery system( Thomason, 1991; Lotan et al., 1995). Most cnidarians have a complex life cycle that includes both sessile benthic and mobile pelagic life stages of wide size distribution and ecological interactions. For example, the canonical life cycle of anthozoans( corals and sea anemones) includes a swimming larval stage( planula) that metamorphoses into a sessile polyp stage that matures into the reproductive adult. The starlet sea anemone, Nematostella vectensis, is becoming a leading cnidarian lab model as unlike many other cnidarian species it can be grown in the lab throughout its life cycle. This makes Nematostella a unique system to study the venom of an animal with a complex life cycle. 
Another advantage is that the high genetic homogeneity of the common Nematostella lab strain minimizes individual genetic variation, which is far from trivial in most other venomous animals collected from the wild in limited numbers. Further, Nematostella has a sequenced genome and stage-specific transcriptomes and various molecular tools are available for its genetic manipulation( Wikramanayake et al., 2003; Putnam et al., 2007; Renfer et al., 2010; Helm et al., 2013; Layden et al., 2016). The Nematostella experimental toolbox is unique, not only for cnidarians, but also for venomous animals in general. During the Nematostella life cycle, females release a gelatinous egg package and males release sperm into the water( Hand and Uhlinger, 1992). After fertilization occurs, the zygote cleavage begins, forming a blastula and less than 24 hr post-fertilization( hpf) gastrulation is completed. A planula larva emerges from the egg package 48–72 hpf and starts swimming in the water. Six to seven days after fertilization, the planula settles in soft substrate and starts to metamorphose into a primary polyp and sexual maturation takes about 4 months under lab conditions( Hand and Uhlinger, 1992)( Figure 1A). Whereas the egg and planula are roughly spherical and measure only about 250 µm, the morphologically differentiated elongated adult polyp can reach the length of 4 cm in the wild and up to 20 cm in the lab( Williams, 1975; Hand and Uhlinger, 1992). Venom is produced in Nematostella by ectodermal gland cells and nematocytes( Moran et al., 2012b; Moran et al., 2013). Although prey capture and production of Nv1, a major sodium channel modulator toxin( Moran et al., 2008a; Moran et al., 2012b), begins at the sessile primary polyp stage, nematocysts appear as early as 48 hpf in the swimming planula( Marlow et al., 2009). These are strong indications that venom is likely present already in early life stages and its composition might dramatically change across the Nematostella life cycle. Further complexity in the regulation of venom production may be due to Nematostella producing different cell types that are unevenly distributed between various tissues( Moran et al., 2013)( Figure 1B). Using an integrated and comparative approach, we carefully quantify and characterize the spatiotemporal expression of known and novel Nematostella toxins across different developmental stages and employ transgenesis to understand the dynamics of venom production in a species with a complex life cycle. We correlate the dynamics of toxin expression in life stages with organismal-level behavior experiments that highlight differential responses of prey and predators when exposed to these stages.\nDiscussion:\nOur finding of different expression levels of toxins in different developmental stages and adult tissues strongly suggests that venom composition changes across development and that each arsenal of toxins might have been shaped by selection for different biotic interactions. As Nematostella develops from a non-predatory, swimming larva to an adult sessile predatory polyp that is 150-fold larger than the larva( Figure 1A), its interspecific interactions vastly change across development. For example, Nematostella egg packages can be consumed by grass shrimp, but it is highly unlikely that adult polyps are part of the grass shrimp diet( Table 1; Video 2). These observations correlate with the expression dynamics of Nv1 that is highly toxic for shrimps. 
Unlike the grass shrimp, Fundulus larvae do not consume Nematostella egg packages and actively expel from their mouths planulae, which were erroneously perceived as food( Videos 3 and 4). They consumed only individual eggs following cysteine washes, which do not represent native conditions. The reduced occurrence of predation can be explained by the presence of various toxins, such as members of the NEP3 family, which may protect Nematostella from fish predators even at very early developmental stages. This notion is supported by the toxic effects of NEP3 on zebrafish larvae. We hypothesize that these dynamic interactions, coupled with the potentially high metabolic cost of toxins( Nisani et al., 2012), have driven the evolution of a distinct venom composition in each developmental stage. Moreover, in the case of cnidarians, an additional metabolic cost stems from the fact that nematocysts are single-use venom delivery apparatuses that have to be reproduced in very high numbers after each antagonistic interaction. Hence, there is a clear advantage in using a highly adapted venom in each developmental stage. Because venom in planulae is used purely for defense, whereas the venom in polyps is used both for defense and for prey capture, our results also relate to previous results in scorpions( Inceoglu et al., 2003) and cone snails( Dutertre et al., 2014), where venom compositions for defense and prey-capture were shown to differ. Further, some of the toxins we localized can be attributed to specific functions based on their expression patterns. For example, it is very likely that NvePTx1 is a defensive toxin due to its occurrence in the egg and in ectodermal gland cells of the planula( Figures 2D–H and 7), whereas NEP8 is probably used in the polyp for killing prey after it is swallowed, as it is expressed exclusively in the lower pharynx and mesenteries( Figures 4A and 7). Chemical protection of the eggs was reported in several animals such as black widow spiders( Buffkin et al., 1971; Lei et al., 2015), snails( Dreon et al., 2013), octopuses and some fishes and amphibians( Bane et al., 2014). Our data on Nematostella NvePTx1 expressed in eggs and embryonic stages further support the idea of the ecological importance of chemical defense in early life stages across the animal kingdom. Our findings that NEP3 family members are expressed in different populations of nematocytes reveal that nematocyte diversity in Nematostella extends well beyond the morphology-based assessments, which revealed only two types of nematocysts in Nematostella: basitrichous haplonemas( also called basitrichs) and microbasic mastigophores( Frank and Bleakney, 1976; Zenkert et al., 2011). A study in Hydra discovered that two members of a single pore-forming toxin family are expressed in two morphologically distinct types of nematocytes( Hwang et al., 2007), but to the best of our knowledge, similar complexity of toxin expression patterns in morphologically similar nematocytes was never reported before. Further, mechanisms of venom biosynthesis at exact cellular-level resolution have not been reported in more complex venomous organisms either. However, at lower resolution, some reports suggest that within one venom gland different secretory units are specialized in the production of a limited number of toxins( Dutertre et al., 2014; Undheim et al., 2015). Thus, specialized venom secretory cells are probably a common trait among venomous animals. 
The distinct expression patterns of the NEP3 family also provide important indications regarding toxin evolution. For example, the expression of NEP8 in pharyngeal nematocytes, and its absence from the tentacles and outer body wall, where its paralogs NEP3, NEP3-like and NEP4 are expressed( Figures 4A and 7), is an indication of sub- or neo-functionalization. The specialization of the different family members is also supported by their conservation in Edwardsiella( Figure 3B). Variation in expression patterns of the NEP3 family members and the fact that at least four different types of gland cells at distinct developmental stages and tissues express different toxins( Nv1, Nvlysin1b, NEP6 and NvePTx1) in Nematostella suggest a highly complex venom landscape in this species( Figure 7). At first glance, such a system might seem relatively inefficient. However, we hypothesize that harboring many different toxin-producing cell types provides modularity and enables evolutionary plasticity of toxin expression. Indeed, our results, as well as results of others, suggest that different sea anemone species express similar toxins in different cell types( Moran et al., 2012b) and different tissues( Macrander et al., 2016). This evolutionary plasticity might be one of the factors that has made sea anemones such a successful group that has inhabited all the world's oceans for the last 600 million years.",
"role": "user"
},
{
"content": "Some animals produce a mixture of toxins , commonly known as venom , to protect themselves from predators and catch prey . Cnidarians – a group of animals that includes sea anemones , jellyfish and corals – have stinging cells on their tentacles that inject venom into the animals they touch . The sea anemone Nematostella goes through a complex life cycle . Nematostella start out life in eggs . They then become swimming larvae , barely visible to the naked eye , that do not feed . Adult Nematostella are cylindrical , stationary ‘polyps’ that are several inches long . They use tentacles at the end of their tube-like bodies to capture small aquatic animals . Sea anemones therefore change how they interact with predators and prey at different stages of their life . Most research on venomous animals focuses on adults , so until now it was not clear whether the venom changes along their maturation . Columbus-Shenkar , Sachkova et al . genetically modified Nematostella so that the cells that produce distinct venom components were labeled with different fluorescent markers . The composition of the venom could then be linked to how the anemones interacted with their fish and shrimp predators at each life stage . The results of the experiments showed that Nematostella mothers pass on a toxin to their eggs that makes them unpalatable to predators . Larvae then produce high levels of other toxins that allow them to incapacitate or kill potential predators . Adults have a different mix of toxins that likely help them capture prey . Venom is often studied because the compounds it contains have the potential to be developed into new drugs . The jellyfish and coral relatives of Nematostella may also produce different venoms at different life stages . This means that there are likely to be many toxins that we have not yet identified in these animals . As some jellyfish venoms are very active on humans and reef corals have a pivotal role in ocean ecology , further research into the venoms produced at different life stages could help us to understand and preserve marine ecosystems , as well as having medical benefits .",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.616 | [
{
"content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nAnatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations.\nHowever, it remains unclear how neural circuits encode complex spatio-temporal patterns.\nWe show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity.\nInput alignment along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network.\nUsing mean field analysis, we derive the impact of input alignment on the overall stability of attractors formed.\nOur results indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network's inference ability.\nBrain actively untangles the input sensory data and fits them in behaviorally relevant dimensions that enables an organism to perform recognition effortlessly, in spite of variations DiCarlo et al. (2012) ; Thorpe et al. (1996) ; DiCarlo & Cox (2007) .\nFor instance, in visual data, object translation, rotation, lighting changes and so forth cause complex nonlinear changes in the original input space.\nHowever, the brain still extracts high-level behaviorally relevant constructs from these varying input conditions and recognizes the objects accurately.\nWhat remains unknown is how brain accomplishes this untangling.\nHere, we introduce the concept of chaos-guided input alignment in a recurrent network (specifically, reservoir computing model) that provides an avenue to untangle stimuli in the input space and improve the ability of a stimulus to entrain neural dynamics.\nSpecifically, we show that the complex dynamics arising from the recurrent structure of a randomly connected reservoir Rajan & Abbott (2006) ; Kadmon & Sompolinsky (2015) ; Stern et al. 
(2014) can be used to extract an explicit phase relationship between the input stimulus and the spontaneous chaotic neuronal response.\nThen, aligning the input phase along the dominant projections determining the intrinsic chaotic activity causes the random chaotic fluctuations or trajectories of the network to become locally stable channels or dynamic attractor states that, in turn, improve its inference capability.\nIn fact, using mean field analysis, we derive the effect of introducing a varying phase association between the input and the network's spontaneous chaotic activity.\nOur results demonstrate that successful formation of stable attractors is strongly determined by the input alignment.\nWe also illustrate the effectiveness of input alignment on a complex motor pattern generation task with reliable generation of learnt patterns over multiple trials, even in the presence of external perturbations.\nModels of cortical networks often use diverse plasticity mechanisms for effective tuning of recurrent connections to suppress the intrinsic chaos (or fluctuations) Laje & Buonomano (2013) ; Panda & Roy (2017) .\nWe show that input alignment alone produces stable and repeatable trajectories, even in the presence of variable internal neuronal dynamics, for dynamical computations.\nCombining input alignment with a recurrent synaptic plasticity mechanism can further enable learning of stable correlated network activity at the output (or readout layer) that is resistant to external perturbation to a large extent.\nFurthermore, since input subspace alignment allows us to operate networks at low amplitude while maintaining stable network activity, it provides an additional advantage of higher dimensionality.\nA network of higher dimensionality offers a larger number of disassociated principal chaotic projections along which different inputs can be aligned (see Appendix A, Fig. A1(c) ).\nThus, for a classification task wherein the network has to discriminate between 10 different inputs (of varying frequencies and underlying statistics), our notion of untangling with chaos-guided input alignment can serve as a foundation for building robust recurrent networks with improved inference ability.\nFurther investigation is required to examine which orientations specifically improve the discrimination capability of the network and the impact of a given alignment on the stability of the readout dynamics around an output target.\nIn summary, the analyses we present suggest that input alignment in the chaotic subspace has a large impact on the network dynamics and eventually determines the stability of an attractor state.\nIn fact, we can control the network's convergence toward different stable attractor channels during its voyage in the neural state space by regulating the input orientation.\nThis indicates that, besides synaptic strength variance Rajan & Abbott (2006) , a critical quantity that might be modified by modulatory and plasticity mechanisms controlling neural circuit dynamics is the input stimulus alignment.",
"role": "user"
},
{
"content": "Input Structuring along Chaos for Stability",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1891 | [
{
"content": "Analyze the provided paragraph from a scientific document. Your goal is to identify and extract relevant entities.\n\nFormat your findings as a JSON object, where each key represents a category of entities, and the corresponding value is a list of entities identified in that category. If no entities are found for a category, return an empty list.\n\nFocus on precision and ensure that the output is strictly in JSON format.\n\nDocument:\n\nA screening assay for the identification of host cell requirements and antiviral targets for hepatitis D virus infection Hepatitis delta virus (HDV) is a minimalistic satellite virus of hepatitis B virus (HBV). HBV / HDV co-infection, i.e. \" hepatitis D \", is the most severe form of viral hepatitis. No effective therapy for HDV infection is available partly due to the fact that HDV is a highly host - dependent virus devoid of any potentially drugable enzyme encoded in its small genome. In this study we present a semi-automated method to evaluate HDV infection and replication under the influence of different drugs. We utilized a Huh-7/hNTCP cell culture based system in a 96- well plate format, an automated microscope and image acquisition as well as analysis with the CellProfiler software to quantify the impact of these drugs on HDV infection. For validation, three groups of potential anti-HDV agents were evaluated: To target ribozyme activity of HDV RNA, we screened ribozyme inhibitors but only observed marked toxicity. Testing innate antiviral mediators showed that interferons alpha-2a and beta-1a had a specific inhibitory effect on HDV infection. Finally, we screened a library of 160 human kinase inhibitors covering all parts of the human kinome. Overall, only inhibitors targeting the tyrosine kinase-like group had significant average anti-HDV activity. Looking at individual substances, kenpaullone, a GSK-3β and Cdk inhibitor, had the highest selective index of 3.44. Thus, we provide a potentially useful tool to screen for substances with anti-HDV activity and novel insights into interactions between HDV replication and the human kinome.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"host cell\", \"genome\", \"Huh-7/hNTCP cell culture\"], \"Bacterium\": [], \"BiologicFunction\": [\"activity\", \"HDV infection\", \"co-infection\", \"viral hepatitis\", \"antiviral\", \"anti-HDV activity\", \"replication\", \"hepatitis D virus infection\", \"hepatitis D\", \"insights\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"drugable enzyme\", \"inhibitors\", \"substances\", \"GSK-3\\u03b2\", \"beta-1a\", \"tyrosine kinase-like group\", \"drugs\", \"Cdk inhibitor\", \"interferons alpha-2a\", \"HDV RNA\", \"mediators\", \"kenpaullone\", \"antiviral targets\", \"kinase inhibitors\", \"anti-HDV agents\", \"ribozyme\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"human\"], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"evaluate\", \"therapy\", \"screen\", \"screened\", \"evaluated\", \"screening\", \"assay\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"selective index\", \"kinome\", \"CellProfiler software\", \"tool\", \"semi-automated method\"], \"MedicalDevice\": [\"microscope\"], \"Organization\": [\"library\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\", \"validation\", \"analysis\"], \"SpatialConcept\": [\"well plate format\"], \"Virus\": [\"HDV\", \"virus\", \"minimalistic satellite virus\", \"HBV\", \"Hepatitis delta virus\", \"hepatitis B virus\"]}",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.28591 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\n3-Arylcoumarin derivatives manifest anti-proliferative activity through Hsp90 inhibition\n\nContent Extracts:\n\n<p>The 90 kDa heat shock protein (Hsp90) is an ATP-dependent chaperone that is responsible for the folding, activation and stabilization of more than 200 client proteins; of which approximately 50 are kinases, hormone receptors and/or transcription factors directly associated with signaling pathways that regulate cell growth.1,2 Hsp90 exists as a homodimer in normal cells and acts as a house-keeping chaperone to maintain protein homeostasis.3 However, in malignant cells, Hsp90 forms a super-chaperone complex that binds to cochaperones and immunophilins to modulate the function of proteins associated with all six hallmarks of cancer. These oncogenic Hsp90-dependent clients include Raf-1, Akt, CDK4, Src, hTERT and c-Met, all of which have developed an addiction to Hsp90 for their activity in transformed cells.4,5 Not surprisingly, inhibition of the Hsp90 protein folding machinery results in simultaneous disruption of these signaling cascades and induces client protein degradation through the ubiquitin-proteasome pathway, which can eventually lead to cell death.6,7 More than 16 small molecule inhibitors that target the Hsp90 N-terminus are under clinical evaluation in an effort to validate Hsp90 as a therapeutic target for the treatment of cancer.8,9 During Hsp90 N-terminal inhibition, induction of the pro-survival heat shock response also occurs at the same concentration needed to promote client protein degradation, which can often lead to cytostatic activity in lieu of cytotoxicity. Ideally, it would be beneficial to segregate these two activities so that one can induce client protein degradation, without inducing a pro-survival pathway for the treatment of cancer. Or conversely, induce the pro-survival pathways to aid cells exposed to toxic insults without causing client protein degradation, such as those found in neuro-degenerative diseases, including Alzheimer's, Parkinson's, Prion and Hodgkin's diseases.10-12 The existing and emerging therapeutic benefits associated with Hsp90 modulation highlight the importance of identifying novel Hsp90 inhibitors that can differentiate these attributes.</p><p>Hsp90 inhibitors for the treatment of cancers are under extensive investigation and recently, a lead molecule named KU-398 was shown to exhibit promising anti-cancer activity.13-19 Alongside KU-398, the natural product, silybin, was also recently identified as an Hsp90 inhibitor through an Hsp90-dependent firefly luciferase rematuration and Hsp90-dependent heme-regulated eIF2α kinase (HRI) activation assay.20 These studies suggested that similar to KU-398, silybin also binds to the Hsp90 C-terminus. Structural alignment of KU-398 and silybin suggests the coumarin and flavonone ring systems may be interchangeable (Figure 1), and that the corresponding 3-arylcoumarin derivatives could target the Hsp90 protein folding machinery. 
Interestingly, 3-aryl-containing coumarins are commonly produced by plants and often exhibit anti-oxidant, anti-inflammatory, anti-microbial activities, but more recently, they were shown to also manifest anti-cancer activity.21-24 As a consequence of these observations, compound 1 was designed to maintain the coumarin ring present in KU-398 while incorporating the 3-aryl appendage of silybin to provide a new class of Hsp90 C-terminal inhibitors.</p><p>Synthesis of compound 1 commenced with commercially available 2-methyl resorcinol (2), which was formylated under Vilsmeier–Haack conditions using phosphorus oxychloride (POCl3) and N,N-dimethylformamide (DMF), followed by hydrolysis to give formyl-resorcinol 3 (Scheme 1).25 Condensation of 3 with 3,4-methylenedioxyphenylacetic acid in the presence of acetic anhydride and pyridine produced the acylated 3-arylcoumarin, 5, which upon hydrolysis of the ester yielded compound 6. Finally, a Mitsunobu etherification proceeded upon treatment of 6 with N-methyl-4-hydroxylpiperidine to afford 3-arylcoumarin, 1.17</p><p>Biological evaluation of 3-arylcoumarin 1 for anti-proliferative activity against SKBr3 (estrogen receptor negative, Her2 over-expressing breast cancer cells) and MCF-7 (estrogen receptor positive breast cancer cells) cell lines was performed.18 Since Her2 and the estrogen receptor are two Hsp90-dependent client proteins and their stability and function are highly dependent upon Hsp90 folding machinery, studies with these two different cell lines were used. Compound 1 was shown to manifest anti-proliferative activity with an IC50 value of 5.21±1.14 μM against SKBr3 cells and 3.71±0.11 μM against MCF-7 cells. Western blot analyses of MCF-7 cell lysates confirmed selective degradation of Hsp90-dependent clients at concentrations that parallel the observed anti-proliferative activity (Figure 2), suggesting that 3-arylcoumarin derivatives target the Hsp90 protein folding machinery.</p><p>To generate structure-activity relationships for the phenyl appendage, commercially available coumarin 8, which lacks the 8-methyl substituent, was used. Bromination of 8 proceeded poorly following literature procedures.26-28 Therefore, compound 8 was first methylated with iodomethane in the presence of sodium hydride in DMF to afford compound 9, which underwent bromination with N-bromosuccinimide in the presence of catalytic sodium acetate. Subsequent demethylation of 10 with tribromoborane, followed by Mitsunobu coupling with N-methyl-4-hydroxylpiperidine, generated intermediate 12. The final products were then produced by Suzuki coupling of 12 with a series of phenylboronic acids (13a–p) or fused heteroarylboronic acids (15a–b) to give a library of 3-arylcoumarin derivatives, 14a–p and 16a–b (Scheme 2).</p><p>Upon construction of this library, the corresponding 3-arylcoumarins were evaluated for anti-proliferative activity against SKBr3 and MCF-7 breast cancer cell lines. As shown in Table 1, 3-arylcoumarin derivatives are more efficacious against MCF-7 than SKBr3 cells. Any substitution on the phenyl ring was determined to be beneficial; however, electron-deficient groups are more effective than electron-rich substitutions (14d/14h vs 14f). Substitution at the para position resulted in slightly increased activity compared to substitutions at the meta position (14d vs 14c), while substitution at the ortho position is detrimental (14b vs 14d and 14o vs 14p). 
It appears that bulky substitutions (14n) or a phenoxyl group (14m) at the para position can also lead to compounds that exhibit increased anti-proliferative activity. The decreased activity observed for 14l, when compared to compound 1, clearly demonstrates that a methyl group at the 8-position of the coumarin ring increases anti-proliferative activity. More interestingly, benzo[b]thiophene derivative 16a exhibited activity against both cell lines at concentrations below 1 μM, and could serve as a lead compound for further diversification.</p><p>To further confirm that 8-methyl substituents are beneficial for anti-proliferative activity, a small set of commercially available phenylacetic acids 17a–e were similarly condensed with 3 in the presence of acetic anhydride, followed by hydrolysis to give 19a–e, which underwent etherification with N-methyl-4-hydroxylpiperidine to afford 8-methyl-3-arylcoumarin derivatives, 20a–e.</p><p>As suggested by related compounds, the 8-methyl-3-arylcoumarin derivatives manifested improved anti-proliferative activity, when compared to the 8-desmethyl analogues against the MCF-7 breast cancer cell line, suggesting that substitution at the 8-position of the coumarin ring is important (Table 2). Although compound 14f, which contains an electron-rich methoxy group at the para position, exhibited decreased activity when compared to 8-desmethyl compounds against both SKBr3 and MCF-7 cell lines (see Table 1), its 8-methyl counterpart (20b) manifested comparable activity to the other 8-methyl derivatives (20c–e).</p><p>The 3-benzo[b]thiophenecoumarin 16a exhibited the most potent anti-proliferative activity against the two breast cancer cell lines evaluated. The anti-proliferative activities exhibited by 16a result from Hsp90 inhibition as demonstrated by Western blot analyses of MCF-7 cell lysates following administration of 16a. As shown in Figure 3, the Hsp90-dependent client proteins, Raf-1 and Akt, were degraded in a concentration-dependent manner when exposed to 16a, which mirrored the concentration needed to manifest anti-proliferative activity and therefore linked client protein degradation to cell viability. The non-Hsp90-dependent protein, Actin, was not affected upon administration of 16a, indicating that non-Hsp90-dependent proteins are not degraded. In addition, Hsp90 levels appear to decrease at increasing concentrations of 16a, which is consistent with other known Hsp90 C-terminal inhibitors.29</p><p>In conclusion, a library of 3-arylcoumarin derivatives was designed, synthesized and evaluated against two breast cancer cell lines, and the initial structure-activity relationships for the phenyl appendage were investigated. 3-Arylcoumarin derivatives were identified as novel inhibitors of the Hsp90 protein folding machinery. Compound 16a exhibited lead-like activity, and Western blot analyses of this compound support binding to the Hsp90 C-terminus, as no increase in Hsp90 levels was observed. Further structural modifications are currently underway and will be reported in due course.</p>",
"role": "user"
},
{
"content": "The potential therapeutic benefits associated with Hsp90 modulation for the treatment of cancer and neurodegenerative diseases highlight the importance of identifying novel Hsp90 scaffolds. KU-398, a novobiocin analogue, and silybin were recently identified as new Hsp90 inhibitors. Consequently, a library of 3-arylcoumarin derivatives that incorporated the structural features of KU-398 and silybin was designed, synthesized and evaluated against two breast cancer cell lines. Western blot analysis confirmed that the resulting 3-arylcoumarin hybrids target the Hsp90 protein folding machinery.",
"role": "assistant"
}
] |
science.cdr_ner | science.cdr_ner.323 | [
{
"content": "Examine the provided text to identify entities. Focus on extracting relevant information.\n\nPresent your findings in a structured JSON format. Each category should list the entities found.\n\nSource Material:\n\nRegional localization of the antagonism of amphetamine-induced hyperactivity by intracerebral calcitonin injections. Calcitonin receptors are found in the brain, and intracerebral infusions of calcitonin can produce behavioral effects. Among these behavioral effects are decreases in food intake and decreases in amphetamine-induced locomotor activity. In previous experiments we found that decreases in food intake were induced by local administration of calcitonin into several hypothalamic sites and into the nucleus accumbens. In the present experiment calcitonin decreased locomotor activity when locally injected into the same sites where it decreases food intake. The areas where calcitonin is most effective in decreasing locomotor activity are located in the hypothalamus and nucleus accumbens, suggesting that these areas are the major sites of action of calcitonin in inhibiting amphetamine-induced locomotor activity.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"amphetamine\", \"Calcitonin\", \"calcitonin\"], \"Disease\": [\"hyperactivity\"]}",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.181 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nBaby CareLink: using the internet and telemedicine to improve care for high-risk infants.\nReport Abstract 1:\nTo evaluate an Internet-based telemedicine program designed to reduce the costs of care, to provide enhanced medical, informational, and emotional support to families of very low birth weight (VLBW) infants during and after their neonatal intensive care unit (NICU) stay.\n Baby CareLink is a multifaceted telemedicine program that incorporates videoconferencing and World Wide Web (WWW) technologies to enhance interactions between families, staff, and community providers. The videoconferencing module allows virtual visits and distance learning from a family's home during an infant's hospitalization as well as virtual house calls and remote monitoring after discharge. Baby CareLink's WWW site contains information on issues that confront these families. In addition, its security architecture allows efficient and confidential sharing of patient-based data and communications among authorized hospital and community users.\n A randomized trial of Baby CareLink was conducted in a cohort of VLBW infants born between November 1997 and April 1999. Eligible infants were randomized within 10 days of birth. Families of intervention group infants were given access to the Baby CareLink telemedicine application. A multimedia computer with WWW browser and videoconferencing equipment was installed in their home within 3 weeks of birth. The control group received care as usually practiced in this NICU. Quality of care was assessed using a standardized family satisfaction survey administered after discharge. In addition, the effect of Baby CareLink on hospital length of stay as well as family visitation and interactions with infant and staff were measured.\n Of the 176 VLBW infants admitted during the study period, 30 control and 26 study patients were enrolled. The groups were similar in patient and family characteristics as well as rates of inpatient morbidity. The CareLink group reported higher overall quality of care. Families in the CareLink group reported significantly fewer problems with the overall quality of care received by their family (mean problem score: 3% vs 13%). In addition, CareLink families also reported greater satisfaction with the unit's physical environment and visitation policies (mean problem score: 13% vs 50%). The frequency of family visits, telephone calls to the NICU, and holding of the infant did not differ between groups. The duration of hospitalization until ultimate discharge home was similar in the 2 groups (68.5 +/- 28.3 vs 70.6 +/- 35.6 days). Among infants born weighing <1000 g (n = 31) there was a tendency toward shorter lengths of stay (77.4 +/- 26.2 vs 93.1 +/- 35.6 days). All infants in the CareLink group were discharged directly to home whereas 6/30 (20%) of control infants were transferred to community hospitals before ultimate discharge home.\n CareLink significantly improves family satisfaction with inpatient VLBW care and definitively lowers costs associated with hospital to hospital transfer. Our data suggest the use of telemedicine and the Internet support the educational and emotional needs of families facilitating earlier discharge to home of VLBW infants. 
We believe that further extension of the Baby CareLink model to the postdischarge period will significantly improve the coordination and efficiency of care.",
"role": "user"
},
{
"content": "There is insufficient evidence to support or refute the use of telemedicine technology to support the parents of high-risk newborn infants receiving intensive care. Clinical trials are needed to assess the application of telemedicine to support parents and families of infants in NICU with length of hospital stay and their perception of NICU care as the major outcomes.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.441 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nImproving patient involvement in chronic disease management : the views of patients , GPs and specialists on a guidebook for ulcerative colitis . Patient information , shared care and decision-making are recognised as beneficial to chronic disease management . As part of an RCT , opinions of ulcerative colitis patients and their doctors were sought on a guidebook designed to support self-care and mediate doctor/patient interaction during consultations . Semi-structured interviews were conducted with 6 specialists and 16 GPs . Patients ' views were obtained from written responses to a postal questionnaire ( 136 replies ) . Inclusion of lay knowledge and clarification of treatment decision processes increased patients ' perception of involvement in disease management through increased identification and awareness of others ' self-management experiences . Doctors focused on the importance of improving patients ' compliance and use of services . However , the guidebook was seldom used as it had been intended during consultations . Patients felt constrained by time limits and consultants did not actively encourage guidebook use . Based on the findings , we suggest an approach utilising the guidebook to improve patient participation in disease management .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"Semi-structured interviews\", \"mediate doctor/patient interaction\", \"support self-care\", \"patient involvement\", \"postal questionnaire\"], \"Outcome\": [\"utilising the guidebook\", \"patient participation\"], \"Participant\": [\"ulcerative colitis patients and their doctors\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.439 | [
{
"content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: We propose a semi-supervised framework for person re-identification. It utilizes the correlations\n\nKey Terms: person re-identification images\n\nDataset Summaries:\nP-DESTRE: Provides consistent ID annotations across multiple days, making it suitable for the extremely challenging problem of person search, i.e., where no clothing information can be reliably used. Apart this feature, the P-DESTRE annotations enable the research on UAV-based pedestrian detection, tracking, re-identification and soft biometric solutions.\nMarket-1501: Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector.\nCUHK03: The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector.\nPartial-iLIDS: Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras.\nVIPeR: The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels.\n\nDataset Options: A: P-DESTRE B: Market-1501 C: CUHK03 D: Partial-iLIDS E: VIPeR",
"role": "user"
},
{
"content": "[\"Market-1501\", \"CUHK03\", \"VIPeR\"]",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.735 | [
{
"content": "Examine the provided biomedical text and determine the answer to the question. Your response should be either \"yes\" or \"no\" based on the information given.\n\nQuestion: The TRPM2 gene is associated with development of spontaneous thromboembolism?\n\nText: TheTransientReceptorPotentialMelastatin 2 (TRPM2) is a member of G protein coupled receptor superfamily and a novel dual-function protein that possesses both ion channel andAdenosine 5'-DiphosPhataseRibose (ADPR) hydrolase function. TRPM2 is involved in Ca2+signaling in various cells as an endogenous redox sensor for oxidative stress and reactive oxygen species, and contributes to cytokine production, insulin release, motility, Ca2+entry and Ca2+-dependent cellular reactions such as endothelial hyper-permeability and apoptosis.\nWe show here that the redox-sensitive transient receptor potential (TRP) cation channel TRPM2 is expressed in the phagosomal membrane and regulates macrophage bactericidal activity through the activation of phagosomal acidification\nThe transient receptor potential melastatin-2 (TRPM2) is an oxidative stress sensing channel that is expressed in a number of inflammatory cells and therefore it has been suggested that inhibition of TRPM2 could lead to a beneficial effect in COPD patients.\nTRPM2 is a recently identified TRPM family cation channel which is unique among known ion channels in that it contains a C-terminal domain which is homologous to the NUDT9 ADP-ribose hydrolase and possesses intrinsic ADP-ribose hydrolase activity\nThese results suggest that TRPM2 may participate in antigen-induced extracellular Ca(2+) influx and subsequent degranulation. In addition, TRPM2 inhibitors were shown to improve food allergic reactions in a mouse model. Together, these results suggest that TRPM2 inhibitors suppress MMC degranulation via regulation of the increase in [Ca(2+)]cyt. Thus, TRPM2 may play a key role in degranulation by modulating intracellular Ca(2+) in MMCs.\nhe Na+ and Ca(2+)-permeable melastatin related transient receptor potential 2 (TRPM2) channels can be gated either by ADP-ribose (ADPR) in concert with Ca(2+) or by hydrogen peroxide (H(2)O(2)), an experimental model for oxidative stress, binding to the channel's enzymatic Nudix domain\nhese alterations of the extracellular milieu change the activity of transient receptor potential melastatin subfamily member 2 (TRPM2), a nonselective cation channel expressed in the central nervous system and the immune system.\nTRPM2 (Transient Receptor Potential Melastatin 2) is a Ca2+-permeable ion channel that is activated under conditions of oxidative stress.\nTransient receptor potential melastatin 2 (TRPM2) is a thermosensitive, Ca2+-permeable cation channel. TRPM2 contributes to the pathogenesis of inflammatory bowel disease, and inflammatory and neuropathic pain. 
We hypothesized that TRPM2 is important for visceral nociception and the development of visceral hypersensitivity.\nHere, we describe the computational identification of a melanoma-enriched antisense transcript, TRPM2-AS, mapped within the locus of TRPM2, an ion channel capable of mediating susceptibility to cell death\nThe transient receptor potential melastatin 2 (TRPM2) channel, a Ca(2+)-permeable nonselective cation channel activated by oxidative stress, has been implicated in neurodegenerative diseases, and more recently in amyloid-induced toxicity\nTransient receptor potential melastatin 2 (TRPM2) is a calcium-permeable cation channel activated by ADP-ribose or reactive oxygen species.\nTransient receptor potential melastatin type 2 (TRPM2) is a Ca2+ permeable non-selective cation channel expressed in several cell types including hippocampal pyramidal neurons. Moreover, activation of TRPM2 during oxidative stress has been linked to cell death.",
"role": "user"
},
{
"content": "no",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.3127 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\n[Quantitatively evaluated magnetic resonance tomography as a therapeutic follow-up of the nonsteroidal antirheumatic ademetionin: a pilot study in patients with gonarthrosis].\nReport Abstract 1:\nThe report deals with a double-blind study of nine patients treated over a period of three months with either ademetionin (five patients) or with a placebo (four patients). Prior to treatment, and after three months, clinical and biochemical studies were carried out as well as radiological examinations and MRT. The latter had been standardised previously in order to obtain a cartilage signal which could be quantified. Of the five patients treated with ademetionin, none were improved clinically, two were worse and three were unchanged, but on MRT, three were improved, one was worse and one was unchanged. Amongst the placebo patients, the clinical results were: three improved, none worse and one unchanged, with similar findings on MRT. MRT appears to be useful for complementing the information for treatment planning with chondroprotective drugs.\n\n\nReport Title 2:\nItalian double-blind multicenter study comparing S-adenosylmethionine, naproxen, and placebo in the treatment of degenerative joint disease.\nReport Abstract 2:\nIn a double-blind study, the efficacy and tolerability of S-adenosylmethionine (SAMe) were evaluated in comparison with those of placebo and naproxen in the treatment of osteoarthritis of the hip, knee, spine, and hand. Thirty-three centers, 18 rheumatologic and 15 orthopedic, participated in this study. A total of 734 subjects, including 582 with coxarthrosis (hip osteoarthritis) or gonarthrosis (knee osteoarthritis), were enrolled. SAMe administered orally at a dose of 1,200 mg daily was shown to exert the same analgesic activity as naproxen at a dose of 750 mg daily. Both drugs were more effective than placebo (p less than 0.01). Tolerability of SAMe was significantly better than that of naproxen, both in terms of physicians' (p less than 0.025) and patients' (p less than 0.01) judgments and in terms of the number of patients with side effects (p less than 0.05). There was no difference between SAMe and placebo in the number of side effects. Ten patients in the SAMe group and 13 in the placebo group withdrew from the study because of intolerance to the drug.\n\n\nReport Title 3:\nA randomized, double blind, placebo controlled trial of intravenous loading with S-adenosylmethionine (SAM) followed by oral SAM therapy in patients with knee osteoarthritis.\nReport Abstract 3:\nWe evaluated the effectiveness and rapidity of onset of S-adenosylmethionine (SAM), administered as daily intravenous boluses of 400 mg for 5 days, followed by oral tablets, 200 mg thrice daily for 23 days, versus a matching placebo regimen, in the treatment of 81 patients with symptomatic knee osteoarthritis (OA).\n The study was bicentric, randomized, double blinded, and placebo controlled. Patients underwent a 7-day washout of arthritis medications prior to initiation of this study treatment. 
Major outcome measures were the Stanford Health Assessment Questionnaire disability and pain scales, and supplemental visual analog scales for rest and walking pain.\n At one site, patients had milder OA, the baseline characteristics of the treatment groups were well matched, and the SAM treated group showed significantly greater reduction in overall pain and rest pain (p < 0.05) than the placebo treated group. At the other site, the patients had more severe OA, randomization yielded markedly different treatment groups, and the response to treatment did not differ between groups. Onset of SAM effect was seen as early as 14 days after the start of treatment.\n SAM may be an effective treatment for some patients with symptomatic knee OA, and merits further study. Intravenous loading before oral maintenance therapy may be advantageous.\n\n\nReport Title 4:\nDouble-blind study of S-adenosyl-methionine versus placebo in hip and knee arthrosis.\nReport Abstract 4:\nnan",
"role": "user"
},
{
"content": "The current systematic review is inconclusive, hampered by the inclusion of mainly small trials of questionable quality. The effects of SAMe on both pain and function may be potentially clinically relevant and, although effects are expected to be small, deserve further clinical evaluation in adequately sized randomised, parallel-group trials in patients with knee or hip osteoarthritis. Meanwhile, routine use of SAMe should not be advised.",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.13783 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: Randomized Controlled Trial of Mindfulness-Based Stress Reduction Versus Aerobic Exercise: Effects on the Self-Referential Brain Network in Social Anxiety Disorder\nSummary: Background : Social anxiety disorder ( SAD ) is characterized by distorted self-views . The goal of this study was to examine whether mindfulness-based stress reduction ( MBSR ) alters behavioral and brain measures of negative and positive self-views . Methods : Fifty-six adult patients with generalized SAD were r and omly assigned to MBSR or a comparison aerobic exercise ( AE ) program . A self-referential encoding task was administered at baseline and post-intervention to examine changes in behavioral and neural responses in the self-referential brain network during functional magnetic resonance imaging . Patients were cued to decide whether positive and negative social trait adjectives were self-descriptive or in upper case font . Results : Behaviorally , compared to AE , MBSR produced greater decreases in negative self-views , and equivalent increases in positive self-views . Neurally , during negative self versus case , compared to AE , MBSR led to increased brain responses in the posterior cingulate cortex ( PCC ) . There were no differential changes for positive self versus case . Secondary analyses showed that changes in endorsement of negative and positive self-views were associated with decreased social anxiety symptom severity for MBSR , but not AE . Additionally , MBSR-related increases in dorsomedial prefrontal cortex ( DMPFC ) activity during negative self-view versus case were associated with decreased social anxiety related disability and increased mindfulness . Analysis of neural temporal dynamics revealed MBSR-related changes in the timing of neural responses in the DMPFC and PCC for negative self-view versus case . Conclusion : These findings suggest that MBSR attenuates maladaptive habitual self-views by facilitating automatic ( i.e. , uninstructed ) recruitment of cognitive and attention regulation neural networks . This highlights potentially important links between self-referential and cognitive-attention regulation systems and suggests that MBSR may enhance more adaptive social self-referential processes in patients with SAD\n\n\nReport 2:\nTitle: Differential effects of mindful breathing, progressive muscle relaxation, and loving-kindness meditation on decentering and negative reactions to repetitive thoughts.\nSummary: Decentering has been proposed as a potential mechanism of mindfulness-based interventions but has received limited empirical examination to date in experimental studies comparing mindfulness meditation to active comparison conditions . In the present study , we compared the immediate effects of mindful breathing ( MB ) to two alternative stress-management techniques : progressive muscle relaxation ( PMR ) and loving-kindness meditation ( LKM ) to test whether decentering is unique to mindfulness meditation or common across approaches . Novice meditators ( 190 female undergraduates ) were r and omly assigned to complete one of three 15-min stress-management exercises ( MB , PMR , or LKM ) presented by audio recording . 
Immediately after the exercise , participants completed measures of decentering , frequency of repetitive thoughts during the exercise , and degree of negative reaction to thoughts . As predicted , participants in the MB condition reported greater decentering relative to the other two conditions . The association between frequency of repetitive thought and negative reactions to thoughts was relatively weaker in the MB condition than in the PMR and LKM conditions , in which these two variables were strongly and positively correlated . Consistent with the construct of decentering , the relative independence between these two variables in the MB condition suggests that mindful breathing may help to reduce reactivity to repetitive thoughts . Taken together , results help to provide further evidence of decentering as a potential mechanism that distinguishes mindfulness practice from other credible stress-management approaches\n\n\nReport 3:\nTitle: Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention\nSummary: Mindfulness meditation is a set of attention-based , regulatory , and self-inquiry training regimes . Although the impact of mindfulness training ( MT ) on self-regulation is well established , the neural mechanisms supporting such plasticity are poorly understood . MT is thought to act through interoceptive salience and attentional control mechanisms , but until now conflicting evidence from behavioral and neural measures has made it difficult to distinguish their respective roles . To resolve this question we conducted a fully randomized 6-week longitudinal trial of MT , explicitly controlling for cognitive and treatment effects with an active-control group . We measured behavioral metacognition and whole-brain blood oxygenation level-dependent ( BOLD ) signals using functional MRI during an affective Stroop task before and after intervention in healthy human subjects . Although both groups improved significantly on a response-inhibition task , only the MT group showed reduced affective Stroop conflict . Moreover , the MT group displayed greater dorsolateral prefrontal cortex responses during executive processing , consistent with increased recruitment of top-down mechanisms to resolve conflict . In contrast , we did not observe overall group-by-time interactions on negative affect-related reaction times or BOLD responses . However , only participants with the greatest amount of MT practice showed improvements in response inhibition and increased recruitment of dorsal anterior cingulate cortex , medial prefrontal cortex , and right anterior insula during negative valence processing . Our findings highlight the importance of active control in MT research , indicate unique neural mechanisms for progressive stages of mindfulness training , and suggest that optimal application of MT may differ depending on context , contrary to a one-size-fits-all approach\n\n\nReport 4:\nTitle: MBSR vs aerobic exercise in social anxiety: fMRI of emotion regulation of negative self-beliefs.\nSummary: Mindfulness-based stress reduction ( MBSR ) is thought to reduce emotional reactivity and enhance emotion regulation in patients with social anxiety disorder ( SAD ) . The goal of this study was to examine the neural correlates of deploying attention to regulate responses to negative self-beliefs using functional magnetic resonance imaging . 
Participants were 56 patients with generalized SAD in a randomized controlled trial who were assigned to MBSR or a comparison aerobic exercise ( AE ) stress reduction program . Compared to AE , MBSR yielded greater ( i ) reductions in negative emotion when implementing regulation and ( ii ) increases in attention-related parietal cortical regions . Meditation practice was associated with decreases in negative emotion and social anxiety symptom severity , and increases in attention-related parietal cortex neural responses when implementing attention regulation of negative self-beliefs . Changes in attention regulation during MBSR may be an important psychological factor that helps to explain how mindfulness meditation training benefits patients with anxiety disorders\n\n\nReport 5:\nTitle: Neural mechanisms of symptom improvements in generalized anxiety disorder following mindfulness training\nSummary: Mindfulness training aims to impact emotion regulation . Generalized anxiety disorder ( GAD ) symptoms can be successfully addressed through mindfulness-based interventions . This preliminary study is the first to investigate neural mechanisms of symptom improvements in GAD following mindfulness training . Furthermore , we compared brain activation between GAD patients and healthy participants at baseline . 26 patients with a current DSM-IV GAD diagnosis were randomized to an 8-week Mindfulness Based Stress Reduction ( MBSR , N = 15 ) or a stress management education ( SME , N = 11 ) active control program . 26 healthy participants were included for baseline comparisons . BOLD response was assessed with fMRI during affect labeling of angry and neutral facial expressions . At baseline , GAD patients showed higher amygdala activation than healthy participants in response to neutral , but not angry faces , suggesting that ambiguous stimuli reveal stronger reactivity in GAD patients . In patients , amygdala activation in response to neutral faces decreased following both interventions . BOLD response in ventrolateral prefrontal regions ( VLPFC ) showed greater increase in MBSR than SME participants . Functional connectivity between amygdala and PFC regions increased significantly pre- to post-intervention within the MBSR , but not SME group . Both , change in VLPFC activation and amygdala – prefrontal connectivity were correlated with change in Beck Anxiety Inventory ( BAI ) scores , suggesting clinical relevance of these changes . Amygdala – prefrontal connectivity turned from negative coupling ( typically seen in down-regulation of emotions ) , to positive coupling ; potentially suggesting a unique mechanism of mindfulness . Findings suggest that in GAD , mindfulness training leads to changes in fronto-limbic areas crucial for the regulation of emotion ; these changes correspond with reported symptom improvements\n\n\nReport 6:\nTitle: Cardiovascular fitness, cortical plasticity, and aging.\nSummary: Cardiovascular fitness is thought to offset declines in cognitive performance , but little is known about the cortical mechanisms that underlie these changes in humans . Research using animal models shows that aerobic training increases cortical capillary supplies , the number of synaptic connections , and the development of new neurons . The end result is a brain that is more efficient , plastic , and adaptive , which translates into better performance in aging animals . 
Here , in two separate experiments , we demonstrate for the first time to our knowledge , in humans , that increases in cardiovascular fitness result in increased functioning of key aspects of the attentional network of the brain during a cognitively challenging task . Specifically , highly fit ( Study 1 ) or aerobically trained ( Study 2 ) persons show greater task-related activity in regions of the prefrontal and parietal cortices that are involved in spatial selection and inhibitory functioning , when compared with low-fit ( Study 1 ) or nonaerobic control ( Study 2 ) participants . Additionally , in both studies there exist groupwise differences in activation of the anterior cingulate cortex , which is thought to monitor for conflict in the attentional system , and signal the need for adaptation in the attentional network . These data suggest that increased cardiovascular fitness can affect improvements in the plasticity of the aging human brain , and may serve to reduce both biological and cognitive senescence in humans\n\n\nReport 7:\nTitle: Effects of mindfulness-based stress reduction (MBSR) on emotion regulation in social anxiety disorder.\nSummary: Mindfulness-based stress reduction ( MBSR ) is an established program shown to reduce symptoms of stress , anxiety , and depression . MBSR is believed to alter emotional responding by modifying cognitive-affective processes . Given that social anxiety disorder ( SAD ) is characterized by emotional and attentional biases as well as distorted negative self-beliefs , we examined MBSR-related changes in the brain-behavior indices of emotional reactivity and regulation of negative self-beliefs in patients with SAD . Sixteen patients underwent functional MRI while reacting to negative self-beliefs and while regulating negative emotions using 2 types of attention deployment emotion regulation: breath-focused attention and distraction-focused attention . Post-MBSR , 14 patients completed neuroimaging assessments . Compared with baseline , MBSR completers showed improvement in anxiety and depression symptoms and self-esteem . During the breath-focused attention task ( but not the distraction-focused attention task ) , they also showed ( a ) decreased negative emotion experience , ( b ) reduced amygdala activity , and ( c ) increased activity in brain regions implicated in attentional deployment . MBSR training in patients with SAD may reduce emotional reactivity while enhancing emotion regulation . These changes might facilitate reduction in SAD-related avoidance behaviors , clinical symptoms , and automatic emotional reactivity to negative self-beliefs in adults with SAD\n\n\nReport 8:\nTitle: Right inferior prefrontal cortex mediates response inhibition while mesial prefrontal cortex is responsible for error detection\nSummary: Inhibitory control and error detection are among the highest evolved human self-monitoring functions . Attempts in functional neuroimaging to effectively isolate inhibitory motor control from other cognitive functions have met with limited success . Different brain regions in inferior , mesial , and dorsolateral prefrontal cortices and parietal and temporal lobes have been related to inhibitory control in go/no-go and stop tasks . The widespread activation reflects the fact that the designs used so far have co-measured additional noninhibitory cognitive functions such as selective attention , response competition , decision making , target detection , and inhibition failure . 
Here we use rapid , mixed trial , event-related functional magnetic resonance imaging to correlate brain activation with an extremely difficult situation of inhibitory control in a challenging stop task that controls for noninhibitory functions . The difficulty of the stop task , requiring withholding of a triggered motor response , was assured by an algorithm that adjusted the task individually so that each subject only succeeded on half of all stop trials , failing on the other half . This design allowed us to elegantly separate brain activation related to successful motor response inhibition and to inhibition failure or error detection . Brain activation correlating with successful inhibitory control in 20 healthy volunteers could be isolated in right inferior prefrontal cortex . Failure to inhibit was associated with activation in mesial frontopolar and bilateral inferior parietal cortices , presumably reflecting an attention network for error detection",
"role": "user"
},
{
"content": "Results showed a predominance of greater prefrontal/frontal activity related to meditation , which might indicate the increased recruitment of cognitive/attentional control re sources in naïve and long-term meditators .\nThis increased frontal activity was also observed when participants were asked to simply react to negative stimuli .\nFindings from emotion-related areas were scarce but suggested increased insular activity in meditators , potentially indicating that meditation might be associated with greater bodily awareness .\nConclusions : Meditation practice prompts regulatory mechanisms when participants face aversive stimuli , even without an explicit request .\nMoreover , some studies reported increased insular activity in meditators , consistent with the hypothesis that meditation helps foster an interoceptive awareness of bodily and emotional states",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.794 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a model for abstractive summarization.\n\nKeywords: abstractive summarization text paragraph-level\n\nDataset Information:\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nCNN/Daily Mail: CNN/Daily Mail is a dataset for text summarization. Human generated abstractive summary bullets were generated from news stories in CNN and Daily Mail websites as questions (with one of the entities hidden), and stories as the corresponding passages from which the system is expected to answer the fill-in the-blank question.\nOrangeSum: OrangeSum is a single-document extreme summarization dataset with two tasks: title and abstract. Ground truth summaries are respectively 11.42 and 32.12 words in length on average, for the title and abstract tasks respectively, while document sizes are 315 and 350 words.\nBookCorpus: BookCorpus is a large collection of free novel books written by unpublished authors, which contains 11,038 books (around 74M sentences and 1G words) of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc. ).\n\nAvailable Datasets: A: SNIPS B: CNN/Daily Mail C: OrangeSum D: BookCorpus",
"role": "user"
},
{
"content": "BookCorpus",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4340 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nInfection with severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) leads to coronavirus disease 2019 (COVID-19), which poses an unprecedented worldwide health crisis, and has been declared a pandemic by the World Health Organization (WHO) on March 11, 2020. The angiotensin converting enzyme 2 (ACE2) has been suggested to be the key protein used by SARS-CoV-2 for host cell entry. In their recent work, Lindskog and colleagues (Hikmet et al, 2020) report that ACE2 is expressed at very low protein levels-if at all-in respiratory epithelial cells. Severe COVID-19, however, is characterized by acute respiratory distress syndrome and extensive damage to the alveoli in the lung parenchyma. Then, what is the role of the airway epithelium in the early stages of COVID-19, and which cells need to be studied to characterize the biological mechanisms responsible for the progression to severe disease after initial infection by the novel coronavirus?",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.6340 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nchronic inflammatory demyelinating polyradiculoneuropathy ( cidp ) is an uncommon progressive or relapsing paralyzing disease caused by inflammation of the peripheral nerves .\ncidp is characterized by the occurrence of symmetrical weakness in both proximal and distal muscles , which progressively increases for more than 2 months1 .\nsensory disturbances are usually slight , proximal and distal parts of the limbs are usually affected symmetrically .\nless common variants include widespread multifocal neuropathy , focal brachial plexus , and isolated limb nerve and initial cranial nerve presentations , ataxic sensory neuropathy , and generalized neuropathy associated with multifocal central nervous system .\nwe report the clinical and brain single photon emission computed tomography ( spect ) and response to treatment in patients with atypical cidp2 .\na 32-year - old man experienced double vision around january , 2010 , followed by weakness of his left upper and lower extremities .\narticulation disorders and loss of hearing in his left ear developed , and he was admitted to our hospital on february 14 , 2010 .\nphysical examination was normal , and neurological examination showed clear consciousness with no impairment of cognitive function , but with articulation disorders . olfactory sensation was reduced .\nleft ptosis and left gaze palsy , complete left facial palsy , perceptive deafness of the left ear , and muscle weakness of the left trapezius muscle were observed .\nparesis in the left upper and lower extremities was graded 4/5 through manual muscle testing .\ndeep tendon reflexes were slightly diminished equally on both sides ; no pathologic reflex was seen .\nchest x - ray , electrocardiogram , thoracoabdominal computed tomography , abdominal ultrasound , and peripheral hematological values were normal .\nblood chemistry revealed glucose 174 mg / dl , hba1c 7.8% , ck 366 u / l , and no other abnormal findings . anti - acetylcholine antibody , anti - jo-1 antibody , and antiganglioside antibody were all negative .\nspinal fluid revealed a cell count of 3/l , protein 68 mg / dl , glucose 98 mg / dl , showing albumino - cytologic dissociation ; myelin basic proteins and oligoclonal bands were not detected . no gene mutation related to mitochondrial encephalomyopathy\nnerve conduction velocity test showed delay of conduction in the bilateral peroneal nerve and median nerve at motor conduction velocity with conduction blocks and only a slight delay in sensory conduction velocity .\nno abnormality of the brain parenchyma , cerebral nerves or cervico - thoracolumbar region was found on brain magnetic resonance imaging ( mri ) . 
on electroencephalogram ,\nalpha waves in the main frequency band of 8 to 9 hz were recorded and no obvious paroxysmal discharge was observed , indicating normal findings .\nbrain spect scan showed reduced blood flow in the right inner frontal lobe and both occipital lobes ( figure 1 ) .\nnerve biopsy ( left sural nerve ) showed reduction of nerve density by 30% , with demyelination , but no adventitial thickening around capillaries , ruling out diabetic peripheral neuropathy .\nafter high - dose intravenous gamma - globulin ( 25 g / day , 5 days ) therapy , articulation disorder , ocular movement disturbance , hearing loss , and motor disturbance of the distal muscles and distal sensory disturbance in the left lower extremity resolved and the patient was able to walk , though mild motor disturbance of the distal muscles and distal sensory disturbance in the left upper extremity remained . \n\nfigure 1brain single photon emission computed tomography scan showed reduced blood flow in the right inner frontal lobe and both occipital lobes .\nimprovements in flow can be seen two weeks after intravenous gamma - globulin ( 25 g / day , 5 days ) . \n brain single photon emission computed tomography scan showed reduced blood flow in the right inner frontal lobe and both occipital lobes .\nimprovements in flow can be seen two weeks after intravenous gamma - globulin ( 25 g / day , 5 days ) .\ntumors , vascular disease , trauma , tuberculosis , and bacterial infection are the most frequent major causes of multiple cranial nerve palsy .\nother causes include demyelination neuropathies such as guillain - barr syndrome , cidp , charcot - marie - tooth disease , granulomatous lesions such as tolosa - hunt syndrome and sarcoidosis , vasculitis such as behcet 's disease and sjgren syndrome , and diabetes mellitus .\nthe sudden onset of double vision , articulation disorders , and left hemiplegia in our patient suggested cerebrovascular disorder , multiple sclerosis , or diabetic external ophthalmoplegia .\nthe patient also showed manifestations of multiple cranial nerve disorder , i.e. , of the trigeminal nerve , glossopharyngeal nerve , vagus nerve , and hypoglossal nerve .\nwhole - body examination was negative . finally , based on ischemic brain spect images , spinal fluid findings and nerve biopsy results ,\nhigh - dose intravenous gamma - globulin ( 25 g / day , 5 days ) was initiated .\nafter two weeks , articulation disorders , left ocular motility , deafness of the left ear , and left trapezius muscle weakness were improved .\nwere improved after high - dose intravenous gamma - globulin therapy , i assumed that this case may be associated with some kind of inflammatory change .\nalthough this patient had disturbances of the central nervous system including the cranial nerves , no abnormal findings were found on brain mri .\nhowever , spect revealed a decrease in cerebral blood flow ( cbf ) in the right frontal lobe contralateral to the affected side of the body , extending to the region around the anterior cingulate gyrus and right cerebral cortex .\nthe fab region of the immunoglobulin molecule binds to various antigens while the fc region binds to effector cells to regulate immune response .\nhuman blood components include 3 types of fc receptors ( fcri , fcrii , and fcriii ) whose ligand is the fc region of immunoglobulin , and most of these receptors activate inflammation .\nhowever , fcriiib is an exception and is known to enhance immunological tolerance and suppress inflammation . 
in cidp\n, it has been demonstrated that genetic polymorphism in the promoter region of fcriiib correlates with decrease in fcriiib expression on b cells .\ndecrease in fcriiib is likely to lead to the breakdown in the immune suppression system and result in excessive activation of the inflammatory reaction .\ntherefore , we believe that a similar condition was caused in this patient and inflammation spread to not only the peripheral nerves but also the central nerves , resulting in the decrease in cbf on spect . in addition , the decrease in cbf in this patient may be partially due to a transient ischemic condition arising from a change in the central process of the primary sensory nerves caused by a sensory disturbance through c fibers activated by inflammation .\na published report on the therapeutic experience of 10 patients with cidp or its subtype notes that anti - tnf medications were effective in treating sensory disturbance . therefore , the inflammatory mediator that activated c fibers in our patient may be associated with tnf cytokines .\nthat is , we think that , since the inflammatory response disappeared after high - dose intravenous immunoglobulin therapy , this accounts for the improvement of spect findings after therapy\n. cases of multiple cranial nerve palsy due to diabetes mellitus have been reported , though without complications of impairment of motion .\nbut cidp with central nervous system symptoms as in this case is rare , and no study presenting the changes in spect before and after treatment has been reported .\nin addition , this case is important and valuable because it presents findings indicating that the central nervous system symptoms observed in a patient with cidp may be related to an ischemic condition caused by an inflammatory change in the central nervous system .\nthe present case is a very unusual one of hemiplegic peripheral neuropathy accompanied with multiple cranial nerve palsy .",
"role": "user"
},
{
"content": "a 32-year - old man experienced double vision around january , 2010 , followed by weakness of his left upper and lower extremities . \n articulation disorders and loss of hearing in his left ear developed , and he was admitted to our hospital on february 14 , 2010 . \n physical examination was normal , and neurological examination showed clear consciousness with no impairment of cognitive function , but with articulation disorders . olfactory sensation was reduced . \n left ptosis and left gaze palsy , complete left facial palsy , perceptive deafness of the left ear , and muscle weakness of the left trapezius muscle were observed . \n paresis in the left upper and lower extremities was graded 4/5 through manual muscle testing . \n sensory system evaluation revealed complete left - side palsy , including the face . \n deep tendon reflexes were slightly diminished equally on both sides ; no pathologic reflex was seen . \n no abnormality of the brain parenchyma , cerebral nerves or cervicothoracolumbar region was found on brain magnetic resonance imaging . on electroencephalogram , \n alpha waves in the main frequency band of 8 to 9 hz were recorded , indicating normal findings . \n brain single photon emission computed tomography ( spect ) scan showed reduced blood flow in the right inner frontal lobe and both occipital lobes . \n nerve biopsy ( left sural nerve ) showed reduction of nerve density by 30% , with demyelination . \n the patient also showed manifestations of multiple cranial nerve disorder , i.e. , of the trigeminal nerve , glossopharyngeal nerve , vagus nerve , and hypoglos - sal nerve . \n whole - body examination was negative . finally , based on ischemic brain spect images , spinal fluid findings and nerve biopsy results , \n peripheral neuropathy accompanied with multiple cranial nerve palsy was diagnosed .",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.24.2 | [
{
"content": "Analyze the given research paper excerpts and address the question provided. Extract the necessary information directly from the text. Your answer should be formatted as a JSON object with the following structure:\n\n\"answer\": A list of text segments from the paper that directly respond to the question.\n\"evidence\": A list of text segments from the paper where the answer can be found.\n\nIf the question cannot be answered with the given text, return \"null\".\n\nDocument: Unsupervised Machine Commenting with Neural Variational Topic Model\n\nArticle comments can provide supplementary opinions and facts for readers, thereby increase the attraction and engagement of articles. Therefore, automatically commenting is helpful in improving the activeness of the community, such as online forums and news websites. Previous work shows that training an automatic commenting system requires large parallel corpora. Although part of articles are naturally paired with the comments on some websites, most articles and comments are unpaired on the Internet. To fully exploit the unpaired data, we completely remove the need for parallel data and propose a novel unsupervised approach to train an automatic article commenting model, relying on nothing but unpaired articles and comments. Our model is based on a retrieval-based commenting framework, which uses news to retrieve comments based on the similarity of their topics. The topic representation is obtained from a neural variational topic model, which is trained in an unsupervised manner. We evaluate our model on a news comment dataset. Experiments show that our proposed topic-based approach significantly outperforms previous lexicon-based models. The model also profits from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios.\n\nIntroduction\nMaking article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.\nBecause of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. 
It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more varied and recent, it is important to exploit these unpaired data.\nAnother issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.\nTo this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need for parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic model, which is trained in an unsupervised manner.\nThe contributions of this work are as follows:\n\nSolutions\nFacing the above challenges, we provide three solutions to the problems.\nGiven a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).\nThe unsupervised learning method is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves the generalization of the model. Many comments provide some unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but require redundant work to match these comments with the corresponding news articles. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples in the training stage, so that it can alleviate the negative sampling bias.\nAlthough there is a semantic gap between the articles and the comments, we find that most articles and comments share the same topics.
Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. It is also similar to how humans generate comments. Humans do not need to go through the whole article but are capable of making a comment after capturing the general topics.\n\nRetrieval-based Commenting\nGiven an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of the unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are varied, covering different topics from pets to sports.\nThe retrieval-based model should score the matching between the upcoming article and each comment, and return the comments that match the article best. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching of the articles and comments. The other is how to efficiently compute the matching scores because the number of comments in the pool is large.\nTo address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0\nThe dot-product scoring method has proven successful in a matching model BIBREF1 . The problem of finding datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions that improve the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still find comments with the highest efficiency. However, the study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.\n\nTraining\nIn addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both models with a joint objective function: DISPLAYFORM0\nwhere INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of the Seq2Seq model), and INLINEFORM2 is a hyper-parameter that balances the two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 and paired data INLINEFORM4 .\n\nDatasets\nWe select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model.
The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with fewer than 30 words of content and those with fewer than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.\n\nImplementation Details\nThe hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, and only keep the 30,000 most frequent words in the vocabulary. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the KL term from 0 to 1 after each epoch.\n\nAnalysis and Discussion\nWe analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different numbers of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. This demonstrates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.\nAlthough our proposed model achieves better performance than previous models, two questions still remain: why our model can outperform them, and how to further improve the performance. To address these queries, we perform error analysis to analyze the error types of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of four types of comments as described in the above retrieval evaluation setting: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of top-1 comments.\nFigure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is that the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set.
The correct comments often contain new information and language that differs from the training set, so they do not obtain a high score from S2S.\nIR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between the plausible comments and the correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more topically relevant to the articles, get lower scores than the plausible comments, which are more lexically similar to the articles. With the help of our proposed model, Proposed+IR achieves the best performance, and discriminates between the plausible comments and the correct comments more accurately. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding predicting the plausible comments.\n\nTopic Model and Variational Auto-Encoder\nTopic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of documents is Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 have been proposed and applied to topic models.\nKingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable, unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.\n\nTable 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)\n\nFigure 1: The performance of the supervised model and the semi-supervised model trained on different paired data sizes.\n\nFigure 2: Error types of comments generated by different models.\n\nInquiry: How many articles did they have?",
"role": "user"
},
{
"content": "{\"answer\": [\"198,112\"], \"evidence\": [\"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users\\u0027 comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.\"]}",
"role": "assistant"
}
] |
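Editor's note: the qasper record above describes scoring article-comment matches with a plain dot product, so that retrieval reduces to Maximum Inner Product Search. The sketch below is a minimal illustration of that scoring step only, not the paper's implementation: the encoder is faked with random vectors, and the names, dimensions, and pool size are all invented for the example. At real scale the same scores would be handed to a MIPS library instead of a full argsort.

import numpy as np

def score_comments(article_vec: np.ndarray, comment_matrix: np.ndarray) -> np.ndarray:
    # s_i = <h_article, h_comment_i>: one matrix-vector product scores the whole pool,
    # and it adds no extra parameters, matching the motivation given in the record.
    return comment_matrix @ article_vec

rng = np.random.default_rng(0)
article = rng.normal(size=128)          # stand-in for an encoded title + body
pool = rng.normal(size=(1000, 128))     # stand-in for 1,000 encoded candidate comments
top5 = np.argsort(score_comments(article, pool))[::-1][:5]
print(top5)  # indices of the five best-matching comments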
science.data_reco_mcq_sc | science.data_reco_mcq_sc.12999 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: Given a query image, the image retrieval task aims to retrieve the images that are semantically similar to the\n\nKeywords: image retrieval images\n\nDataset Information:\nFashion IQ: Fashion IQ support and advance research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images together with side-information consisting of real-world product descriptions and derived visual attribute labels for these images.\nComplexWebQuestions: ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set of complex questions in natural language, and can be used in multiple ways: By interacting with a search engine; As a reading comprehension task: the authors release 12,725,989 web snippets that are relevant for the questions, and were collected during the development of their model; As a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer.\nOxford5k: Oxford5K is the Oxford Buildings Dataset, which contains 5062 images collected from Flickr. It offers a set of 55 queries for 11 landmark buildings, five for each landmark.\nSemArt: SemArt is a multi-modal dataset for semantic art understanding. SemArt is a collection of fine-art painting images in which each image is associated to a number of attributes and a textual artistic comment, such as those that appear in art catalogues or museum collections.\n\nAvailable Datasets: A: Fashion IQ B: ComplexWebQuestions C: Oxford5k D: SemArt",
"role": "user"
},
{
"content": "Oxford5k",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.249 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffect of surgical technique in subtotal and bilateral thyroidectomy on risk of postoperative parathyroid insufficiency development -- our experience . One of the postoperative complications after strumectomy is hypoparathyroidism . Therefore , the purpose of our study was to evaluate the effect of the surgical technique on the development of parathyroid gland insufficiency in our group of patients . Subtotal , bilateral strumectomies were performed according to Rothmund 's suggestions . Randomized controlled trial was performed in two groups . In the first group of 19 patients , main trunk of the inferior thyroid artery was ligated and in the second one consisting of 18 patients , only the branches of this artery were ligated . Total calcium and PTH levels were evaluated pre- and postoperatively . Based on the biochemical and clinical data , no statistically significant differences in the development of postoperative hypoparathyroidism in relation to performed surgical techniques were observed .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"surgical techniques\", \"subtotal and bilateral thyroidectomy\", \"main trunk of the inferior thyroid artery was ligated\", \"bilateral strumectomies\", \"surgical technique\", \"only the branches of this artery were ligated .\"], \"Outcome\": [\"development of postoperative hypoparathyroidism\", \"development of parathyroid gland insufficiency\", \"Total calcium and PTH levels\"], \"Participant\": [\"18 patients\", \"parathyroid gland insufficiency\", \"19 patients\"]}",
"role": "assistant"
}
] |
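Editor's note: the pico_ner record above specifies a fixed JSON output shape (three categories, each a list of strings, empty lists allowed). As a hedged illustration only, the helper below shows how such an answer could be parsed and normalized; the function name and error handling are invented for this sketch and are not part of the dataset.

import json

PICO_KEYS = ("Participant", "Intervention", "Outcome")

def normalize_pico(raw: str) -> dict:
    # Parse the model's answer and enforce the schema from the prompt:
    # every category present, every value a list of strings.
    parsed = json.loads(raw)
    for key in PICO_KEYS:
        values = parsed.setdefault(key, [])  # a missing category becomes an empty list
        if not isinstance(values, list) or not all(isinstance(v, str) for v in values):
            raise ValueError(f"{key} must be a list of strings")
    return parsed

print(normalize_pico('{"Participant": ["19 patients"], "Outcome": []}'))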
science.covid_deepset_qa | science.covid_deepset_qa.1043 | [
{
"content": "Analyze the following excerpt from a biomedical text. Identify the relevant information needed to respond to the inquiry. Provide only the necessary details in your answer.\n\nContext: Vaccines against circulating influenza strains are available and updated annually, but many issues are still present, including low efficacy in the populations at greatest risk of complications from influenza virus infection, i.e., the young and elderly [8, 9] . Despite increasing vaccination rates, influenza-related hospitalizations are increasing [8, 10] , and substantial drug resistance has developed to two of the four currently approved anti-viral drugs [11, 12] . While adjuvants have the potential to improve efficacy and availability of current inactivated vaccines, live-attenuated and virus-vectored vaccines are still considered one of the best options for the induction of broad and efficacious immunity to the influenza virus [13] . The general types of influenza vaccines available in the United States are trivalent inactivated influenza vaccine (TIV), quadrivalent influenza vaccine (QIV), and live attenuated influenza vaccine (LAIV; in trivalent and quadrivalent forms). There are three types of inactivated vaccines that include whole virus inactivated, split virus inactivated, and subunit vaccines. In split virus vaccines, the virus is disrupted by a detergent. In subunit vaccines, HA and NA have been further purified by removal of other viral components. TIV is administered intramuscularly and contains three or four inactivated viruses, i.e., two type A strains (H1 and H3) and one or two type B strains. TIV efficacy is measured by induction of humoral responses to the hemagglutinin (HA) protein, the major surface and attachment glycoprotein on influenza. Serum antibody responses to HA are measured by the hemagglutination-inhibition (HI) assay, and the strain-specific HI titer is considered the gold-standard correlate of immunity to influenza where a four-fold increase in titer post-vaccination, or a HI titer of ≥1:40 is considered protective [4, 14] . Protection against clinical disease is mainly conferred by serum antibodies; however, mucosal IgA antibodies also may contribute to resistance against infection. Split virus inactivated vaccines can induce neuraminidase (NA)-specific antibody responses [15] [16] [17] , and anti-NA antibodies have been associated with protection from infection in humans [18] [19] [20] [21] [22] . Currently, NA-specific antibody responses are not considered a correlate of protection [14] . LAIV is administered as a nasal spray and contains the same three or four influenza virus strains as inactivated vaccines but on an attenuated vaccine backbone [4] . LAIV are temperature-sensitive and cold-adapted so they do not replicate effectively at core body temperature, but replicate in the mucosa of the nasopharynx [23] . LAIV immunization induces serum antibody responses, mucosal antibody responses (IgA), and T cell responses. While robust serum antibody and nasal wash (mucosal) antibody responses are associated with protection from infection, other immune responses, such as CD8 + cytotoxic lymphocyte (CTL) responses may contribute to protection and there is not a clear correlate of immunity for LAIV [4, 14, 24] . Currently licensed influenza virus vaccines suffer from a number of issues. The inactivated vaccines rely on specific antibody responses to the HA, and to a lesser extent NA proteins for protection. 
The immunodominant portions of the HA and NA molecules undergo a constant process of antigenic drift, a natural accumulation of mutations, enabling virus evasion from immunity [9, 25] . Thus, the circulating influenza A and B strains are reviewed annually for antigenic match with current vaccines. Replacement of vaccine strains may occur regularly, and annual vaccination is recommended to assure protection [4, 26, 27] . For the northern hemisphere, vaccine strain selection occurs in February and then manufacturers begin production, taking at least six months to produce the millions of vaccine doses required for the fall [27] . If the prediction is imperfect, or if manufacturers have issues with vaccine production, vaccine efficacy or availability can be compromised [28] . LAIV is not recommended for all populations; however, it is generally considered to be as effective as inactivated vaccines and may be more efficacious in children [4, 9, 24] . While LAIV relies on antigenic match and the HA and NA antigens are replaced on the same schedule as the TIV [4, 9] , there is some suggestion that LAIV may induce broader protection than TIV due to the diversity of the immune response, consistent with inducing virus-neutralizing serum and mucosal antibodies, as well as broadly reactive T cell responses [9, 23, 29] . While overall both TIV and LAIV are considered safe and effective, there is a recognized need for improved seasonal influenza vaccines [26] . Moreover, improved understanding of immunity to conserved influenza virus antigens has raised the possibility of a universal vaccine, and these universal antigens will likely require novel vaccines for effective delivery [30] [31] [32] . Virus-vectored vaccines share many of the advantages of LAIV, as well as those unique to the vectors. Recombinant DNA systems exist that allow ready manipulation and modification of the vector genome. This in turn enables modification of the vectors to attenuate the virus or enhance immunogenicity, in addition to adding and manipulating the influenza virus antigens. Many of these vectors have been extensively studied or used as vaccines against wild type forms of the virus. Finally, each of these vaccine vectors is either replication-defective or causes a self-limiting infection, although like LAIV, safety in immunocompromised individuals still remains a concern [4, 13, [33] [34] [35] . Table 1 summarizes the benefits and concerns of each of the virus-vectored vaccines discussed here. There are 53 serotypes of adenovirus, many of which have been explored as vaccine vectors. A live adenovirus vaccine containing serotypes 4 and 7 has been in use by the military for decades, suggesting adenoviruses may be safe for widespread vaccine use [36] . However, safety concerns have led the majority of adenovirus-based vaccine development to focus on replication-defective vectors. Adenovirus 5 (Ad5) is the most-studied serotype, having been tested for gene delivery and anti-cancer agents, as well as for infectious disease vaccines. Adenovirus vectors are attractive as vaccine vectors because their genome is very stable and there are a variety of recombinant systems available which can accommodate up to 10 kb of recombinant genetic material [37] . Adenovirus is a non-enveloped virus which is relatively stable and can be formulated for long-term storage at 4 °C, or even storage up to six months at room temperature [33] .
Adenovirus vaccines can be grown to high titers, exceeding 10^10 plaque forming units (PFU) per mL when cultured on 293 or PER.C6 cells [38] , and the virus can be purified by simple methods [39] . Adenovirus vaccines can also be delivered via multiple routes, including intramuscular injection, subcutaneous injection, intradermal injection, oral delivery using a protective capsule, and by intranasal delivery. Importantly, the latter two delivery methods induce robust mucosal immune responses and may bypass preexisting vector immunity [33] . Even replication-defective adenovirus vectors are naturally immunostimulatory and effective adjuvants to the recombinant antigen being delivered. Adenovirus has been extensively studied as a vaccine vector for human disease. The first report using adenovirus as a vaccine vector for influenza demonstrated immunogenicity of recombinant adenovirus 5 (rAd5) expressing the HA of a swine influenza virus, A/Swine/Iowa/1999 (H3N2). Intramuscular immunization of mice with this construct induced robust neutralizing antibody responses and protected mice from challenge with a heterologous virus, A/Hong Kong/1/1968 (H3N2) [40] . Replication defective rAd5 vaccines expressing influenza HA have also been tested in humans. A rAd5-HA expressing the HA from A/Puerto Rico/8/1934 (H1N1; PR8) was delivered to humans epicutaneously or intranasally and assayed for safety and immunogenicity. The vaccine was well tolerated and induced seroconversion, with the intranasal administration having a higher conversion rate and higher geometric mean HI titers [41] . While clinical trials with rAd vectors have overall been successful, demonstrating safety and some level of efficacy, rAd5 as a vector has been negatively overshadowed by two clinical trial failures. The first trial was a gene therapy examination where high-dose intravenous delivery of an Ad vector resulted in the death of an 18-year-old male [42, 43] . The second clinical failure was using an Ad5-vectored HIV vaccine being tested as a part of a Step Study, a phase 2B clinical trial. In this study, individuals were vaccinated with the Ad5 vaccine vector expressing HIV-1 gag, pol, and nef genes. The vaccine induced HIV-specific T cell responses; however, the study was stopped after interim analysis suggested the vaccine did not achieve efficacy and individuals with high preexisting Ad5 antibody titers might have an increased risk of acquiring HIV-1 [44] [45] [46] . Subsequently, the rAd5 vaccine-associated risk was confirmed [47] . While these two instances do not suggest Ad-vector vaccines are unsafe or inefficacious, the umbra cast by these clinical trials has affected interest in all adenovirus vaccines, but interest still remains. Immunization with adenovirus vectors induces potent cellular and humoral immune responses that are initiated through toll-like receptor-dependent and independent pathways which induce robust pro-inflammatory cytokine responses. Recombinant Ad vaccines expressing HA antigens from pandemic H1N1 (pH1N1), H5 and H7 highly pathogenic avian influenza (HPAI) virus (HPAIV), and H9 avian influenza viruses have been tested for efficacy in a number of animal models, including chickens, mice, and ferrets, and been shown to be efficacious and provide protection from challenge [48, 49] . Several rAd5 vectors have been explored for delivery of non-HA antigens, influenza nucleoprotein (NP) and matrix 2 (M2) protein [29, [50] [51] [52] .
The efficacy of non-HA antigens has led to their inclusion with HA-based vaccines to improve immunogenicity and broaden the breadth of both humoral and cellular immunity [53, 54] . However, as both CD8 + T cell and neutralizing antibody responses are generated by the vector and vaccine antigens, immunological memory to these components can reduce efficacy and limit repeated use [48] . One drawback of an Ad5 vector is the potential for preexisting immunity, so alternative adenovirus serotypes have been explored as vectors, particularly non-human and uncommon human serotypes. Non-human adenovirus vectors include those from non-human primates (NHP), dogs, sheep, pigs, cows, birds and others [48, 55] . These vectors can infect a variety of cell types, but are generally attenuated in humans, avoiding concerns of preexisting immunity. Swine, NHP and bovine adenoviruses expressing H5 HA antigens have been shown to induce immunity comparable to human rAd5-H5 vaccines [33, 56] . Recombinant, replication-defective adenoviruses from low-prevalence serotypes have also been shown to be efficacious. Low prevalence serotypes such as adenovirus types 3, 7, 11, and 35 can evade anti-Ad5 immune responses while maintaining effective antigen delivery and immunogenicity [48, 57] . Prime-boost strategies, using DNA or protein immunization in conjunction with an adenovirus vaccine booster immunization, have also been explored as a means to avoid preexisting immunity [52] . Adeno-associated viruses (AAV) were first explored as gene therapy vectors. Like rAd vectors, rAAV have broad tropism, infecting a variety of hosts, tissues, and proliferating and non-proliferating cell types [58] . AAVs had generally not been considered as vaccine vectors because they were widely considered to be poorly immunogenic. A seminal study using AAV-2 to express an HSV-2 glycoprotein showed this virus vaccine vector effectively induced potent CD8 + T cell and serum antibody responses, thereby opening the door to other rAAV vaccine-associated studies [59, 60] . AAV vector systems have a number of engaging properties. The wild type viruses are non-pathogenic and replication incompetent in humans and the recombinant AAV vector systems are even further attenuated [61] . As members of the parvovirus family, AAVs are small non-enveloped viruses that are stable and amenable to long-term storage without a cold chain. While there is limited preexisting immunity, availability of non-human strains as vaccine candidates eliminates these concerns. Modifications to the vector have increased immunogenicity, as well [60] . There are limited studies using AAVs as vaccine vectors for influenza. An AAV expressing an HA antigen was first shown to induce protective immunity in 2001 [62] . Later, a hybrid AAV derived from two non-human primate isolates (AAVrh32.33) was used to express influenza NP and protect against PR8 challenge in mice [63] . Most recently, following the 2009 H1N1 influenza virus pandemic, rAAV vectors were generated expressing the HA, NP and matrix 1 (M1) proteins of A/Mexico/4603/2009 (pH1N1), and in murine immunization and challenge studies, the rAAV-HA and rAAV-NP were shown to be protective; however, mice vaccinated with rAAV-HA + NP + M1 had the most robust protection. Also, mice vaccinated with rAAV-HA + rAAV-NP + rAAV-M1 were also partially protected against heterologous (PR8, H1N1) challenge [63] . Most recently, an AAV vector was used to deliver passive immunity to influenza [64, 65] .
In these studies, AAV (AAV8 and AAV9) was used to deliver an antibody transgene encoding a broadly cross-protective anti-influenza monoclonal antibody for in vivo expression. Both intramuscular and intranasal delivery of the AAVs was shown to protect against a number of influenza virus challenges in mice and ferrets, including H1N1 and H5N1 viruses [64, 65] .\n\nInquiry: What is one of the issues with present vaccines?",
"role": "user"
},
{
"content": "low efficacy in the populations at greatest risk of complications from influenza virus infection, i.e., the young and elderly",
"role": "assistant"
}
] |
science.bioasq_factoid_qa | science.bioasq_factoid_qa.224 | [
{
"content": "Given the following biomedical passages, answer the question succinctly. Ensure your response is precise and limited to the answer only.\n\nQuestion: What is the presumed key event in Fanconi anemia pathogenesis?\n\nPassages: A key event in FA pathway activation is the monoubiquitylation of the FA complementation group I (FANCI)-FANCD2 (ID) complex by FA complementation group L (FANCL), an E3 ubiquitin ligase\nMonoubiquitination of the Fanconi anaemia protein FANCD2 is a key event leading to repair of interstrand cross-links\nHere we show that the protein defective in individuals with Fanconi anemia belonging to complementation group B is an essential component of the nuclear protein 'core complex' responsible for monoubiquitination of FANCD2, a key event in the DNA-damage response pathway associated with Fanconi anemia and BRCA\nFanconi anemia (FA) is characterized by congenital abnormalities, bone marrow failure, chromosome fragility, and cancer susceptibility. Eight FA-associated genes have been identified so far, the products of which function in the FA/BRCA pathway. A key event in the pathway is the monoubiquitination of the FANCD2 protein, which depends on a multiprotein FA core complex\nHere we show that the protein defective in individuals with Fanconi anemia belonging to complementation group B is an essential component of the nuclear protein 'core complex' responsible for monoubiquitination of FANCD2, a key event in the DNA-damage response pathway associated with Fanconi anemia and BRCA.\nThis event also causes phosphorylation of the Fanconi anemia (FA) protein FANCI, triggering its monoubiquitination of the key DNA repair factor FANCD2 by the FA core E3 ligase complex, thereby promoting this central pathway of DNA repair which permits replication to be restarted.\nHere we show that the protein defective in individuals with Fanconi anemia belonging to complementation group B is an essential component of the nuclear protein 'core complex' responsible for monoubiquitination of FANCD2, a key event in the DNA-damage response pathway associated with Fanconi anemia and BRCA.\nA key event in the pathway is the monoubiquitination of the FANCD2 protein, which depends on a multiprotein FA core complex.\nThe key event of the FA pathway is dependent on an eight-protein core complex (CC), required for the monoubiquitination of each member of the FANCD2-FANCI complex.\nFA proteins (FancA/C/E/F/G/L) form a multiprotein complex, which is responsible for DNA damage-induced FancD2 monoubiquitination, a key event for cellular resistance to DNA damage.\nA key event in FA pathway activation is the monoubiquitylation of the FA complementation group I (FANCI)-FANCD2 (ID) complex by FA complementation group L (FANCL), an E3 ubiquitin ligase.\nA key event in the pathway is the monoubiquitination of the FANCD2 protein, which depends on a multiprotein FA core complex\nThis event also causes phosphorylation of the Fanconi anemia (FA) protein FANCI, triggering its monoubiquitination of the key DNA repair factor FANCD2 by the FA core E3 ligase complex, thereby promoting this central pathway of DNA repair which permits replication to be restarted",
"role": "user"
},
{
"content": "FANCD2 monoubiquitination",
"role": "assistant"
}
] |
science.qasa_abstractive_qa | science.qasa_abstractive_qa.236 | [
{
"content": "Review the following segment from a research document in AI. Answer the subsequent query based solely on the provided information. Limit your response to the answer itself.\n\nResearch Segment: In this experiment, we show our Residual Attention Network enjoys noise resistant property on CIFAR-10 dataset following the setting of paper [31].The confusion matrix Q in our experiment is set as follows:Q=\\left(\\begin{matrix}r&\\frac{1-r}{9}&\\cdots&\\frac{1-r}{9}\\\\\\frac{1-r}{9}&r&\\cdots&\\frac{1-r}{9}\\\\\\vdots&\\vdots&\\ddots&\\vdots\\\\\\frac{1-r}{9}&\\frac{1-r}{9}&\\cdots&r\\\\\\end{matrix}\\right)_{10\\times 10}(7) where r denotes the clean label ratio for the whole dataset.\n\nInquiry: What does the confusion matrix Q in the authors noisy label robustness experiment refers to?",
"role": "user"
},
{
"content": "The confusion matrix Q shows how many images were correctly labeled and how many images were purposely incorrectly labeled for the noise experiment.",
"role": "assistant"
}
] |
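Editor's note: Eq. (7) in the qasa record above is easy to sanity-check numerically. The sketch below builds Q exactly as defined (r on the diagonal, (1-r)/9 elsewhere for the 10 CIFAR-10 classes); the function name and the example value r = 0.9 are illustrative, not taken from the paper.

import numpy as np

def label_confusion(r: float, k: int = 10) -> np.ndarray:
    # Q[i][i] = r (probability a label stays clean); Q[i][j] = (1 - r) / (k - 1) otherwise,
    # so each row is a probability distribution over the k classes.
    q = np.full((k, k), (1.0 - r) / (k - 1))
    np.fill_diagonal(q, r)
    return q

Q = label_confusion(r=0.9)
assert np.allclose(Q.sum(axis=1), 1.0)  # every row sums to 1, as Eq. (7) requires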
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.44193 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nPhotocatalytic reverse polarity Povarov reaction\n\nContent Extracts:\nIntroduction\n<p>Photoredox catalysis, through its ability to generate reactive radical intermediates under mild yet highly tunable reaction conditions and experimental set-ups, 1 offers great potential for new reaction discovery. 2 Pioneering developments have significantly expanded the synthetic toolbox 3 and have been applied in the construction of building blocks 4 and natural products, 5 and also in the derivatization of biologically relevant molecules 6 and macromolecular peptide structures. 7 Photocatalytic approaches to the synthesis and functionalisation of amines and their derivativeswith direct impact on medicinal chemistry programmes 8 have been particularly prevalent. 9 Recent efforts have focussed on the photocatalytic generation and trapping of a-amino radicals derived from a range of starting materials. 10 Noteworthy examples include single electron transfer (SET) decarboxylation of amino acid derivatives, 11 direct hydrogen atom transfer (HAT) of aliphatic amines, 12 or via the SET reduction of imine derivatives. 13 The resulting a-amino radicals have been shown to subsequently engage in radical-radical coupling reactions, 14 partake in transition metal catalysed cross coupling processes or react with a range of electrophilic species. 15 Building on Knowles' seminal studies on proton coupled electron transfer (PCET) for the generation of a-heteroatom radicals, 13a our group recently reported the photocatalytic reductive coupling of imines with allyl sulfone electrophiles using Eosin Y as the photocatalyst, and Hantzsch ester as the stoichiometric reductant, under green LED light irradiation. 15b Contemporaneously Chen, 15a and later Ngai, 15c reported similar reactivity of in situ generated a-amino radicals from imines. Such a reversal of the natural imine polarity via the PCET manifold establishes a new umpolung approach for the synthesis of a-functionalised amines (Scheme 1a), and creates many opportunities for new reaction discovery. In a continuation of our programme, we sought to explore and expand the synthetic utility of photocatalytic imine umpolung chemistry. Our previous work demonstrated that the a-allylated amine product was the outcome of the reaction with Michael acceptors bearing a sulfone leaving group. 15b With prospective coupling partners not possessing a suitable leaving group it was unclear as to what the outcome of the reaction would be. Following nucleophilic addition, the intermediate g-amino radical could either gain a further electron and proton to form the Michael addition product, 15a or potentially cyclize onto the pendant electron rich aromatic ring and subsequently rearomatize (Scheme 1a). Both pathways would be synthetically useful but the latter would allow the direct construction of the biologically relevant tetrahydroquinoline scaffold, examples of which have shown to possess biological activity in g-secretase inhibitors (Scheme 1c) 16 and androgen agonists/antagonists. 17 Previous studies by Hirano and Miura on the copper catalysed redox coupling of N,N-dimethylaniline and maleimide derivatives provided some precedent for the cyclisation pathway (Scheme 1b) 18 and encouraged us to investigate along these lines of enquiry. 
Herein, we wish to report our ndings.</p><p>Preliminary studies were carried out using uorine tagged aldimine (1a), phenyl vinyl sulfone (2a, 3 equiv.) as the Michael acceptor, (Ir[dF(CF 3 )ppy] 2 (dtbbpy))PF 6 ([Ir], 1 mol%) as photocatalyst and the commercial Hantzsch ester (HE1, 1.2 equiv.) as a stoichiometric reductant, in DMSO, under blue LED light irradiation. Pleasingly, good reactivity was identied early and importantly cyclised product 3aa reverse polarity Povarov product 19was formed as the sole coupling product in the reaction mixture (Table 1, entry 1) as a 7 : 1 mixture of diastereomers. Despite the typical use of Hantzsch esters as a superstoichiometric reductive quencher in photoredox coupling reactions of imines, 20,21 our redoxneutral process beneted from the use of 60 mol% HE1. This was largely due to suppression of over-reduction and aza-pinacol side products (entry 2) and the use of substituted Hantzsch esters (HE4 & HE6, entries 3 & 4) further suppressed their formation. 16,22 Further reduction of the Hantzsch ester loading to 10 mol% led to lower reaction conversions (entry 5) using blue LED irradiation, however by switching to a commercial photoreactor, full conversion was achieved in 16 hours and the product was isolated in excellent yield (90%) as a 10 : 1 mixture of diastereomers. The reaction methodology was also amenable to a reduction in equivalents of vinyl sulfone coupling partner without major impact on reaction efficiency (entry 7) and reactivity was completely lost on removal of Hantzsch ester (entry 8), or iridium photocatalyst (entry 9), and in the absence of light (entry 10).</p><p>With optimal conditions established, we looked to probe the scope of the photocatalytic reverse polarity Povarov reaction (Scheme 2a). Initially, the tolerance to variation on the aniline portion of the aldimine was investigated. Pleasingly, the reaction proceeded well with a number of substituents in the 4-position (3b-d). Good reaction efficiency and excellent diastereoselectivity towards the trans diastereoisomeric product was noted in all cases. 23 This is in contrast to Brønsted or Lewis acid catalysed Povarov reactions in which cis-congured stereoisomeric products oen predominate. 19 When variations to the aromatic ring of the aldehyde moiety were explored, electronics were found to play a pivotal role, with electron rich and neutral arenes giving excellent yields (3e-g, 3j-n) whereas electron poor aromatics led to longer reaction times (3h) and even complete nullication of reactivity (3i). 3-Fluoro and 2-uoro-substituted aldimines were tolerated in this chemistry however longer reaction times were required and yields diminished with proximity to the imine C]N bond (3p-r). A substituted pyridyl substrate was also shown to be effective in the reaction mixture leading to an excellent yield of product (3s). Disappointingly, ketimines were not found to be viable substrates for this reaction (3t) even aer prolonged reaction times. 4-Chlorophenyl vinyl sulfone was found to be an excellent electrophile (3ab). Similarly, maleimide and N-phenylmaleimide electrophiles were shown to be excellent coupling partners in the new cyclisation methodology, however diastereoselectivity was respectively reduced or absent in the reaction products (3ac-ae). 
The chemistry was extendable to a three component one-pot process (Scheme 2b) and demonstrated the expedient construction of complex tetrahydroquinoline structures using simple building blocks, as well as its potential application to modular library synthesis. An interesting observation made throughout the optimisation studies was that aer the imine substrate 1a was consumed in the reaction vessel, product diastereomeric ratio continued to increase, suggesting that epimerization was occurring under the reaction conditions. To investigate this further, Povarov product 3a (dr 10 : 1) was resubmitted to the reaction conditions. Pleasingly an increase to 18 : 1 was observed and mass balance was maintained (Scheme 3a). As this epimerization does not take place without the iridium catalyst, or in the absence of light, a photoredox based mechanism for the formation of a planar a-amino radical is proposed. 24 Monitoring the reaction over time from initiation using 19 F NMR spectroscopy, demonstrated that the formation of 3a had an intrinsic kinetic dr of $10 : 1 in favor of the trans diastereoisomer. Then from the point of $90% conversion, the diastereoselectivity increased gradually up to 20 : 1 aer 48 hours (Scheme 3b).</p><p>Recent investigations have shown that imines (E 0 1/2 ¼ À1.90 V vs. SCE in CH 3 CN), 25 can be reduced more readily using proton-coupled electron transfer. Chen's, 15a ours, 15b and Ngai's 15c previous investigations have demonstrated that partially oxidized Hantzsch esters are acidic enough to be used as proton donors in PCET mechanisms. 26 To probe this further, the reaction was repeated using the reduced aminewhich is a by-product of the reactionas starting material (4a, Scheme 4a). Only trace amounts of product were observed, suggesting the reverse polarity Povarov reaction does not proceed via initial reduction of the imine. This reinforces the proposal of PCET construction of the key a-amino radical.</p><p>As only sub stoichiometric quantities of the Hantzsch ester are needed for full conversion, its role is likely as an initiator in a self-propagating mechanism. We were intrigued to identify whether either the cyclised product 3a or an intermediate could act as a reductive quencher to propagate further product formation. To this end, we carried out a control experiment where HE4 (10 mol%) was replaced with 3a (Scheme 4b). Importantly, no product formation was observed suggesting that an intermediatenot the product of the reactionis capable of propagating this reaction mechanism. Kinetic isotope effect (KIE) studies demonstrated that the isotopic sensitivity of selectivity determining steps has a value of $1, suggesting that the substrates are kinetically committed toward product formation prior to any C-H cleavage events (Scheme 5c). 27 From these insights, we propose that PCET would enable Ir [dF(CF 3 )ppy] 2 (dtbbpy)PF 6 (E 0 1/2 ¼ À1.37 V vs. SCE in CH 3 CN) 1a to reduce the imine readily to form the key a-amino radical (Scheme 5) which can react with the phenyl vinyl sulfone in a step-wise radical cyclization to give the stabilised Povarov radical intermediate A. As this methodology only requires substoichiometric quantities of the substituted Hantzsch ester (HE4), we suggest this Povarov radical intermediate A can lose a proton (kinetically facile) to form radical anion intermediate B, in a base assisted homolytic aromatic substitution-type mechanism. 28 Using Yoon's method, we calculated the quantum yield of the reaction between 1a and 2a, to be 18. 
29 This suggests that a radical chain process is in operation, with the strongly reducing radical anion intermediate able to reduce fresh imine substrate with concomitant protonation, 30 thus forming the product (3a) and regenerating the key a-amino radical.</p>\n\nConclusions\n<p>In conclusion we have developed a new photocatalytic reverse polarity Povarov reaction to construct decorated tetrahydroquinolines in high yield and diastereoselectivity. This polarity reversal was postulated to stem from an a-amino radical formed via the PCET of imine derivatives. 10 mol% of a Hantzsch ester was found to be optimal as a reductive initiator, a feature which has not previously been disclosed in photoredox catalysis. Further investigations are ongoing to establish further coupling partners and scaffolds for this reverse polarity platform for the synthesis of a-functionalised amines.</p>",
"role": "user"
},
{
"content": "A visible light mediated iridium photocatalysed reverse polarity Povarov reaction of aryl imines and electron deficient alkenes is described. Operating via a putative nucleophilic a-amino radical, generated by a proton coupled electron transfer process, addition to a range of conjugated electron deficient alkene substrates affords substituted tetrahydroquinoline products in high yields and with typically good to excellent diastereoselectivity in favor of the trans diastereoisomer. Sub-stoichiometric quantities of Hantzsch ester were found to be key to initiate the overall redox-neutral, free radical cyclization cascade. This new reaction complements existing two electron Lewis acid mediated variants and expands the capabilities of imine umpolung chemistry to synthetically relevant cyclisation methodology.",
"role": "assistant"
}
] |
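Editor's note: the chemsum record above reports a quantum yield of 18 as evidence for a radical chain. The arithmetic behind that claim is simply product formed per photon absorbed; the numbers below are invented placeholders (the record does not give the raw actinometry data), so this is only a worked illustration of why a value far above 1 implies chain propagation.

def quantum_yield(mol_product: float, mol_photons: float) -> float:
    # Phi = molecules of product formed per photon absorbed; Phi > 1 means each
    # photoinitiation event drives multiple turnovers, i.e. a chain mechanism.
    return mol_product / mol_photons

phi = quantum_yield(mol_product=1.8e-5, mol_photons=1.0e-6)  # hypothetical inputs
print(f"Phi = {phi:.0f}")  # Phi = 18, matching the order of magnitude in the record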
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.9979 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nwithin 4 days , 5 patients with skin lesions suggestive of an opv infection were reported to the infectious disease task force at the bavarian health and food safety authority .\ninfected patients were from 2 unrelated families living in 2 different counties in the greater munich area , germany .\nthe families had bought 1 and 2 rats , respectively , from the same pet shop on december 15 and december 17 , 2008 .\nsource tracing showed that the pet shop owner had sold a litter of 8 rats to 7 different households in the greater munich area .\nthe pet shop owner had purchased the litter from a bavarian rat breeder 7 days before the last rat in the litter was sold .\nno symptoms of opv infection were reported in the rats or any another animal in the pet shop .\nthe pet shop owner denied purchasing any animals from abroad that could have been related to the 2003 us monkeypox outbreak ( 7 ) , although he did acknowledge owning another breeding facility in the czech republic .\ninspection of the breeding facility in bavaria found 4 rats with crusts suspicious for opv infection .\nmice , hamsters , rabbits , and degus ( octodon degus ) were also bred in the facility , but none had clinical signs of opv .\na total of 31 rats from the facility were tested for opv infection by oral swabs and serology .\naccording to their owners , all pet rats were asymptomatic when purchased , but 2 rats ( 1 in each family with a human opv infection ) died after 9 and 14 days , respectively .\none rat had distinct skin lesions on its extremities , mouth , and nose ( figure 1 , panel a ) ; the other had only 2 very small lesions on its front leg and nose .\ntwo additional rats from the litter in 2 other households were euthanized due to clinical suspicion of opv infection ; 3 rats were assessed as healthy by their owners .\na ) rat named shiva ( strain named after this rat ) with lesions on the right hind limb ; it died 1 day later .\nb ) neck lesions of a girl without previous vaccinia virus ( vv ) vaccination .\nc ) neck lesion of the girl s grandmother with a history of vv vaccination .\n2 . in households 1 and 2 ( table ) with human cases of infection , 2 and 3 persons , respectively , reported only direct skin contact with their pet rats since the first day of purchase .\nnotably , the onset and severity of symptoms were apparently associated with a patient s vv vaccination status : 2 girls ( patients 2 and 5 , each 16 years of age with no history of vv vaccination ) had multiple lesions on the neck , chest , and abdomen ( figure 1 , panel b ) accompanied by fever and local lymphadenopathy .\nin contrast , the incubation period for 2 vv - vaccinated mothers ( patients 1 and 3 , 42 and 40 years of age , respectively ) and for the vv - vaccinated grandmother of 1 of the girls ( patient 4 , 60 years of age ) was > 7 days .\nall showed less severe symptoms ( figure 1 , panel c ) without fever or lymphadenopathy and only 1 small skin lesion on the neck or chest . *\nall obtained hemagglutinin open reading frames were 924 bp , and the respective sequences were 100% identical to each other ( genbank accession no .\nrat 8 from the outbreak litter belonging to household 7 is missing because the rat owner could not be identified .\nem , electron microscopy ; nd , not done ; na , materials not available . for orthopoxvirus spp . 
in household 3\nafter 35 days , skin lesions developed in all of her rats , including the initially asymptomatic index rat .\n, the kidney transplant patient without previous vv vaccination did not develop signs or symptoms suggestive of cpxv infection . nevertheless , we collected a blood sample and swabs from her throat and a recent rat - bite finger wound .\nvarious specimens ( skin biopsies , crusts , oral swabs , serum , and whole blood ) obtained from 5 patients and from rats from 3 households and 31 other rats ( 939 ) from the local breeding facility in bavaria were sent to the bavarian health and food safety authority ( table ) .\nskin biopsy and crust specimens were homogenized and inspected for typical opv - like particles by using by electron microscopy .\ndna from all samples was extracted using the qiaamp dna mini kit ( qiagen , hilden , germany ) according to the manufacturer s instructions .\nfor opv dna detection , the realart orthopoxvirus lc kit ( qiagen ) was used . for species identification\n, the products of a second pcr , spanning the entire open reading frame of the hemagglutinin gene ( 8,9 ) , were sequenced .\ndatasets were edited and aligned using bioedit ( 10 ) . blast search ( www.ncbi.nlm.nih.gov/blast/blast.cgi ) was performed to confirm species identification of the isolated strain as well as similarity with published cpxv strains .\na phylogenetic tree was constructed using the maximum parsimony method with the phylogeny inference package version 3.68 ( http://evolution.genetics.washington.edu/phylip.html ) with 100 bootstraps ; the tree was drawn with treeview version 1.6.6 ( http://taxonomy.zoology.gla.ac.uk/rod/treeview.html ) ( figure 2 ) .\nopv - specific serum antibody titers were determined using an immunfluorescence test based on vv - infected cells and either an antihuman or an antirat immunoglobulin g fluorescein - labeled conjugate ( dako , hamburg , germany ) .\nphylogenetic tree of the isolated cowpox virus ( cpxv ) shiva strain ( in boldface ; named after pet rat shown in figure 1 , panel a ; genbank accession no .\nfj654467 ) , constructed by the maximum - parsimony method based on the partial sequences method based on the hemagglutinin ( ha ) gene , unrooted . 
blast search ( www.ncbi.nlm.nih.gov/blast/blast.cgi ) confirmed the identification of this strain as a cpxv strain with a unique ha gene sequence .\nadditional unique cpxv strains shown for comparison , by accession number : ay902307 ( cowha35e ) , ay902301 ( cowha82 ) , ay902299 ( cowha70 ) , ay902298 ( cowha68 ) , ay902297 ( cowha52 ) , ay902296 ( cowha51 ) , ay902295 ( cowha48 ) , ay902294 ( cowha46 ) , ay902289 ( cowha47 ) , ay902288 ( cowha41 ) , ay902287 ( cowha40 ) , ay902286 ( cowha37 ) , ay902279 ( cowha76 ) , ay902276 ( cowha23 ) , ay902308 ( cowha38 ) , ay902275 ( cowha22 ) , ay902274 ( cowha21 ) , ay902273 ( cowha81 ) , ay902271 ( cowha19 ) , ay902270 ( cowha18 ) , ay902269 ( cowha17 ) , ay902268 ( cowha16 ) , ay902263 ( cowha15 ) , ay902262 ( cowha34 ) , ay902260 ( cowha13 ) , ay902257 ( cowha09 ) , ay902256 ( cowha07 ) , ay902255 ( cowha63 ) , ay902252 ( cowha73 ) , ay944029 ( cpv90_ger2 ) , ay944028 ( cpv91_ger3 ) , af377886 ( cowpox virus ) , af377885 ( cpv-922 - 99 ) , af377884 ( cpv-867 - 99b ) , af377883 ( cpv-667 - 94b ) , af377877 ( cpv-1218 - 00 ) , af375090 ( cpx - ep-2 ) , af375089 ( cpx - brt ) , af375088 ( cpx-90 - 5 ) , af375087 ( cpx-90 - 1 ) , af375086 ( cpx-89 - 5 ) , af375085 ( cpx-89 - 4 ) , and af375084 ( cpx-89 - 1 ) .\nbesides molecularly proven wild rat - to - human cpxv transmission ( 6 ) an additional cpxv infection probably transmitted from a pet rat was reported ( 11 ) .\nrecently , 4 human infections acquired from pet rats were reported to the reference laboratory for poxviruses at the robert koch institute ( 12 ) .\nwe describe a cpxv outbreak among 5 patients caused by infected pet rats from the same litter .\ncpxv infections seem to be increasing ( 13 ) , but because cpxv infections in humans and in most animals ( e.g. , cats and rats ) are not notifiable , this increase remains an assumption .\none obvious reason for an increase might be the fading cross - protective immunity to opv after the cessation of vv vaccination ( 14 ) . in our small cluster ,\nthe onset and severity of symptoms seemed to be correlated with vv vaccination status ; however , although patients reported similar contact with pet rats , patient age and manner of infection might confound this hypothesis ( 15 ) .\nthe rising popularity of pet rats might also be a point of concern in a population with decreasing cross - protection to opv and an increasing number of immunocompromised persons .\nour findings emphasize the necessity to monitor opv infections in humans and all animals ( e.g. , notification requirement ) and to improve public awareness .\nour outbreak investigation underlines the importance of close cooperation between human health and veterinary authorities in the management of zoonotic diseases .",
"role": "user"
},
{
"content": "we describe a cluster of cowpox virus ( cpxv ) infections in humans that occurred near munich , germany , around the beginning of 2009 . \n previously , only sporadic reports of cpxv infections in humans after direct contact with various animals had been published . this outbreak involved pet rats from the same litter .",
"role": "assistant"
}
] |
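The methods quoted above build a maximum-parsimony tree of the HA gene with 100 bootstrap replicates using PHYLIP and TreeView. A minimal Biopython sketch of the same workflow is given below for orientation; the input filename is a placeholder and the original analysis used PHYLIP, not this code.

```python
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import (
    ParsimonyScorer, NNITreeSearcher, ParsimonyTreeConstructor,
)
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Aligned hemagglutinin (HA) sequences; "ha_aligned.fasta" is a hypothetical name.
alignment = AlignIO.read("ha_aligned.fasta", "fasta")

scorer = ParsimonyScorer()                        # scores trees by substitution count
searcher = NNITreeSearcher(scorer)                # nearest-neighbor-interchange search
constructor = ParsimonyTreeConstructor(searcher)

# Majority-rule consensus over 100 bootstrap replicates, as in the excerpt.
tree = bootstrap_consensus(alignment, 100, constructor, majority_consensus)
print(tree)
```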
science.medmentions_ner | science.medmentions_ner.2277 | [
{
"content": "Analyze the provided paragraph from a scientific document. Your goal is to identify and extract relevant entities.\n\nFormat your findings as a JSON object, where each key represents a category of entities, and the corresponding value is a list of entities identified in that category. If no entities are found for a category, return an empty list.\n\nFocus on precision and ensure that the output is strictly in JSON format.\n\nDocument:\n\nA new patient safety smartphone application for prevention of \" forgotten \" ureteral stents: results from a clinical pilot study in 194 patients Approximately 12% of all ureteral stents placed are retained or \" forgotten .\" Forgotten stents are associated with significant safety concerns as well as increased costs and legal issues. Retained ureteral stents (RUS) often occur due to lack of clinical follow-up, communication or language barriers, and economic concerns. We describe a multiplatform application that facilitates data collection to prevent RUS. The \"Stent Tracker\" application can be installed on mobile devices and computers. The encrypted and password-protected information is accessible from any device and provides information about each procedure, stent placement and removal dates, as well as product description. This multicenter retrospective study included 194 patients who underwent stent placement between July and October 2015. Nominal data was tallied and ordinal data was divided into quartiles of 25, 50, and 75%. A total of 194 patients from three institutions underwent ureteral stent placement. Reasons for stent placement include 122 cases post ureteroscopy (63%), 8 cases post percutaneous nephrolithotomy (PCNL) (4%), 14 cases post extracorporeal shock wave lithotripsy (SWL) (7%), 18 cases of cancer-related ureteral obstruction (9%), 21 cases of hydronephrosis (11%), and 11 for other reasons (6%). Of these patients, only one patient was lost to follow-up (0.5%). On average, ureteral stents were removed within 14 days of placement (IQR: 8-26 days). The \" Stent Tracker \" is a patient safety application that provides a secure and simplified interface, which can significantly reduce the incidence of RUS. Further developments could include automated notifications to patients and staff, color-coding, and integrated information with electronic patient charts.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"ureteral\", \"ureteral obstruction\"], \"Bacterium\": [], \"BiologicFunction\": [\"hydronephrosis\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"cancer-related\", \"language barriers\", \"Forgotten\", \"forgotten\"], \"Food\": [], \"HealthCareActivity\": [\"follow-up\", \"extracorporeal shock wave lithotripsy\", \"ureteroscopy\", \"placement\", \"SWL\", \"clinical follow-up\", \"PCNL\", \"safety concerns\", \"percutaneous nephrolithotomy\", \"patient safety\", \"removal\", \"stent placement\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"multiplatform application\", \"\\\"Stent Tracker\\\" application\", \"application\", \"password-protected\", \"Stent Tracker\", \"product description\", \"color-coding\", \"smartphone application\", \"legal issues\", \"simplified interface\"], \"MedicalDevice\": [\"stents\", \"RUS\", \"ureteral stents\"], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"staff\"], \"ResearchActivity\": [\"multicenter retrospective study\", \"data collection\", \"clinical pilot study\"], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
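This row, like the chemdner and nlmgene rows further down, expects the answer to be a JSON object mapping each entity category to a list of strings. A small stdlib-only shape check, written here purely for illustration:

```python
import json

def check_ner_answer(raw: str, expected_categories: set[str]) -> dict[str, list[str]]:
    """Validate a {"Category": ["entity", ...]} style NER answer."""
    parsed = json.loads(raw)
    if not isinstance(parsed, dict):
        raise ValueError("answer must be a JSON object")
    for category, entities in parsed.items():
        if category not in expected_categories:
            raise ValueError(f"unexpected category: {category}")
        if not isinstance(entities, list) or not all(isinstance(e, str) for e in entities):
            raise ValueError(f"{category} must map to a list of strings")
    return parsed

# Works against answers like the chemdner row's {"Chemical": [...]}.
print(check_ner_answer('{"Chemical": ["naphthalimide"]}', {"Chemical"}))
```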
science.data_reco_mcq_sc | science.data_reco_mcq_sc.5326 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to learn disentangled representations for images in an unsupervised fashion.\n\nKeywords: unsupervised learning disentangled representations images\n\nDataset Information:\nVoxPopuli: VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning.\nMAFL: The MAFL dataset contains manually annotated facial landmark locations for 19,000 training and 1,000 test images.\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\nPTC: PTC is a collection of 344 chemical compounds represented as graphs which report the carcinogenicity for rats. There are 19 node labels for each node.\n\nAvailable Datasets: A: VoxPopuli B: MAFL C: CelebA D: PTC",
"role": "user"
},
{
"content": "CelebA",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.120 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: A face mask covers your mouth and nose. It can block the release of virus-filled droplets into the air when you cough or sneeze. This helps slow the spread of COVID-19.\n\nResearch Abstract:\nIn the context of Coronavirus Disease (2019) (COVID-19) cases globally, there is a lack of consensus across cultures on whether wearing face masks is an effective physical intervention against disease transmission.\nThis study 1) illustrates transmission routes of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2); 2) addresses controversies surrounding the mask from perspectives of attitude, effectiveness, and necessity of wearing the mask with evidence that the use of mask would effectively interrupt the transmission of infectious diseases in both hospital settings and community settings; and 3) provides suggestion that the public should wear the mask during COVID-19 pandemic according to local context.\nTo achieve this goal, government should establish a risk adjusted strategy of mask use to scientifically publicize the use of masks, guarantee sufficient supply of masks, and cooperate for reducing health resources inequities.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"This study 1) illustrates transmission routes of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2); 2) addresses controversies surrounding the mask from perspectives of attitude, effectiveness, and necessity of wearing the mask with evidence that the use of mask would effectively interrupt the transmission of infectious diseases in both hospital settings and community settings; and 3) provides suggestion that the public should wear the mask during COVID-19 pandemic according to local context.\"]}",
"role": "assistant"
}
] |
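For fact-checking rows like the one above, each evidence sentence is meant to be copied verbatim from the abstract. A spot-check sketch; the verdict label set is an assumption, not something stated in this excerpt:

```python
import json

ALLOWED_VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}  # assumed label inventory

def check_fact_check_answer(raw: str, abstract: str) -> None:
    answer = json.loads(raw)
    assert answer["verdict"] in ALLOWED_VERDICTS, "unknown verdict label"
    for sentence in answer["evidence"]:
        # Evidence must appear verbatim in the source abstract.
        assert sentence in abstract, f"evidence not found: {sentence[:60]}..."
```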
science.craftchem_ner | science.craftchem_ner.2590 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nSouthernblotanalysisdetectedasubstantiallyhighercopynumberofthemitochondrialgenome-encodedCOXIIDNAinthetransgenicmice(Figure3B).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.1930 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nRationally designed multitarget anticancer agents. Balanced modulation of multiple targets is an attractive therapeutic strategy in treating complex diseases including cancer. Comparing with drugs combination, single molecule modulating desirable multiple targets has advantages in pharmacokinetic and pharmacodynamics. Different from previous reviews, we provided an overview of reported multitarget antitumor agents from the viewpoint of pharmacophores. These multitarget antitumor agents were designed by combination of pharmacophores or by high-throughput screening plus structural modification, which were exemplified by the privileged pharmacophore quinazoline and several other popular pharmacophores, including phenylaminopyrimidine, anthracycline and naphthalimide. Previous research demonstrated the importance of in-depth validation against multiple targets not only in cell-free system, but also in cancer cells. Furthermore, the multitarget compounds were also effective for resistance cell lines which highlighted their antitumor potency in the era of increasing drug resistance in cancer patients.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"naphthalimide\", \"phenylaminopyrimidine\", \"anthracycline\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.1274 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nThe acute respiratory distress syndrome (ARDS) induced by SARS-CoV-2-mediated cytokine storm (CS) in lungs leads to the high mortality in COVID-19 patients. To reduce ARDS, an ideal approach is to diminish virus loading by activating immune cells for CS prevention or to suppress the overactive cytokine-releasing immune cells for CS inhibition. Here, a potential radiation-mediated CS regulation is raised by reevaluating the radiation-mediated pneumonia control in the 1920s, with the following latent advantages of lung radiotherapy (LR) in treatment of COVID-19: (1) radiation accesses poorly circulated tissue more efficiently than blood-delivered medications; (2) low-dose radiation (LDR)-mediated metabolic rewiring and immune cell activation inhibit virus loading; (3) pre-consumption of immune reserves by LDR decreases CS severity; (4) higherdose radiation (HDR) within lung-tolerable doses relieves CS by eliminating in situ overactive cytokine-releasing cells. Thus, LDR and HDR or combined with antiviral and life-supporting modalities may mitigate SARS-CoV-2 and other virus-mediated ARDS.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
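Topic-classification rows like this return a bare JSON array of labels. The full label inventory is not shown anywhere in this excerpt; the set below is the one commonly associated with the BioCreative VII LitCovid task and should be treated as an assumption:

```python
import json

LITCOVID_TOPICS = {  # assumed label set, not stated in the rows above
    "Treatment", "Diagnosis", "Prevention", "Mechanism",
    "Transmission", "Epidemic Forecasting", "Case Report",
}

def check_topics(raw: str) -> list[str]:
    topics = json.loads(raw)
    assert isinstance(topics, list) and set(topics) <= LITCOVID_TOPICS
    return topics

print(check_topics('["Treatment", "Mechanism"]'))  # matches the row above
```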
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.23445 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nFacile synthesis of borofragments and their evaluation in activity-based protein profiling\n\nContent Extracts:\n\n<p>Organoboron compounds have received considerable attention as valuable building blocks in chemical synthesis.1 Boron-containing molecules have also been used as enzyme inhibitors.2 Both of these applications result from the accessibility of boron's empty p-orbital, which enables reversible interaction with synthetic and biological nucleophiles. Our recent interest in this area has led to an ongoing exploration of boron-based electrophiles that engage nucleophilic residues in active sites of proteins. In contrast to aldehydes, acrylates and epoxides, which are widely used as \"baits\" for nucleophilic residues in active sites, small boron-containing electrophiles have received relatively little attention. This is mainly due to their lack of stability and relative difficulty of preparation, especially when it comes to molecules with C(sp3)-B bonds.1 Because the corresponding synthetic protocols often depend on organometallic reactions with low functional group tolerance, we recently developed a boron-containing isocyanide reagent that enabled us to synthesize novel boromorpholinones and boropeptides under mild conditions.3 The N-methyliminodiacetic acid (MIDA) residue on boron4,5 was the main factor that contributed to the stability of boropeptides and their successful one-step synthesis.</p><p>Building on our approach to boropeptides, we became interested in advancing the concept of MIDA-containing borofragments, which are chimeric molecules where a boron-containing group is connected to a recognition unit, most commonly a heterocycle. As many biologically relevant heterocycles with high ligand efficiency have conjugate acids with relatively low pKa's,6 we were motivated to develop a mild method for connecting a diverse set of heterocyclic nucleophiles to the smallest possible C-B unit equipped with the MIDA substituent (furnishing the corresponding borofragments). To facilitate navigation of the accessible chemistry space in enzyme active sites, we were particularly intrigued by the borofragments in which a heterocycle is separated from the boron center by two or three rotatable bonds. In this paper, we present our efforts to synthesize electrophilic borofragments and their evaluation as serine hydrolase inhibitors using competitive activity-based protein profiling (ABPP). We show that the MIDA group is compatible with the synthesis of a diverse collection of borofragments and that select members of this class inhibit several poorly characterized serine hydrolases, including α/β hydrolase domain-containing protein 10 (ABHD10) and predicted serine carboxypeptidase (CPVL). These findings should encourage broader campaigns to append C(sp3)-B bonds in their MIDA forms to various molecular recognition components.</p><p>In order to convert readily accessible nucleophilic small molecules into borofragments, we pursued hydroxymethyl-containing building block 5 with the goal of evaluating its reactivity in redox driven condensations. Despite the relative bulk of the MIDA group, the mild and neutral conditions 7 of the Mitsunobu process were particularly attractive to us. Compound 5 was synthesized according to Scheme 1. 
Starting with dibromomethane (1), bromomethylpinacolborate (2) was prepared according to literature procedure.8 Treatment of 1 with n-butyllithium in the presence of triisopropoxyborate followed by transesterification of the resulting bromomethyldiisopropoxyborane with pinacol furnished intermediate 2. Subsequent treatment of crude product 2 with sodium benzyloxide in DMSO afforded benzyloxymethylpinacolboronic ester (3),9 which was converted to benzyloxymethyl(MIDA)boronate (4) in situ by heating in the presence of MIDA. Most importantly, purification of compound 4 was achieved by trituration with Et2O, which eliminated the need for chromatography. Hydrogenolysis of 4 with Pd/C afforded hydroxymethyl(MIDA)boronate (5) as a bench-stable white solid in pure form (49% isolated yield over 4 steps, >10g).</p><p>We were pleased to observe that, despite the steric bulk of the B-MIDA substituent, hydroxymethyl(MIDA)boronate (5) reacted with various acidic pro-nucleophiles (NuH) in the presence of diisopropyl azodicarboxylate (DIAD) and triphenylphosphine to produce a variety of α-functionalized alkyl(MIDA)boronates in excellent yields (Table 1). Ester formation via coupling of benzoic acid (6a) or cinnamic acid (6b) with 5 afforded the corresponding products 7a and 7b, respectively (entries 1 and 2). N-hydroxyphthalimide (6c) was easily converted into the corresponding heterocyclic product 7c (entry 3). Etherification of 5 was also carried out. Thus, phenol 6d was used to produce 7d, although 2.0 equivalents of 6d were required in this case (entry 4). Nitrogen-containing acidic pro-nucleophiles were also employed in this methodology (entries 5 and 6). The reaction of phthalimide (6e) with 5 gave the corresponding product 7e (entry 5). N,O-bis(phenoxycarbonyl)hydroxylamine (6f)10 was converted to 7f (entry 6). Purine derivatives 6g and 6h11 were found to react with 5 to afford N(9)-alkylated products 7g and 7 h, whose structures were confirmed by X-ray crystallography (entries 7 and 8). Sulfur-containing pro-nucleophiles 6i,j reacted with 5 to afford the corresponding products 7i,j (entries 9 and 10). For compounds 7a–e,g,j purification was achieved by trituration with Et2O due to the low solubility of the products in non-polar solvents. We note that Molander and co-workers have reported the nucleophilic substitution of α-halomethyltrifluoroborates with nucleophiles such as amines and organolithium/organosodium reagents.12 Their work also includes nucleophilic substitutions of α-halomethylboronic esters with amines followed by addition of KHF2 to synthesize a number of α-functionalized alkyl trifluoroborates.13 However, those substitutions have been conducted under basic conditions and often at high temperatures. In contrast, our reactions are carried out at neutral pH and room temperature, thus expanding the scope of compatible functional groups, which includes biologically relevant heterocycles with low pKa's.</p><p>We were also able to incorporate the azide function via a modified Mitsunobu reaction of hydroxymethyl(MIDA)boronate (5) with diphenylphosphorylazide (DPPA) (8). This reaction produced azidomethyl(MIDA)boronate (9) in 90% yield (Scheme 2). 14 Compound 9 was also synthesized through an alternate route in 75% yield from the reaction of sodium azide with iodomethyl(MIDA)boronate (10), which was in turn prepared by treatment of 5 with iodine and imidazole. 
The presence of an azide in compound 9, which can be elaborated via click chemistry,15 further expands the synthetic versatility of borofragment construction.</p><p>While boronic acids and their derivatives have been explored as enzyme inhibitors for decades,2 their potency and selectivity across mechanistically conserved enzyme classes remains poorly understood. In large part this is due to the poor stability of free boronic acids under biological conditions and the consequential lack of compounds for analysis. We have found that the MIDA ligand enables preparation of compounds unavailable in their free boronic acid form, yet in a biological milieu would likely be released to form active boronic acids. With this in mind, we wondered if broadly profiling the activity of serine hydrolases in the presence of our (MIDA)boronate fragments would elucidate novel inhibitor-enzyme pairs that could aid functional annotation of poorly characterized enzymes within this class. Using competitive activity-based protein profiling (ABPP) with the serine hydrolase-directed activity-based probe fluorophosphonate-rhodamine (FP-Rh),16 we first assayed 10 library members (Table 1) across the serine hydrolases natively expressed in a human prostate cancer cell line (PC3). Briefly, a soluble PC3 cell proteomic lysate was treated with either DMSO or a (MIDA)boronate (10 μM) for 30 min at 37 °C and then with FP-Rh (1.0 μM) for 30 min. The enzymes labeled by FP-Rh were then resolved by SDS-PAGE and their degree of labeling was measured by fluorescence gel imaging. This initial screen revealed that, while most of these compounds did not affect FP-labeling for any detectable serine hydrolase, two thioether boronates (7i and 7j) displayed moderate inhibition of a ~30 kDa band which we have previously identified17 as ABHD10 (Figure S1). Considering that enzyme inhibition might necessitate decomposition of the (MIDA)boronate fragments to an active boronic acid, we retested each compound at 20 μM with an extended 2 h preincubation time prior to adding FP-Rh, allowing more time for the formation of the putative boronic acid (Figure 1A). Notably, this preincubation time was sufficient to produce >95% conversion of 7j to its corresponding boronic acid in DPBS (see Supporting Information). Using this new assay protocol, 7i and 7j displayed enhanced activity, completely inhibiting ABHD10 with few observable off-targets. We and others have recently shown that ABHD10, a poorly characterized serine hydrolase, is also potently inhibited by β-lactones17,18 and aza-β-lactams19, but to our knowledge, targeting this enzyme with boron-based inhibitors has not been reported. To further assess the potency and selectivity of 7j, we next tested this compound across a broad concentration range using gel-based ABPP (Figure 1B). From this analysis, we observed near complete inhibition of ABHD10 at 10 μM and acyl-coenzyme A thioesterase 1/2 (ACOT1/2)20 (acyl-CoA thioesterases which is involved in fatty acid metabolism21) at 100 μM. Although 7j is less potent for ABHD10 relative to previously described inhibitors,17, 19a it should be emphasized that the ABPP platform used herein, which relies on an irreversible FP probe to report target occupancy, may underestimate the degree of enzyme inhibition for reversible inhibitors, such as boronates/boronic acids, unless the kinetics of FP probe reactivity is carefully monitored while competitive ABPP experiments. 
This question can be addressed in future studies using established protocols.19b</p><p>To definitively identify the targets of 7j, we used a quantitative mass spectrometry (MS)-based platform termed ABPP-SILAC.22 Following previously described protocols, cell lysates were prepared from PC3 cells cultured in the presence of isotopically light and heavy lysine and arginine. These lysates were then treated with either DMSO (light) or 25 μM 7j (heavy) for 2 h and subsequently treated with a biotinylated FP probe (FP-biotin)23 for 30 min. After mixing the heavy and light proteomes in equal proportions, the biotinylated proteins were enriched by streptavidin chromatography, trypsinized on bead and analyzed by LC-MS/MS to quantify the extent of inhibition for the identified serine hydrolases. Among ~30 serine hydrolases only two targets were maximally inhibited (>95%): ABHD10 and the predicted carboxypeptidase CPVL. While the former target was inferred from the gel profiles, we had not detected CPVL by gel-based profiling. Given that CPVL remains poorly annotated with regard to its endogenous substrates and physiological function, there is a clear need for the development of selective inhibitors of this enzyme, which to our knowledge are still lacking. Our data suggest that the (MIDA)boronates described herein are promising leads for CPVL inhibitor development and may facilitate studying this enzyme's role in human biology.</p><p>In summary, the hydroxymethyl (MIDA)boronate reagent disclosed in this paper allows for straightforward conjugation of boron with heterocycles of biological significance. Significantly, the MIDA group stabilizes borofragments against premature decomposition and is likely released under biological conditions to unmask an active boronic acid. This chemistry allowed us to generate new boron-containing inhibitors of ABHD10 and the predicted carboxypeptidase CPVL. Given the diversity of heterocycles that are either commercially available or exist in the historical collections of the pharmaceutical industry, our straightforward method of linking molecular recognition units with stabilized boron electrophiles should enable facile exploration of previously uncharted covalent inhibitor space.</p><p>Electronic Supplementary Information (ESI) available: [details of any supplementary information available should be included here]. See DOI: 10.1039/c000000x/</p>",
"role": "user"
},
{
"content": "The discovery of enzyme inhibitors relies on synthetic methods that enable rapid and modular construction of small molecules. Heterocyclic fragments designed to maximize enthalpic interactions with their protein targets represent a particularly desirable class of molecules. Here we describe a reagent that enables straightforward construction of \\xe2\\x80\\x9cborofragments\\xe2\\x80\\x9d, in which a heterocycle is separated from the boron center by two or three rotatable bonds. The stability of these molecules depends on the MIDA group which likely acts as a slow-release element under biological conditions. Borofragments can be used to discover inhibitors of enzymes that use catalytic oxygen nucleophiles. We have employed this method to identify inhibitors of ABHD10 and the predicted carboxypeptidase CPVL. This technique should be applicable to other classes of targets.",
"role": "assistant"
}
] |
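In the ABPP-SILAC experiment described in that row, the DMSO control is isotopically light and the inhibitor-treated lysate is heavy, so the heavy/light signal ratio per enzyme reports residual activity. The arithmetic behind the ">95% inhibited" reading, as a small illustrative function (the actual study used dedicated proteomics software, not this code):

```python
def percent_inhibition(heavy_signal: float, light_signal: float) -> float:
    """Heavy = FP-probe signal after inhibitor treatment; light = DMSO control.

    A heavy/light ratio of 1.0 means no inhibition; 0.0 means complete loss
    of probe labeling for that serine hydrolase.
    """
    return (1.0 - heavy_signal / light_signal) * 100.0

print(percent_inhibition(0.04, 1.0))  # a 0.04 ratio corresponds to 96% inhibition
```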
science.scicite_classification | science.scicite_classification.3818 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nHemorrhage causes a biphasic response in SNA, HR, and vascular resistance, with an initial increase and subsequent reduction in these variables (26).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.4086 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Towards Data Analytics of Pathogen-Host Protein-Protein Interaction: A Survey \nChapter Title: A. High Dimensions \n1. Abstract of Cited Paper (BIB001): Motivation: An important aspect of infectious disease research involves understanding the differences and commonalities in the infection mechanisms underlying various diseases. Systems biology-based approaches study infectious diseases by analyzing the interactions between the host species and the pathogen organisms. This work aims to combine the knowledge from experimental studies of host– pathogen interactions in several diseases to build stronger predictive models. Our approach is based on a formalism from machine learning called ‘multitask learning’, which considers the problem of building models across tasks that are related to each other. A ‘task’ in our scenario is the set of host–pathogen protein interactions involved in one disease. To integrate interactions from several tasks (i.e. diseases), our method exploits the similarity in the infection process across the diseases. In particular, we use the biological hypothesis that similar pathogens target the same critical biological processes in the host, in defining a common structure across the tasks. Results: Our current work on host–pathogen protein interaction prediction focuses on human as the host, and four bacterial species as pathogens. The multitask learning technique we develop uses a taskbased regularization approach. We find that the resulting optimization problem is a difference of convex (DC) functions. To optimize, we implement a Convex–Concave procedure-based algorithm. We compare our integrative approach to baseline methods that build models on a single host–pathogen protein interaction dataset. Our results show that our approach outperforms the baselines on the training data. We further analyze the protein interaction predictions generated by the models, and find some interesting insights. Availability: The predictions and code are available at: http://www.cs. \n2. Abstract of Cited Paper (BIB002): Using automated computer tools and in particular machine learning to facilitate and enhance medical analysis and diagnosis is a promising and important area. In this paper, we show that how unsupervised feature learning can be used for cancer detection and cancer type analysis from gene expression data. The main advantage of the proposed method over previous cancer detection approaches is the possibility of applying data from various types of cancer to automatically form features which help to enhance the detection and diagnosis of a specic one. The technique is here applied to the detection and classication of cancer types based on gene expression data. In this domain we show that the performance of this method is better than that of previous methods, therefore promising a more comprehensive and generic approach for cancer detection and diagnosis. \n3. Abstract of Cited Paper (BIB003): With well-developed methods in gene level data extraction, there are huge amount of gene expression data, including normal composition and abnormal ones. Therefore, mining gene expression data is currently an urgent research question, for detecting a corresponding pattern, such as cancer species, quickly and accurately. 
Since the gene expression data classification problem has been widely studied alongside the development of gene technology, numerous methods, mainly neural-network based, have been deployed in medical data analysis, which must mainly deal with high dimensionality and small sample sizes. Much research has been conducted on clustering approaches, extreme learning machines, and related techniques, usually applied within shallow neural network models. Recently, deep learning has shown its power and good performance on high-dimensional datasets. Unlike the currently popular deep neural networks, we continue to apply a shallow neural network but develop an innovative algorithm for it. In the supervised model, we demonstrate a shallow neural network with a batch of parameters and narrow its computational process into several positive parts, which proceed smoothly toward a better result and finally achieve an optimal goal. It shows a stable and excellent result comparable to a deep neural network. An analysis of the algorithm is also presented in this paper.\n\nReview Chapter:",
"role": "user"
},
{
"content": "For the computational model, the feature vectors, which are composed from these PPIs databases and feature databases, are with very high dimension. A short list for our PHPPI data set is shown in Table V BIB002 , it would result in a feature with 700k dimensions. Under these circumstances, there are several methods worth being considered to tackle the challenging problem. First of all, we could use Principal Component Analysis, minimal-redundancy-maximal-relevance criterion (mRMR) BIB001 and artificial neural network to reduce the dimensions. Neural network, especially a multi-layer structure, is believed to be powerful to extract the inner relations between each data and dimensions. The state-of-art method here to encounter with high dimension problem would be a multilayer structure for mapping the high dimension data into a low dimensions representation, which implies decreasing the dimension and extracting the abundant relations between each dimensions in a neural network learning model. For example, BIB003 used deep learning to create a multi-layer network for cancer diagnosis and classification. Compared with deep learning, General Vector Machines (GVM) BIB004 , which was based on Monte Carlo algorithm and also used by us in genetic classifications, could be used to learn from data set associated with high dimension but a low quantity of samples.",
"role": "assistant"
}
] |
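The chapter above names Principal Component Analysis as one option for reducing roughly 700k-dimensional PHPPI feature vectors with few samples. A runnable scikit-learn sketch on synthetic data, scaled down from 700k features so it executes quickly:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20_000))  # few samples, many features, as in the chapter

# With n_samples << n_features, at most n_samples - 1 components exist.
pca = PCA(n_components=50, svd_solver="randomized", random_state=0)
X_low = pca.fit_transform(X)

print(X_low.shape)                        # (200, 50)
print(pca.explained_variance_ratio_[:5])  # leading variance fractions
```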
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.4460 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nWith the fast expanding scale of vehicular networks, vehicular edge computing (VEC) has emerged and attracted growing attention from both industry and academia. Parked vehicles (PVs) have great potential to join vehicular networks for sharing their idle computing and networking resources. However, due to the underlying security and privacy threats, it is challenging to fairly motivate PVs for resource sharing in an efficient and secure way. In this paper, we propose a permissioned vehicular blockchain for secure and efficient resource sharing in VEC, namely, Parkingchain. We first design smart contract to achieve secure resource sharing and efficient service provisioning between PVs and service requesters (SRs). A multi-weight subjective logic based delegated Byzantine Fault Tolerance (DBFT) consensus mechanism is presented to improve the consensus process in Parkingchain. Further, we design a contract theory-based incentive mechanism to model the interactions between SR and PVs under asymmetric information scenario. Finally, numerical results demonstrate that the proposed incentive mechanism is effective and efficient compared with existing schemes.\n\nCited Works Summary:\n[1]: The paradigm of Internet of Things (IoT) is paving the way for a world, where many of our daily objects will be interconnected and will interact with their environment in order to collect information and automate certain tasks. Such a vision requires, among other things, seamless authentication, data privacy, security, robustness against attacks, easy deployment, and self-maintenance. Such features can be brought by blockchain, a technology born with a cryptocurrency called Bitcoin. In this paper, a thorough review on how to adapt blockchain to the specific needs of IoT in order to develop Blockchain-based IoT (BIoT) applications is presented. After describing the basics of blockchain, the most relevant BIoT applications are described with the objective of emphasizing how blockchain can impact traditional cloud-centered IoT applications. Then, the current challenges and possible optimizations are detailed regarding many aspects that affect the design, development, and deployment of a BIoT application. Finally, some recommendations are enumerated with the aim of guiding future BIoT researchers and developers on some of the issues that will have to be tackled before deploying the next generation of BIoT applications.\n\n[2]: This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantinefault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. 
The results show that our service is only 3% slower than a standard unreplicated NFS.",
"role": "user"
},
{
"content": "The most accepted PoW consensus algorithm is based on the fact that if a node performs a lot of work for the network, it is less likely that it is going to attack it. However, it requires miners to perform computationally expensive tasks and consume a large amount energy. Proof-of-stake (PoS) is similar to PoW, but the target value depends on the owned stakes of nodes. PoS is a consensus mechanism that requires less computational power than PoW, so it consumes less energy. Different from PoW and PoS applied in the public blockchain, Practical Byzantine Fault Tolerance (PBFT) [2] is carried on by a group of authenticated nodes to validate the transactions. Some variants, such as delegated PoS (DPoS) and DBFT select some nodes to generate and validate blocks to achieve high scalability, high throughput and low latency [1] . In the proposed Parkingchain, the consensus algorithm converges to an agreement fast and is superior to a computational-based algorithm (i.e. PoW algorithm). We design our consensus phases based on DBFT algorithm to satisfy the requirements of efficiency in a vehicular networking scenario.",
"role": "assistant"
}
] |
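The PBFT/DBFT discussion above rests on the classic Byzantine bound: with n replicas, safety holds for up to f faulty nodes when n >= 3f + 1, and agreement needs a quorum of 2f + 1 matching messages. A tiny helper making that arithmetic explicit (standard BFT bounds, not code from the cited papers):

```python
def bft_bounds(n_replicas: int) -> tuple[int, int]:
    """Return (max tolerated faulty replicas f, quorum size) for n >= 3f + 1."""
    f = (n_replicas - 1) // 3
    quorum = 2 * f + 1
    return f, quorum

for n in (4, 7, 10):
    f, quorum = bft_bounds(n)
    print(f"n={n}: tolerates f={f} Byzantine replicas, quorum={quorum}")
```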
science.pico_ner | science.pico_ner.472 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nShould we debrief and counsel people who have had psychological shock ?",
"role": "user"
},
{
"content": "{\"Intervention\": [], \"Outcome\": [], \"Participant\": []}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.2063 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nAIM: In response to the COVID-19 pandemic, there is a need for substantial changes in the procedures for accessing healthcare services. Even in the current pandemic, we should not reduce our attention towards the diagnosis and treatment of GDM. The purpose of this document is to provide a temporary guide for GDM screening, replacing the current guidelines when it is not possible to implement standard GDM screening because of an unfavorable risk/benefit ratio for pregnant women or when usual laboratory facilities are not available. DATA SYNTHESIS: At the first visit during pregnancy, we must exclude the presence of \"Overt diabetes\" in all women. The criteria for the diagnosis of overt diabetes are either fasting plasma glucose >/=126 mg/dL, or random plasma glucose >/=200 mg/dL, or glycated hemoglobin >/=6.5%. When the screening procedure (OGTT) cannot be safely performed, the diagnosis of GDM is acceptable if fasting plasma glucose is >/= 92 mg/dL. In order to consider the impaired fasting glucose as an acceptable surrogate for the diagnosis of GDM, the fasting glucose measurement should be performed within the recommended time windows for the risk level (high or medium risk). CONCLUSIONS: The changes to the screening procedure for GDM reported below are specifically produced in response to the health emergency of the COVID-19 pandemic. Therefore, these recommended changes should cease to be in effect and should be replaced by current national guidelines when the healthcare authorities declare the end of this emergency.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
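The screening thresholds quoted in that abstract translate directly into branching logic. A sketch encoding only the numbers stated there; illustrative only, not clinical software:

```python
def first_visit_screen(fpg: float, random_glucose: float | None = None,
                       hba1c: float | None = None, ogtt_available: bool = False) -> str:
    """Apply the abstract's cutoffs (glucose in mg/dL, HbA1c in %)."""
    if fpg >= 126 or (random_glucose is not None and random_glucose >= 200) \
            or (hba1c is not None and hba1c >= 6.5):
        return "overt diabetes"
    if not ogtt_available and fpg >= 92:
        return "GDM (fasting-glucose surrogate criterion)"
    return "no diagnosis at this visit"

print(first_visit_screen(fpg=95.0))  # -> GDM (fasting-glucose surrogate criterion)
```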
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.4441 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: A Survey on Human Machine Interaction in Industry 4.0 \nChapter Title: Human-Machine Interaction and Related Technologies \n1. Abstract of Cited Paper (BIB001): We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context were it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task. \n2. Abstract of Cited Paper (BIB002): Abstract The paper presents an Augmented Reality (AR) tool for supporting operators where humans and robots coexist in a shared industrial workplace. The system provides AR visualization of the assembly process, video and text based instructions and production status updates. The tool also enhances the operator's safety and acceptance of hybrid assembly environments through the immersion capabilities of AR technology. A hardware landscape including the AR equipment and markers, the handheld devices for user input and the network infrastructure for interfacing the robot and the storage database is provided. The software architecture for coordinating the AR tool with the assembly process and the data retrieval from the robot controller are also presented. The tool has been tested on a pilot case in the automotive sector. The results indicate that the approach can significantly enhance the operator's working conditions and their integration in the assembly process. \n3. Abstract of Cited Paper (BIB003): This paper describes the idea to support the engineering process in the automation domain – especially in the field of industrial robots - via methods from virtual reality. Due to the technical progress like the industrial internet of things (IIOT) or the Industry 4.0 the complexity of automated production system increases. As a consequence, the engineering effort also increases. This paper focuses the task related to the programming of industrial robots via the common Teach-In method. A virtual reality environment known from the gaming domain is used for the virtual presentation of the production system including the robot. An interactive prototype is developed and evaluated with almost 25 users.\n\nReview Chapter:",
"role": "user"
},
{
"content": "While early scientific efforts on HMI or human-computer interaction had concentrated on fully controllable systems, research quickly turned the focus towards adaptive mechanisms due to the development of dynamic, highly complex human-machine systems BIB001 . Exceeding the role of mere information processing, automated systems have evolved into partly independent actors in dynamic situations and uncertain environments which has led to the requirement of introducing elements of human-to-human relationships into HMI and human-machine collaboration BIB001 . This implies that, in complex human-machine systems, the human and machine agent can no longer be considered in isolation and should instead be regarded as a dynamic unit or team collaborating towards an overall task and aim including dynamic task allocation among participants BIB001 . Enabling adaptability and dynamism of the human-machine system has required the abandonment of full system-side determinism in order to leave degrees of freedom for the automated system to react to situations which could not have been perfectly foreseen and modeled by system designers BIB001 . Thus, the human operator's responsibility evolves from full control to partial control and supervision in the interaction with machines BIB001 . In order to secure that the human operator maintains an overview and understanding of the processes and dynamics in this complex human-machine system, the types and characteristics of implemented interfaces remain a central aspect in the examination of HMI processes, especially regarding ergonomic properties like usability and transparency BIB001 . As modern and advanced instances of user interfaces (UIs), Augmented Reality (AR) and Virtual Reality (VR) will play an important role in the discussion of UIs for HMI purposes in I4.0 in the course of this paper. Therefore, brief definitions of those technologies are provided. According to Paelke, AR applications are characterized by enhancing a real-world environment through its integration with computer-generated information BIB002 . Mainly, this implies adding digital objects and information to a human user's real-world view, enabling direct interaction with information which has a direct spatial relation to the real environment and is more readily interpretable in its environmental context BIB002 . In contrast, users of VR applications do not necessarily maintain a view of a real-world scenery but are instead immersed in an entirely computer-generated and digital world offering interaction possibilities with components of the artificial environment BIB003 . Thus, both AR and VR enable the mediation of information in a spatial relation to the environmental context BIB002 , but AR relies on a basis of data from a real-world scenery to be enhanced digitally while VR applications are entirely based on computer-generated content BIB004 .",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.1288 | [
{
"content": "Transform the provided sections of a biomedical research paper into a digestible summary for a general audience. Focus on simplifying complex ideas while retaining key scientific insights.\n\nResearch Title: Calcium specificity signaling mechanisms in abscisic acid signal transduction in Arabidopsis guard cells\nContent Overview:\nAbstract:\nA central question is how specificity in cellular responses to the eukaryotic second messenger Ca2+ is achieved. Plant guard cells, that form stomatal pores for gas exchange, provide a powerful system for in depth investigation of Ca2+-signaling specificity in plants. In intact guard cells, abscisic acid( ABA) enhances( primes) the Ca2+-sensitivity of downstream signaling events that result in activation of S-type anion channels during stomatal closure, providing a specificity mechanism in Ca2+-signaling. However, the underlying genetic and biochemical mechanisms remain unknown. Here we show impairment of ABA signal transduction in stomata of calcium-dependent protein kinase quadruple mutant plants. Interestingly, protein phosphatase 2Cs prevent non-specific Ca2+-signaling. Moreover, we demonstrate an unexpected interdependence of the Ca2+-dependent and Ca2+-independent ABA-signaling branches and the in planta requirement of simultaneous phosphorylation at two key phosphorylation sites in SLAC1. We identify novel mechanisms ensuring specificity and robustness within stomatal Ca2+-signaling on a cellular, genetic, and biochemical level.\nIntroduction:\nCytosolic calcium([Ca2+]cyt) functions as key cellular second messenger in a plethora of crucial processes in plants and other eukaryotes( Hetherington and Woodward, 2003; Clapham, 2007; McAinsh and Pittman, 2009; Berridge, 2012; Charpentier and Oldroyd, 2013; Webb, 2013). Elucidation of the mechanisms mediating specificity in Ca2+ signaling is fundamental to understanding signal transduction( Berridge et al., 2003; Hetherington and Woodward, 2003; Clapham, 2007; Webb, 2013). In a few cases, the biochemical and cellular mechanisms mediating Ca2+ signaling specificity have been revealed( e. g. De Koninck and Schulman, 1998; Dolmetsch et al., 1998; Oancea and Meyer, 1998; Dolmetsch et al., 2001; Bradshaw et al., 2003; Rellos et al., 2010; Chao et al., 2011). More than one( non-exclusive) mechanism is likely to contribute to specificity in Ca2+ signal transduction( Berridge et al., 2003; Dodd et al., 2010). However, characterization of the combined cellular, biochemical, and genetic mechanisms underlying Ca2+ specificity in a single cell type has not been achieved to our knowledge. The genome of the plant Arabidopsis thaliana encodes over 200 EF-hand Ca2+-binding proteins( Day et al., 2002), with many of these genes co-expressed in the same cell types( Harmon et al., 2000; McCormack et al., 2005; Schmid et al., 2005; Winter et al., 2007), illustrating the need for Ca2+ specificity signaling mechanisms in plants. Two guard cells form a stomatal pore representing the gateway for CO2 influx, which is inevitably accompanied by plant water loss. The aperture of stomatal pores is consequently tightly regulated by the guard cells. 
Intracellular Ca2+ represents a key second messenger in stomatal closing( McAinsh et al., 1990; MacRobbie, 2000; Hetherington, 2001; Hetherington and Woodward, 2003; Hubbard et al., 2012), but intracellular Ca2+ also functions in stomatal opening( Irving et al., 1992; Shimazaki et al., 1992; Curvetto et al., 1994; Shimazaki et al., 1997; Cousson and Vavasseur, 1998; Young et al., 2006), raising the question how cytosolic free Ca2+ concentration([Ca2+]cyt) elevations trigger a specific cellular response. The underlying mechanisms mediating specificity in guard cell Ca2+ signaling are not well understood. The development of genetic, electrophysiological, and cell signaling tools for the dissection of Ca2+ signaling within this model cell type renders guard cells a powerful system for the investigation of specificity mechanisms within Ca2+ signal transduction. Recent studies including analyses in intact Arabidopsis( Young et al., 2006) and Vicia faba( Chen et al., 2010) guard cells, have shown that stomatal closing stimuli including abscisic acid( ABA) and CO2 enhance the[Ca2+]cyt sensitivity of downstream signaling mechanisms, switching them from an inactivated state to an enhanced Ca2+-responsive ‘primed’ state, thus tightly controlling specificity in Ca2+ responsiveness( Young et al., 2006; Munemasa et al., 2007; Siegel et al., 2009; Chen et al., 2010; Xue et al., 2011). A rise of[Ca2+]cyt from resting to elevated levels alone does not trigger the full ion channel regulation and stomatal response( Young et al., 2006; Munemasa et al., 2007; Siegel et al., 2009; Chen et al., 2010; Xue et al., 2011). Similarly, a recent study of pathogen-associated molecular pattern( PAMP) signaling suggests that prior PAMP signaling enhances the sensitivity to intracellular Ca2+ during signal transduction( Kadota et al., 2014), indicating that this principle for Ca2+ specificity priming may be more widely used in plants. The biological closing stimulus has to be present for the guard cell to react to physiological Ca2+ elevation. However, the biochemical and genetic mechanisms mediating Ca2+ sensitivity priming remain unknown. SLAC1 represents the major anion channel mediating S-type anion currents in guard cells( Negi et al., 2008; Vahisalu et al., 2008) and Ca2+ activation of S-type anion currents is an early and crucial step in stomatal closure( Schroeder and Hagiwara, 1989; McAinsh et al., 1990; Siegel et al., 2009; Chen et al., 2010). Ca2+-independent SnRK2 protein kinases( Li et al., 2000; Mustilli et al., 2002; Yoshida et al., 2002), most importantly OST1, have been shown to activate SLAC1 in Xenopus leavis oocytes( Geiger et al., 2009; Lee et al., 2009; Brandt et al., 2012). The full length Ca2+-dependent protein kinases 6, 21, and 23( CPK6, CPK21, and CPK23) also activate SLAC1 in oocytes( Geiger et al., 2010; Brandt et al., 2012). Presently, the Ca2+-dependent and Ca2+–independent branches are considered to function independently( e. g. Li et al., 2006; Kim and et al., 2010; Roelfsema et al., 2012). The activation of SLAC1 by OST1 or CPK6 is inhibited by the clade A protein phosphatase 2Cs( PP2Cs) ABI1, ABI2, or PP2CA in oocytes( Geiger et al., 2009; Lee et al., 2009; Brandt et al., 2012). 
The cytosolic ABA-receptors pyrabactin resistance( PYR) /PYR-like( PYL) /regulatory component of ABA receptor( RCAR)( Ma et al., 2009; Park and et al., 2009) have been shown to inhibit PP2C activity in the presence of ABA( Ma et al., 2009; Park and et al., 2009; Santiago et al., 2009; Nishimura et al., 2010; Szostkiewicz et al., 2010). Reconstitution of ABA activation of SLAC1 in Xenopus oocytes has been shown by co-expression of the ABA-receptor PYR1 together with SLAC1, PP2Cs, and either Ca2+-independent OST1 or Ca2+-dependent CPK6 protein kinases( Brandt et al., 2012). However, whether the Ca2+-dependent and–independent branches in ABA signal transduction are functionally linked and depend on one-another in planta remains to be investigated using higher order genetic mutants. Here we present biochemical, genetic and cellular signaling findings that describe mechanisms underlying specificity and robustness in Ca2+ signaling within a single cell type and demonstrate an unexpected strong dependence of the Ca2+-dependent signal transduction branch on the Ca2+-independent pathway in guard cells. Moreover our results suggest that in contrast to OST1( Umezawa et al., 2009; Vlad et al., 2009), calcium-dependent protein kinases( CPKs) are not directly deactivated by PP2Cs, but these PP2Cs rapidly deactivate both of the Ca2+-dependent and Ca2+–independent branches by directly dephosphorylating the protein kinase target SLAC1.\nDiscussion:\nIn summary, the present study reveals a first genetic mechanism that mediates Ca2+ sensitivity priming. Ca2+ sensitivity is demonstrated here to be constitutively primed in pp2c quadruple mutant guard cells, showing that PP2Cs ensure Ca2+ signaling specificity. Interestingly, PP2Cs do not directly down-regulate CPK activity, in contrast to direct PP2C down-regulation of the SnRK2 protein kinases. Rather PP2Cs very rapidly down-regulate signaling targets downstream of CPKs, which could enable the same CPKs to function in more than one pathway. We have further identified a cpk quadruple mutant here that for the first time strongly abrogates ABA activation of S-type anion channels. This abrogation occurs despite an intact Ca2+-independent SnRK2 signaling branch. Furthermore, disruption of the Ca2+-independent signaling branch in snrk2 protein kinase triple mutant plants abrogates Ca2+ signaling. Thus, unexpectedly genetic analyses reveal a dependence of the Ca2+-sensitive ABA signaling branch on the Ca2+-insensitive branch in planta. The control of ABA-triggered stomatal closure by parallel interdependent Ca2+-dependent and–independent mechanisms could contribute to the robustness of the stomatal ABA signaling network( Hetherington, 2001). Unexpectedly, in planta studies show that the S59 and S120 phosphorylation sites in SLAC1 are together required for intact ABA-induced stomatal closing in vivo. The Ca2+ sensitivity priming mechanism described here could represent a more general principle present in plants contributing to Ca2+ specificity within cellular signal transduction pathways, while also maintaining the availability of Ca2+ sensors for distinct Ca2+-dependent signaling outputs.",
"role": "user"
},
{
"content": "Plant leaves have tiny openings or pores called stomata , which allow carbon dioxide , water vapor and other gases to diffuse in and out of the plant . Two cells called guard cells surround each stoma and control the opening and closing of the pore . If a plant is losing excessive amounts of water , for example during a drought , the plant produces a hormone called abscisic acid that promotes the closure of its stomata . When abscisic acid is present , the guard cells are sensitive to changes in their internal concentration of calcium ions so that calcium ions can activate a protein called SLAC1 . This leads to responses in the guard cells that close the stoma . The calcium ions activate SLAC1 by stimulating enzymes called calcium-dependent protein kinases ( CPKs ) . However , abscisic acid can also trigger other enzymes that can activate SLAC1 independently of the calcium ions . Calcium ions are also reported to be involved in the opening of stomata , when abscisic acid is not present . Therefore , it is not clear how abscisic acid works to specifically ‘prime’ guard cells to close the stomata in response to increases in calcium ions during drought . Brandt , Munemasa et al . studied stomata in a plant called Arabidopsis thaliana . The experiments show that , in the presence of abscisic acid , mutant plants that lack four different CPK enzymes are impaired in the activation of SLAC1 and the closing of stomata in response to increases in calcium ions . Further experiments found that other enzymes called the PP2Cs—which are switched off by abscisic acid—are responsible for regulating the Ca2+ sensitivity of guard cells . Switching off PP2Cs enables closing of the stomata in response to calcium ions . It has been suggested previously that the CPKs and the calcium-independent enzymes are involved in two separate pathways that promote the closure of stomata . However , Brandt , Munemasa et al . found that the calcium-independent enzymes are required for calcium ions to activate SLAC1 in guard cells , revealing that these two pathways are linked . Brandt , Munemasa et al . 's findings reveal how abscisic acid is able to specifically prime guard cells to close stomata in response to calcium ions . The next challenge is to understand how the CPKs and calcium-independent enzymes work together during the closure of stomata .",
"role": "assistant"
}
] |
science.nlmgene_ner | science.nlmgene_ner.242 | [
{
"content": "Analyze the provided text to identify gene-related entities. \n\nFormat your findings as a JSON object: {\"Gene\": [\"ExampleGene\", ...]}. Use the entity type as the key, and list the entities under it. If no entities are found, use an empty list.\n\nFocus solely on the JSON output.\n\nText:\n\nTBX21 and HLX1 polymorphisms influence cytokine secretion at birth. BACKGROUND: TBX21 (T cell specific T-box transcription factor) and HLX1 (H.20-like homeobox 1) are crucial transcription factors of T(H)1-cells, inducing their differentiation and suppressing T(H)2 commitment, particularly important for early life immune development. This study investigated the influence of TBX21 and HLX1 single nucleotide polymorphisms (SNPs), which have previously been shown to be associated with asthma, on T(H)1/T(H)2 lineage cytokines at birth. METHODS AND FINDINGS: Cord blood mononuclear cells (CBMCs) of 200 neonates were genotyped for two TBX21 and three HLX1 SNPs. CBMCs were stimulated with innate (Lipid A, LpA; Peptidoglycan, Ppg), adaptive stimuli (house dust mite Dermatophagoides pteronyssinus 1, Derp1) or mitogen (phytohemagglutinin, PHA). Cytokines, T-cells and mRNA expression of T(H)1/T(H)2-related genes were assessed. Atopic diseases during the first 3 years of life were assessed by questionnaire answered by the parents. Carriers of TBX21 promoter SNP rs17250932 and HLX1 promoter SNP rs2738751 showed reduced or trendwise reduced (p≤0.07) IL-5, IL-13 and TNF-α secretion after LpA-stimulation. Carriers of HLX1 SNP rs2738751 had lower IL-13 levels following Ppg-stimulation (p = 0.08). Carriers of HLX1 exon 1 SNP rs12141189 showed increased IL-5 (LpA, p = 0.007; Ppg, p = 0.10), trendwise increased IL-13 (LpA), higher GM-CSF (LpA/Ppg, p≤0.05) and trendwise decreased IFN-γ secretion (Derp1+LpA-stimulation, p = 0.1). Homozygous carriers of HLX1 promoter SNP rs3806325 showed increased IL-13 and IL-6 (unstimulated, p≤0.03). In carriers of TBX21 intron 3 SNP rs11079788 no differences in cytokine secretion were observed. mRNA expression of T(H)1/T(H)2-related genes partly correlated with cytokines at protein level. TBX21 SNP rs11079788 carriers developed less symptoms of atopic dermatitis at 3 years of age (p = 0.03). CONCLUSIONS: Polymorphisms in TBX21 and HLX1 influenced primarily IL-5 and IL-13 secretion after LpA-stimulation in cord blood suggesting that genetic variations in the transcription factors essential for the T(H)1-pathway may contribute to modified T(H)2-immune responses already early in life. Further follow-up of the cohort is required to study the polymorphisms' relevance for immune-mediated diseases such as childhood asthma.",
"role": "user"
},
{
"content": "{\"Gene\": [\"Derp1\", \"Cytokines\", \"HLX1\", \"IL-13\", \"TNF-\\u03b1\", \"cytokine\", \"TBX21\", \"H.20-like homeobox 1\", \"Dermatophagoides pteronyssinus 1\", \"IL-5\", \"GM-CSF\", \"IL-6\", \"transcription factors\", \"IFN-\\u03b3\", \"cytokines\", \"T cell specific T-box transcription factor\"]}",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1419 | [
{
"content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nGenerative adversarial networks (GANs) learn to map samples from a noise distribution to a chosen data distribution.\nRecent work has demonstrated that GANs are consequently sensitive to, and limited by, the shape of the noise distribution.\nFor example, a single generator struggles to map continuous noise (e.g. a uniform distribution) to discontinuous output (e.g. separate Gaussians) or complex output (e.g. intersecting parabolas).\nWe address this problem by learning to generate from multiple models such that the generator's output is actually the combination of several distinct networks.\nWe contribute a novel formulation of multi-generator models where we learn a prior over the generators conditioned on the noise, parameterized by a neural network.\nThus, this network not only learns the optimal rate to sample from each generator but also optimally shapes the noise received by each generator.\nThe resulting Noise Prior GAN (NPGAN) achieves expressivity and flexibility that surpasses both single generator models and previous multi-generator models.\nLearning generative models of high-dimensional data is of perpetual interest, as its wide suite of applications include synthesizing conversations, creating artwork, or designing biological agents (Bollepalli et al., 2017; Tan et al., 2017; Blaschke et al., 2018) .\nDeep models, especially generative adversarial networks (GANs), have significantly improved the state of the art at modeling these complex distributions, thus encouraging further research (Goodfellow et al., 2014) .\nWhether implicitly or explicitly, works that use GANs make a crucial modeling decision known as the manifold assumption (Zhu et al., 2016; Schlegl et al., 2017; Reed et al., 2016) .\nThis is the assumption that high-dimensional data lies on a single low-dimensional manifold which smoothly varies and where local Euclidean distances in the low-dimensional space correspond to complex transformations in the high-dimensional space.\nWhile generally true in many applications, this assumption does not always hold (Khayatkhoei et al., 2018) .\nFor example, recent work has emphasized situations where the data lies not on one single manifold, but on multiple, disconnected manifolds (Khayatkhoei et al., 2018; Gurumurthy et al., 2017; Hoang et al., 2018) .\nIn this case, GANs must attempt to learn a continuous cover of the multiple manifolds, which inevitably leads to the generation of off-manifold points which lie in between (Kelley, 2017) .\nThe generator tries to minimize the number of these off-manifold points, and thus they are generally just a small fraction of the total generated distribution.\nAs such, they barely affect the typical GAN evaluation measures (like Inception and FID scores for images), which measure the quality of the generated distribution as a whole.\nThus, this problem is usually ignored, as other aspects are prioritized.\nHowever, in some applications, the presence of these bad outliers is more catastrophic than slight imperfections in modeling the most dense regions of the space.\nFor example, consider the goal of an artificial agent acting indistinguishably from a human: the famous Turing Test.\nIncorrectly modeling sentence density by using a given sentence structure 60% of the time instead of 40% of the time is relatively harmless.\nHowever, generating a single 
gibberish sentence will give away the identity of the artificial agent.\nMoreover, there are serious concerns about the implications this has for proofs of GAN convergence (Mescheder et al., 2018) .\nFigure 1: The Noise-Prior GAN (NPGAN) architecture.\nUnlike previous work, the NP network learns a prior over the generators conditioned on the noise distribution z.\nThis allows it to both control the sampling frequency of the generators and shape the input appropriate to each one, in an end-to-end differentiable framework.\nThese works address the problem of disconnected manifolds by simultaneously training multiple generators and using established regularizations to coax them into dividing up the space and learning separate manifolds.\nMethods for getting multiple generators to generate disconnected manifolds can be divided into two categories:\n(i) imposing information theoretic losses to encourage output from different generators to be distinguishable (Khayatkhoei et al., 2018; Hoang et al., 2018)\n(ii) changing the initial noise distribution to be disconnected (Gurumurthy et al., 2017) .\nOur approach falls into the second category.\nPrevious efforts to change the noise distribution to handle disconnectedness have exclusively taken the form of sampling from a mixture of Gaussians rather than the typical single Gaussian (with sampling fixed and uniform over the mixture).\nOur approach differs significantly from those previous efforts.\nWe use multiple generators as before, but instead of dividing up the noise space into factorized Gaussians and sending one to each generator, we let an additional neural network determine how best to divide up the noise space and dispatch it to each generator.\nThis network learns a prior over the generators, conditioned on the noise space.\nThus, we call our additional third network a noise-prior (NP) network.\nPrevious methods have modeled the data with noise z and generators G_i sampled with a fixed, uniform prior over the generators.\nWe instead propose a framework to incorporate a richer p(G_i|z) into the generator.\nThis framework is entirely differentiable, allowing us to optimize the NP network along with the generators during training.\nWe note that with this strategy, we significantly increase the expressivity of each generator over the previous disconnected manifold models.\nBy dividing up the space into four slices s_i and sending s_1, s_3 to the first generator and s_2, s_4 to the second generator, we can generate four disconnected manifolds with just two generators.\nPrevious work would have to devote precisely four generators to this task, with degradation in performance if fewer or more generators are chosen for the hyperparameter.\nHere, the prior network learns to divide the noise space appropriately for whatever number of generators is chosen, and is thus more expressive as well as more robust than previous models.\nMoreover, much existing work has exclusively framed the problem as, and tailored solutions for, the disconnected manifold problem.\nOur approach is more generalized, addressing any misspecification between noise distribution and the target distribution.\nThis means that our approach does not become redundant or unnecessary in the case of single complex manifolds, for example.\nOur contributions can be summarized as:\n1. We introduce the first multi-generator ensemble to learn a prior over the noise space, using a novel soft, differentiable loss formulation.\n2. 
We present a multi-generator method that can learn to sample generators in proportion to the relative density of multiple manifolds.\n3. We show how our model not only improves performance on disconnected manifolds, but also on complex-but-connected manifolds, which are more likely to arise in real situations.\nWe introduced a novel formulation of multiple-generator models with a prior over the generators, conditioned on the noise input.\nThis results in improved expressivity and flexibility by shaping each generator's input specifically to best perform that generator's task.\nIn this section, we elaborate on the CIFAR experiment from the main text.\nWe use a more complicated architecture here with spectral normalization, self-attention, and ResNet connections, per the best-achieving models to date.\nWe experimented using two, three, four, and five generators in the NPGAN architecture.\nFigure A.1 shows images generated by the NPGAN with each number of generators.\nWith just two generators, each one creates a wide diversity of images.\nOn the other hand, when increasing the number of generators, each one becomes more homogeneous.\nFor example, in the two-generator model, one of them creates dogs, cars, and frogs, while in the five-generator model each generator has specialized to just birds in the sky or just cars.\nQualitatively, the noise prior is obviously learning a sensible split of the data across generators and each generator is outputting quality images.\nHowever, when comparing the two-generator, three-generator, four-generator, and five-generator versions of NPGAN to the baseline one-generator of the same model, we do not observe any improvement in FID score.\nThis is unsurprising for the reasons mentioned in the main text.\nThe FID scores treat all points equally across a generated dataset, and thus will be most strongly influenced by where the most points are.\nA relatively small number of outliers barely register by this metric.\nEven current state-of-the-art image generation on CIFAR10 is nowhere close to perfectly modeling the data.\nWhen GANs are able to perfectly model the dataset except for trailing outliers between modes, we expect the NPGAN's improvements to be visible in FID scores on this dataset.\nUntil then, the detection of a few bad outliers needs to be done with other evaluation techniques on this dataset.\nWith this caveat, we note that we could achieve an FID score of 26.4 with our NPGAN, compared to 25.8 with our code and one generator, which demonstrates that the NPGAN can scale to state-of-the-art architecture without suffering in quality.\nThe NPGAN is robust to a connected dataset while simultaneously being able to automatically solve the problems of a disconnected dataset.\nFurthermore, this motivated the creation of our new outlier manifold distance metric, designed to be more sensitive to the creation of outliers than the FID score.\nUsing this metric, we see NPGAN outperform all other models.\nRelation to Machine Teaching: In (Zhu, 2013) , an analogous question is posed: if a teacher network knows the function its student network is supposed to learn, what are the optimal training points to teach it as efficiently as possible?\nFor students following a Bayesian learning approach, this is thought of as finding the best data points D to make the desired model θ*, or minimizing with respect to D: −log(p(θ*|D)).\nIn our framework, the teacher network NP does not know the function its students should learn ahead of time, because this target is changing 
continually as the discriminator improves simultaneously.\nNevertheless, the NP network is still learning to form the optimal curriculum for each individual student such that the collection of students best models the target function given the current parameters of the discriminator.\nRelation to knowledge distillation: Our NP network also has links to the field of knowledge distillation (Kim & Rush, 2016; Chen et al., 2017; Furlanello et al., 2018; Wang et al., 2018) , where a teacher network is trying to compress or distill the knowledge it has about a particular distribution into one or several (Hinton et al., 2015) smaller models.\nIn the case of multiple smaller models, the teacher can be thought of as a generalist whose job it is to find the right specialist for a specific problem.",
"role": "user"
},
{
"content": "A multi-generator GAN framework with an additional network to learn a prior over the input noise.",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.2129 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nA controlled trial of transcutaneous electrical nerve stimulation (TENS) and exercise for chronic low back pain.\nReport Abstract 1:\nA number of treatments are widely prescribed for chronic back pain, but few have been rigorously evaluated. We examined the effectiveness of transcutaneous electrical nerve stimulation (TENS), a program of stretching exercises, or a combination of both for low back pain. Patients with chronic low back pain (median duration, 4.1 years) were randomly assigned to receive daily treatment with TENS (n = 36), sham TENS (n = 36), TENS plus a program of exercises (n = 37), or sham TENS plus exercises (n = 36). After one month no clinically or statistically significant treatment effect of TENS was found on any of 11 indicators of outcome measuring pain, function, and back flexion; there was no interactive effect of TENS with exercise. Overall improvement in pain indicators was 47 percent with TENS and 42 percent with sham TENS (P not significant). The 95 percent confidence intervals for group differences excluded a major clinical benefit of TENS for most outcomes. By contrast, after one month patients in the exercise groups had significant improvement in self-rated pain scores, reduction in the frequency of pain, and greater levels of activity as compared with patients in the groups that did not exercise. The mean reported improvement in pain scores was 52 percent in the exercise groups and 37 percent in the nonexercise groups (P = 0.02). Two months after the active intervention, however, most patients had discontinued the exercises, and the initial improvements were gone. We conclude that for patients with chronic low back pain, treatment with TENS is no more effective than treatment with a placebo, and TENS adds no apparent benefit to that of exercise alone.",
"role": "user"
},
{
"content": "At this time, the evidence from the small number of placebo-controlled trials does not support the use of TENS in the routine management of chronic LBP. Further research is encouraged.",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.263 | [
{
"content": "Analyze the provided text excerpt from a scientific document. Identify and extract distinct entities based on predefined categories.\n\nDeliver the results as a JSON structure where each key represents an entity category, and the associated value is a list of identified entities. If no entities are found for a category, return an empty list.\n\nText Excerpt:\n\nIL-4 down-regulates IL-2 -, IL-3 -, and GM-CSF-induced cytokine gene expression in peripheral blood monocytes. IL-4, a product of the T-helper 0 (Th0) and 2 (Th2) subset, was originally described as a B-cell stimulatory factor and has subsequently been found to suppress IL-1 alpha, IL-1 beta, IL-6, IL-8, and TNF-alpha gene expression in monocytes stimulated with LPS, and to upregulate IL-1 receptor antagonist (IL1-RA) gene expression. In this study we investigated the effect of IL-4 on the expression of cytokine genes in monocytes evoked by other T-helper cell cytokines: IL-2, IL-3, and GM-CSF. IL-4 down-regulated mRNA accumulation of the proinflammatory cytokines IL-1 beta, IL-8, and TNF-alpha in monocytes stimulated with IL-2, IL-3, and GM-CSF. IL-4 also suppressed the IL-2-induced IL-6 mRNA expression. Temporal analysis of the IL-4 down-regulatory effect on the IL-2 -, IL-3 -, or GM-CSF-induced proinflammatory cytokine gene expression in monocytes provided evidence that IL-4 acts predominantly on the post-transcriptional level. This was supported by the observation that the down-regulatory capacity of IL-4 appeared to be dependent on de novo protein synthesis. IL-4 did not exert significant influence on the induction of expression of IL-1-RA or various CSFs by IL-2, IL-3, and GM-CSF. (ABSTRACT TRUNCATED AT 250 WORDS)",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"CellType\": [\"Th2\", \"Th0\", \"monocytes\", \"T - helper 0 (Th0) and 2 (Th2) subset\"], \"DNA\": [\"cytokine genes\", \"IL - 1 receptor antagonist (IL1 - RA) gene\"], \"Protein\": [\"GM - CSF\", \"TNF - alpha\", \"IL - 2\", \"IL - 1 beta\", \"IL - 6\", \"CSFs\", \"IL - 4\", \"B - cell stimulatory factor\", \"IL - 1 - RA\", \"IL - 3\", \"IL - 8\", \"proinflammatory cytokines\", \"T - helper cell cytokines\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.17405 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Integration and multiplexing of positional and contextual information by the hippocampal network\nContent:\nAbstract:\nThe hippocampus is known to store cognitive representations, or maps, that encode both positional and contextual information, critical for episodic memories and functional behavior. How path integration and contextual cues are dynamically combined and processed by the hippocampus to maintain these representations accurate over time remains unclear. To answer this question, we propose a two-way data analysis and modeling approach to CA3 multi-electrode recordings of a moving rat submitted to rapid changes of contextual( light) cues, triggering back-and-forth instabitilies between two cognitive representations( “teleportation” experiment of Jezek et al). We develop a dual neural activity decoder, capable of independently identifying the recalled cognitive map at high temporal resolution( comparable to theta cycle) and the position of the rodent given a map. Remarkably, position can be reconstructed at any time with an accuracy comparable to fixed-context periods, even during highly unstable periods. These findings provide evidence for the capability of the hippocampal neural activity to maintain an accurate encoding of spatial and contextual variables, while one of these variables undergoes rapid changes independently of the other. To explain this result we introduce an attractor neural network model for the hippocampal activity that process inputs from external cues and the path integrator. Our model allows us to make predictions on the frequency of the cognitive map instability, its duration, and the detailed nature of the place-cell population activity, which are validated by a further analysis of the data. Our work therefore sheds light on the mechanisms by which the hippocampal network achieves and updates multi-dimensional neural representations from various input streams.\nIntroduction:\nFollowing the discovery of place cells, which specifically fire at determined positions in space[1], the hippocampus was recognized as an essential brain area for spatial representations and memories. These cognitive representations, or maps, actually code for more than position in physical space, and are also strongly informative about context[2], including physical features of the background, such as visual landmarks, light, odors, auditory stimuli, as well as more abstract conditions, such as the emotional state or the task to be performed[3–8]. A fundamental property of the hippocampus is its capacity to memorize multiple cognitive maps[1, 9–11]. This property may result from specific recurrent synaptic connectivity in the hippocampal CA3 region[12, 13], and can be theoretically understood in the framework of continuous attractor neural networks( CANN)[14, 15]. Thanks to the remapping properties of place cells, multiple maps can be memorized in the same connectivity matrix with almost no interference between them[16–19]. Cognitive maps may be retrieved when the animal explores again the corresponding environments, or be quickly and intermittently recalled depending on the most relevant behavorial information at that moment[20]. Different sources of inputs to the hippocampus concur to form, recall, and dynamically maintain cognitive maps[21]. 
Changes in visual cues and landmarks may substantially affect place field shape and positioning[5]. The Path Integrator( PI), capable of integrating proprioceptive, vestibular and visual flow inputs and possibly supported by the grid-cell network in the medial entorhinal cortex( mEC)[22], allows the animal to update the neural representation during navigation[23]. The path integrator is itself sensitive to other sources of inputs, and undergoes reset in case of large disagreement with external landmarks or sensory information[24]. Insights about how these different inputs contribute to hippocampal representations were recently obtained by studying the effects of mismatches between path-integration and visual sensory information, in particular in virtual reality settings[25, 26]. In another study, Jezek et al showed how abrupt changes in visual context( light conditions) during active exploration by a rodent resulted in fast flickering between context-associated maps in CA3 on the theta time scale[27]( Fig 1A). Though they are largely artificial, these conditions offer a rare window on the fast dynamics of the place-cell population, and on how this dynamics is shaped by the various inputs. Despite these studies, how contextual and PI inputs are combined by the hippocampal network to produce cognitive maps and accurate positional encoding is not fully understood yet. In this work, we carefully reanalyze and model the experiment of Jezek et al to address this issue. We first introduce a dual inference method capable of extracting reliably and independently the encoded map[28] and the encoded position[29, 30] from the recorded spiking activity alone( Fig 1B). Our dual decoder allows us to robustly show that the hippocampal activity always encodes the correct location in the retrieved map, even during the fast, unstable dynamics of the cognitive maps, as put forward in[27]. To explain this robust encoding, we propose a CANN model of the hippocampal circuitry, capable of storing multiple cognitive maps; the model is fed by visual-cue and path-integration inputs projecting on the place-cell populations supporting those maps[31]. The path integrator is, in turn, influenced by the hippocampal activity, closing an interaction loop between the hippocampus and the mEC[32]. Our model not only reproduces the flickering phenomenology and the stable encoding of position, but also makes several precise predictions on the dynamics of cognitive maps, the relative strength of inputs, and the intricate activation of place-cell populations supporting the two maps. These predictions are corroborated by a further detailed analysis of Jezek et al’s data. Our work therefore proposes explicit mechanisms by which the hippocampus could be capable of encoding various contextual and self-locomotion information in multi-dimensional representations, and of updating them accurately on fast time scales.\nDiscussion:\nOur statistical inference-based data analysis allows us to quantify how well the CA3 neural activity encodes various cognitive maps, and the position therein. Correlation-based procedures, e. g. used in[27], decode the cognitive state by comparing the instantaneous population activity to the average activity recorded in reference sessions at the same position of the rodent. Our functional-network based map decoder, instead, relies on the fact that the joint pairwise spiking activity of neurons is a fingerprint of the cognitive map[28, 34]. 
It does not need any knowledge of the sensory correlate( here, position), and could be used to decode generic brain states in other areas. The fast dynamics of cognitive maps studied in[27] and here results from an unrealistic sensory situation. Imposing artificial conflicts between inputs and studying their consequences is a standard approach to unveil the circuitry underlying the processing of multimodal sensory information in the hippocampus[25, 26] as well as in other brain areas, see for instance[39] for an illustration in the primary visual cortex where mismatches involve sensory and motor inputs. However, fast retrieval of functionally relevant maps, characterized by grouping and cognitive control, has also been observed in realistic settings, in which a behaving animal is required to maintain representations of two distinct spatial frames[20]. The position of the animal was accurately inferred at all times from the spiking activity using the place fields of the retrieved cognitive map( Fig 1B). As a main finding, we show that the hippocampus maintains high-quality encoding of the position even if the contextual variable undergoes fast dynamical changes. This is explained in the model by the fact that inputs point to place cells coding for the physical position in both competing maps( Fig 4A), and that the bump of activity is most often localized around these place cells in one map, and scattered all over the other map. Similar findings were reported in main text Fig 3d and Supplementary Information Fig 8 of[27], within a statistical framework assuming a priori the consistency of positional representation during flickering events, as the cognitive map was decoded by comparing the neural activity to the mean-activity vectors at the recorded real position of the rat. The emergence of unambiguous, non-mixed representations was also underlined in[27], and shown to take place in the second half of the theta cycle. However, the detailed analysis of the CA3 recordings and of the model data shows a loss of quality of the bump state( reduction in absolute value of log-ratio |ΔL|) and an increased quality of position decoding in the opposite map( Fig 6), providing evidence for the presence of partially mixed states. Our model for the retrieval of hippocampal cognitive maps in the presence of inputs from the path integrator and visual cues is based on CANN theory[32, 40]. Two-dimensional CANN attractors were previously applied to networks of place[15, 17] and grid[41, 42] cells. Indirect experimental evidence supporting CANN is now accumulating in various animals and brain areas. Evidence of a ring-shaped attractor region associated with head direction representation was recently reported in the Drosophila central brain[43]. Attractor dynamics has also been associated with behavioral observation in a study on the monkey prefrontal cortex[44]. As for space representation, experimental support for attractor behaviour has been found in hippocampal CA1[45] as well as in grid cell[46] recordings. Further indirect evidence is provided by the pattern of connectivity in CA3, compatible with its functional role as an auto-associative attractor network[13], as suggested long ago based on anatomical and computational considerations[12, 47], and by the active nature of dendrites of mEC neurons, which enhances the robustness of attractors under environmental changes[48]. 
The detailed analysis of the CA3 recordings done here provides further indirect support for the CANN theoretical framework, when multiple( two) cognitive maps are memorized. Memorization of the two attractors is obtained, in the model, by adding the corresponding connectivity matrices into the unique CANN connectivity matrix[16–19]. A detailed theoretical study of the mechanisms for transition from map to map was obtained in the absence of inputs, i. e. for spontaneous transitions induced by neural noise only[36]. A similar picture is found here in the presence of visual-cue inputs pointing to the ‘new’ map, while the path-integrator inputs point to the ‘old’ map in the conflicting phase. As inputs are of comparable magnitude, no single map is favored. The stochastic fluctuations resulting from the noise of the individual neurons are sufficient for the system to cross the activation barrier between the two memory states( maps) of the network, see Fig 4B and Fig. A in S1 Text. The hippocampal network jumps intermittently from one cognitive map to the other, reproducing the flickering events experimentally identified and described in[27]. Transition rates between the two maps increase with the neural noise, modeled here by the parameter β, see Eq( 12) in Methods and Fig. D in S1 Text. Neural noise relative to the population activity could also be effectively increased through the introduction of periodic( theta and gamma) modulations of the activity into the model[31, 37, 49]. The presence of rhythms is known to facilitate memory formation and integration of information[50, 51]. While theta oscillations can help produce flickering events as previously reported[31, 38], our work shows that such periodic modulations are not necessary. Transitions could also be facilitated by particular ‘confounding’ landmarks or positions in space, where the maps happen to be locally similar[16, 36]. The present model reproduces accurately all the observed flickering properties, without any need for a post-learning short-term plasticity of the CA3 network hypothesized in[38]. In particular, our model predicts that the flickering frequency is independent of the time spent after the teleportation event in the conflicting phase( Fig 5B). This finding is at first sight in disagreement with the exponential decay of the flickering frequency reported in[27, 38]. However, the latter was obtained as a result of an averaging over many teleportation events. For a single event, accurate data analysis shows that our constant flickering rate hypothesis, when combined with the exponentially distributed realignment time of the path integrator( Fig 5A), is much more likely than an exponentially decreasing scenario. Our model is based on the existence of two streams of inputs conveying, respectively, external landmark and self-navigation information. Recent studies have pointed to the grid-cell network in mEC as the possible region that supports path integration, as their firing patterns are maintained in the dark[52], and the relative phases of grid cells seem to be largely unaffected by global remapping between environments of similar shapes[11, 53]. CANN-based approaches have been proposed to model grid-cell networks[41, 42], differing from hippocampal CANN mostly by the short-range nature of the inhibitory couplings. 
In much the same way the microscopic hippocampal CANN proposed here can effectively be reduced to a 2-state model( Fig 4B), we expect CANN models for the grid-cell networks to be approximately described by a 2-state model, corresponding to the PI aligned with map A or B[11, 54, 55]. This motivates the simple model for the PI we have considered here. In addition to sending projections towards the hippocampus, our model PI receives feedback from the CANN, greatly increasing the probability of transition to the state agreeing with the instantaneous cognitive map[32, 56, 57]. Eventually, the state of the PI is realigned along the visual cue inputs, which stops the conflicting phase. Our model effectively implements a ratchet mechanism, locking the system into the coherent phase after a conflicting transient. Realignment of the path integrator based on visual landmarks is an important functional property, intended to limit the accumulation of errors in position estimation[58], and observed for large mismatch between external and internal inputs[24]. From a physiological point of view, projections exist from CA1 to mEC[32], and have been shown to be important for the formation of grid cells[57]. Hence, the feedback from the CANN, thought here to model CA3 activity, to the PI should be understood as effective. As recently reported in[53], the impairment of the mEC grid firing resulted in a loss of path integrator in behaving rodents. As, in our model, the recall of the pre-teleportation map, and, therefore, the whole flickering phenomenology are driven by the input stream from the path integrator to the CA3 network, we conjecture that flickering instabilities would disappear upon grid-cell impairment. Simultaneous recordings of mEC and CA3, as in[11], would be extremely useful to test our predictions and better describe the effect of the path integrator on the cognitive status of CA3.",
"role": "user"
},
{
"content": "As an animal moves in space and receives external sensory inputs , it must dynamically maintain the representations of its position and environment at all times . How the hippocampus , the brain area crucial for spatial representations , achieves this task , and manages possible conflicts between different inputs remains unclear . We propose here a comprehensive attractor neural network-based model of the hippocampus and of its multiple input streams ( including self-motion ) . We show that this model is capable of maintaining faithful representations of positional and contextual information , and resolves conflicts by adapting internal representations to match external cues . Model predictions are confirmed by the detailed analysis of hippocampal recordings of a rat submitted to quickly varying and conflicting contextual inputs .",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.1424 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nDifferent mechanisms of resistance to CDDP have been described, including reduced intracellular accumulation of CDDP; increased detoxification of CDDP by the thiols, glutathione and metallothionein; increased DNA-repair activity; and increased tolerance to DNA damage (Galluzzi et al. 2012; Kelland 2007).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.2777 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: RGB-D-Based Object Recognition Using Multimodal Convolutional Neural Networks: A Survey \nChapter Title: 1) HAND-CRAFTED BASED DEPTH ENCODING METHODS \n1. Abstract of Cited Paper (BIB001): Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset and show recognition in challenging RGB-D real-world noisy settings. \n2. Abstract of Cited Paper (BIB002): Object recognition and pose estimation from RGB-D images are important tasks for manipulation robots which can be learned from examples. Creating and annotating datasets for learning is expensive, however. We address this problem with transfer learning from deep convolutional neural networks (CNN) that are pre-trained for image categorization and provide a rich, semantically meaningful feature set. We incorporate depth information, which the CNN was not trained with, by rendering objects from a canonical perspective and colorizing the depth channel according to distance from the object center. We evaluate our approach on the Washington RGB-D Objects dataset, where we find that the generated feature set naturally separates classes and instances well and retains pose manifolds. We outperform state-of-the-art on a number of subtasks and show that our approach can yield superior results when only little training data is available. \n3. Abstract of Cited Paper (BIB003): Deep learning based methods have achieved unprecedented success in solving several computer vision problems involving RGB images. However, this level of success is yet to be seen on RGB-D images owing to two major challenges in this domain: training data deficiency and multi-modality input dissimilarity. We present an RGB-D object recognition framework that addresses these two key challenges by effectively embedding depth and point cloud data into the RGB domain. We employ a convolutional neural network (CNN) pre-trained on RGB data as a feature extractor for both color and depth channels and propose a rich coarse-to-fine feature representation scheme, coined Hypercube Pyramid, that is able to capture discriminatory information at different levels of detail. Finally, we present a novel fusion scheme to combine the Hypercube Pyramid features with the activations of fully connected neurons to construct a compact representation prior to classification. 
By employing Extreme Learning Machines ( ELM ) as non-linear classifiers, we show that the proposed method outperforms ten state-of-the-art algorithms for several tasks in terms of recognition accuracy on the benchmark Washington RGB-D and 2D3D object datasets by a large margin (up to 50% reduction in error rate). \n4. Abstract of Cited Paper (BIB005): Augmenting RGB images with depth information is a well-known method to significantly improve the recognition accuracy of object recognition models. Another method to improve the performance of visual recognition models is ensemble learning. However, this method has not been widely explored in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models, and show that this can be used to increase the recognition performance beyond existing limits. Experiments on the Washington RGB-D Object Dataset show that our best performing ensemble improves the recognition performance by 0.7% compared to using the baseline model alone.\n\nReview Chapter:",
"role": "user"
},
{
"content": "The hand-crafted based methods are very popular because they are intuitive and easy to be implemented. Many handcrafted mappings were proposed to colorize depth data and obtained impressive improvements especially over the Washington RGB-D object dataset . (1) Surface normals Within the RGB-D classification literature, surface normals is the most widely used encoding method. Bo et al. first converted the depth map to surface normals and then re-interpreted it as RGB values. Due to the inherent noisiness of the depth channel, such a direct conversion results in noisy images in the color space. To address this issue, Akerberg et al. modified it with recursive median filter and bilateral filter. (2) ColorJet ColorJet is a simple color mapping method by assigning different colors to different depth values. It was first introduced in RGB-D object recognition by Eitel et al. BIB002 . In ColorJet, the original depth map is firstly normalized 0-255 values. Then the colorization works by mapping the lowest value to the blue channel and the highest value to the red channel. The value in the middle is mapped to green and the intermediate values are arranged accordingly. To increase invariance of the generated images against camera pitch angle changes, Schwarz et al. BIB003 improved the ColorJet model by an optional re-projection step in a coordinate system. (3) HHA HHA was first introduced by Gupta et al. BIB001 for RGB-D object detection and segmentation. HHA is a mapping method in which the first channel encodes the horizontal disparity, the second channel encodes the height above ground and the third channel encodes the pixel-wise angle between the surface normal and the gravity vector. Horizontal disparity calculates the difference between the images of the left and right camera, which helps to understand the closeness of objects from the camera. Height above the ground depicts the possible positions of objects in the scene concerning the ground plane. The angle quantifies the shape of object in the scene. The HHA representation encodes properties of geocentric pose that emphasize complementary discontinuities in the image (depth, surface normal and height). This embedding method is strictly geocentric and such information is not always available in object-centric recognition BIB004 . (4) Embedded depth Embedded depth method was first proposed by Zaki et al. BIB004 . This method takes the shape information into the depth embedding. In embedded depth method, a single channel depth map is convoluted by vertical and horizontal Prewitt kernels. Then the gradient magnitude G m and gradient direction G θ are calculated, respectively by: where G x and G y are the vertical and horizontal derivative of depth image. Last, the three-channel depth map is constructed as the concatenation of the original single channel depth map with the gradient magnitude and direction maps. It's worth mentioning that the depth modality can be encoded by a certain encoding method or by a combination of the encoding methods. For example, Rahman et al. utilized both surface normals and ColorJet methods to encode the depth images. Aakerberg et al. BIB005 encoded the depth modality by combing the surface normals, ColorJet and fine-tuning of the surface normals in an ensemble learning framework.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.1527 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nCost-effectiveness of clopidogrel in acute coronary syndromes in Canada : a long-term analysis based on the CURE trial . BACKGROUND Cardiovascular diseases account for nearly 20 % of all hospitalizations in Canada and consume 12 % of the total cost of all illnesses . With increasing trends of cardiovascular disease and increasing costs of care , development of cost-effective strategies is vital . The Clopidogrel in Unstable angina to prevent Recurrent Events ( CURE ) trial demonstrated the effectiveness of clopidogrel plus acetylsalicylic acid ( ASA ) compared with ASA alone in reducing cardiovascular events in patients with acute coronary syndromes and , in addition , patients undergoing percutaneous coronary intervention in the Percutaneous Coronary Intervention in CURE ( PCI-CURE ) trial . OBJECTIVE To assess the cost-effectiveness of clopidogrel in the Canadian health care system . METHODS Estimates of hospitalization costs were based on the 2003 cost schedules released by the Health Funding and Costing Branch of the Alberta Health and Wellness , as well as on the Case Mix Group classification system . Life expectancy beyond the trial was estimated from the Saskatchewan Health Database . Cost-effectiveness was expressed as the incremental cost-effectiveness ratio , and bootstrap methods were used to estimate the joint distribution of costs and effectiveness . RESULTS Clopidogrel was shown to be cost-effective , with incremental cost-effectiveness ratios less than $ 10,000 per event prevented and less than $ 4,000 per life-year gained . The probability of clopidogrel resulting in cost per life-year gained of less than $ 20,000 was 0.975 for CURE patients and 0.904 for PCI-CURE patients . CONCLUSIONS The economic analysis demonstrated that clopidogrel combination therapy is not only cost-effective as antiplatelet therapy compared with ASA alone , but it is also cost-effective compared with other commonly used and openly reimbursed cardiovascular therapies in the Canadian health care system .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"clopidogrel\", \"ASA alone\", \"ASA\", \"clopidogrel plus acetylsalicylic acid ( ASA )\", \"clopidogrel combination therapy\", \"Clopidogrel\"], \"Outcome\": [\"cost-effectiveness ratios\", \"cost per life-year gained\", \"cost-effective\", \"gained .\"], \"Participant\": [\"acute coronary syndromes\", \"coronary intervention\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.3140 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nProbiotics have been suggested as a potential intervention for improving outcomes, particularly ventilatory-associated pneumonia, in patients infected with coronavirus disease 2019 (COVID-19). However, with the rapid development of the COVID-19 pandemic, there is little direct evidence available in infected patients. The objective of this scoping review is to examine the availability and nature of literature describing the effect of probiotics in adults with conditions or infections similar to COVID-19 infection on related health outcomes. MEDLINE, Cumulative Index to Nursing & Allied Health Literature, and Cochrane Databases were searched for studies published from 1999 to May 1, 2020, examining the effect of probiotics in conditions applicable to individuals infected with COVID-19, including, but not limited to, other forms of coronavirus, critical illness, and mechanical ventilation. The databases search identified 1925 unique articles, 77 full-text articles were reviewed, and 48 studies were included in this scoping review, including 31 primary studies and 17 systematic reviews. Primary studies examined a range of interventions that varied by probiotic diversity and types, including 8 studies that focused on synbiotics, which include both pre- and probiotics. Several systematic reviews examined the effect of probiotics on ventilator-associated pneumonia and other infections. Although most systematic reviews concluded probiotics may improve these outcomes, most systematic review authors concluded that the evidence was low in quality and high in heterogeneity. In the absence of direct evidence with patients infected with COVID-19, studies in comparable populations are currently the best resource to guide probiotics interventions in conjunction with clinical expertise and multidisciplinary health care planning.",
"role": "user"
},
{
"content": "[\"Treatment\"]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.9355 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nour research is approved by the ethics committee of medical college hospitals , kolkata , west bengal , india .\neach of the authors for preparing this case report , dr . kallol dasbaksi ( corresponding author ) , dr .\nplaban mukherjee have contributed to the study concept or design , data collection , data analysis or interpretation , for writing the paper .",
"role": "user"
},
{
"content": "highlightsthis case presentation , emphasizes the typical manifestations of hepatic but associated asymptomatic pericardial hcs.hepatic hydatid cyst presenting as a painless abdominal lump preceded painful presentation for a few months.asymptomatic pericardial hydatid cyst could have presented with pericardial tamponade due to rupture , if ignored further.the origin of the pericardial hydatid cyst could have been due to trans diaphragmatic passage of scolices from the hepatic location , as the usual pulmonary involvement was absent .",
"role": "assistant"
}
] |