dataset (stringclasses, 45 values) | id (stringlengths, 17–63) | messages (listlengths, 2–2) |
---|---|---|
science.chia_ner | science.chia_ner.1151 | [
{
"content": "Analyze the provided clinical trial criteria. Identify and extract distinct entities from the text.\n\nReturn the results as a JSON structure. Ensure the output is strictly in JSON format without any additional commentary.\n\nCriteria Text:\n\nAre negative for human immunodeficiency virus (HIV) infection at screening \nIs healthy on the basis of physical examination, medical history, electrocardiogram (ECG), and vital signs measurement performed at screening \nAre willing/able to adhere to the prohibitions and restrictions specified in the protocol and study procedures \nFemale participants of childbearing potential must have a negative serum pregnancy test (beta human chorionic gonadotropin [beta hCG]) at the Screening visit, and a negative urine pregnancy test pre-dose on Day 1 \nAre assessed by the clinic staff as being at low risk for HIV infection",
"role": "user"
},
{
"content": "{\"Condition\": [\"healthy\", \"low risk\", \"childbearing potential\", \"human immunodeficiency virus (HIV)\", \"HIV infection\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"urine pregnancy test\", \"serum pregnancy test\", \"beta human chorionic gonadotropin [beta hCG]\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [\"negative\"], \"Observation\": [\"medical history\"], \"Person\": [\"Female\"], \"Procedure\": [\"physical examination\", \"electrocardiogram (ECG)\", \"vital signs measurement\"], \"Qualifier\": [], \"ReferencePoint\": [\"Day 1\", \"Screening visit\"], \"Scope\": [\"physical examination, medical history, electrocardiogram (ECG), and vital signs measurement\", \"serum pregnancy test (beta human chorionic gonadotropin [beta hCG])\"], \"Temporal\": [\"pre-dose on Day 1\", \"at the Screening visit\", \"at screening\"], \"Value\": [\"negative\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.1556 | [
{
"content": "Examine the given text from a research paper in the field of NLP. Your task is to identify the role of the citation within the context provided. Focus on understanding the intent behind the citation and classify it accordingly.\n\nSection Title:\nrelated work\n\nText Before Citation:\nLiu et al. (2005), Meral et al. (2007), Murphy (2001), Murphy and Vogel (2007) and Topkara et al. (2006a) all belong to the syntactic transformation category. In other words, instead of performing lexical substitution directly to the text, the secret message is embedded into syntactic parse trees of the sentences. Later, Atallah et al. (2001b) embedded information in the tree structure of the text by adjusting the structural properties of intermediate representations of sentences.\n\nCitation Text:\nLiu et al. ( 2005 ) , Meral et al. ( 2007 ) , Murphy ( 2001 ) , Murphy and Vogel ( 2007 ) and Topkara et al. ( 2006a ) all belong to the syntactic transformation category .\n\nText After Citation:\nAfter embedding the secret message, modified deep structure forms are converted into the surface structure format via language generation tools. Atallah et al. (2001b) and Topkara et al. (2006a) attained the embedding capacity of 0.5 bits per sentence with the syntactic transformation method.",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3529 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nTreasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost @math , where @math is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Information complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost @math . We first consider treasure hunt in regular terrains which are defined as convex polygons with convex @math -fat obstacles, for some constant @math . A polygon is @math -fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most @math . For the class of regular terrains, we establish the exact information complexity of treasure hunt. We then show that information complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.\n\nCited Works Summary:\n[1]: We consider the motion planning problem for a point constrained to move along a smooth closed convex path of bounded curvature. The workspace of the moving point is bounded by a convex polygon with m vertices, containing an obstacle in a form of a simple polygon with n vertices. We present an O(m+n) time algorithm finding the path, going around the obstacle, whose curvature is the smallest possible.\n\n[2]: The paper considers the problem of covering a continuous planar area by a square-shaped tool attached to a mobile robot. Using a tool-based approximation of the work-area, we present an algorithm that covers every point of the approximate area. The algorithm, called spanning tree covering (STC), subdivides the work-area into disjoint cells corresponding to the square-shaped tool, then follows a spanning tree of the graph induced by the cells, while covering every point precisely once. We present and analyze three versions of the STC algorithm. The first version is an off-line algorithm that computes an optimal covering path in linear time O(N), where N is the number of cells comprising the approximate area. The second version is an online or sensor based algorithm, that completes an optimal covering path in time O(N), but requires O(N) memory for its implementation. 
The third version of STC is \"ant\"-like, where the robot may leave pheromone-like markers during the coverage process. The ant-like STC algorithm runs in time O(N) and requires only O(1) memory. We present simulation results of the three STC algorithms, demonstrating their effectiveness in cases where the tool size is significantly smaller than the work-area characteristic dimension.\n\n[3]: A mobile robot represented by a point moving in the plane has to explore an unknown flat terrain with impassable obstacles. Both the terrain and the obstacles are modeled as arbitrary polygons. We consider two scenarios: the unlimited vision, when the robot situated at a point p of the terrain explores (sees) all points q of the terrain for which the segment pq belongs to the terrain, and the limited vision, when we require additionally that the distance between p and q is at most 1. All points of the terrain (except obstacles) have to be explored and the performance of an exploration algorithm, called its complexity, is measured by the length of the trajectory of the robot. For unlimited vision we show an exploration algorithm with complexity O(P+Dk), where P is the total perimeter of the terrain (including perimeters of obstacles), D is the diameter of the convex hull of the terrain, and k is the number of obstacles. We do not assume knowledge of these parameters. We also prove a matching lower bound showing that the above complexity is optimal, even if the terrain is known to the robot. For limited vision we show exploration algorithms with complexity O(P+A+Ak), where A is the area of the terrain (excluding obstacles). Our algorithms work either for arbitrary terrains (if one of the parameters A or k is known) or for c-fat terrains, where c is any constant (unknown to the robot) and no additional knowledge is assumed. (A terrain T with obstacles is c-fat if R r=\n\n[4]: The context of this work is the exploration of unknown polygonal environments with obstacles. Both the outer boundary and the boundaries of obstacles are piecewise linear. The boundaries can be nonconvex. The exploration problem can be motivated by the following application. Imagine that a robot has to explore the interior of a collapsed building, which has crumbled due to an earthquake, to search for human survivors. It is clearly impossible to have a knowledge of the building's interior geometry prior to the exploration. Thus, the robot must be able to see, with its onboard vision sensors, all points in the building's interior while following its exploration path. In this way, no potential survivors will be missed by the exploring robot. The exploratory path must clearly reflect the topology of the free space, and, therefore, such exploratory paths can be used to guide future robot excursions (such as would arise in our example from a rescue operation).",
"role": "user"
},
{
"content": "Exploration of polygons by a robot with limited vision has been studied, e.g., in [2] [4] [1] . In [2] the authors described an on-line algorithm with competitive ratio @math , where @math is a quantity depending on the perimeter of the polygon, @math is the area seen by the robot, and @math is the area of the polygon. In [1] the author studied exploration of the boundary of a terrain with limited vision. The cost of exploration of arbitrary terrains with obstacles, both for limited and unlimited vision, was studied in [3] .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.2942 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Air Flow Measurement and Management for Improving Cooling and Energy Efficiency in Raised-Floor Data Centers: A Survey \nChapter Title: B. HOT AISLE CONTAINMENT (HAC) \n1. Abstract of Cited Paper (BIB001): Data centers are very costly structures, both in terms of capital and operational expenditures. The high innovation rate in the IT business conflicts with the long-term character of data centers. In order to ensure lifetime usability, flexibility is a major concern for any data center design. Recent developments of data center air-cooling, in particular containment solutions for hot/cold air separation, allow rethinking of traditional data center design approaches. In this paper we present the Plenum concept that represents a major overhaul of traditional design principles. Our approach requires the construction of a full additional building story instead of a traditional raised floor. We then position the computer room air-handling (CRAH) units directly below the hot aisle of a block of racks. The down- ward flow direction of hot air makes the strict use of hot/cold air separation mandatory. Our approach minimizes or eliminates support space on the IT floor, moves almost all support structures into the Plenum, and creates very clean interfaces between both spaces. This greatly reduces the need for support personnel on the IT floor. Vice versa, IT personnel does not need to access support spaces, allowing for simplified data center operations. Insight from the planning process of a 5 MW data center currently under construction at TU Dresden shows that the construction of a full building story below the IT floor instead of a traditional raised floor does not necessarily increase the building volume. The simplified access and the separation of IT and support personnel reduces corridor and access spaces. We show that the overall building volume can even be decreased in comparison to classical approaches. Our design easily supports more than 15 kW average heat loads per rack with n+1 CRAH unit redundancy at a highly efficient CRAH operating point and with little scalability limitations regarding computer room size. \n2. Abstract of Cited Paper (BIB002): Abstract The goal of air management system in high-density data centers is to minimize hot air re-circulation and cold air by-pass in data centers. If successfully implemented, both goals will result in energy savings and enhanced thermal conditions. Air management helps to reduce operating costs by enhancing economizer utilization and improving cooling system efficiency. Metrics play a significant role in providing performance indices, which include Supply/Return Heat Index (SHI/RHI), the rack cooling index (RCI) and the return temperature index (RTI), of air management systems. CFD simulation estimates temperature and airflow in the data center as well as to provide wealth of information to evaluate airflow performance. This study defines four performance metrics that analyzes air management systems. These metrics were computed with CFD modeling of 46 different design alternatives to evaluate the thermal environment in a typical data center module. 
Advanced and comprehensive evaluation index of cooling efficiency was also derived as well-crafted metrics will undoubtedly play an important role. \n3. Abstract of Cited Paper (BIB003): In an effort to improve the reliability and efficiency of data centers, racks and sometimes entire hot aisles are ducted to a dropped ceiling. The cooling performance of such systems strongly depends on IT and cooler airflow, the number and configuration of ducted objects and perforated ceiling tiles, the leakiness of the ceiling system, ceiling plenum depth, and other factors. Recently, a compact model has been proposed in which a Flow Network Model (FNM) representing the ducted equipment is embedded into a parent CFD model. By eliminating the need to explicitly model difficult-to-characterize leakage paths in CFD, this approach allows for realistic solutions while greatly improving the solutions speed and robustness of the CFD simulation. This paper employs the FNM (without CFD) to characterize and compare the cooling effectiveness of individually-ducted racks and ducted hot aisles subject to a given ceiling plenum pressure. Example resistance values needed in the FNM are provided. Additionally, an example data center layout is studied with the coupled FNM-CFD model to explore cooling performance as a function of ceiling leakiness, plenum depth, ratio of cooling to IT airflow, and rack density (IT airflow). Best-practice type recommendations for ducted equipment are provided. \n4. Abstract of Cited Paper (BIB004): In cold aisle containment (CAC) the supply of cold air is separated within the contained volume. The hot air exhaust leaves the IT and increases the room’s temperature before returning to the cooling unit. On the other hand, hot aisle containment (HAC) generates a cooler environment in the data center room as a whole by segregating hot air within the containment. Hot air is routed back to the cooling unit return by a drop ceiling or a chimney. Each system has different characteristics and airflow paths. For instance, leakage introduces different effects for CACs and HACs since the hot and cold aisles are switched.This article utilizes data center measurements and containment characterization carried out circa April 2015 in the ES2 Data Center lab at Binghamton University. Details on the containment model include leakages at below racks, above racks, below CAC doors, between doors, and above doors. The model deploys the experimentally obtained flow curves approach for flow-pressure correlation.Data center operators rely on the pressure differential to measure how much the IT is provided. Hence, in this study the level of provisioning was expressed in terms of pressure differentials between the hot and cold aisles. In this manner the model reflected real-life DC thermal management practices. This was done by integrating a pressure differential based controller to the cooling unit model. Leakages in each system are quantified and ranked based on a proposed LIF (Leakage Impact Factor) metric.The LIF describes the transport contribution each leakage location has. This metric can be used by containment designers and data center operators to prioritize their sealing efforts or consider deploying the containment solution differently. Finally, a systematic approach is shown in which containment models can be used to optimize operations at the real-life site.Copyright © 2015 by ASME \n5. 
Abstract of Cited Paper (BIB005): Aisle containment is increasingly being used in data center cooling solutions because of the benefits of segregating the cold and hot air streams which reduces mixing and enhances energy savings. Containment can be integrated into legacy raised floor or non-raised floor local cooling systems such as in-row coolers. However, containment is not always entirely effective due to leakage. Although containment systems providing better sealing are becoming available, significant leakage exists in currently operational sites. The impact of leakage can be significant leading to backflow, the Venturi effect or entrainment. Comparisons between Cold Aisle Containment (CAC) and Hot Aisle Containment (HAC) indicate that there is a slight advantage of the latter in terms of ambient room temperature, failure scenarios and heat exchanger efficiency, although CAC systems can be easier to install. Detailed comparisons between both containment solutions are not widely available. Each containment approach has different behavior characteristics and air flow paths. For instance, leakage can introduce different flow and temperature conditions at server inlets depending on whether CAC or HAC is being used. An experimental characterization of containment was carried out at the ES2 research data center lab at Binghamton University [1] along with detailed numerical modeling. The numerical model was validated based on the experimentally measured flow rate of the perforated tiles in the cold aisles. Also, for the different types of IT equipment modeled in the simulations, experimentally measured flow impedance and fan curves were utilized. The details of the containment including leakage at the bottom and sides of the racks, the mounting rails, blanking panels and door seams were characterized experimentally and then incorporated in the computational model. Fan curves for the three CRAH blowers were used to accurately represent the targeted flow rates. This study focuses on three containment configurations: cold aisle containment and hot aisle containment and chimney (with a drop ceiling). In first two cases, the CRAH flow rate is controlled based on the pressure difference that was measured by three sensors inside and outside of the containment and for the third case, the CRAH flow rate was controlled based on IT required air flow. The water flow rate of the CRAH unit is also controlled based on the air return temperature and total heat load in the room so as to maintain the supply air temperature set point. The temperature and pressure fields in the aisles and at the server inlets and outlets are monitored for all the cases with leakage. \n6. Abstract of Cited Paper (BIB006): PART I: Comparative Study of Air Flow Energy for Active and Passive cooling designs within Data Center room using CFD\n\nReview Chapter:",
"role": "user"
},
{
"content": "HAC seals the space in rack rear and directs the hot air back to CRAH without mixing with the supplying cold air. There are no differences between CAC and HAC in terms of the thermal efficiency . However, HAC is more favorable in practice since 1) For the same air provision level, a less aisle pressure differential is required for HAC compared to CAC, hence a less leakage is expected BIB005 . 2) Nearly the entire room space in the HAC solution is casted into the cold aisle, which potentially yields more cooling capacity . 3) HAC takes smaller space for the hot air transportation and is more temperature friendly for data center maintenance staff BIB002 . The ambient temperatures in all working areas except HAC are all close to the supplying air temperature. Therefore, the CRAH temperature setpoint can be further raised in HAC to achieve the same working environment temperature compared to CAC BIB003 . It was estimated that 43% annual cooling energy could be saved (corresponding to 15% PUE reduction) by using HAC . Other advantages of HAC include more uptime in CRAH failure and higher heat exchanger efficiency BIB006 , etc. HAC can be implemented using Ducted Hot Aisles (DHAs) to the drop ceiling (Fig. 20b) or Vertical Exhaust Ducts (VEDs, a.k.a. cabinet chimneys) for individual racks (Fig. 20c) . For DHAs, it was suggested that the hot aisle width should be larger than 1.8m BIB007 , since a small dimension of hot aisle elevated the pressure and required more server fan energy to overcome it. it The VED solution provides a cooler working condition in hot aisles compared to DHA. It is also an option to capture the hot exhaust air for data centers where DHAs cannot be installed due to space or location issues BIB001 . However, solid rear doors and side panels are indispensable for VED racks as the internal pressure of racks is very high BIB006 , and additional fans may be necessary to overcome the increased pressure drop . The average cooling performance is generally better for DHAs than VEDs BIB004 . While the pressure distribution in CAC is rather uniform, the pressure of hot aisles in HAC exhibits variations, i.e., a lower pressure is observed in the upper part for racks close to CRAH BIB006 . In VEDs, the rack pressure variation is more drastic: lower pressure for racks closer to CRAH and higher pressure for racks far away from CRAH. The performance of HAC also shows a strong correlation with the drop ceiling pressure BIB004 . To obtain a more uniform ceiling plenum pressure, a depth of 18 − 24 inches (0.46 − 0.61 m) and regular/symmetic layouts are favourable. Well sealing the containment system is instrumental in the enhancement of HAC performance. Special attentions should be paid for the drop ceiling since many investigations suggested that the ceiling system, especially integrated with lighting fixtures, was substantially leakier than estimated by ASHRAE BIB004 , . The typical leakage of the ceiling should not exceed 10%.",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.905 | [
{
"content": "Examine the given text from a research paper in the field of NLP. Your task is to identify the role of the citation within the context provided. Focus on understanding the intent behind the citation and classify it accordingly.\n\nSection Title:\nNone\n\nText Before Citation:\nThis is similar to âone sense per collocationâ idea of Yarowsky (1993). In capitalized-word disambiguation, however, we use this assumption with caution and first apply strategies that rely not just on single words but on words together with their local contexts (n-grams). Gale, Church, and Yarowskyâs observation is also used in our DCA, especially for the identification of abbreviations.\n\nCitation Text:\nThis is similar to `` one sense per collocation '' idea of Yarowsky ( 1993 ) .\n\nText After Citation:\nThe description of the EAGLE workbench for linguistic engineering (Baldwin et al. 1997) mentions a case normalization module that uses a heuristic in which a capitalized word in an ambiguous position should be rewritten without capitalization if it is found lower-cased in the same document. This heuristic also employs a database of bigrams and unigrams of lower-cased and capitalized words found in unambiguous positions. It is quite similar to our method for capitalized-word disambiguation. The description of the EAGLE case normalization module provided by Baldwin et al. is, however, very brief and provides no performance evaluation or other details. \u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000",
"role": "user"
},
{
"content": "CompareOrContrast",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.2089 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nDemonstration of the efficacy of ginkgo biloba special extract EGb 761 on intermittent claudication--a placebo-controlled, double-blind multicenter trial.\nReport Abstract 1:\nA multicentric, randomized, placebo-controlled double-blind study on ginkgo biloba special extract EGb 761 (Tebonin forte) in patients suffering from peripheral occlusive arterial disease (POAD) in Fontaine stage II b was carried out in order to prove its clinical efficacy in this indication according to guidelines of European Community authorities and the German Angiological Society and to confirm the results of former clinical studies with EGb 761.\n In total, 111 patients with angiographically proven POAD in Fontaine stage II b and intermittent claudication (pain-free walking distance < 150 m on the treadmill) were recruited in 5 centers and treated with either EGb 761 or placebo at a daily dose of 3 times 1 film-coated tablet over a duration of 24 weeks following a 2-week placebo run-in period. The primary response variable was the difference of the pain-free walking distance between the start of treatment and after 8, 16 and 24 weeks as measured on the treadmill (walking speed 3 km/h and slope of 12%) under standardized conditions.\n At the start of the treatment period, the mean pain-free walking distances were very similar with 108.5 m in the EGb 761 group and 105.2 m in the placebo group. At the end of the treatment period these values increased to 153.2 m and 126.6 m, respectively. The group differences were statistically significant at all three control visits with p = 0.017, p = 0.007, and p = 0.016. The differences for the maximum walking distance and the relative increases of the pain-free walking distance and the maximum distance were also significantly higher in the EGb 761 group with p-values < 0.05 each. In both groups Doppler indices remained nearly unchanged during therapy. The subjective assessment of the patients demonstrated an amelioration of complaints in both groups. Tolerability was very good with no adverse events under EGb 761 and one case of heartburn and gastric pain in the placebo group.\n It can be concluded from the results of this study that treatment with EGb 761 in POAD patients with Fontaine stage II b is very safe and causes a significant and therapeutically relevant prolongation of the patients' walking distance.\n\n\nReport Title 2:\n6-Month double-blind randomised clinical trial of Ginkgo biloba extract versus placebo in two parallel groups in patients suffering from peripheral arterial insufficiency.\nReport Abstract 2:\n79 patients suffering from peripheral arteriopathy (Fontaine's stage IIb) completed a 6-month double-blind randomised clinical trial of Ginkgo biloba extract (GBE) (as coated tablets containing 40 mg GBE; rökan) versus placebo in two parallel groups. From the results of measurements of pain-free walking distance, maximum walking distance and plethysmography recordings, GBE was shown to be active and significantly superior to placebo. 
These results correlated with the physician's and patients' overall assessment of response to treatment.\n\n\nReport Title 3:\nEffect of Ginkgo biloba (EGb 761) on treadmill walking time among adults with peripheral artery disease: a randomized clinical trial.\nReport Abstract 3:\nMedical therapies for treatment of peripheral artery disease (PAD) are limited. Ginkgo biloba has been reported to increase maximal and pain-free walking distance among patients with PAD; however, the evidence is inconsistent. The objective of this study was to compare the effects of 300 mg/d of Ginkgo biloba (EGb 761) versus placebo on treadmill walking time and related cardiovascular measures among patients with PAD.\n A double-blind, placebo-controlled, parallel design trial with a 4-month duration was used. Participants were 62 adults, aged 70 +/- 8 years (mean +/- SD), with claudication symptoms of PAD. The primary study outcomes were maximal and pain-free walking time on a treadmill. Secondary outcomes included flow-mediated vasodilation, a measure of antioxidant status as assessed by determining antibody levels to epitopes of oxidized low-density lipoprotein, and questionnaires addressing walking impairment and quality of life.\n Maximal treadmill walking time increased by 20 +/- 80 and 91 +/- 242 seconds in the placebo and the EGb 761 groups, respectively (P = .12). Pain-free walking time increased by 15 +/- 31 and 21 +/- 43 seconds, respectively (P = .28). No significant differences were detected between groups for any of the secondary outcomes.\n In older adults with PAD, Ginkgo biloba produced a modest but insignificant increase in maximal treadmill walking time and flow-mediated vasodilation. These data do not support the use of Ginkgo biloba as an effective therapy for PAD, although a longer duration of use should be considered in any future trials.\n\n\nReport Title 4:\n[The effectiveness of a Ginkgobiloba preparation in arterial ischemic diseases of the leg. Controlled double-blind cross-over study].\nReport Abstract 4:\nnan\n\n\nReport Title 5:\n[Placebo-controlled double-blind study of the effectiveness of Ginkgo biloba special extract EGb 761 in trained patients with intermittent claudication].\nReport Abstract 5:\nThis monocenter, randomized, placebo-controlled double-blind study with parallel-group comparison was carried out in order to demonstrate the efficacy of Ginkgo biloba special extract EGb 761 on objective and subjective parameters of the walking performance in trained patients suffering from peripheral arterial occlusive disease in Fontaine stage IIb. In total 60 patients were recruited (42 men; aged 47-82 years) with angiographically proven peripheral arterial occlusive disease of the lower extremities and an intermittent claudication existing for at least 6 months. No improvement had been shown despite consistent walking training and a maximum pain-free walking distance on the treadmill of less than 150 m was recorded at the beginning of the study. The therapeutic groups were treated with either Ginkgo biloba special extract EGb 761 at a dose of 3 times 1 film-coated tablet of 40 mg per day by oral route or placebo over a duration of 24 weeks following a two-week placebo run-in phase. The main outcome measure was the difference of the walking distance between the start of treatment and after 8, 16 and 24 weeks of treatment as measured on the treadmill (walking speed 3 km/h and slope of 12%). 
As secondary parameters the corresponding differences for the maximum walking distance, the relative increase of the pain-free walking distance, the Doppler index and the subjective evaluation of the patients were analyzed. The absolute changes in the pain-free walking distance in treatment weeks 8, 16 and 24 as against the treatment beginning (median values with 95% confidence interval) led to the following values for the patients treated with Ginkgo biloba special extract EGb 761:19 m (14, 33), 34 m (18, 50) and 41 m (26, 64). The corresponding values in the placebo group were as follows: 7 m (-4, 12), 12 m (5, 22) and 8 m (-1, 21). The advantage of the EGb 761-treated group as compared to the placebo group could be verified statistically at the 3 time points with p < 0.0001, p = 0.0003 and p < 0.0001. The test for the presence of a clinically relevant difference of 20% between EGb 761 and placebo also produced a statistically significant result (p = 0.008). The Doppler index remained unchanged in both therapeutic groups: A corresponding statistically significant advantage for the EGb 761 group was observed on a descriptive level for the other parameters tested. The tolerance of the treatment was very good. The results of this placebo-controlled study show that treatment with Ginkgo biloba special extract EGb 761 produces a statistically highly significant and clinically relevant improvement of the walking performance in trained patients suffering from intermittent claudication with very good tolerance of the study preparation.\n\n\nReport Title 6:\nSupervised exercise training combined with ginkgo biloba treatment for patients with peripheral arterial disease.\nReport Abstract 6:\nTo evaluate whether a combination of supervised exercise training and ginkgo biloba treatment is a better treatment than exercise training alone for patients with peripheral arterial disease.\n A 24-week double-blind, placebo-controlled ginkgo biloba trial with the first 12-week period as a non-exercise control stage and the second 12-week period as an exercise training stage.\n Exercise physiology laboratory.\n Twenty-two subjects with peripheral arterial disease.\n The subjects were randomly allocated into a ginkgo or a placebo group. During the first stage, the ginkgo group ingested standardized ginkgo biloba tablets with a daily dosage of 240 mg, while the placebo group received placebo tablets. During the second stage, all subjects engaged in a supervised treadmill-walking programme while continuing to take the same dosage of ginkgo biloba or placebo tablets.\n Walking capacity on treadmill, oxygen consumption during exercise, peripheral haemodynamics and blood viscosity were measured at baseline, and after the first and the second stages of treatment.\n The ginkgo group did not show significant changes in most of the measured variables after each stage of treatment, except that the maximal walking time was significantly increased after the combined treatment (from 236 +/- 112 seconds to 557 +/- 130 seconds, P < 0.001). 
However, similar response was also found in the placebo group after exercise training (from 384 +/- 125 seconds to 820 +/- 146 seconds, P < 0.001).\n Supervised exercise training combined with ginkgo biloba treatment did not produce greater beneficial effects than exercise training alone in patients with peripheral arterial disease.\n\n\nReport Title 7:\nStudy of the antiischemic action of EGb 761 in the treatment of peripheral arterial occlusive disease by TcPo2 determination.\nReport Abstract 7:\nIn a randomized, placebo-controlled, double-blind, parallel study of 20 patients, the antiischemic effect of EGb 761 (Ginkgo biloba Extract) was studied by measuring the transcutaneous partial pressure of oxygen (TcPo2) during exercise. Transcutaneous oximetry during exercise provides a good, noninvasive estimation of local arterial perfusion and constitutes a real index of local and regional capillary perfusion. Twenty patients between the ages of forty-four and seventy-three years suffering from claudicating atherosclerotic arterial occlusive disease in stage II according to the Leriche and Fontaine classification, diagnosed for more than a year and stable for three months, were included. The eligible patients received placebo for fifteen days under single-blind conditions. At the end of this preinclusion period, the eligibility criteria were checked and the patients were randomized to two treatment groups. The first group received 320 mg per day of EGb 761 for four weeks and the second group received placebo. The treadmill walking test was performed under standardized conditions at the same time of day and by the same investigator. In a comparison of the differences before and after treatment, the areas of ischemia decreased by 38% in the EGb 761 group but remained essentially stable (+5%) in the placebo group. This difference between groups is significant (F [1.18] = 4.91; P = 0.04) and the 95% confidence interval for the difference ranges from 0.89 to 3.87. This study confirmed significantly the rapid antiischemic action of EGb 761 and its value in the management of peripheral arterial occlusive disease at the stage of intermittent claudication.\n\n\nReport Title 8:\n[The effect of Ginkgo biloba extract in patients with intermittent claudication].\nReport Abstract 8:\nEighteen patients with stable intermittent claudication were randomized in a double blind cross-over study comparing the effects of the Ginkgo biloba extract GB-8 at a dose of 120 mg o.d. with placebo. All patients were treated for three months with the active extract and three months with placebo. The effects of treatment on arterial insufficiency were quantified by measurements of systemic and peripheral systolic blood pressures, and pain-free and maximal walking distances on a tread-mill. Questionnaires based on visual analogue scales were used to quantify the severity of leg pain, impairment of concentration, and inability to remember. Short-term memory was objectively assessed. We did not find any significant changes in either peripheral blood pressures, walking distances or the severity of leg pain. Systemic blood pressure was reduced both by placebo and GB-8. The impairment of concentration and the inability to remember were both reduced, when comparing results during active treatment to placebo. Short-term memory did not change significantly. 
In conclusion, our study has shown that treatment with the Ginkgo biloba extract GB-8 improves some cognitive functions in elderly patients with moderate arterial insufficiency, whereas the extract did not change signs and symptoms of vascular disease in the patients.\n\n\nReport Title 9:\nA clinical trial of Gingkco Biloba Extract in patients with intermittent claudication.\nReport Abstract 9:\nThirty-seven patients with stage 2 peripheral vascular disease were randomised to receive a six month course of Gingkco Biloba Extract (Tanakan) or matching placebo. Assessment, by claudication distance, A/B ratio, Doppler ankle pressure response to exercise together with recovery time, and a 10 cm analogue scale (LAS) estimation of maximal pain severity, was performed before treatment, and at 6, 12 and 24 weeks. LAS scores were significantly improved after 24 weeks in patients receiving EgB, but not placebo. Claudication distance was significantly increased by Egb. Although claudication distance also increased in the placebo group, this was not significant. A/B ratio and Doppler ankle responses to exercise did not show any significant change in either group at any time interval, nor did the post exercise recovery time. Gingkco Biloba Extract is a safe and effective method of improving walking distance and reducing pain severity in patients with intermittent claudication, although Doppler studies have failed to suggest any gross improvement in the perfusion of the ischaemic leg.\n\n\nReport Title 10:\n[Ginkgo-special-extract EGb 761 in peripheral arterial occlusive diseases stage IIb according to Fontaine].\nReport Abstract 10:\nnan",
"role": "user"
},
{
"content": "Overall, there is no evidence that Ginkgo biloba has a clinically significant benefit for patients with peripheral arterial disease.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3119 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nWhile autonomous navigation has recently gained great interest in the field of reinforcement learning, only a few works in this field have focused on the time optimal velocity control problem, i.e. controlling a vehicle such that it travels at the maximal speed without becoming dynamically unstable. Achieving maximal speed is important in many situations, such as emergency vehicles traveling at high speeds to their destinations, and regular vehicles executing emergency maneuvers to avoid imminent collisions. Time optimal velocity control can be solved numerically using existing methods that are based on optimal control and vehicle dynamics. In this paper, we use deep reinforcement learning to generate the time optimal velocity control. Furthermore, we use the numerical solution to further improve the performance of the reinforcement learner. It is shown that the reinforcement learner outperforms the numerically derived solution, and that the hybrid approach (combining learning with the numerical solution) speeds up the learning process.\n\nCited Works Summary:\n[1]: We present a machine learning approach for estimating the second derivative of a drivable surface, its roughness. Robot perception generally focuses on the first derivative, obstacle detection. However, the second derivative is also important due to its direct relation (with speed) to the shock the vehicle experiences. Knowing the second derivative allows a vehicle to slow down in advance of rough terrain. Estimating the second derivative is challenging due to uncertainty. For example, at range, laser readings may be so sparse that significant information about the surface is missing. Also, a high degree of precision is required in projecting laser readings. This precision may be unavailable due to latency or error in the pose estimation. We model these sources of error as a multivariate polynomial. Its coefficients are learned using the shock data as ground truth -- the accelerometers are used to train the lasers. The resulting classifier operates on individual laser readings from a road surface described by a 3D point cloud. The classifier identifies sections of road where the second derivative is likely to be large. Thus, the vehicle can slow down in advance, reducing the shock it experiences. The algorithm is an evolution of one we used in the 2005 DARPA Grand Challenge. We analyze it using data from that route.",
"role": "user"
},
{
"content": "There are several approaches for velocity control using reinforcement learning. Each of them solves a problem defined differently. For example, [1] developed a self-supervised'' method for velocity selection in off-road terrain using the input of an inertial measurement unit. The goal is to protect the vehicle from too high vibrations while achieving a higher velocity but not the optimal one. Rosolia and Borrelli present a model predictive controller that improves its performance based on previous iterations, and test it on a racing car. This method is effective for repetitive tasks, in which the initial state is the same at each iteration; hence it cannot control a vehicle on different paths. Other approaches use machine learning based methods as a preprocessing stage only. use a convolutional neural network to create the input for a predictive controller to achieve aggressive driving capabilities. While a convolutional neural network training is used as a preprocessing stage to create a cost map, the vehicle itself is controlled by a model predictive controller.",
"role": "assistant"
}
] |
science.chemprot_ner | science.chemprot_ner.3 | [
{
"content": "Analyze the following text to identify specific entities. Focus on the categories provided: [\"Chemical\", \"Protein\"].\n\nDeliver your findings as a JSON structure: {\"Chemical\": [\"ExampleChemical\", ...], \"Protein\": [\"ExampleProtein\", ...]}. Each key represents an entity type, and the associated list contains the entities found. \n\nEnsure the response is strictly the JSON format without any extra text.\n\nSource:\n\nHepatocyte growth factor activator inhibitor type 2 (HAI-2) modulates hepcidin expression by inhibiting the cell surface protease matriptase-2.\nMatriptase-2, a recently identified cell surface protease, is the key enzyme of iron homoeostasis modulating the expression of the liver peptide hormone hepcidin. HAI (hepatocyte growth factor activator inhibitor) types 1 and 2 (HAI-1 and HAI-2 respectively) have been shown to inhibit the close homologue, i.e. matriptase. By co-expressing matriptase-2 and the inhibitor HAI-2 we have identified HAI-2 displaying high inhibitory potential against matriptase-2 at the cell surface as well as in conditioned medium. Accordingly, complex formation between matriptase-2 and HAI-2 was demonstrated by isolation of the complex via immobilizing either HAI-2 or matriptase-2 from lysates and conditioned medium of co-expressing cells. Furthermore, HAI-2 indirectly influences the expression of the hepcidin-encoding gene HAMP. The inhibitor abrogates the matriptase-2-mediated suppression of HAMP expression, presumably by inhibiting the supposed potential of matriptase-2 to cleave membrane-bound HJV (haemojuvelin). Taken together, the results of the present study have characterized HAI-2 as an inhibitor of matriptase-2 that modulates the synthesis of hepcidin and provides new insights into the regulatory mechanism of iron homoeostasis, with clinical importance for a treatment of iron overload diseases.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"iron\"], \"Protein\": [\"matriptase\", \"Hepatocyte growth factor activator inhibitor type 2\", \"HAI\", \"(hepatocyte growth factor activator inhibitor) types 1 and 2\", \"Matriptase-2\", \"matriptase-2\", \"HAI-2\", \"hepcidin\", \"protease\", \"HAI-1\", \"HAMP\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.490 | [
{
"content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: I want to train a generative model to synthesize high resolution images.\n\nKey Terms: generative modeling images\n\nDataset Summaries:\nMake3D: The Make3D dataset is a monocular Depth Estimation dataset that contains 400 single training RGB and depth map pairs, and 134 test samples. The RGB images have high resolution, while the depth maps are provided at low resolution.\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\n\nDataset Options: A: Make3D B: CIFAR-10 C: SNIPS D: CelebA E: ConvAI2",
"role": "user"
},
{
"content": "[\"CIFAR-10\", \"CelebA\"]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.863 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nPURPOSE OF REVIEW: To highlight the lessons learned from the Ebola outbreak that may inform our approach to the COVID-19 pandemic, particularly related to the widespread disruption of healthcare, ophthalmic disease manifestations, and vision health systems strengthening for future outbreaks. RECENT FINDINGS: Coronavirus disease 2019 (COVID-19), first detected in China in December 2019, has become a worldwide health emergency, with significant disruption of all aspects of society, including travel, business, and medical care. Although this pandemic has had unprecedented effects on healthcare delivery in the United States, experiences from recent Ebola virus disease (EVD) outbreaks in Africa provide insight and inform our approach to COVID-19 and outbreak preparedness. Like COVID-19, the rapid emergence of Ebola required new clinical and surgical approaches to understand its associated spectrum of ophthalmic complications and the potential for Ebola viral persistence within the eye and in tear film. Recent reports of ophthalmic findings associated with COVID-19 include conjunctivitis, retinopathy, and molecular evidence of virus within the tear film in a minority of cases. Yet, more rigorous approaches to understand ophthalmic disease and transmission risk associated with COVID-19 are needed. Gaps also remain in our understanding of eye disease associated with other high priority emerging infectious diseases including Nipah, Lassa fever, Marburg virus, and others. SUMMARY: Thoroughly understanding the ophthalmic findings and transmission risk associated with COVID-19 is paramount during this pandemic, providing additional measures of safety while resuming ophthalmic care for all patients. Vision health systems preparedness measures developed during recent EVD outbreaks and the current pandemic provide models for ophthalmic clinical practice, research, and education, as we continue to address COVID-19 and future emerging infectious disease threats.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.774 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Molecular Basis of Orb2 Amyloidogenesis and Blockade of Memory Consolidation\nContent:\nAbstract:\nAmyloids are ordered protein aggregates that are typically associated with neurodegenerative diseases and cognitive impairment. By contrast, the amyloid-like state of the neuronal RNA binding protein Orb2 in Drosophila was recently implicated in memory consolidation, but it remains unclear what features of this functional amyloid-like protein give rise to such diametrically opposed behaviour. Here, using an array of biophysical, cell biological and behavioural assays we have characterized the structural features of Orb2 from the monomer to the amyloid state. Surprisingly, we find that Orb2 shares many structural traits with pathological amyloids, including the intermediate toxic oligomeric species, which can be sequestered in vivo in hetero-oligomers by pathological amyloids. However, unlike pathological amyloids, Orb2 rapidly forms amyloids and its toxic intermediates are extremely transient, indicating that kinetic parameters differentiate this functional amyloid from pathological amyloids. We also observed that a well-known anti-amyloidogenic peptide interferes with long-term memory in Drosophila. These results provide structural insights into how the amyloid-like state of the Orb2 protein can stabilize memory and be nontoxic. They also provide insight into how amyloid-based diseases may affect memory processes.\nIntroduction:\nAmyloids, whose cross-β fold has been postulated as the ancestral protein fold[1, 2], were initially associated with fatal neurodegenerative disorders[3, 4]. However, more than a century after the identification of these “pathological amyloids, ” a growing list of so-called “functional amyloids” that fulfils a wide variety of physiological functions has recently emerged[5]. The discovery of functional amyloids raises the question of what causes such a strikingly distinct behaviour to that observed in pathological amyloids. Indeed, it remains unclear what features are shared by functional and pathological amyloids and what determines whether a particular amyloid is functional rather than toxic. One functional amyloid-like protein of particular interest is the cytoplasmic polyadenylation element binding protein( CPEB). The CPEB family of proteins regulates the translation of dormant mRNAs[6, 7], and some members of this family are involved in synaptic plasticity and long-lasting memory[8, 9]. The CPEB isoforms share a common C-terminal catalytic region( RNA-binding domain), but they differ in the N-terminal region. Surprisingly, the N-terminus of some CPEB isoforms in Aplysia, Drosophila, and mouse have features characteristic of self-sustaining amyloidogenic( prion-like) proteins[9–13]. For example, the neuronal specific isoform of Aplysia CPEB( ApCPEB) has a glutamine-asparagine( Q/N)-rich N-terminal domain, which resembles a yeast prion-like domain( PLD)[14], and it is predicted to have conformational flexibility[10]. Indeed, ApCPEB undergoes a conformational transition to a β-sheet-rich state similar to that undertaken by other prion-like proteins[15]. 
In sensory neurons, the neurotransmitter serotonin controls the prion-like switch from the monomeric form to the self-sustaining oligomeric state, which is important for the serotonin-induced increase in synaptic strength[16]. Accordingly, it has been postulated that the switch to the oligomeric and self-perpetuating state contributes to the long-term maintenance of synapse-specific changes[16], providing a molecular mechanism for the persistence of memory[10]. The Drosophila melanogaster orthologue of CPEB, Orb2, has two isoforms that are structurally similar to the ApCPEB isoform: Orb2A and Orb2B[17, 18]. Both forms are expressed in the fly brain and they are required for long-term memory[11, 17, 19]. Orb2A, a synaptic protein, is scarce, and its availability is controlled by phosphorylation[19, 20]. This isoform has eight amino acid residues( YNKFVNFI) preceding the PLD, which are critical for both the efficiency as well as the nature of the amyloid-like oligomers being formed[11]. The longer Orb2B isoform, with a region of 162 residues preceding the PLD, appears to act as a canonical CPEB[21], regulating translation via its RNA-binding domain[18, 21]. The PLD of Orb2A has a low Q/N content compared to ApCPEB( 23. 5% versus 48. 1%), it acts in long-term memory independently of its RNA-binding domain[21], and its mutations prevent memory consolidation[11, 17]. The nervous system is particularly susceptible to amyloid-driven diseases, some of which lead to severe cognitive deficits[22]. Currently, it remains unclear why some amyloids in general are detrimental for neurons, while a seemingly similar amyloid state of some neuronal CPEB proteins is critical for memory stabilization[9, 11]. As with other biological conundrums, a structural/functional analysis of CPEB/Orb2 proteins may shed light on the features that distinguish functional from pathological amyloids, and it may also help us to understand the molecular basis of memory consolidation. Here, we have employed an array of in vitro( bulk and single-molecule biophysics as well as cell culture) and in vivo( Drosophila and yeast) techniques to characterize Orb2 amyloid both structurally and functionally. In addition to dissecting out the characteristics of the Q/N-rich PLD-containing region of Orb2, we have found that a known anti-amyloidogenic peptide inhibits some forms of memory consolidation in Drosophila. Furthermore, comparing with pathological amyloids, we found that although in solution Orb2 can form toxic oligomers, these toxic species rapidly progress to innocuous species. These transient Orb2 species are structurally similar to toxic Huntingtin aggregates and, when abundant, these two proteins form a heteromeric complex. These findings indicate that there are intrinsic structural constraints that prevent functional amyloids to dwell in the toxic conformation. These observations also provide clues as to how pathological amyloids may interfere with the function of those beneficial amyloids.\nDiscussion:\nHere we have analyzed the structural states of Orb2A, an amyloidogenic protein that is important for memory consolidation. We find that Orb2A not only forms self-perpetuating amyloids but it does so extremely quickly and efficiently. Surprisingly, Orb2A has several structural features and properties that are similar to those of pathological amyloids( Fig 8). First, the Orb2A monomer shows a rich conformational polymorphism with mechanical stabilities similar to those of the well-characterized pathological amyloids[41]. 
Second, Orb2 binds to the conformational antibody A11, which detects the toxic oligomeric species of pathological amyloids[45]. Third, like pathological amyloids, Orb2A can form unbranched amyloid fibers and pre-fibrillar oligomers that react against the conformational antibody OC[46, 57–60]. Finally, Orb2A amyloid formation is inhibited by QBP1, a peptide capable of inhibiting amyloid formation in expanded polyQ tracts[41, 51]. What makes Orb2A PLD beneficial despite sharing all these features with pathological amyloids? Orb2 amyloid-like oligomers are formed in response to neuronal stimulation to support memory, which indicates the existence of a highly regulated mechanism in the cell to drive Orb2 amyloid formation and most likely to avoid the toxicity of its A11-reactive oligomers. One such regulation is the tight control of the Orb2A protein level via phosphorylation[20] and possibly other mechanisms. Here we show the existence of additional intrinsic kinetic factors in Orb2A, i. e., the lifespan of A11-reactive oligomers, which is probably related to its efficiency to form a functional amyloid state. We postulate that although the early events in Orb2 amyloid formation are similar to those of pathological amyloids, in subsequent steps it acquires structural features to rapidly evolve to a nontoxic amyloid-like state( Fig 8). For Orb2 proteostasis, these intrinsic structural features could be further enhanced in the cell by association with other components or by the direct modification of proteins, such as phosphorylation[20]. Both the presence of A11-reactive oligomers and the ability to form amyloids efficiently have been also reported for the intracellular nonpathological amyloids of the yeast Sup35 prion[48], which suggests that this may be a general mechanism for some amyloids to avoid cytotoxicity. Still, it must be noted that neuronal CPEB/Orb2 is the only known functional amyloid-like protein in the nervous system. This distinction is fundamental for two reasons: first, the nervous system is particularly susceptible to amyloid-based disease and; second, amyloids are known to interfere with cognitive abilities including memory. The persistence of memories over months and years raises the fundamental question of how memory is protected against molecular turnover[61]. Structural changes in individual proteins and in supramolecular protein assemblies have been proposed to play an important role in long-term memory[62, 63]. The prion features of the specific neuronal CPEB isoform, such as Orb2 in Drosophila[11], or CPEB3 in mouse[9, 12] that can adopt a self-sustaining state at synapses very rapidly, provide a plausible solution as to how memories can remain stable in the face of constant molecular turnover[10, 16, 64]. Although exactly to what extent the self-sustaining amyloidogenic properties observed in vitro reflect the in vivo properties remains unclear, accumulating evidence reinforces the hypothesis that an amyloid-like state of neuronal CPEB is involved in long-lasting memory. In this regard, the present study provides evidence that mutations in Orb2, such as the 5th residue in Orb2A, that are known to interfere with memory consolidation also impair the protein’s ability to attain a self-sustaining amyloidogenic state as monitored by in vitro measurements. Similarly, the expression of QBP1, an amyloid-blocking peptide, impairs memory consolidation( Fig 8). 
However, we are not sure whether the memory-disrupting effect of QBP1 is applicable for all forms of long-term memory or whether the effects of QBP1 in memory are mediated exclusively through the Orb2 protein. Furthermore, in the absence of atomic-level 3D structural analysis of endogenous CPEB protein in the amyloid-like state, it remains unclear exactly to what extent they are similar to toxic amyloids. Our observations also highlight certain features of functional amyloids that may be relevant for neuropathologies of amyloid-based diseases. First, the short life span of Orb2 toxic oligomers supports a therapeutic approximation based on speeding up the conversion of toxic oligomers into stable amyloids[65]. Second, Orb2’s capacity to form toxic conformers resembling those of pathological amyloids, and its sequestration into HttQ128 aggregates( Fig 8), suggests that pathological amyloids might hijack Orb2 into a nonfunctional state, impairing its physiological activity. Third, anti-amyloidogenic compounds that target early stages of amyloidogenic pathways might inadvertently interfere with functional amyloid formation, impairing their associated function. Finally, the polyvalent amyloid blocker QBP1, or improved analogues based on its recently determined structure[66], could be used in the future to block newly formed traumatic memories( Fig 8)[67]. However, we are cognisant of the inherent limitations of our ideas since none of our studies are carried out with human proteins in the human nervous system. These anticipated connections of our in vitro findings with memory need to be explored in detail.",
"role": "user"
},
{
"content": "Amyloids are ordered protein aggregates typically associated with neurodegenerative diseases , such as Alzheimer disease and Parkinson disease , which usually result in cognitive impairment . However , the amyloid state of the neuronal RNA binding protein Orb2 shows a quite opposite behaviour: instead of impairment , it is associated with memory consolidation . Here , we have characterized the structural features of Orb2 from the monomer to the amyloid state in order to determine why it shows this unexplained puzzling behaviour . Surprisingly , we find that Orb2 shares many structural traits with pathological amyloids . However , Orb2 rapidly forms amyloids , and its toxic intermediates are transient . Furthermore , we find that an anti-amyloidogenic peptide that blocks the initial structural transitions in pathological amyloids also blocks Orb2 amyloid formation and memory consolidation . Our results provide structural insights into how Orb2 amyloids can mediate stable memory while being nontoxic and how amyloid-based diseases may affect memory processes . They also suggest a new pharmacological approach for blocking memory consolidation .",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.874 | [
{
"content": "Analyze the provided paragraph from a scientific document. Your goal is to identify and extract relevant entities.\n\nFormat your findings as a JSON object, where each key represents a category of entities, and the corresponding value is a list of entities identified in that category. If no entities are found for a category, return an empty list.\n\nFocus on precision and ensure that the output is strictly in JSON format.\n\nDocument:\n\nOptimization of Image Reconstruction for (90)Y Selective Internal Radiotherapy on a Lutetium Yttrium Orthosilicate PET/CT System Using a Bayesian Penalized Likelihood Reconstruction Algorithm Imaging on a γ-camera with (90)Y after selective internal radiotherapy (SIRT) may allow for verification of treatment delivery but suffers relatively poor spatial resolution and imprecise dosimetry calculation. (90)Y PET/CT imaging is possible on 3-dimensional, time-of-flight machines; however, images are usually poor because of low count statistics and noise. A new PET reconstruction software using a Bayesian penalized likelihood (BPL) reconstruction algorithm (termed Q.Clear) was investigated using phantom and patient scans to optimize the reconstruction for post-SIRT imaging and clarify whether BPL leads to an improvement in clinical image quality using (90)Y. Methods: Phantom studies over an activity range of 0.5-4.2 GBq were performed to assess the contrast recovery, background variability, and contrast-to-noise ratio for a range of BPL and ordered-subset expectation maximization (OSEM) reconstructions on a PET/CT scanner. Patient images after SIRT were reconstructed using the same parameters and were scored and ranked on the basis of image quality, as assessed by visual evaluation, with the corresponding SPECT/CT Bremsstrahlung images by 2 experienced radiologists. Results: Contrast-to-noise ratio was significantly better in BPL reconstructions when compared with OSEM in phantom studies. The patient -derived BPL and matching Bremsstrahlung images scored higher than OSEM reconstructions when scored by radiologists. BPL with a β value of 4,000 was ranked the highest of all images. Deadtime was apparent in the system above a total phantom activity of 3.3 GBq. Conclusion: BPL with a β value of 4,000 is the optimal image reconstruction in PET/CT for confident radiologic reading when compared with other reconstruction parameters for (90)Y imaging after SIRT imaging. Activity in the field of view should be below 3.3 GBq at the time of PET imaging to avoid deadtime losses for this scanner.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"(90)Y\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"selective internal radiotherapy\", \"SPECT/CT\", \"scans\", \"PET imaging\", \"Imaging\", \"dosimetry calculation\", \"PET\", \"evaluation\", \"Selective Internal Radiotherapy\", \"treatment delivery\", \"imaging\", \"post-SIRT\", \"SIRT\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"BPL\", \"software\", \"ordered-subset expectation maximization\", \"Bayesian penalized likelihood (BPL) reconstruction algorithm\", \"Bayesian Penalized Likelihood Reconstruction Algorithm\", \"ranked\", \"Q.Clear\", \"OSEM\"], \"MedicalDevice\": [\"PET/CT\", \"PET/CT System\", \"scanner\", \"PET/CT scanner\", \"\\u03b3-camera\"], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"radiologists\"], \"ResearchActivity\": [\"studies\"], \"SpatialConcept\": [\"parameters\", \"3-dimensional\"], \"Virus\": []}",
"role": "assistant"
}
] |
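
The record above pairs a biomedical paragraph with a JSON object mapping entity categories to lists of extracted mentions. A minimal consistency check for such an answer is sketched below; the category set is copied from the answer keys in the record, and the assumption that every mention should occur verbatim in the paragraph is an illustrative convention, not a stated schema.

import json

# Assumed category set for a MedMentions-style record; taken from the answer
# keys in the record above, illustrative only.
CATEGORIES = {
    "AnatomicalStructure", "Bacterium", "BiologicFunction",
    "BiomedicalOccupationOrDiscipline", "BodySubstance", "BodySystem",
    "Chemical", "ClinicalAttribute", "Eukaryote", "Finding", "Food",
    "HealthCareActivity", "InjuryOrPoisoning", "IntellectualProduct",
    "MedicalDevice", "Organization", "PopulationGroup",
    "ProfessionalOrOccupationalGroup", "ResearchActivity",
    "SpatialConcept", "Virus",
}

def check_ner_record(paragraph, answer_json):
    """Return a list of problems found in a category -> mentions answer."""
    problems = []
    entities = json.loads(answer_json)  # raises json.JSONDecodeError if malformed
    if set(entities) != CATEGORIES:
        problems.append("category keys do not match the expected set")
    for category, mentions in entities.items():
        if not isinstance(mentions, list):
            problems.append("%s: value is not a list" % category)
            continue
        for mention in mentions:
            if mention not in paragraph:
                problems.append("%s: %r not found verbatim" % (category, mention))
    return problems
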
science.data_reco_mcq_sc | science.data_reco_mcq_sc.5773 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: An effective elastic net hypergraph learning model is proposed to explore the high-order relationships among data. The robust\n\nKeywords: image clustering images\n\nDataset Information:\nPA-100K: PA-100K is a recent-proposed large pedestrian attribute dataset, with 100,000 images in total collected from outdoor surveillance cameras. It is split into 80,000 images for the training set, and 10,000 for the validation set and 10,000 for the test set.\nUSPS: USPS is a digit dataset automatically scanned from envelopes by the U.S. Postal Service containing a total of 9,298 16×16 pixel grayscale samples; the images are centered, normalized and show a broad range of font styles.\nBraTS 2017: The BRATS2017 dataset. It contains 285 brain tumor MRI scans, with four MRI modalities as T1, T1ce, T2, and Flair for each scan.\nNYU-VP: NYU-VP is a new dataset for multi-model fitting, vanishing point (VP) estimation in this case. Each image is annotated with up to eight vanishing points, and pre-extracted line segments are provided which act as data points for a robust estimator.\n\nAvailable Datasets: A: PA-100K B: USPS C: BraTS 2017 D: NYU-VP",
"role": "user"
},
{
"content": "USPS",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.35104 | [
{
"content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nsupersymmetry ( susy ) between bosons and fermions at the unification - energy scale is one of the most promising hypothesis , which is expecteted to resolve the remaining problems in the standard model ( sm ) . in particular , the minimal supersymmetric extension of the sm ( mssm )\n@xcite has been extensively studied in the last decade due to the simplicity . for more than ten years\n, we have been developing the system of the automatic computation of the high energy physics processes .\nthe system for the computation of the sm , grace , has been published in @xcite . in including the interactions of susy particles in the grace system\n, we have made several modifications and the expansion of the system @xcite .\nas the first outcome from grace / susy , we have published a package of event - generator , susy23 , which contains 23 specific susy processes for @xmath1 2-body and 3-body @xcite . at this stage ,\nthe model definition files are based on the hikasa s manual @xcite .\nrecently , we have constructed the complete lagrangian of the mssm @xcite using the european convention : namely , the positive chargino is called a particle and the ranges of @xmath2 and @xmath3 are defined as @xmath4 and @xmath5 .\nthus we have published the new version of grace / susy ( grace v2.2.0 ) @xcite , which is available from http://minami-home.kek.jp/ . in the world , there exist several other groups independently developing the systems of the automatic computation in the sm with different methods @xcite , and also developing the systems of the automatic computation in the mssm , feynarts - formcalc @xcite and comphep @xcite . in this paper\n, we present the latest development of the grace / susy system including the one - loop calculations in the mssm .\nin the the mssm , several particles are mixed states , so there are three kinds of way of introducing wavefunction renormalization constants .\nwe adopt the renormalization scheme of the mssm as follows : * the gauge - boson sector : the conventional approach @xcite + ( renormalization constants of wavefunctions are introduced to unmixed bare states and mass counterterms are introduced to mixed mass eigenstates . )\n* the higgs sector : the dabelstein s approach @xcite ; the chargino sector and the neutralino sector : the kuroda s approach @xcite ( see also @xcite ) + ( renormalization constants of wavefunctions are introduced only to unmixed bare states . ) * the matter - fermion sector and the sfermion sector : the kyoto approach @xcite + ( renormalization constants of wavefunctions are introduced only to mixed mass eigenstates . ) for the tree - level calculations , we first check the gauge invariance of amplitudes at a point of the phase space before the integration . in the grace system ,\nthe gauge invariance check is automatically carried out using the covariant gauge and the unitary gauge . in the sm , we have also checked grace with the non - linear gauge @xcite . 
in the mssm\n, we have already checked the gauge invariance for 582,102 processes with up to six - external particles within quadruple precision @xcite .\nfor the one - loop calculations , we check the invariance of cross sections varying three parameters , the uv constant ( @xmath6 ) , the fictitious photon mass ( @xmath7 ) and the cutoff energy of the soft photon ( @xmath8 ) . as an example of the invariance checks , the result for the process @xmath9 at @xmath10 gev\nis shown in ref .\n@xcite , using the same input parameters as in ref .\n@xcite .\n[ table : eq1 ] [ cols=\">,>,>,>,>,>,>\",options=\"header \" , ] + table 2 .\nthe tree masses and the full one - loop masses of @xmath11 and @xmath12 are compared with those given by dablestein @xcite for the values of the input parameters given in ( 2 ) . as an application\n, we consider the production of the lighter cp - even higgs @xmath11 @xcite .\nfirst , we compare the cp - even higgs masses , @xmath13 and @xmath14 , with the results of the dabelstein @xcite . in table 1 and table 2 ,\nthe tree masses , one - loop masses are compared using parameters in ( 1 ) and ( 2 ) .\nwe can claim the agreement is satisfactory .\n@xmath15 @xmath16 we have investigated the process @xmath0\n@xcite , using the input values ( 1 ) except @xmath17 gev , @xmath18 gev , @xmath19 and @xmath20 gev .\nwe found that the cross sections are not sensitive to @xmath21 at @xmath22 gev . in figure\n[ fig : z0h0 ] we show the results for @xmath23 gev . .\nthe solid ( dashed ) line shows the radiative corrected ( born ) cross sections in pb at @xmath22 gev . ]\nwe have developed the system for the automatic computation of cross sections , grace / susy , including the one - loop calculations for processes in the mssm . for an application\n, we investigate the process @xmath0 .\nremaining tasks for us are : * checking grace / susy/1loop with the non - linear gauge in the mssm * checking grace / susy/1loop for the invariance of cross sections on the uv constant in other processes , for example , sfermion productions and neutralino productions .\nthis work was partly supported by japan society for promotion of science under the grant - in - aid for scientific research b ( no.14340081 )\n. 9 h.p .\nnilles , phys . rep .\n110 ( 1984 ) , 1 .\n+ h.e . haber and g.l .\nkane , phys .\n117 ( 1985 ) , 75 .\nt. ishikawa , et al .\n, grace manual version 1.0 , kek report92 - 19 ( 1993 ) .\n+ f. yuasa , et al .\nphys . suppl .\n138 ( 2000 ) , 18 .\nt. kon , in electroweak interactions and unified theories proceedings of xxxth rencontres de moriond , les - arcs , savoie , france , march 11 - 18 , 1995 , edited by j. tran thanh van , ( ditions fronties , gif - sur - yvette cedex , 1996 ) , p.287 .\n+ m. jimbo , et al . , in new computing techniques in physics research iv\n proceedings of the fourth international workshop on software engineering , artificial intelligence and expert systems for high energy and nuclear physics ( aihenp95 ) , pisa , italy , april 3 - 8 , 1995 , edited by b. denby and d. perret - gallix , ( world scientific , singapore , 1995 ) , p.149\n. + t. kaneko , et al .\n, in proceedings of the workshop on physics and experiments with linear colliders , morioka - appi , iwate , japan , september 8 - 12 , 1995 , edited by a. miyamoto et al .\n, ( world scientific , singapore , 1996 ) , p.579 .\nh. tanaka , et al .\ninstrum . meth . a 389 ( 1997 ) , 295 .\nj. fujimoto , et al .\n111 ( 1998 ) , 185 .\nk. 
hikasa , susy manuscript , version july 5 , 1995 , unpublished .\nm. kuroda , kek cp-080 ( 1999 ) , hep - ph/9902340 .\n+ see also , j. rosiek , phys .\nd41 ( 1990 ) 3464 ; erratum\nka - tp-8 - 1995 , hep - ph/9511250 .\nj. fujimoto , et al .\ncommun . 153\n( 2003 ) , 106 .\ne. boos , et al .\n, in new computing techniques in physics research , edited by d. perret - gallix and w. wojcik , ( ' edition du cnrs , paris , 1990 ) , p.573 .\n+ e. boos , et al .\n, in new computing techniques in physics research ii , edited by d. perret - gallix , ( world scientific , singapore , 1992 ) , p.665 .\n+ a. pukov , in new computing techniques in physics research iii , edited by k .- h .\nbecks and d. perret - gallix , ( world scientific , singapore , 1994 ) , p.473 .\nj. kblbeck , m. bhm and a. denner , comput .\n. commun . 60\n( 1990 ) , 165 .\n+ r. mertig , m. bhm and a. denner , comput .\n64 ( 1991 ) , 345 .\n+ a. denner , et al .\nb 291 ( 1992 ) , 278 . ;\nb 387 ( 1992 ) , 467 .\n+ r. mertig , in new computing techniques in physics research iii , edited by k .- h .\nbecks and d. perret - gallix , ( world scientific , singapore , 1994 ) , p.467 .\n+ h. eck and j. kblbeck , ibid .\nt. stelzer and w.f .\nlong , comput .\n81 ( 1994 ) , 357 .\n+ f. maltoni and t. stelzer , jhep 0302 ( 2003 ) , 027 . t. hahn and m. prez - victoria , comput .\ncommun . 118 ( 1999 ) , 153 .\nt. hahn , nucl .\nsuppl . 89 ( 2000 ) , 231 . ; comput .\ncommun . 140\n( 2001 ) , 418 .\n+ t. hahn and c. schappacher , comput .\ncommun . 143 ( 2002 ) , 54 .\ngorbunov and a.v .\nsemenov , hep - ph/0111291 .\n+ a. semenov , nucl .\ninstrum . meth .\na502 ( 2003 ) , 558 .\nm. bhm , w. hollik and h. spiesberger , fortschr .\nc34 ( 1986 ) , 687 .\na. dabelstein , z. phys .\nc67 ( 1995 ) , 495 .\nm. kuroda , in research report to the ministry of education , science and culture , japan , the grant - in - aid for scientific research c ( no.08640391 ) , ( 1999 ) , p.127 .\nt. fritzsche and w. hollik , eur .\nj. c24 ( 2002 ) , 619 .\naoki , et al .\n73 ( 1982 ) , 1 .\ng. blanger , et al .\n, lapth-982 - 03 , kek - cp-138 ( 2003 ) , hep - ph/0308080 . t. ishikawa , gauge invariance check in susy processes , unpublished .\n+ ( results were partly shown at qfthep97 , samara , 1997 ) .\nj. fujimoto , et al .\n, to appear in proceedings of qfthep2003 , hep - ph/0402144 .\ndiaz , s.f .\nking and a. ross , nucl .\nphys . b529\n( 1998 ) , 23 .\nv. driesen , w. hollik and j. rosiek , z. phys .\nc71 ( 1996 ) , 259 .\n+ s. heinemeyer , et al .\nj. c19 ( 2001 ) , 535 .",
"role": "user"
},
{
"content": "we have developed the system for the automatic computation of cross sections , grace / susy , including the one - loop calculations for processes of the minimal supersymmetric extension of the the standard model . \n for an application , we investigate the process @xmath0 .",
"role": "assistant"
}
] |
science.scierc_re | science.scierc_re.106 | [
{
"content": "Examine the provided abstract from a computer science paper. Your objective is to identify and extract relationships between entities. Present these relationships in a structured JSON format.\n\nInput Text:\n\nThe problem of predicting image or video interestingness from their low-level feature representations has received increasing interest. As a highly subjective visual attribute, annotating the interesting-ness value of training data for learning a prediction model is challenging. To make the annotation less subjective and more reliable, recent studies employ crowdsourcing tools to collect pairwise comparisons -- relying on majority voting to prune the annotation outliers/errors. In this paper, we propose a more principled way to identify annotation outliers by formulating the interestingness prediction task as a unified robust learning to rank problem, tackling both the outlier detection and interestingness prediction tasks jointly. Extensive experiments on both image and video interestingness benchmark datasets demonstrate that our new approach significantly outperforms state-of-the-art alternatives.",
"role": "user"
},
{
"content": "[[\"outlier detection\", \"interestingness prediction tasks\", \"CONJUNCTION\"], [\"way\", \"annotation outliers\", \"USED-FOR\"], [\"crowdsourcing tools\", \"pairwise comparisons\", \"USED-FOR\"], [\"majority voting\", \"crowdsourcing tools\", \"USED-FOR\"], [\"interestingness prediction task\", \"annotation outliers\", \"USED-FOR\"], [\"unified robust learning\", \"interestingness prediction task\", \"USED-FOR\"], [\"way\", \"interestingness prediction tasks\", \"USED-FOR\"], [\"way\", \"outlier detection\", \"USED-FOR\"], [\"unified robust learning\", \"rank problem\", \"USED-FOR\"], [\"image and video interestingness benchmark datasets\", \"way\", \"EVALUATE-FOR\"], [\"way\", \"state-of-the-art alternatives\", \"COMPARE\"], [\"low-level feature representations\", \"predicting image or video interestingness\", \"USED-FOR\"], [\"majority voting\", \"annotation outliers/errors\", \"USED-FOR\"]]",
"role": "assistant"
}
] |
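
The record above encodes its answer as [head entity, tail entity, relation type] triples. A small sketch that groups such triples by relation type follows; the sample triples are copied from the record, and the grouping itself is only an illustrative convenience.

from collections import defaultdict

# Triples as in the record above: [head entity, tail entity, relation type].
triples = [
    ["outlier detection", "interestingness prediction tasks", "CONJUNCTION"],
    ["crowdsourcing tools", "pairwise comparisons", "USED-FOR"],
    ["way", "state-of-the-art alternatives", "COMPARE"],
]

by_type = defaultdict(list)
for head, tail, rel in triples:
    by_type[rel].append((head, tail))

for rel, pairs in sorted(by_type.items()):
    print(rel, pairs)
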
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.17487 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nthe cosmic microwave background ( cmb ) data shows an extreme homogeneity in the apparently causally disconnected regions in the sky @xcite . in some sense , according to the cmb picture , the universe looks a lot older than it really is . in principle\n, there are two main lines of reasoning to explain this : either our counting and notion of the `` time '' at the birth of the time and the birth of universe is completely wrong and the correct picture of time that will emerge in full `` quantum gravity '' theory is so dramatically different that the universe really had enough time to solve the horizon and related problems .\nthis point of view outcasts the problems to the putative quantum gravity , while it may be a viable solution , there is not much one can do about it right now in the absence of a proper theory . or , as a second , more conventional , option the universe went through a brief inflationary period ( an accelerated expansion phase with a shrinking comoving hubble radius ) which can be comfortably handled within a classical space - time picture governed by general relativity with a caveat @xcite .\nthe inflationary expansion should be brief , meaning pure general relativity with a positive cosmological constant is ruled out as it predicts inflation forever and hence the caveat : with conventional matter ( renormalizable fields in the standard model ) one does not have an inflating universe with correct amount of friction to stop inflation . to solve\nthe problem one adds additional fields , such as a scalar field , `` the inflaton '' .\nbut without the constraints coming from the stringent requirements of renormalization in quantum field theory , the possibilities of self - interaction ( the potential ) of the scalar field is practically endless .\nso with additional fields one can have a successful inflationary period as well as required tiny inhomogeneities , but without the lack of a proper fundamental principle , there is no way to find what these fields are .\nhence the plethora of the theories exhibiting inflationary epochs in the early universe .\nanother route to inflation is to modify gravity at short distances .\none of the earliest and most successful approaches is that of starobinsky @xcite adopting a quadratic modification of general relativity lagrangian as @xmath1 which yields consistent observables today , such as the scalar tilt and scalar / tensor ratio of the fluctuations @xcite .\nbut , overall , this model is equivalent to- after a conformal transformation of the metric - general relativity coupled to a scalar field with a specific potential .\nthis is expected since , the quadratic theory not only has a massless spin-2 graviton , but also a massive spin-0 graviton in its spectrum , and the latter becomes a massive scalar field in the conformally related metric frame .\nmany successful inflation models seem to be related to the starobinsky model @xcite , hence to general relativity with a scalar field of some specified ( exponential ) self - interaction which has no place in a renormalizable four dimensional quantum field theory . 
in this work we propose and pursue a new approach , built on the wisdom of conventional inflationary scenarios that the spacetime during inflation is a classical pseudo - rimennian manifold described by the metric , but is distinctly different from the earlier approaches in one important aspect : we propose that no new fields ( such as an inflaton ) and no additional degrees of freedom ( such as the ones in quadratic gravity ) are to be introduced to einstein s gravity .\nthis line of reasoning can be considered as a bottom - up approach of model building and can be briefly explained as follows . while we do not know what the full ( microscopic ) quantum gravity theory is\n, we can still ask the following question : what properties should a _ low energy _ quantum gravity theory have , as long as the curved space formulation of gravity is correct ; namely , as long as gravity is described by a metric and spacetime is effectively a pseudo - riemannian manifold ?\na possible answer to this question could come from the lowest order theory , that is einstein s general relativity ( gr ) .\ngr is obviously diffeomorphism invariant , a property which is very easy to implement in other low energy quantum gravity theories cousins of gr that work better at higher energies but not a sufficiently restrictive one .\nthere are two other properties of gr which are usually not discussed in building low energy quantum gravity theories : 1- the _ uniqueness _ of the maximally symmetric vacuum which is the flat space in the absence of a cosmological constant and the ( anti)-de sitter space in the presence of it .\n2- the _ masslessness _ of the unitary graviton as an excitation about the vacuum . according to the accepted wisdom , at higher energies the lagrangian density of gr will be augmented with more powers of curvature which immediately ruin the above mentioned two properties .\nfor example , in general when @xmath2 terms are added , generically one has @xmath3 maximally symmetric solutions ( for @xmath4 ) and depending on the details of the added terms , many new excitations about any one of the vacua .\nit is very hard to make these additional excitations unitary , namely , non - ghosts and non - tachyons .\nfor example , the quadratic theory @xmath5 is renormalizable but has a massive spin-2 ghost and a massive spin-0 particle besides the massless spin-2 graviton @xcite .\nrecently @xcite , a higher derivative theory with arbitrary powers of curvature that share the two salient features of general relativity was found . here , we show that , this minimal extension of general relativity , without the need of additional fields , has an inflationary period , a quasi - de sitter phase with enough number of @xmath0-foldings required to solve the horizon and related problems .\nlet us briefly introduce the theory .\nour rather long search for a theory that has a unique maximally symmetric solution and a single massless spin-2 excitation about this vacuum , as a minimal extension of einstein s gravity , yielded the following lagrangian density in four dimensions @xcite @xmath6.\\label{eq : generic_bi_intro}\\ ] ] note that the action is diffeomorphism invariant and given as @xmath7 . 
here\n, @xmath8 is the minimal symmetric matrix @xmath9 given as @xmath10 where @xmath11 , @xmath12 and @xmath13 are arbitrary real numbers and @xmath14 is the born - infeld parameter with dimensions of @xmath15 $ ] .\nits numerical value will be fixed approximately to the scale the inflation takes place which is several orders of magnitude smaller than the planck scale . here\n@xmath16 is the weyl - tensor and @xmath17 is the traceless ricci tensor .\nthe properties , the construction of this theory and its @xmath3 dimensional generalization are noted elsewhere @xcite based on @xcite , hence we just recapitulate some of the results below . defining the dimensionless cosmological constant as @xmath18 and the bare one as @xmath19 , for any values of @xmath20 and @xmath21 , * the theory has only a single massless spin-2 particle in spectrum about its flat or ( anti)-de sitter space . *\neffective ( dimensionless ) cosmological constant of the maximally symmetric solution ( @xmath22 obeys the equation @xmath23 * effective newton s constant is related to the bare one and @xmath24 via @xmath25 * the theory when expanded in small curvature ( @xmath26 ) , first reproduces gr as demanded by low energy observations , and at the quadratic order , it reproduces gr modified with the gauss - bonnet theory , which at the classical level is just gr and at the cubic and the quartic order specific theories studied recently @xcite .\nseveral pertinent issues need to be mentioned here : the line of thought that led to this theory started with long time ago with eddington s proposal @xcite of writing determinantal actions in the palatini formulation which have picked up momentum both in the metric @xcite and palatini form in the recent works @xcite . in the metric formulation ,\nthe major stumbling block was to construct unitary theories about the ( anti)-de sitter vacua , since the flat case is not restrictive enough . in what follows we shall study the quasi - de sitter solution in this theory very early in the universe where\nhigher curvature terms can not be neglected .\nlet us consider the simple spatially flat robertson - walker solution as favored by the data @xmath27 the field equations coming from the variation of ( [ eq : generic_bi_intro ] ) are complicated @xcite and are not needed for our current purposes .\nthe usual procedure is to insert the metric into the action and define the hubble parameter as @xmath28 and then derive a single equation containing the hubble parameter and its time derivatives . but this procedure of reducing the action requires extreme care as the symmetry group `` integrated - out '' here is not a compact one and so not covered by palais symmetric criticality guarantee @xcite .\none can not simply vary the action with respect to the scale factor or the hubble parameter as these will yield necessary but not sufficient conditions .\n[ even the recent literature is full of this subtle mistake which leads to metrics that do not solve the @xmath29 component of the field equations . ] .\nweinberg s computation @xcite is robust and error - free which we follow . 
inserting [ rw ] to ( [ eq : generic_bi_intro ] ) and dropping an overall irrelevant constant we arrive at @xmath30 where we skipped the arguments of the functions which all depend on the coordinate time @xmath31 .\nthe remarkable property of our theory is that just like einstein s gravity the second time derivative of the hubble parameter does not appear in the reduced action in contrast to other higher derivative theories .\none finds that @xmath32 where , after defining a dimensionless hubble parameter @xmath33 , and a dimensionless time @xmath34 one has @xmath35 @xmath36 where derivatives are with respect to the dimensionless time in these two expressions .\nit is clear that for the rw metric , the weyl tensor vanishes and the parameter @xmath11 did not contribute to the reduced action .\nnote also that for the particular case @xmath37 and @xmath38 , one has a purely quartic theory since @xmath39 whose spherically symmetric solutions were studied in @xcite .\nequipped with this , the necessary and sufficient equation to solve is the one coming from @xmath40 which yields @xcite @xmath41 the resulting nonlinear ode is cumbersome and not particularly illuminating to depict here .\nbut , what is important is the following : assuming a constant hubble parameter @xmath42 , the equation reduces to @xmath43 which is of course nothing but the vacuum equation ( [ eff_cosmo ] ) with @xmath44 .\nthere is a unique viable solution as long as @xmath45 . for the moment\nlet us assume that this pure de sitter solution is @xmath46 .\nthen the scale factor grows as @xmath47 .\nthis state of affairs can not go on forever , inflation must end with a grateful exit after a sufficient number of @xmath0-foldings say , @xmath48 , which is around 50 - 70 .\nfor this purpose let us consider the stability of de sitter phase .\nlet @xmath49 denote a perturbation of the form @xmath50 inserting this in ( [ eom ] ) and expanding up to the first order in @xmath51 and making use of ( [ ds ] ) , one arrives at a first order differential equation whose solution is @xmath52 where @xmath53 note that the parameters @xmath13 and @xmath12 play a role in the full nonlinear evolution of the hubble parameter but not its de sitter value and on the linearization about the de sitter value .\ninflation ends when @xmath54 as it shows the instability of the de sitter solution .\nbut before it ends , the universe expands by an @xmath0-folding of @xmath55 , which yields @xmath56 using ( [ ds ] ) , we can relate the bare dimensionless cosmological constant to the number of @xmath0-foldings as @xmath57 consider the extreme case of @xmath58 , which yields @xmath59 and @xmath60 , it i.e. 
the point where the inverse of the effective newton s vanish and the solution is pure de sitter .\nthe discriminant of the quartic equation ( [ ds ] ) is @xmath61 which is negative for @xmath62 , hence there are two real and two complex roots and any finite number of desired @xmath0-foldings can be accommodated given an appropriate @xmath63 .\nfigure 1 shows the relation .\n-foldings , width=7 ] there is one final piece of puzzle we have to settle : number of @xmath0-foldings when written in terms of @xmath24 is simply @xmath64 which requires @xmath65 .\ndemanding @xmath66 , which is the newton s constant at low energies right after the inflation , one must have the effective newton s constant @xmath67 at high energies .\nthe fact that the newton s constant run with curvature an becomes negative as the ( scalar ) curvature is @xmath68 , gives the required repulsive era for gravity . during the inflation ,\nthe curvature is reduced and the newton s constant start to grow which provides the needed `` friction '' mechanism or stopping the inflationary period . once the curvature is diluted and the universe becomes flat\n, one has the usual positive newton s constant @xmath69 which can be taken numerically as today s value . to give some numerical values , let us demand @xmath70 @xmath0-foldings , then , @xmath71 .\nif the inflation occurred at a high energy scale @xmath72 gev , then our dimensionful parameter @xmath14 is around this value , basically the relation is @xmath73 , which is numerically a very small value making sure that higher derivative terms do not dominate when the curvature is small , after the inflation .\nwe studied the inflationary phase of a pure higher derivative gravity theory that has the same particle spectrum as general relativity , namely only a massless spin-2 graviton .\nthe effective newton s constant is a function of the curvature and in the maximally symmetric case of the robertson - walker spacetime , a function of the effective cosmological parameter , a dimensionless number , we we called @xmath24 that decreases as the universe expands . when the curvature is high in the pre - inflation moments of the universe , higher derivative terms dominate and the theory is repulsive , but as the universe expands , the curvature becomes flatter , as @xmath24 decreases and the effective newton s constant start to grow and becomes positive after passing through the fixed point given by @xmath74\nthis allows a rather interesting freedom for the theory : a quasi - de sitter solution with any number of @xmath0-foldings is possible depending on the value of the bare cosmological constant .\nthe theory is a close cousin of einstein s gravity in many aspects but , unlike cosmological einstein s gravity , it finds a way out of never - ending inflation due the above mentioned behavior of the effective newton s constant .\nwhat we have presented here is a suggestion that pure gravity with higher derivative terms , that has the same perturbative spectrum as einstein s theory and no other additional fields , can can have a period of inflation and an exit from inflation after sufficient number of @xmath0-foldings .\n[ see @xcite for another point of view where gravity is that of einstein s theory coupled to non - dynamical ( auxiliary ) fields that necessarily involve higher derivative . 
] for the theory , presented here is to be considered as an effective theory with higher curvature terms playing a major role during inflation and it can be expanded by adding more powers inside the determinant while keeping the particle spectrum intact .\nthis will also change the behavior of the effective newton s constant .\nafter the higher derivative terms play their role , and the curvature is diluted , the ensuing theory is the einstein s gravity coupled to sources , such as the standard model fields or the perfect fluid .\nfor the theory to be a viable one , one must work out the density perturbations during inflation : for this purpose there are various possibilities of minimal or non - minimal coupling as our theory is a determinantal form .\nthis work was supported by the tubitak grant 113f155 .\ni would like to thank m. gurses , a. karasu , p. olver , a. erkip and o. kisisel for useful discussions on nonlinear ode and a. kaya , e. o. kahya and t.c .\nsisman on inflation and related matters .",
"role": "user"
},
{
"content": "we show that without introducing additional fields or extra degrees of freedom , a specific higher derivative extension of einstein s gravity that has only a massless spin-2 excitation in its perturbative spectrum , has an inflationary period , a quasi - de sitter phase with enough number of @xmath0-foldings required to solve the horizon and related problems . \n the crucial ingredient in the construction is the curvature dependence of the effective newton s constant .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.4658 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nweakly interacting massive particles ( wimps ) , and in particular the lightest neutralino in supersymmetric models , are a well motivated dark matter candidate @xcite .\nwimps can be detected directly , in the lab , via the elastic scattering of wimps on detector nuclei @xcite .\nexperiments now have the sensitivity required to probe the theoretically favoured regions of parameter space @xcite and the cdms experiment has recently observed two events in its wimp signal region @xcite .\nneutrons , from cosmic - ray induced muons or natural radioactivity , can produce nuclear recoils which ( on an event by event basis ) are indistinguishable from wimp induced recoils .\nfurthermore perfect rejection of other backgrounds is impossible , for instance ` surface events ' ( electron recoils close to the detector surface ) in the case of cdms .\nas highlighted by the cdms events , as well as the long - standing dama annual modulation signal @xcite , demonstrating the wimp origin of a putative signal is absolutely crucial .\nthe direction dependence of the wimp scattering rate ( due to the earth s motion with respect to the galactic rest frame ) @xcite provides a potential wimp ` smoking gun ' . assuming that the wimp distribution is predominantly smooth ,\nthe peak wimp flux comes from the direction of solar motion ( towards the constellation cygnus ) and the recoil rate is then peaked in the opposite direction .\nthe recoil rate , in the galactic rest frame , is highly anisotropic ; the rate in the forward direction is roughly an order of magnitude larger than that in the backward direction .\na detector capable of measuring the nuclear recoil vectors ( including the sense + @xmath1 versus -@xmath2 ) in 3-dimensions , with good angular resolution , could reject isotropy of the recoils with only of order 10 events @xcite .\nmost , but not all , backgrounds would produce an isotropic galactic recoil distribution ( due to the complicated motion of the earth with respect to the galactic rest frame ) . an anisotropic galactic recoil distribution would therefore provide strong , but not conclusive , evidence for a galactic origin of the recoils .\nconfirmation of the wimp origin could be obtained by verifying that the properties of the anisotropy match the expectations for wimp induced recoils . assuming the wimp distribution is dominated by a smooth component , the median inverse recoil direction\nshould be compatible with the direction of solar motion . to summarise , a wimp search strategy with a directional detector\ncould be divided into a sequence of phases : + 1 .\nsearch phase ( detection of non - zero recoil signal ) + 2 .\ndetection of anisotropy + 3 .\nstudy of properties of anisotropy + which require successively larger numbers of events ( and hence larger exposures ) .\nthe initial simple search phase aims to detect an anomalous recoil signal above that expected from backgrounds .\nto claim an anomalous signal inconsistent with zero at 95% confidence requires 4 - 5 events .\nthe second step , as discussed above , would be to check whether the galactic recoil directions are anisotropic and would require of order 10 events ( assuming zero background ) .\nin this paper we focus on the third phase , examining how measuring the median recoil direction could be used to provide confirmation of the wimp origin of an anisotropic recoil signal . 
in section [ model ]\nwe describe the input to our monte carlo simulations . in section [ res ]\nwe present our results , before concluding with discussion in sec .\nwe use the same statistical techniques and methods for calculating the directional nuclear recoil spectrum as in refs . @xcite .\nwe briefly summarise these procedures here , for further details see these references and ref .\n@xcite .\nmany of the directional detectors currently under development @xcite are low pressure gas time projection chambers ( tpcs ) , e.g. dmtpc @xcite , drift @xcite and newage @xcite .\nwe therefore simulate a fairly generic tpc based detector .\nwe assume that the the recoil directions , including their senses , are reconstructed perfectly in 3d .\nthese are optimistic assumptions , therefore our results provide a lower limit on the number of events / exposure required by a real tpc based detector . for concreteness\nwe use a @xmath3 target with an energy threshold of 20 kev .\nfinite angular resolution does not significantly affect the number of events required to detect anisotropy @xcite , provided it is not worse than of order @xmath4 .\naxial and/or 2-d read - out would , however , significantly degrade the detector capability @xcite .\nthe detailed angular dependence of the recoil rate depends on the exact form of the wimp velocity distribution @xcite .\nhowever , if the wimp velocity distribution is dominated by a smooth component the main features of the recoil distribution ( rear - front asymmetry , median direction opposite to the direction of solar motion ) are robust ( see e.g. ref .\n@xcite ) ) . for concreteness\nwe use the standard halo model halo , an isotropic sphere with local density @xmath5 and a maxwellian / gaussian velocity distribution with three dimensional velocity dispersion @xmath6 , and fix the wimp mass at @xmath7 .\nnumerical simulations find velocity distributions with stochastic features at large speeds @xcite .\nkuhlen et .\n@xcite find that for high speed wimps ( @xmath8 ) the direction in which the wimp flux is largest can deviate by more than @xmath9 from the direction of solar motion .\nthe effect on the median recoil direction will be substantially smaller than this however .\nfirstly the set - up we are considering ( @xmath3 target with energy threshold of 20 kev and @xmath7 ) is sensitive to wimps with @xmath10 .\ntherefore a high speed feature will only contribute a small fraction of the wimp flux .\nthis will be true in general unless the wimp mass is small and/or the threshold energy is large . secondly , see e.g. fig . 3 of ref .\n@xcite , due to the elastic scattering the recoil rate is less anisotropic than the wimp flux .\nthe deviation of the inverse median direction from the direction of solar motion will therefore be substantially smaller than the deviation of the peak wimp flux .\nit will also depend on the ( a priori unknown ) velocity and density of the feature .\nfor instance , for a stream with velocity , in galactic coordinates , @xmath11 the difference between the inverse median direction and the solar direction only exceeds @xmath12 if the fraction of the local density in the stream exceeds @xmath13 @xcite . since the effect of a feature in the speed distribution on the median recoil direction\nis expected to be small we do not investigate it in this study . 
if , with a large number of events , a statistically significant deviation of the inverse median direction were found , its origin could be investigated by studying the energy dependence of the deviation .\nwe defer an investigation of this to future work . finally , direct detection experiments probe the ultra - local dark matter distribution on scales many orders of magnitude smaller than those resolved by simulations .\nit is not ( and may never be ) possible to directly calculate , or otherwise measure , the dark matter distribution on the relevant scales . in this case\nif wimps are detected , then the directional recoil rate could be used to reconstruct the ultra - local dark matter distribution @xcite .\nthe recoil rate is peaked in the direction opposite to the direction of solar motion . to allow ease of comparison with the direction of solar motion we use the inverse recoil directions ( i.e. the directions from which the recoils originate ) in our analyses .\nwe first examine how the accuracy with which the median galactic recoil direction is determined depends on the number of wimp events .\nthe median direction is defined as the direction @xmath14 which minimises the sum of the arclengths between @xmath15 and the individual inverse recoil directions @xmath16\nit is found by minimising @xmath17 where @xmath18 is the number of events . for a given number of wimp events , @xmath19\n, we simulate @xmath20 experiments and in each determine the direction , @xmath15 , by minimizing eq .\n( [ m ] ) and hence calculate @xmath21 , @xmath22 the angle between the median direction and the direction of solar motion , @xmath23 . in fig .\n1 we plot the 50% and 95% percentiles of the distribution of @xmath21 as a function of @xmath24 .\nwe also investigate the effects of non - zero ( isotropic ) backgrounds .\nwe parametrize the background rate in terms of the fraction of events which are signal , @xmath25 where @xmath3 and @xmath26 are the signal and background rates respectively ( c.f .\n@xcite ) . for comparison\nwe also plot the @xmath27 and @xmath28 percentiles for an purely isotropic distribution .\n, as a function of the number of wimp events , @xmath19 , for varying signal fractions ( from top to bottom ) @xmath29 and @xmath30 .\nthe solid and dashed lines are the @xmath31 and @xmath28 percentiles .\nthe dotted lines are ( from top to bottom ) the @xmath28 and @xmath27 percentiles for an isotropic recoil distribution.,width=321 ] we now determine the number of events required to confirm the direction of solar motion as the median inverse recoil direction at 95% confidence .\nwe do this using the same methodology as in ref .\nbriefly , we use the distribution of @xmath21 for wimp induced recoils and for the null hypothesis of isotropic recoils , to calculate the rejection and acceptance factors , @xmath32 and @xmath33 .\nthe rejection factor gives the confidence level with which the null hypothesis can be rejected given a particular value of the test statistic , while the acceptance factor is the probability of measuring a larger value of the test statistic if the alternative hypothesis is true , we then find the number of wimp events required to give @xmath34 i.e. to reject the median direction being random at 95@xmath35 confidence in @xmath31 of experiments .\nthis is shown in fig .\n[ nlambda ] , as a function of the signal fraction @xmath36 . for zero background ( i.e. 
@xmath37 )\n31 events are required , a factor of @xmath0 more than are required to simply reject isotropy .\nthe number of events required increases as @xmath36 is decreased , initially fairly gradually and then , once the signal becomes subdominant ( @xmath38 ) , more rapidly .\nbillard et al .\n@xcite have recently investigated using a sky - map based likelihood analysis to probe the correlation of the directional recoil rate with the direction of solar motion ( and hence confirm the wimp origin of a signal ) .\ntheir results are qualitatively consistent with ours .\nfor instance they find that , for a signal fraction @xmath39 , the peak signal direction can be confirmed to be within @xmath40 of the direction of solar motion with as few as 25 wimp events .\nthe median direction method is however faster and more robust .\nit particular it has the advantage of being model independent ( i.e. one does not need to assume an exact form of the wimp velocity distribution ) . , required to reject the median direction being random at 95@xmath35 confidence in @xmath31 of experiments on the signal fraction , @xmath36.,width=321 ]\nif desired , further confirmation of the wimp origin of an anisotropic recoil signal could be obtained by studying the variation of the median recoil direction in the lab frame @xcite . over the course of a sidereal day the peak in the recoil distribution traces out a small circle on the sky . for a detector located in the northern hemisphere during a sidereal day\nthe peak recoil direction in the lab rotates ( roughly ) from down to south and back .\na periodogram analysis could be used to verify that median direction varies over a sidereal , rather than solar , day ( c.f .\n@xcite for the annual modulation ) .\nin this paper we have investigated how the median recoil direction in directional detection experiments can be used as a wimp signal . assuming a smooth wimp distribution , the peak wimp flux is from the direction of solar motion and the median recoil direction is in the opposite direction .\nan ideal detector could reject isotropy of recoils with only of order 10 events @xcite .\nconfirmation of the wimp origin of an anisotropic recoil signal could be obtained by studying the details of the anisotropy and in particularly confirming that the median inverse galactic recoil direction coincides with the direction of solar motion .\nwe find that , with an ideal detector and zero - background , to confirm the direction of solar motion as the median inverse recoil direction at 95% confidence requires 31 events ( see also ref .\nnon - zero isotropic background would increase this number , significantly if the signal is sub - dominant . for concreteness ( and in the absence of a definitive alternative ) we have used the standard halo model halo , which has an isotropic maxwellian speed distribution . while the detailed angular dependence of the recoil rate depends on the exact form of the wimp velocity distribution ,\nif the wimp velocity distribution is dominated by a smooth component the median inverse recoil distribution will be close to the direction of solar motion ( see e.g. 
ref .\nrecent high resolution simulations have found stochastic , features in the speed distribution at large speeds @xcite .\nthe direction in which the flux of high speed wimps is largest can deviate by more than @xmath9 from the direction of solar motion @xcite .\nhowever the deviation of the median inverse direction will be small compared with that expected from an isotropic recoil direction .\nwe therefore conclude that features in the speed distribution at high speed are unlikely to affect the utility of the median recoil direction as a wimp signal . if , with a large number of events , a statistically significant deviation of the inverse median direction were found , its origin could then be investigated by studying the energy dependence of the deviation .",
"role": "user"
},
{
"content": "direct detection experiments have reached the sensitivity to detect dark matter wimps . demonstrating that a putative signal is due to wimps , and not backgrounds , is a major challenge however . \n the direction dependence of the wimp scattering rate provides a potential wimp ` smoking gun ' . \n if the wimp distribution is predominantly smooth , the galactic recoil distribution is peaked in the direction opposite to the direction of solar motion . \n previous studies have found that , for an ideal detector , of order 10 wimp events would be sufficient to reject isotropy , and rule out an isotropic background . \n we examine how the median recoil direction could be used to confirm the wimp origin of an anisotropic recoil signal . \n specifically we determine the number of events required to confirm the direction of solar motion as the median inverse recoil direction at 95% confidence . \n we find that for zero background 31 events are required , a factor of @xmath0 more than are required to simply reject isotropy . \n we also investigate the effect of a non - zero isotropic background . as the background rate \n is increased the number of events required increases , initially fairly gradually and then more rapidly , once the signal becomes subdominant . \n we also discuss the effect of features in the speed distribution at large speeds , as found in recent high resolution simulations , on the median recoil direction .",
"role": "assistant"
}
] |
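
The article text in the record above defines the median recoil direction as the unit vector minimising the summed arc lengths to the individual inverse recoil directions (its eq. ( [ m ] )). A numerical sketch of that statistic is given below, assuming the directions arrive as rows of unit 3-vectors; the coarse spherical grid stands in for a proper minimiser and is not part of the paper's method.

import numpy as np

def median_direction(recoils, n_grid=60):
    """Unit vector minimising the summed arc lengths to the rows of `recoils`.

    `recoils` has shape (N, 3) with unit-norm rows (inverse recoil directions).
    A coarse grid over the sphere stands in for a real optimiser.
    """
    theta = np.arccos(np.linspace(-1.0, 1.0, n_grid))            # polar angles
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)  # azimuths
    t, p = np.meshgrid(theta, phi, indexing="ij")
    grid = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1).reshape(-1, 3)         # candidate directions
    # Arc length between unit vectors a and b is arccos(a . b).
    cosines = np.clip(grid @ recoils.T, -1.0, 1.0)
    total_arc = np.arccos(cosines).sum(axis=1)
    return grid[np.argmin(total_arc)]
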
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.5587 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: A Survey on the Computational Complexity of Colouring Graphs with Forbidden Subgraphs \nChapter Title: Choosability \n1. Abstract of Cited Paper (BIB001): A solution to a problem of Erd\\H{o}s, Rubin and Taylor is obtained by showing that if a graph $G$ is $(a:b)$-choosable, and $c/d > a/b$, then $G$ is not necessarily $(c:d)$-choosable. Applying probabilistic methods, an upper bound for the $k^{th}$ choice number of a graph is given. We also prove that a directed graph with maximum outdegree $d$ and no odd directed cycle is $(k(d+1):k)$-choosable for every $k \\geq 1$. Other results presented in this article are related to the strong choice number of graphs (a generalization of the strong chromatic number). We conclude with complexity analysis of some decision problems related to graph choosability. \n2. Abstract of Cited Paper (BIB002): A graph is k-choosable if it admits a proper coloring of its vertices for every assignment of k (possibly different) allowed colors to choose from for each vertex. It is NP-hard to decide whether a given graph is k-choosable for k ? 3, and this problem is considered strictly harder than the k-coloring problem. Only few positive results are known on input graphs with a given structure. Here, we prove that the problem is fixed parameter tractable on P 5-free graphs when parameterized by k. This graph class contains the well known and widely studied class of cographs. Our result is surprising since the parameterized complexity of k-coloring is still open on P 5-free graphs. To give a complete picture, we show that the problem remains NP-hard on P 5-free graphs when k is a part of the input. \n3. Abstract of Cited Paper (BIB003): A graph is H-free if it has no induced subgraph isomorphic to H. We determine the computational complexity of the Choosability problem restricted to H-free graphs for every graph H that does not belong to {K\"1\",\"3,P\"1+P\"2,P\"1+P\"3,P\"4}. We also show that if H is a linear forest, then the problem is fixed-parameter tractable when parameterized by k.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Golovach and Heggernes BIB003 showed that CHOOSABILITY is NP-hard for P 5 -free graphs. Their work was continued by Golovach, Heggernes, van 't Hof and Paulusma who implicitly showed the following result in the proof of [34, Theorem 2]. Theorem 10. Let G be a graph class that is closed under adding dominating vertices. If COLOUR-ING is NP-hard for G, then CHOOSABILITY is NP-hard for G. Golovach et al. BIB004 then used Theorem 10 to prove the following result. Theorem 11. Let H be a graph. The CHOOSABILITY problem is NP-hard for H-free graphs if H / ∈ {K 1,3 , P 1 , 2P 1 , 3P 1 , P 1 +P 2 , P 1 +P 3 , P 2 , P 3 , P 4 } and p-time solvable if H ∈ {P 1 , 2P 1 , 3P 1 , P 2 , P 3 }. Note that there are four missing cases in Theorem 11: when H ∈ {K 1,3 , P 1 + P 2 , P 1 + P 3 , P 4 }. The following result is due to Gutner BIB001 . Gutner and Tarsi BIB002 showed the following result. Theorem 13. For all k ≥ 3, k-CHOOSABILITY is Π p 2 -complete on bipartite graphs. Hence, for some graphs H, Theorem 11 can be strengthened as follows.",
"role": "assistant"
}
] |
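The Π^p_2-completeness cited in Theorem 13 above comes from the nested quantifiers in the definition of choosability: for every assignment of k-element color lists there must exist a proper coloring drawn from them. A minimal brute-force sketch, my own illustration rather than code from the cited papers (the graph encoding, function names, and example graphs are all assumptions), makes that forall/exists structure explicit for tiny graphs:

```python
from itertools import combinations, product

def is_proper(adj, coloring):
    # Proper means no edge joins two vertices of the same color.
    return all(coloring[u] != coloring[v] for u in adj for v in adj[u] if u < v)

def colorable_from_lists(adj, lists):
    # Inner EXISTS quantifier: try every choice of one color per vertex.
    verts = sorted(adj)
    return any(is_proper(adj, dict(zip(verts, choice)))
               for choice in product(*(lists[v] for v in verts)))

def is_k_choosable(adj, k, palette):
    # Outer FORALL quantifier: every assignment of k-element lists must work.
    # Up to renaming colors, a palette of size k*|V| makes this test exact.
    verts = sorted(adj)
    k_lists = list(combinations(palette, k))
    return all(colorable_from_lists(adj, dict(zip(verts, assignment)))
               for assignment in product(k_lists, repeat=len(verts)))

P3 = {0: [1], 1: [0, 2], 2: [1]}        # path on three vertices (a tree)
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # triangle
print(is_k_choosable(P3, 2, range(6)))  # True: trees are 2-choosable
print(is_k_choosable(K3, 2, range(6)))  # False: the all-{0,1} lists defeat K3
```

The two nested exponential searches mirror the forall/exists alternation that places the decision problem at the second level of the polynomial hierarchy.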
science.data_reco_mcq_sc | science.data_reco_mcq_sc.13620 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: A novel neural network architecture for estimating expression-related symptoms of schizophrenia.\n\nKeywords: facial behaviour analysis video\n\nDataset Information:\nUMDFaces: UMDFaces is a face dataset divided into two parts: Still Images - 367,888 face annotations for 8,277 subjects. Video Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3100 subjects.\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\nChickenpox Cases in Hungary: Chickenpox Cases in Hungary is a spatio-temporal dataset of weekly chickenpox (childhood disease) cases from Hungary. It can be used as a longitudinal dataset for benchmarking the predictive performance of spatiotemporal graph neural network architectures.\nNAS-Bench-201: NAS-Bench-201 is a benchmark (and search space) for neural architecture search. Each architecture consists of a predefined skeleton with a stack of the searched cell.\n\nAvailable Datasets: A: UMDFaces B: CelebA C: Chickenpox Cases in Hungary D: NAS-Bench-201",
"role": "user"
},
{
"content": "CelebA",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.755 | [
{
"content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: An end-to-end learning system for future motion prediction.\n\nKey Terms: motion prediction images\n\nDataset Summaries:\nHMDB51: The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips.\nUCF101: UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports).\nEyeCar: EyeCar is a dataset of driving videos of vehicles involved in rear-end collisions paired with eye fixation data captured from human subjects. It contains 21 front-view videos that were captured in various traffic, weather, and day light conditions.\nDIPS: Contains biases but is two orders of magnitude larger than those used previously.\nE2E: End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena.\nDDD20: The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions.\n\nDataset Options: A: HMDB51 B: UCF101 C: EyeCar D: DIPS E: E2E F: DDD20",
"role": "user"
},
{
"content": "[\"HMDB51\", \"UCF101\"]",
"role": "assistant"
}
] |
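The two dataset-recommendation records above pair a query and keywords against candidate descriptions. A lexical-overlap baseline, entirely my own sketch (the tokenizer, the Jaccard scoring, and the trimmed candidate descriptions are invented for illustration), shows both how one might rank candidates and why the task needs more than string matching:

```python
import re

def tokens(text):
    # Deliberately crude: lowercased word and number tokens only.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def rank_candidates(query, keywords, descriptions):
    probe = tokens(query) | tokens(keywords)
    def jaccard(name):
        cand = tokens(name) | tokens(descriptions[name])
        return len(probe & cand) / len(probe | cand)
    return sorted(descriptions, key=jaccard, reverse=True)

candidates = {
    "HMDB51": "realistic videos from movies and the web; 51 action categories",
    "UCF101": "13,320 video clips in 101 categories: body motion, human interactions, sports",
    "E2E": "end-to-end natural language generation from meaning representations",
}
print(rank_candidates("An end-to-end learning system for future motion prediction.",
                      "motion prediction images", candidates))
# ['E2E', 'UCF101', 'HMDB51'] -- the phrase "end-to-end" fools the lexical score,
# which is exactly why the gold answer requires semantic rather than string matching.
```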
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.3094 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\naccidents with a pneumatic nail gun , which has been used in the construction industry , have increased.2 ) however , reports on management of a patient with a nail gun injury to the cranium are rare\n. the authors report on a case of a patient treated with simple retraction after small skin and prophylactic antibiotic administration .\na 56-year - old man had five nail gun - shots due to attempted suicide and was transferred to the emergency room . on his physical examination ,\nhis mental status was drowsy and there was no neurological deficit except for nails on his cranium . in his past history , there was no psychological background , however , he used to drink often and became depressed . on plain skull radiography ,\nfive nails were found ( figure 1a ) . among the five nails , two penetrated glabella .\ntwo nails were located in the right parietal bone and another was located in the left parietal bone . due to the head of the nail , the direction was confined to the outer table of the skull .\nin addition , a subarachnoid hemorrhage was observed on brain computed tomography ( ct)(figure 1b ) .\nall procedures were performed in the operating room under general anesthesia in order to treat the emergent condition as rapidly as possible .\nthe nail head was located beneath the scalp and tightly attached to the skull , it was not visible from the outside . a small incision to advance a forceps and a rongeur was necessary , and\n, all nails were removed successfully ( figure 1c ) . during the period of conservative management\nantibiotics including vancomycin , ceftriaxone , and metronidazole were administered for two weeks and carbamazepine was administered for a week .\nconventional angiography was performed , and there was no vascular invasion or newly formed fistula ( figure 1d ) .\nhead injury by pneumatic nail gun injury is rare , and there have been some case reports on management , treatment . according to lee and park,4 ) the speed of the launched nail and presence of the nail head are important determinants of the extent of nail gun related head injuries\n. if major artery or sinuses are not involved , extraction from the nail head could be the main treatment .\nct is typically the first - line radiologic examination in the emergency room for head - injured patients . in general , ct shows excellent details in bone and average value in soft tissue . failure to recognize cerebral damage following penetrating brain injuries ,\nespecially when there is retained foreign body , can lead to serious life - threatening complications.1 ) in addition , because involvement of the large vessels is critical to determining the operative strategy , vascular integrity should be evaluated with ct angiography or ct enhanced image in the initial evaluation in the emergency room . in this case , nails were removed by a small scalp incision due to the fact that major artery or sinuses were not involved . if this is not the case , management would be direct ligation or vascular bypass surgery . 
despite many controversies regarding use of prophylactic antibiotics , the authors believe that broad spectrum antibiotics are appropriate for patients with penetrating brain injury since\nit is also recommended that antibiotics should be started as soon as possible.3 ) post - traumatic epilepsy is a major cause of disability and inadequate social integration .\nalthough there is no prospective study to indicate the efficacy of prophylactic anti - epileptic medication following a traumatic brain injury , it has been recommended that patients be covered for about the first week after injury with a medication such as phenytoin or carbamazepine.5 )\nin the patient with head injury by pneumatic nail gun , the first evaluation should focus on the shape of the nail head and vascular integrity from simple x - ray and ct .\nif the nail head is confined to the depth of skull penetration , cautious extraction of the nail head can be performed without postoperative complication and early start of prophylactic antibiotics can be the next treatment .",
"role": "user"
},
{
"content": "a 56-year - old man had five nail gun - shots on his skull due to attempted suicide and was transferred to the emergency room . because the nail head played a role as a brake , the launched nail made a hole in the skull but did not entirely pass through it . \n if major artery or sinuses are not involved , cautious retrieval after a small scalp incision can be performed and prophylactic antibiotics be administered for treatment .",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.13110 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Predisposition to Cancer Caused by Genetic and Functional Defects of Mammalian Atad5\nContent:\nAbstract:\nATAD5, the human ortholog of yeast Elg1, plays a role in PCNA deubiquitination. Since PCNA modification is important to regulate DNA damage bypass, ATAD5 may be important for suppression of genomic instability in mammals in vivo. To test this hypothesis, we generated heterozygous( Atad5+/m) mice that were haploinsuffficient for Atad5. Atad5+/m mice displayed high levels of genomic instability in vivo, and Atad5+/m mouse embryonic fibroblasts( MEFs) exhibited molecular defects in PCNA deubiquitination in response to DNA damage, as well as DNA damage hypersensitivity and high levels of genomic instability, apoptosis, and aneuploidy. Importantly, 90% of haploinsufficient Atad5+/m mice developed tumors, including sarcomas, carcinomas, and adenocarcinomas, between 11 and 20 months of age. High levels of genomic alterations were evident in tumors that arose in the Atad5+/m mice. Consistent with a role for Atad5 in suppressing tumorigenesis, we also identified somatic mutations of ATAD5 in 4. 6% of sporadic human endometrial tumors, including two nonsense mutations that resulted in loss of proper ATAD5 function. Taken together, our findings indicate that loss-of-function mutations in mammalian Atad5 are sufficient to cause genomic instability and tumorigenesis.\nIntroduction:\nChromosomal instability is a common feature of many tumors[1]. However, the genetic basis of chromosomal instability( CIN) in human tumorigenesis is poorly understood[2]. In contrast, genetic screens in S. cerevisiae have identified many genes controlling a variety of processes such as chromosome condensation, sister-chromatid cohesion, kinetochore structure and function, centrosome and microtubule formation and dynamics, and cell-cycle checkpoints[3]–[5] which, when mutated, lead to CIN. There is a growing body of evidence starting to emerge to support the idea that the presence of a CIN phenotype could lead to mammalian tumorigenesis[6]–[11]. Moreover, based on the observation that there are evolutionarily conserved synthetic lethal interactions among a number of yeast CIN genes, it has recently been proposed that identifying CIN genes that are disrupted in human cancer could lead to the design of rational therapeutics[5],[12],[13]. The yeast ELG1 gene was originally identified in screens for suppressors of CIN and telomere maintenance proteins[3],[4],[14],[15],[16],[17]. Recently, we demonstrated that human ATAD5( hELG1), responds to a wide range of DNA damaging agents, and that reduced expression of ATAD5 in cell lines leads to sensitivity to DNA damaging agents, and high levels of genomic instability[18]. In addition, we found that human ATAD5, together with a deubiquitinating enzyme complex, USP1-UAF1, functions to reduce the level of ubiquitinated PCNA on chromatin[19]. Defects in the regulation of PCNA ubiqutination result in deficiency in the post-replication repair pathways. 
Although the importance of the post-replication pathway in the suppression of tumorigenesis has been highlighted by the discovery that germline mutations in the POLH gene, which encodes a polymerase functioning in a post-replication repair pathway, are responsible for the variant form of Xeroderma Pigmentosum, a skin cancer-prone syndrome[20],[21], it has not yet been demonstrated whether the regulation of post-replication pathways by PCNA ubiquitination is important for tumor suppression. Because reduced expression of ATAD5 is associated with high levels of genomic instability in vitro[19], we hypothesized that reduced expression of mouse Atad5 would also lead to genomic instability in vivo. Here we demonstrate that mice heterozygous for Atad5 display high levels of genomic instability in vivo and that >90% of Atad5+/m mice develop a variety of spontaneous tumors. Mouse embryonic fibroblasts( MEFs) derived from the Atad5 heterozygous mice were highly sensitive to DNA damaging agents, demonstrating high levels of aneuploidy and genomic instability in response to DNA damage. We further show that somatic, loss-of-function mutations in ATAD5 are present in a subset of primary human endometrial cancers. Taken together, our findings provide the first evidence to indicate that mammalian Atad5 suppresses genomic instability and tumorigenesis.\nDiscussion:\nThere is an emerging consensus that mutations in chromosomal instability genes contribute to many kinds of malignancy[29]. For example, previous studies of endometrial carcinomas have uncovered somatic alterations of a number of genes implicated in the maintenance of genomic integrity including somatic mutations in CDC4/FBXW7, and p53, and amplification or altered expression of STK15, BUB1, CCNB2, and CCNE[30]–[33]. In this report we have demonstrated the first somatic ATAD5 mutations in human cancer through a resequencing analysis of primary endometrial carcinomas. The frequency of somatic ATAD5 mutations was somewhat higher among NEECs than EECs. NEECs are the most clinically aggressive subtype of endometrial cancer that, as a group, exhibit aneuploidy, a form of chromosomal instability, much more frequently than EECs do[27]. Although ATAD5 was mutated at low frequency in endometrial carcinomas, this is not unexpected in light of recent studies of whole exomes of breast, colorectal and pancreatic cancers[34]–[36]. Those studies revealed that most somatically mutated genes are mutated at low frequency but that many infrequently mutated genes impinge on a smaller number of discrete biological processes and biochemical pathways resulting in a significant overall contribution to tumorigenesis. In this regard, it will be interesting, in future studies, to determine the extent to which the ATAD5 pathway is disrupted in sporadic endometrial cancer. The microarray analysis and quantitative PCR of mouse tumor samples showed that the wild type allele of Atad5 was retained and was expressed normally. In addition, human endometrial tumors carrying ATAD5-R1414X mutation expressed both the wild type and mutant alleles. These data are consistent with a mechanism by which haploinsufficiency of ATAD5 causes genomic instability leading to malignant transformation. We have also recently reported increased CIN in a human cell line expressing reduced levels of ATAD5 following siRNA silencing[18]. The reduced expression of ATAD5 in human cells resulted in a spontaneous hyper-recombination phenotype[19]. 
Finally, primary Atad5+/m fibroblasts had high levels of aneuploidy and other visible genome damage in culture, even in the absence of DNA damaging agents and tumors from Atad5+/m mice exhibited a large number of spontaneous genomic copy number alterations. It is possible that the high level of spontaneous genomic damage might be due to defects in the removal of ubiquitin from PCNA by ATAD5 together with USP1-UAF1 after DNA damage bypass[19]. In this model the persistent presence of ubiquitinated PCNA on DNA could retain translesion synthesis DNA polymerases in DNA replication forks. Due to their low processivity, DNA replication forks that have prolonged translesion synthesis DNA polymerases could collapse and abnormal recombination might enact to create a high level of genomic instability and tumors in the Atad5+/m mouse. Consistent with this possibility, we observed higher levels of PCNA ubiquitination in response to MMS in Atad5+/m MEFs compared to wild type MEFs. Interestingly, the Atad5+/m MEFs displayed high levels of spontaneous Rad51 foci that might lead to a high level of recombination although there was no significant induction of sister chromatid exchange rate. However, it is worth noting that recent studies have shown that the Usp1 knock-out mice, and the PCNA-K164R knock-in mice both have milder phenotypes[37],[38] compared to Atad5+/m mice. Therefore we cannot exclude the possibility that tumorigenesis in the Atad5+/m mice might not be due to the dysregulation of PCNA ubiquitination, but instead might result from other defects caused by Atad5 haploinsufficiency. For example, it is possible that haploinsufficiency for Atad5 might affect DNA replication especially in Okazaki fragment maturation during lagging strand synthesis because it has been observed that yeast Elg1 interacts with Rad27, the yeast homolog of the FEN1 flap endonuclease, which specifically removes RNA primers, during lagging strand maturation[17]. Late passage Atad5+/m MEFs undergo apoptosis in the absence of DNA damaging agents. This is somewhat paradoxical because tumorigenesis is frequently associated with impaired apoptosis. Nonetheless, there are several examples of tumorigenesis requiring apoptosis[39],[40]. We speculate that tumors that arise in Atad5+/m mice might be result from high levels of cell turnover associated with the increased attrition of stem cells that presumably accompanies high levels of spontaneous DNA damage. Therefore, there was increased risk of transformation. Alternatively, it is possible that the DNA damage that occurs in Atad5+/m cells might lead to the accumulation of additional mutations that render Atad5+/m cells resistant to apoptosis, thus accounting for the 11-month latency period before the appearance of tumors. In conclusion, our study provides the first evidence that loss of Atad5/ATAD5 function is associated with tumorigenesis. Our findings warrant future investigations to explore the role of ATAD5 in other human cancers. As concerted efforts to systematically sequence entire tumor exomes and genomes move forward rapidly, we predict that a wider spectrum of sporadic human tumors will harbor acquired somatic ATAD5 mutations. Furthermore, given our observation that haploinsufficiency for Atad5 in mice predisposes to tumorigenesis, it will also be important to determine whether germline variants of ATAD5 confer increased risk for cancer. 
Notably, there are studies suggesting a higher cancer incidence in approximately 5 to 10% of Neurofibromatosis type 1 patients who have a micro-deletion that includes NF1, ATAD5, and several other genes[41],[42]. The high incidence of tumors in the heterozygous Atad5+/m mice in this study raises the possibility that the tumors in NF1 microdeletion patients might arise as a result of a synergistic effect mediated by the co-deletion of ATAD5.",
"role": "user"
},
{
"content": "Genomic instability is a hallmark of tumorigenesis , suggesting that mutations in genes suppressing genomic instability contribute to this phenotype . In this study , we demonstrate for the first time that haploinsufficiency for Atad5 , a protein that is important in stabilizing stalled DNA replication forks by regulating PCNA ubiquitination during DNA damage bypass , predisposes >90% of mice to tumorigenesis in multiple organs . In heterozygous Atad5 mice , both somatic cells and the spontaneous tumors showed high levels of genomic instability . In a subset of sporadic human endometrial tumors , we identified heterozygous loss-of-function somatic mutations in the ATAD5 gene , consistent with the role of mouse Atad5 in suppressing tumorigenesis . Collectively , our findings suggest that ATAD5 may be a novel tumor suppressor gene .",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.7875 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a classifier to recognize objects from images.\n\nKeywords: image classification images videos\n\nDataset Information:\nGYAFC: Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style containing a total of 110K informal / formal sentence pairs. Yahoo Answers is a question answering forum, contains a large number of informal sentences and allows redistribution of data.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n2D Hela: 2D HeLa is a dataset of fluorescence microscopy images of HeLa cells stained with various organelle-specific fluorescent dyes. The images include 10 organelles, which are DNA (Nuclei), ER (Endoplasmic reticulum), Giantin, (cis/medial Golgi), GPP130 (cis Golgi), Lamp2 (Lysosomes), Mitochondria, Nucleolin (Nucleoli), Actin, TfR (Endosomes), Tubulin.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\n\nAvailable Datasets: A: GYAFC B: COCO C: 2D Hela D: SNIPS",
"role": "user"
},
{
"content": "COCO",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.4131 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Comparative Anatomy of Chromosomal Domains with Imprinted and Non-Imprinted Allele-Specific DNA Methylation\nContent:\nAbstract:\nAllele-specific DNA methylation( ASM) is well studied in imprinted domains, but this type of epigenetic asymmetry is actually found more commonly at non-imprinted loci, where the ASM is dictated not by parent-of-origin but instead by the local haplotype. We identified loci with strong ASM in human tissues from methylation-sensitive SNP array data. Two index regions( bisulfite PCR amplicons), one between the C3orf27 and RPN1 genes in chromosome band 3q21 and the other near the VTRNA2-1 vault RNA in band 5q31, proved to be new examples of imprinted DMRs( maternal alleles methylated) while a third, between STEAP3 and C2orf76 in chromosome band 2q14, showed non-imprinted haplotype-dependent ASM. Using long-read bisulfite sequencing( bis-seq) in 8 human tissues we found that in all 3 domains the ASM is restricted to single differentially methylated regions( DMRs), each less than 2kb. The ASM in the C3orf27-RPN1 intergenic region was placenta-specific and associated with allele-specific expression of a long non-coding RNA. Strikingly, the discrete DMRs in all 3 regions overlap with binding sites for the insulator protein CTCF, which we found selectively bound to the unmethylated allele of the STEAP3-C2orf76 DMR. Methylation mapping in two additional genes with non-imprinted haplotype-dependent ASM, ELK3 and CYP2A7, showed that the CYP2A7 DMR also overlaps a CTCF site. Thus, two features of imprinted domains, highly localized DMRs and allele-specific insulator occupancy by CTCF, can also be found in chromosomal domains with non-imprinted ASM. Arguing for biological importance, our analysis of published whole genome bis-seq data from hES cells revealed multiple genome-wide association study( GWAS) peaks near CTCF binding sites with ASM.\nIntroduction:\nEvidence from genome-wide association studies( GWAS) and cross-species comparisons suggests that many inter-individual phenotypic differences result from genetic variants in non-coding DNA sequences. Thus a major challenge in the post-genomic era is to define the mechanisms by which non-coding sequence polymorphisms and haplotypes result in differences in biological phenotypes. One hypothesis comes from recent work that has revealed strong cis-acting influences of simple nucleotide polymorphisms( SNPs), and the haplotypes in which these SNPs are embedded, on epigenetic marks, leading to allele-specific DNA methylation( ASM) and allele-specific chromatin structure[1]. Historically, ASM has been most intensively studied in the context of genomic imprinting- a parent-of-origin dependent phenomenon that affects about 80 human genes. Mechanistic principles that have emerged from studying imprinted domains include the presence of discrete small( one to several kb) DNA intervals, called differentially methylated regions( DMRs), which show strong parent-of-origin dependent asymmetry in CpG methylation between the two alleles and which control the allele-specific expression( ASE) of one or more nearby genes in cis, in some cases via methylation-sensitive binding of the insulator protein CTCF[2]–[6]. 
While genomic imprinting is a potent mode of gene regulation, it affects fewer loci than the more recently recognized phenomenon of non-imprinted haplotype-dependent ASM. Mapping haplotype-dependent ASM, and related phenomena such as ASE, will be useful for interpreting the biological meaning of statistical peaks from GWAS( and non-coding variants from post-GWAS exome sequencing studies), as bona fide regulatory sequence variants can reveal their presence by conferring a measurable epigenetic asymmetry between the two alleles. However as yet there have not been many insights to the molecular mechanisms underlying non-imprinted ASM. This situation raises an interesting question – could any principles from studies of genomic imprinting also be relevant for understanding haplotype-dependent, non-imprinted, ASM? To begin to address this issue, and to identify and characterize new examples of loci with imprinted and non-imprinted ASM, we have searched for “index regions” in the human genome showing strong and highly recurrent ASM and used these locations as starting points for intensive local mapping of DNA methylation patterns in multiple human tissues. Here we describe these data for new examples of loci with imprinted and non-imprinted ASM, which reveal epigenomic features that are shared between these two allele-specific phenomena.\nDiscussion:\nASM is a well-studied hallmark of genomic imprinting. However, in the past several years this same type of allelic asymmetry has been found in and around many non-imprinted genes, both in normal human tissues and in F1 progeny of inter-strain mouse crosses[1],[17]. In contrast to imprinted ASM, in which the methylation status is dictated by the parent of origin of the allele, when ASM occurs at non-imprinted loci it is usually haplotype-dependent. Thus non-imprinted ASM is controlled by genetic polymorphisms, such as SNPs, indels and copy number variants( CNVs), which act in cis to set up the allelic asymmetries. The mechanism( s) by which this occurs are not yet understood. Research on this topic is important because haplotype-dependent ASM, with its functional correlate of genotype-specific gene expression manifesting as ASE and expression quantitative trait loci( eQTLs), is a major route by which inter-individual genetic differences in non-coding sequences can lead to differences in phenotypes, including disease susceptibility. Here we have described new examples of genes and regulatory sequences with imprinted and non-imprinted ASM, which have revealed epigenomic features that are shared between these two epigenetic phenomena. Genes that are regulated by parental imprinting frequently have important roles in cell proliferation and tissue growth, so it is interesting that the VTRNA2-1 gene may have a suppressor role in human acute myeloid leukemias and other cancers[8],[18], and that vault RNAs are strongly upregulated during ES cell differentiation[19]. Similarly, given the clear importance of multiple imprinted genes in placental and fetal growth[10],[11], and the accumulating data for rare but recurrent alterations of maternal imprints in offspring conceived by assisted reproductive technologies, the imprinted C3orf27-RPN1 chromosomal domain, including the lncRNA corresponding to the AK097792 EST, as well as other RNAs, such as the as yet unnamed micro-RNA precisely localized to this region by data from the ENCODE/CSH project( Figure S3) will be interesting for future studies of placental growth and function. 
The loci with non-imprinted ASM that we have described here are also potentially relevant to human traits. CYP2A7 is located immediately adjacent to and is highly homologous to the CYP2A6 gene, and both genes are predicted to encode cytochrome P450 proteins responsible for coumarin and nicotine metabolism and smoking behavior[20]. The C2orf76 gene encodes a protein with unknown function, but a non-synonymous SNP in this gene was tentatively implicated in HIV susceptibility in African-Americans by GWAS at a nominal p-value of 0. 0001[21]. As we have previously pointed out[1], a major utility of mapping non-imprinted, haplotype-dependent ASM is to reveal bona fide regulatory haplotypes underlying human complex disease susceptibility. In this regard, our current analysis of prior whole genome bis-seq data has already yielded an interesting set of additional loci in which GWAS signals for important human traits and disease susceptibility coincide with both ASM and CTCF sites. Lastly, our data lay an essential groundwork for testing mechanisms of haplotype-dependent ASM. One feasible application will be in determining the minimal sequence requirements that allow local haplotypes to dictate methylation patterns in cis: based on our mapping data it should be possible to design allelic series of large bacterial artificial chromosomes( BACs) containing the chromosomal regions described here, which can be subjected to targeted site-directed mutagenesis, including the CTCF sites as well as haplotype-specific sequences outside of these sites, and then analyzed for cis-effects on CpG methylation patterns in vivo in BAC-transgenic mice.",
"role": "user"
},
{
"content": "Allele-specific DNA methylation ( ASM ) is a central mechanism of gene regulation in humans , which can influence inter-individual differences in physical and mental traits and disease susceptibility . ASM is mediated either by parental imprinting , in which the repressed copy ( allele ) of the gene is determined by which type of parent ( mother or father ) transmitted it or , for a larger number of genes , by the local DNA sequence , independent of which parent transmitted it . Chromosomal regions with imprinted ASM have been well studied , and certain mechanistic principles , including the role of discrete differentially methylated regions ( DMRs ) and involvement of the insulator protein CTCF , have emerged . However , the molecular mechanisms underlying non-imprinted sequence-dependent ASM are not yet understood . Here we describe our detailed mapping of ASM across 5 gene regions , including two novel examples of imprinted ASM and three gene regions with non-imprinted , sequence-dependent ASM . Our data uncover shared molecular features – small discrete DMRs , and the binding of CTCF to these DMRs , in examples of both types of ASM . Combining ASM mapping with genetic association data suggests that sequence-dependent ASM at CTCF binding sites influences diverse human traits .",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.4864 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Prion Infections and Anti-PrP Antibodies Trigger Converging Neurotoxic Pathways\nContent:\nAbstract:\nPrions induce lethal neurodegeneration and consist of PrPSc, an aggregated conformer of the cellular prion protein PrPC. Antibody-derived ligands to the globular domain of PrPC( collectively termed GDL) are also neurotoxic. Here we show that GDL and prion infections activate the same pathways. Firstly, both GDL and prion infection of cerebellar organotypic cultured slices( COCS) induced the production of reactive oxygen species( ROS). Accordingly, ROS scavenging, which counteracts GDL toxicity in vitro and in vivo, prolonged the lifespan of prion-infected mice and protected prion-infected COCS from neurodegeneration. Instead, neither glutamate receptor antagonists nor inhibitors of endoplasmic reticulum calcium channels abolished neurotoxicity in either model. Secondly, antibodies against the flexible tail( FT) of PrPC reduced neurotoxicity in both GDL-exposed and prion-infected COCS, suggesting that the FT executes toxicity in both paradigms. Thirdly, the PERK pathway of the unfolded protein response was activated in both models. Finally, 80% of transcriptionally downregulated genes overlapped between prion-infected and GDL-treated COCS. We conclude that GDL mimic the interaction of PrPSc with PrPC, thereby triggering the downstream events characteristic of prion infection.\nIntroduction:\nPrion diseases are lethal infectious diseases that propagate through the conversion of the cellular prion protein( PrPC) into a pathological conformer, the scrapie-associated prion protein( PrPSc)[1]. Neuronal expression of PrPC is required to mediate the neurotoxicity of PrPSc[2] and possibly also of other protein aggregates[3], yet the pathways leading to neurotoxicity are largely unknown. While caspase activation, autophagy, and Ca2+ dysregulation have been shown to occur after prion infections[4, 5], ablation of Bax and caspase-12, or overexpression of Bcl-2, does not delay incubation time of prion-infected animals[6, 7]. Induction of autophagy, despite enhancing PrPSc clearance in vitro and in vivo, did not prolong survival time of prion-infected mice[8]. Furthermore, excessive unfolded protein responses( UPR) in the endoplasmic reticulum( ER) plays a significant role in the pathogenesis of prion and other neurodegenerative diseases[9, 10], yet the biochemical events emanating from prion replication and leading to UPR induction are unknown, and it is unclear how extracellular aggregates can trigger pathology in a subcellular compartment to which they have no direct access. Prion infection of cerebellar organotypic cultured slices( COCS) has proven to be an extraordinarily faithful and tractable model of prion disease. Prion-infected COCS replicate all salient biochemical, histological, and pathophysiological events which occur during prion infections in vivo, including PrPC-dependent prion replication[11, 12], neuroinflammation with proliferation of microglia and astrogliosis, spongiosis, and neuronal cell loss. In prion-infected COCS, calpain inhibition confers neuroprotection without reducing prion replication, suggesting that calpains are involved in neurotoxicity[13]. 
We have reported that exposure to antibody-derived anti-PrP ligands( full-length antibodies, F( ab) 1 fragments thereof, and recombinant single-chain miniantibodies) targeting the globular domain( GD) of PrPC[14] induces rapid cerebellar granular cell( CGC) degeneration in COCS and in live mice. Since this toxic effect was also attenuated by calpain inhibitors[15], we wondered whether the two triggers of PrP-dependent cell death, GDL and prions, might induce similar neurotoxic cascades. Here we report that antibodies against the flexible tail( FT) of PrPC, which prevent GD ligand( GDL) toxicity in COCS[15], also counteracted neurotoxicity in prion-infected COCS, suggesting a role for the FT in both models. Furthermore, GDL treatment and prion infection triggered similar intracellular cascades including PERK activation[9] and reactive oxygen species( ROS) production. Also, a comparative analysis of transcription in prion-infected vs. GDL-exposed COCS showed extensive similarities between these two paradigms of PrP-related toxicity. We conclude that prions and GDL share downstream pathways of toxicity, and that in both instances the FT is the main molecular effector of prion-mediated toxicity.\nDiscussion:\nUsing multiple paradigms in organotypic cultures and in vivo, we show that the toxic antibody POM1 induces largely overlapping pathogenetic cascades as bona fide prion infections. Not only were all strategies preventing GDL-induced neurodegeneration( such as calpain inhibition, ROS scavenging and FT binding) found to be neuroprotective against prions, but compounds neuroprotective against other kinds of insult( such as caspase inhibitors and glutamate antagonists) were ineffective against both GDL and prions. Moreover, the results of transcriptomic analyses are compatible with the contention of a large overlap in the downstream effectors of both pathways. Besides highlighting the commonalities between GDL and prion-related neurodegeneration( S7 Fig.), these observations set both conditions apart from other types of neurodegenerative conditions. Treatment of prion-infected COCS with antioxidants did not interfere with the aggregation of PK-resistant material, as was previously shown for calpain inhibitors[13]. This adds to the evidence that ROS scavengers and calpain inhibitors mitigate toxicity by interfering with events triggered by prion replication. A plausible model of pathogenesis predicates that toxicity is triggered by binding of either GDL or PrPSc to the globular domain of PrPC. Since inhibitors of prion replication decreased ROS production, but did not protect from GDL toxicity and did not reduce ROS production( Fig. 1G), we conclude that ROS production is downstream of both prion replication and GDL binding. The modest therapeutic effect of the antioxidative therapy with AcHyt in vivo is not unexpected, since their involvement occurs downstream of both prion replication and calpain activation, and suggests the existence of additional pathways of toxicity that remain operational even after scavenging ROS. How can GDL execute such a faithful molecular mimicry of prion infection? We favor the hypothesis that GDL and PrPSc share the same docking site on cellular PrPC. Engagement of the latter site enacts a long-range allosteric transition of the FT, which in turn triggers the toxic cascade. 
The above scenario cannot be tested directly because of the technological barriers still hampering structural studies of PrPSc, yet it is at least compatible with the structure of the POM1:PrPC complex, as determined by X-ray crystallography[31, 32]. Solforosi et al[33] claimed that anti-PrP antibodies induced toxicity by crosslinking PrPC, as F( ab) 1 fragments were innocuous in their study. However, the monovalent scFv and F( ab) 1 fragments of antibody POM1 lead to toxicity in vitro and in vivo[15]. This refutes clustering of PrPC as a cause of toxicity in the present study. Thus far, quests for anti-prion therapeutics have been rare and mostly unsuccessful. Two crucial reasons are the hazards associated with prion infectivity and the dearth of rapid, validated models of prion-induced toxicity. The validation of GDL as prion mimetics will help identify novel nodes in the pathogenetic cascades leading to neurodegeneration, and it is likely that some of these nodes may represent druggable targets. We also identified differences between the pathogenesis of GDL exposure and that of prion infections. Firstly and most glaringly, the kinetics of GDL-induced neurodegeneration( days) is much faster than that of experimental prion infections( months). Secondly, although GDL function as a prion mimetic, they differ from bona fide prions by not inducing the classical misfolding and aggregation of PrP, by failing to induce deposition of protease-resistant PrP, and most crucially by being non-infectious. Thirdly, compounds that interfere with prion replication, such as pentosan polysulfate, do not alter the toxicity caused by GDL. All of these observations are compatible with the interpretation that GDL, while not leading to the generation of PrPSc, trigger the same signaling pathway as prion infections. The different speed of disease development may be taken to suggest that GDL exposure and prion infections are fundamentally dissimilar. In our view, however, this does not contradict the hypothesis that these two models share pathogenetic pathways. Prion infection is initiated by trace amounts of prions within brain homogenates, with the PrPSc concentration only gradually increasing upon infection of a progressively larger number of host cells. The prion isolate used for a standard slice culture infection contains 7.9×10^6 ID50 in 10 μl[12]. Antibodies in slice culture experiments were used at 67 nM in 1 ml of medium, which corresponds to 4.0×10^13 molecules. We conclude that, in the case of the antibodies, the exposure to the bioactive principle exceeds that of prions by ca. 7 logs. This calculation is conservative since it disregards that prion inocula were removed after exposure, and that large prion aggregates are unlikely to efficiently penetrate tissues. Hence the bioactive principle in antibody preparations exceeds that in prion infections by several orders of magnitude, which yields—in our view—a highly plausible explanation for the difference in the kinetics of neurodegeneration. Finally, recent evidence suggested the involvement of the unfolded protein response in prion-induced cell death in vivo[9]. Here, we confirm its involvement in RML and GDL-induced cell death in COCS. All of the above suggests that GDL-induced neurodegeneration represents a phenocopy of bona fide prion infections. If this conjecture is correct, targeted manipulations of the FT may be beneficial against neurotoxicity in prion infections.
Indeed, we found that binding of the FT by antibodies was neuroprotective to prion-infected slice cultures, yet did not appreciably reduce prion titers—indicating selective suppression of the cytotoxic events downstream of prion replication. Therefore, binding of the FT could modify the course of the disease by uncoupling prion replication from prion toxicity. This hypothesis remains to be validated. Since the GDL-induced toxicity model closely mimics multiple aspects of prion-induced neurotoxicity of prion-induced neurotoxicity, it seems reasonable to utilize GDLs for phenotypic screens aimed at identifying potential antiprion therapeutics. While confirmatory counterscreens will still require proof of efficacy against infectious prions, we posit that GDL toxicity may form the basis of convenient high-throughput and non-biohazardous assays of chemical and biological libraries.",
"role": "user"
},
{
"content": "Prion diseases are a group of infectious , invariably fatal neurodegenerative diseases . Progress in developing therapeutics is slow , partly because animal models of prion diseases require stringent biosafety and are very slow . We recently found that treatment of cerebellar slices with antibodies targeting the globular domain ( GD ligands ) of the prion protein ( PrP ) is neurotoxic . Here we compared this model to prion infection , and describe striking similarities . Both models involved the production of reactive oxygen species , and antioxidants could reverse the toxicity in cerebellar slices and even prolong the survival time of prion-infected mice . Antibodies targeting the flexible tail of PrP that prevent toxicity of GD ligands reduced the toxicity induced by prions . Endoplasmic reticulum stress , which is involved in prion toxicity , is also found in GD-ligand induced neurotoxicity . Finally , changes of gene expression were similar in both models . We conclude that prion infection and GD ligands use converging neurotoxic pathways . Because GD ligands induce toxicity within days rather than months and do not pose biosafety hazards , they may represent a powerful tool for furthering our understanding of prion pathogenesis and also for the discovery of antiprion drugs .",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1544 | [
{
"content": "Analyze the provided paragraph from a scientific document. Your goal is to identify and extract relevant entities.\n\nFormat your findings as a JSON object, where each key represents a category of entities, and the corresponding value is a list of entities identified in that category. If no entities are found for a category, return an empty list.\n\nFocus on precision and ensure that the output is strictly in JSON format.\n\nDocument:\n\nCytokine IL-10, activators of PI3-kinase, agonists of α-2 adrenoreceptor and antioxidants prevent ischemia - induced cell death in rat hippocampal cultures In the present work we compared the protective effect of anti-inflammatory cytokine IL-10 with the action of a PI3-kinase selective activator 740 Y-P, selective agonists of alpha-2 adrenoreceptor, guanfacine and UK-14,304, and compounds having antioxidant effect: recombinant human peroxiredoxin 6 and B27, in hippocampal cell culture during OGD (ischemia-like conditions). It has been shown that the response of cells to OGD in the control includes two phases. The first phase was accompanied by an increase in the frequency of spontaneous synchronous Ca(2+)-oscillations (SSCO) in neurons and Ca(2+) - pulse in astrocytes. Spontaneous Ca(2+) events in astrocytes during ischemia in control experiments disappeared. The second phase started after a few minutes of OGD and looked like a sharp/avalanche, global synchronic (within 20 s) increase in [Ca(2+)]i in many cells. Within 1 h after OGD, a mass death of cells, primarily astrocytes, was observed. To study the protective action of the compounds, cells were incubated in the presence of the neuroprotective agents for 10-40 min or 24 h before ischemia. All the neuroprotective agents delayed a global [Ca(2+)]i increase during OGD or completely inhibited this process and increased cell survival.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"astrocytes\", \"neurons\", \"cells\", \"hippocampal\"], \"Bacterium\": [], \"BiologicFunction\": [\"response of cells to OGD\", \"after a few minutes of OGD\", \"mass death of cells\", \"ischemia\", \"during OGD\", \"SSCO\", \"antioxidant effect\", \"spontaneous synchronous Ca(2+)-oscillations\", \"cell survival\", \"after OGD\", \"pulse\", \"cell death\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"guanfacine\", \"Ca(2+)\", \"UK-14,304\", \"B27\", \"cytokine\", \"selective activator\", \"neuroprotective agents\", \"agonists\", \"antioxidants\", \"\\u03b1-2 adrenoreceptor\", \"IL-10\", \"PI3-kinase\", \"activators\", \"compounds\", \"740 Y-P\", \"alpha-2 adrenoreceptor\", \"selective agonists\", \"human peroxiredoxin 6\", \"Cytokine\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"rat\"], \"Finding\": [\"global synchronic\", \"disappeared\", \"sharp/avalanche\"], \"Food\": [], \"HealthCareActivity\": [\"incubated\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
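The medmentions record above pins down a strict output contract: a JSON object with one key per entity category, each mapping to a possibly empty list of strings. A validator sketch (the function name and error handling are mine; the category names are copied from the record) is the natural way to enforce that contract on raw model output:

```python
import json

CATEGORIES = [
    "AnatomicalStructure", "Bacterium", "BiologicFunction",
    "BiomedicalOccupationOrDiscipline", "BodySubstance", "BodySystem",
    "Chemical", "ClinicalAttribute", "Eukaryote", "Finding", "Food",
    "HealthCareActivity", "InjuryOrPoisoning", "IntellectualProduct",
    "MedicalDevice", "Organization", "PopulationGroup",
    "ProfessionalOrOccupationalGroup", "ResearchActivity",
    "SpatialConcept", "Virus",
]

def validate(raw):
    """Parse a model response and enforce the category -> list-of-strings contract."""
    parsed = json.loads(raw)
    if not isinstance(parsed, dict):
        raise ValueError("top level must be a JSON object")
    unexpected = set(parsed) - set(CATEGORIES)
    if unexpected:
        raise ValueError(f"unexpected keys: {sorted(unexpected)}")
    out = {}
    for cat in CATEGORIES:
        vals = parsed.get(cat, [])   # an absent category counts as the empty list
        if not (isinstance(vals, list) and all(isinstance(v, str) for v in vals)):
            raise ValueError(f"{cat}: expected a list of strings")
        out[cat] = vals
    return out
```

Applied to the assistant response in the record above, it returns all 21 categories unchanged.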
science.pico_ner | science.pico_ner.4670 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nA prospective randomized study of transurethral resection of the prostate and transurethral vaporization of the prostate as a therapeutic alternative in the management of men with BPH . OBJECTIVE The common goals of new surgical treatment for benign prostatic hyperplasia ( BPH ) are to improve subjective and objective symptoms , to decrease the risk of postoperative complications and short hospitalization . Transurethral electrovaporization of the prostate ( TUVP ) is a new , minimally invasive and a promising alternative to standard transurethral resection of the prostate ( TURP ) in the treatment of BPH . The aim of this study is to compare the efficacy and safety of these two treatment alternatives . METHODS A prospective randomized trial of 60 patients with symptomatic BPH was performed . Preoperative and postoperative International Prostate Symptom Score ( IPSS ) , maximum flow rates ( Qmax ) and complications were recorded in each patient . The volume of the prostate was measured preoperatively and postoperatively using transrectal ultrasound . RESULTS Of the 30 patients who underwent TURP , mean hospital stay was 4.5 days . The Foley catheter was removed on postoperative day 4 following surgery . IPSS decreased from 21.6 to 5.2 , Qmax increased from 9.2 to 19.2 ml/s at 3 months . Mean prostatic volume at 3 months decreased from 51.7 to 26.2 g , a 49.3 % reduction . Of 30 patients undergoing TUVP , mean hospital stay was 2.5 days and the catheter was removed within 2 days following surgery . Postoperative urinary retention did not occur in any patient . IPSS decreased from 19.4 to 4.1 and Qmax increased from 7.9 to 17.7 ml/s at 3 months . Mean prostatic volume decreased from 48.9 to 27.8 g , a 43.1 % reduction at 3 months . In the TUVP group , none of the patients required blood transfusions or developed clinical transurethral resection syndrome . There were no major complications . Sphincteric incontinence , urethral strictures or bladder neck contractures were not recorded . At 3 months postoperatively , 13 patients in the TURP group and 7 patients in the TUVP group had retrograde ejaculation . CONCLUSION Our initial experience of TUVP suggests advantages over conventional TURP through reduced blood loss and shorter hospital stay . It appears to be an effective treatment for BPH ; however , long-term results should be evaluated .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"Transurethral electrovaporization of the prostate ( TUVP )\", \"standard transurethral resection of the prostate ( TURP )\", \"transurethral resection of the prostate and transurethral vaporization of the prostate\"], \"Outcome\": [\"efficacy and safety\", \"Sphincteric incontinence\", \"Mean prostatic volume\", \"transrectal ultrasound\", \"blood transfusions\", \"Preoperative and postoperative International Prostate Symptom Score ( IPSS ) , maximum flow rates ( Qmax )\", \"bladder neck contractures\", \"Foley catheter\", \"retrograde ejaculation\", \"benign prostatic hyperplasia\", \"clinical transurethral resection syndrome\", \"complications\", \"reduced blood loss\", \"no major complications\", \"IPSS\", \"urethral strictures\", \"postoperative complications\", \"Postoperative urinary retention\", \"shorter hospital stay\"], \"Participant\": [\"men with BPH .\"]}",
"role": "assistant"
}
] |
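Extraction-style gold answers such as the PICO record above are internally consistent only if every extracted span occurs verbatim in the source abstract, which is why the answer reproduces the abstract's idiosyncratic "cardio version" spelling rather than the expected "cardioversion". A small consistency check (the function name and demo strings are mine) captures that invariant:

```python
def missing_spans(abstract, extraction):
    """Return (category, span) pairs that do NOT occur verbatim in the abstract."""
    return [(cat, span)
            for cat, spans in extraction.items()
            for span in spans
            if span not in abstract]

abstract = "Defibrillation or cardio version was needed in 7.14 % patients in group I ."
gold = {"Outcome": ["Defibrillation or cardio version"], "Participant": []}
print(missing_spans(abstract, gold))   # [] -- span-level fidelity, typo included
```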
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.26188 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nRational Design of New Dihydrobenzooxophosphole-Based Lewis Base Organocatalysts\n\nContent Extracts:\n\n<p>Chiral Lewis base catalysts have emerged as powerful organocatalysts in the past decade for the construction of C‒C or C‒X bonds.1 Among them, the phosphine oxide-containing Lewis base catalysts (Figure 1) have been applied extensively in a number of enantioselective transformations including Mukaiyama aldol and double aldol reactions,2 trichlorosilane-mediated stereoselective reductions of C=N and C=C bonds,3 bromoaminocyclization,4 enantioselective allylation5 and epoxide ring-opening reactions.6</p><p>The availability of diversified phosphine ligand libraries has enabled successful application of metal-catalyzed reactions on industrial scales. With increasing attention on developing organocatalyst-catalyzed asymmetric synthesis, the advancement of novel efficient catalyst systems remains an attractive endeavor. Recently, we reported a series of tunable P-stereogenic dihydrobenzooxophosphole-based bisphosphine ligands BIBOPs that are highly effective in a number of asymmetric transformations.7 We reasoned that the corresponding bis-phosphine oxide BIBOPOs might be applicable as Lewis base organocatalysts. The conjugate reduction of (E)-1,3-diphenylbutenone (1a) in the presence of trichlorosilane3 was selected as a model transformation to evaluate the effectiveness of such catalysts (Scheme 1). However, when BIBOPO was applied in the test reaction, only trace amount of product 2a was observed (Scheme 1a). We rationalized that an increased coordinating angle of the Si atom to bis P=O might be required to accommodate the substrate coordination (Scheme 1b). C2-symmetric tetradentate bisphosphine/diamine (PNNP) ligands with NH-functionality have been reported as highly efficient catalyst system for asymmetric hydrogenation/transfer hydrogenation of ketones8 and other enantioselective reactions.9 We therefore decided to apply the versatile features of a diamine linker and prepare new P(O)NNP(O)-based organocatalysts. We hypothesized that such linker would not only increase the bite angle, but also lead to potential hydrogen bonding of HSiCl3 with the hydrogen atoms on diamine.</p><p>The proposed catalysts were synthesized by coupling of the corresponding carboxylic acid10 with diamines in the presence of propylphosphonic anhydride (T3P) in acetonitrile (Scheme 1c).11 In each case, only one diastereomer product was obtained under the coupling conditions without any racemization. We first synthesized the bis-amide catalyst 3a with a (R,R)-diaminocyclohexane linker. Gladly, (S)-1,3-diphenylbutan-1-one 2a12 was furnished in 90% conversion and an 82.5:17.5 enantiomeric ratio in CH2Cl2 (entry 1). Several environmentally friendly solvents were tested, and acetonitrile was identified to provide the highest selectivity in an 89.6:10.4 er (entry 2).13 Interestingly, the corresponding diastereomeric (S,S)-diaminocyclohexyl–containing catalyst 3b provided incomplete conversion and 67.8:32.2 er indicating that the conformation of the bis-amide linker is integral to both turnover and selectivity. When increasing the rigidity of the catalyst backbone linker with phenyl diamine, however, low reactivity and selectivity were observed for catalyst 3c. 
On the other hand, the (R,R)-1,2-dicyclohexyl ethylene diamine–containing catalyst 3d provided the product 2a with 100% conversion and an 87.5:12.5 er. When replacing the bis-cyclohexyl group with bis-phenyl group in the catalyst 3f; same reactivity of 100% conversion, but diminished enantioselecitivity of 74:26 er was observed.</p><p>We initiated a DFT study with Gaussian 1614 with B3LYP/6–31G(d)15 to understand the origin of the catalyst reactivity in these reductions. According to the proposed mechanism,16 the stereodetermining step involves a 1,4-addition of a silicon hydride to the enone via a six-membered transition state. In this transition state, the cationic silicon coordinates to the carbonyl oxygen, geometrically placing the silicon hydride near the β-carbon of the enone. Upon reduction, the enolate remains coordinated to the cationic silicon. Possible transition states differ in the position of the hydride on the catalyst, the position of the enone on the catalyst, the orientation of the hydride relative to the enone, and the geometry of the enone (since it can undergo isomerization under the reaction conditions). Detailed conformational analyses of these transition states were undertaken to identify the relevant lowest energy transition states for the different catalysts.</p><p>Amongst the catalysts screened, there was a notable difference in behavior of the diastereomeric compounds obtained from the two enantiomers of cyclohexane diamine (matched 3a, mismatched 3b). To access the reactivity of these diastereomeric catalysts, the activation energies for hydride transfer in respective systems were analyzed. Calculations showed that a higher activation energy is required for the reduction of the enone using the (S,S)-isomer of the catalyst 3b by 2.2 kcal/mol, which is consistent with the observed reactivity.</p><p>As projected, both of the aforementioned catalysts form hydrogen bonds between the amide linker and the chlorine ligands of the silicon in the transition state. The productive catalyst 3a forms stronger hydrogen bonds than the unproductive catalyst 3b, as determined by bond lengths (Figure 3). These hydrogen bonds would be expected to enhance the Lewis acidity of the silicon center and thereby enhance reactivity. It thus appears that the mismatched catalyst, 3b, must engage in greater deformation (i.e. is more strained) to achieve such hydrogen bonds, which are also weaker causing less activation. Together, these factors account for the much lower reactivity of 3b (49% conv) relative to 3a (100% conv).</p><p>The conjugate reduction conditions with catalyst 3a are compatible for (E)-1,3-diphenylbutenones with both electron-rich and electron-poor functional groups to give similar enantioselectivity (2a-2d) in acetonitrile (Scheme 2).</p><p>In conclusion, we have developed new Lewis base organocatalysts derived from P‒stereogenic dihydrobenzooxophosphole core structure by coupling with (R,R)-dicyclohexyl ethylene diamine, which provides a conformational match for the enantioselective reduction of enone derivatives mediated with HSiCl3. The asymmetric transformation is affected by the P‒stereogenic center and the selectivity is tunable by the substituents on the catalyst. This class of catalysts has potential as tunable Lewis base catalysts in terms of reactivity and selectivity. Further work is to identify catalysts that are more efficient; such tuning will be reported in due course.</p>",
"role": "user"
},
{
"content": "A series of new dihydrobenzooxophosphole-based Lewis Base organocatalysts were designed and synthesized. They are demonstrated effective in trichlorosilane-mediated stereoselective conjugate reductions of C=C bonds. DFT calculations reveal that the strong hydrogen bond between the amide linker and the chloride on silicon in the transition state contributes to the high reactivity of the catalyst 3a.",
"role": "assistant"
}
] |
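The extract in the row above reports that the mismatched catalyst 3b faces a hydride-transfer barrier 2.2 kcal/mol higher than matched catalyst 3a and treats this as consistent with the observed 49% vs. 100% conversion. As a rough check, here is a minimal transition-state-theory sketch (Python; 298 K is assumed, since the extract does not state the reaction temperature) of the rate ratio such a barrier gap implies:

```python
import math

R_CAL = 1.987   # gas constant, cal/(mol*K)
T_K = 298.15    # assumed room temperature; not stated in the extract

def rate_ratio(ddg_dagger_kcal: float, temp_k: float = T_K) -> float:
    """Relative rate k_fast/k_slow for a barrier difference ddG-dagger,
    via the Eyring/Arrhenius exponential (pre-exponential terms cancel)."""
    return math.exp(ddg_dagger_kcal * 1000.0 / (R_CAL * temp_k))

# 2.2 kcal/mol difference between catalysts 3a and 3b, from the row above:
print(f"3a is ~{rate_ratio(2.2):.0f}x faster than 3b at {T_K:.0f} K")
```

A roughly 40-fold rate difference comfortably separates full conversion from a stalled reaction on the same timescale, which is the qualitative point the DFT discussion makes.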
science.pico_ner | science.pico_ner.2430 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffect of prophylactic amiodarone in patients with rheumatic valve disease undergoing valve replacement surgery . The study was carried out to evaluate the effect of prophylactic single-dose intravenous amiodarone in patients undergoing valve replacement surgery . Maintenance of sinus rhythm is better than maintenance of fixed ventricular rate in atrial fibrillation ( AF ) especially in the presence of irritable left or right atrium because of enlargement . Fifty-six patients with valvular heart disease with or without AF were randomly divided into two groups . Group I or the amiodarone group ( n=28 ) received amiodarone ( 3 mg/kg in 100 ml normal saline ) and group II or the control group received same volume of normal saline . The standardized protocol for cardiopulmonary bypass was maintained for all the patients . AF occurred in 7.14 % patients in group I , and in group II , 28.57 % ( P=0.035 ) ; ventricular tachycardia/fibrillation was observed in 21.43 % patients in group I and 46.43 % patients in group II ( P=0.089 ) after release of aortic clamp . Most of the patients in group I ( 92.86 % ) maintained sinus rhythm without cardioversion or defibrillation after release of aortic cross clamp ( P=0.002 ) . Defibrillation or cardio version was needed in 7.14 % patients in group I and 28.57 % patients in group II ( P=0.078 ) . A single prophylactic intraoperative dose of intravenous amiodarone decreased post bypass arrhythmia in this study in comparison to the control group . Single dose of intraoperative amiodarone may be used to decrease postoperative arrhythmia in open heart surgery .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"amiodarone group\", \"prophylactic amiodarone\", \"amiodarone\", \"normal saline\"], \"Outcome\": [\"ventricular tachycardia/fibrillation\", \"Defibrillation or cardio version\", \"post bypass arrhythmia\", \"sinus rhythm without cardioversion or defibrillation\"], \"Participant\": [\"Fifty-six patients with valvular heart disease with or without AF were randomly divided\"]}",
"role": "assistant"
}
] |
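The pico_ner row above fixes a three-key JSON contract for its answers. Below is a minimal sketch of enforcing that contract on a model prediction; the function name and the idea of validating outputs are illustrative additions, not part of the dataset:

```python
import json

EXPECTED_KEYS = {"Participant", "Intervention", "Outcome"}

def validate_pico(raw: str) -> dict:
    """Parse a PICO prediction and enforce the three-key schema above."""
    data = json.loads(raw)
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        raise ValueError(f"expected keys {sorted(EXPECTED_KEYS)}, got {sorted(data)}")
    for key, values in data.items():
        if not isinstance(values, list) or not all(isinstance(v, str) for v in values):
            raise ValueError(f"{key} must map to a list of strings")
    return data

# Smoke test with a fragment of the gold answer from the row above:
print(validate_pico('{"Participant": [], "Intervention": ["amiodarone"], '
                    '"Outcome": ["post bypass arrhythmia"]}'))
```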
science.scitldr_aic | science.scitldr_aic.1982 | [
{
"content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nCurrent practice in machine learning is to employ deep nets in an overparametrized limit, with the nominal number of parameters typically exceeding the number of measurements.\nThis resembles the situation in compressed sensing, or in sparse regression with $l_1$ penalty terms, and provides a theoretical avenue for understanding phenomena that arise in the context of deep nets.\nOne such phenonemon is the success of deep nets in providing good generalization in an interpolating regime with zero training error.\nTraditional statistical practice calls for regularization or smoothing to prevent \"overfitting\" (poor generalization performance).\nHowever, recent work shows that there exist data interpolation procedures which are statistically consistent and provide good generalization performance\\cite{belkin2018overfitting} (\"perfect fitting\").\nIn this context, it has been suggested that \"classical\" and \"modern\" regimes for machine learning are separated by a peak in the generalization error (\"risk\") curve, a phenomenon dubbed \"double descent\"\\cite{belkin2019reconciling}.\nWhile such overfitting peaks do exist and arise from ill-conditioned design matrices, here we challenge the interpretation of the overfitting peak as demarcating the regime where good generalization occurs under overparametrization. \n\n\nWe propose a model of Misparamatrized Sparse Regression (MiSpaR) and analytically compute the GE curves for $l_2$ and $l_1$ penalties.\nWe show that the overfitting peak arising in the interpolation limit is dissociated from the regime of good generalization.\nThe analytical expressions are obtained in the so called \"thermodynamic\" limit.\nWe find an additional interesting phenomenon: increasing overparametrization in the fitting model increases sparsity, which should intuitively improve performance of $l_1$ penalized regression.\nHowever, at the same time, the relative number of measurements decrease compared to the number of fitting parameters, and eventually overparametrization does lead to poor generalization.\nNevertheless, $l_1$ penalized regression can show good generalization performance under conditions of data interpolation even with a large amount of overparametrization.\nThese results provide a theoretical avenue into studying inverse problems in the interpolating regime using overparametrized fitting functions such as deep nets.\nModern machine learning has two salient characteristics: large numbers of measurements m, and non-linear parametric models with very many fitting parameters p, with both m and p in the range of 10 6",
"role": "user"
},
{
"content": "Proposes an analytically tractable model and inference procedure (misparametrized sparse regression, inferred using L_1 penalty and studied in the data-interpolation limit) to study deep-net related phenomena in the context of inverse problems.",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1117 | [
{
"content": "Analyze the provided clinical trial criteria. Identify and extract distinct entities from the text.\n\nReturn the results as a JSON structure. Ensure the output is strictly in JSON format without any additional commentary.\n\nCriteria Text:\n\nCompleted \"ALO-IIT-012(PEAK study)\", without major protocol deviations.\nMale, or female, 19 years to 75 years.\nFemale with childbearing potential who has a negative urine pregnancy test result at study start and willing to continue practice appropriate birth control during the entire duration of study\nSubjects completed PEAK can be included within 30 days after End Of the Study\nSubjects completed PEAK can be included if their treatment is the same as randomized even after 30 days of End Of the Study.",
"role": "user"
},
{
"content": "{\"Condition\": [], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"Male\", \"female\", \"years\"], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"19 years to 75\"], \"Visit\": []}",
"role": "assistant"
}
] |
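The chia_ner row above returns many entity categories whose values, judging from this gold answer, are spans copied verbatim from the criteria text. A small grounding check built on that assumption follows (Python; discontinuous or normalised spans would need fuzzier matching, and the helper name is mine):

```python
import json

def ungrounded_spans(source_text: str, prediction_json: str) -> list:
    """List predicted entity strings that do not occur verbatim in the source."""
    prediction = json.loads(prediction_json)
    missing = []
    for label, spans in prediction.items():
        for span in spans:
            if span and span not in source_text:
                missing.append((label, span))
    return missing

criteria = "Male, or female, 19 years to 75 years."
gold = '{"Person": ["Male", "female", "years"], "Value": ["19 years to 75"]}'
print(ungrounded_spans(criteria, gold))  # [] -> every span is grounded
```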
science.data_reco_mcq_sc | science.data_reco_mcq_sc.7736 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a supervised model for multi-label video classification.\n\nKeywords: multi-label video classification\n\nDataset Information:\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nHVU: HVU is organized hierarchically in a semantic taxonomy that focuses on multi-label and multi-task video understanding as a comprehensive problem that encompasses the recognition of multiple semantic aspects in the dynamic scene. HVU contains approx.~572k videos in total with 9 million annotations for training, validation, and test set spanning over 3142 labels.\nFaceForensics: FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces.\n\nAvailable Datasets: A: SNIPS B: ImageNet C: HVU D: FaceForensics",
"role": "user"
},
{
"content": "ImageNet",
"role": "assistant"
}
] |
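Rows of this data_reco_mcq type embed their options in a single "Available Datasets: A: ... B: ..." line. A minimal sketch of recovering the letter-to-name mapping is below; the regex assumes dataset names contain no colons, which holds for the row above but is not guaranteed in general:

```python
import re

def parse_options(prompt: str) -> dict:
    """Extract the letter-to-name mapping from an 'Available Datasets' line."""
    line = prompt.split("Available Datasets:")[-1].strip()
    # Lazy match up to the next 'X:' option marker or end of line.
    pairs = re.findall(r"([A-D]):\s*([^:]+?)(?=\s+[A-D]:|$)", line)
    return {letter: name.strip() for letter, name in pairs}

options = parse_options(
    "... Available Datasets: A: SNIPS B: ImageNet C: HVU D: FaceForensics")
print(options)        # {'A': 'SNIPS', 'B': 'ImageNet', 'C': 'HVU', 'D': 'FaceForensics'}
print(options["C"])   # HVU
```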
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.9406 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\n1 . digital photography is a fundamental art in the day to day practice of a \n dermatologist .\nthe main issue in most of the clinical photographs is the lack of good , reproducible and consistent digital images .\ndigital photography is a fundamental art in the day to day practice of a dermatologist .\nbut the main issue in most of the clinical photographs is the lack of good , reproducible and consistent digital images .\nhence we designed an instrument to address all these deficits and provide quality digital images .\nthe lack of consistency of serial photographs is due to variation in : \n patient positioning , camera angles , lighting anddistance between patient and camera . \n \nthe instrument which we designed in our workshop is made up of stainless steel and has two parts viz . a fixed and a movable part [ figure 1 ] .\nthe patient comfortably sits on a chair and rests the chin over a cup shaped cushion ( 41 inches from the ground ) which is the fixed part [ figure 2 ] .\nthe movable part is where the digital camera or ideally a slr camera is mounted over a firm stainless steel plate .\nthe camera is kept in position by means of screw ( diameter inch , length inch ) which fits into the socket provided on the undersurface of it .\nphotographs are taken in the same place with the patient 's face against same dark background .\nthe movable part rotates around a central axis so that photographs can be taken at 0 , 45 and 90 [ figure 3 ] with angles marked in the axis .\nthe distance between the patient 's face and camera ( 1 feet ) is constant at all these angles .\nthe distance between the camera and face constant in all angles 90 right , 45 right , 0 front , 45 left , 90 left profile images the device which we designed for perfectly aligned photographs was similar to the omnia imaging station which has a camera column to maintain correct distance through all camera positions , dual intelli flash to provide balanced lighting and match pose image overlay for accurate patient positioning .\ncomparison of features between omnia imaging station and our instrument small inconsistencies with patient positioning can be corrected using image analysis software photoshop adobe cs5 .\neach photograph is aligned to the previous photograph by manually selecting points of obvious anatomic landmarks [ figure 4 ] .\nthis increases the ability of the observers to confidently track the same lesions in the same place in serial photographs [ figure 5 ] .\n( a ) before treatment photograph : yellow dots are the anatomic landmarks chosen for alignment to match those in follow - up photograph .\n( c ) without digital alignment : photographs of before treatment and follow - up on overlaying shows mismatching of anatomic landmarks .\n( d ) with digital alignment : yellow and red dots are superimposed to exactly match both photographs separation of images after alignment the photo - editing software allows perfect alignment of photographs taken serially without altering color or contours of the lesions . by monitoring the same area over time sequentially , interpretation of changes is made easy [ figure 6 ] .\nhad used serial digital photographs and similar spatial alignment software , the software picture window pro 4.0 ( digital light and color , belmont , ma , usa ) to examine the evolution of facial acne lesions . 
serial tracking of acne scars before each laser sitting -images are comparable and consistent the applications of digital photography in our field include : \n tracking serial photographs after cosmetic procedures like peels and lasers , storing patient data , medico - legal proof , discussion in forums andmedical publications . \n \ntracking serial photographs after cosmetic procedures like peels and lasers , storing patient data , discussion in forums and medical publications .\nthe digital photography forms the basis of store - and - forward system of teledermatology .\nthe medical images are captured in the referring center and subsequently forwarded to the specialty centre .\na proper digital photograph highlighting the representative lesions is essential for a dermatologist to make a reliable diagnosis .\nit can also be used in the field of dentistry to assess the improvement of facial profile after wearing orthodontic braces .\nthe lack of consistency of serial photographs is due to variation in : \n patient positioning , camera angles , lighting anddistance between patient and camera . \n \nthe instrument which we designed in our workshop is made up of stainless steel and has two parts viz . a fixed and a movable part [ figure 1 ] .\nthe patient comfortably sits on a chair and rests the chin over a cup shaped cushion ( 41 inches from the ground ) which is the fixed part [ figure 2 ] .\nthe movable part is where the digital camera or ideally a slr camera is mounted over a firm stainless steel plate .\nthe camera is kept in position by means of screw ( diameter inch , length inch ) which fits into the socket provided on the undersurface of it .\nphotographs are taken in the same place with the patient 's face against same dark background .\nthe movable part rotates around a central axis so that photographs can be taken at 0 , 45 and 90 [ figure 3 ] with angles marked in the axis .\nthe distance between the patient 's face and camera ( 1 feet ) is constant at all these angles .\nthe distance between the camera and face constant in all angles 90 right , 45 right , 0 front , 45 left , 90 left profile images the device which we designed for perfectly aligned photographs was similar to the omnia imaging station which has a camera column to maintain correct distance through all camera positions , dual intelli flash to provide balanced lighting and match pose image overlay for accurate patient positioning .\nsmall inconsistencies with patient positioning can be corrected using image analysis software photoshop adobe cs5 .\neach photograph is aligned to the previous photograph by manually selecting points of obvious anatomic landmarks [ figure 4 ] .\nthis increases the ability of the observers to confidently track the same lesions in the same place in serial photographs [ figure 5 ] .\n( a ) before treatment photograph : yellow dots are the anatomic landmarks chosen for alignment to match those in follow - up photograph .\n( c ) without digital alignment : photographs of before treatment and follow - up on overlaying shows mismatching of anatomic landmarks .\n( d ) with digital alignment : yellow and red dots are superimposed to exactly match both photographs separation of images after alignment the photo - editing software allows perfect alignment of photographs taken serially without altering color or contours of the lesions . 
by monitoring the same area over time sequentially , interpretation of changes is made easy [ figure 6 ] .\nhad used serial digital photographs and similar spatial alignment software , the software picture window pro 4.0 ( digital light and color , belmont , ma , usa ) to examine the evolution of facial acne lesions .\nthe applications of digital photography in our field include : \n tracking serial photographs after cosmetic procedures like peels and lasers , storing patient data , medico - legal proof , discussion in forums andmedical publications . \n tracking serial photographs after cosmetic procedures like peels and lasers , storing patient data , discussion in forums and medical publications .\nthe digital photography forms the basis of store - and - forward system of teledermatology .\nthe medical images are captured in the referring center and subsequently forwarded to the specialty centre .\na proper digital photograph highlighting the representative lesions is essential for a dermatologist to make a reliable diagnosis .\nit can also be used in the field of dentistry to assess the improvement of facial profile after wearing orthodontic braces .\ndigital photography is transforming the specialty of dermatology and standardization of photographs is of prime importance in this era of digital technology . the major limiting factor in a developing country like india is the cost of such sophisticated instruments to provide images of high quality .\nhence there are always alternatives to get the best out of the resources we have and this device is one kind of it\n. digital photography along with image analysis will open avenues onto a more detailed view of the patient and their response to treatment .\na low - cost instrument has been devised to provide consistent digital facial images that is comparable to the higher end alternatives .",
"role": "user"
},
{
"content": "photography has proven to be a valuable tool in the field of dermatology . \n the major reason for poor photographs is the inability to produce comparable images in the subsequent follow ups . \n combining digital photography with image processing software analysis brings consistency in tracking serial images . \n digital photographs were taken with the aid of an instrument which we designed in our workshop to ensure that photographs were taken with identical patient positioning , camera angles and distance . \n it is of paramount importance in aesthetic dermatology to appreciate even subtle changes after each treatment session which can be achieved by taking consistent digital images .",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.137 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: A few pets - including cats and dogs - also have been infected with the virus that causes COVID-19. This happened mostly after the animals were in close contact with people infected with the COVID-19 virus.\n\nResearch Abstract:\nLittle information on the SARS-CoV-2 virus in animals is available to date.\nWhereas no one husbandry animal case has been reported to date, which would have significant implications in food safety, companion animals play a role in COVID-19 epidemiology that opens up new questions.\nThere is evidence that SARS-CoV-2 can infect felines, dogs and minks, and there is evidence of human-to-animal infection.\nLikewise, the S protein nucleotide sequence of the SARS-CoV-2 virus isolated in domestic animals and humans is identical, and the replication of the SARS-CoV-2 in cats is efficient.\nBesides, the epidemiological evidence for this current pandemic indicates that the spillover to humans was associated with close contact between man and exotic animals, very probably in Chinese wet markets, thus there is a growing general consensus that the exotic animal markets, should be strictly regulated.\nThe examination of these findings and the particular role of animals in COVID-19 should be carefully analyzed in order to establish preparation and containment measures.\nAnimal management and epidemiological surveillance must be also considered for COVID-19 control, and it can open up new questions regarding COVID-19 epidemiology and the role that animals play in it.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"There is evidence that SARS-CoV-2 can infect felines, dogs and minks, and there is evidence of human-to-animal infection.\", \"Likewise, the S protein nucleotide sequence of the SARS-CoV-2 virus isolated in domestic animals and humans is identical, and the replication of the SARS-CoV-2 in cats is efficient.\"]}",
"role": "assistant"
}
] |
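In the healthver row above, the prompt asks for keys named "decision" and "justification" with verdicts spelled "SUPPORTS"/"CONTRADICTS"/"INSUFFICIENT", while the gold answer actually uses "verdict"/"evidence" and "SUPPORT". The sketch below follows the gold answer's convention and checks the one structural invariant the prompt states (empty evidence exactly when information is insufficient); the "NEI" spelling of that third label is an assumption carried over from similar fact-checking tasks:

```python
import json

LABELS = {"SUPPORT", "CONTRADICT", "NEI"}  # "NEI" spelling is assumed, see note above

def check_prediction(raw: str) -> dict:
    """Validate a fact-checking prediction against the gold-answer schema."""
    data = json.loads(raw)
    verdict, evidence = data["verdict"], data["evidence"]
    if verdict not in LABELS:
        raise ValueError(f"unknown verdict {verdict!r}")
    # Invariant from the prompt: evidence is empty iff the verdict is the
    # insufficient-information label.
    if (verdict == "NEI") != (len(evidence) == 0):
        raise ValueError("evidence list must be empty iff verdict is NEI")
    return data

gold = ('{"verdict": "SUPPORT", "evidence": ["There is evidence that '
        'SARS-CoV-2 can infect felines, dogs and minks, and there is '
        'evidence of human-to-animal infection."]}')
print(check_prediction(gold)["verdict"])
```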
science.data_reco_mcq_sc | science.data_reco_mcq_sc.13426 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: The goal of natural language generation is to produce human-like language output. In this paper,\n\nKeywords: natural language generation text\n\nDataset Information:\nConceptual Captions: Automatic image captioning is the task of producing a natural-language utterance (usually a sentence) that correctly reflects the visual content of an image. Up to this point, the resource most used for this task was the MS-COCO dataset, containing around 120,000 images and 5-way image-caption annotations (produced by paid annotators).\nFarsTail: Natural Language Inference (NLI), also called Textual Entailment, is an important task in NLP with the goal of determining the inference relationship between a premise p and a hypothesis h. It is a three-class problem, where each pair (p, h) is assigned to one of these classes: \"ENTAILMENT\" if the hypothesis can be inferred from the premise, \"CONTRADICTION\" if the hypothesis contradicts the premise, and \"NEUTRAL\" if none of the above holds. There are large datasets such as SNLI, MNLI, and SciTail for NLI in English, but there are few datasets for poor-data languages like Persian.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nGEM: Generation, Evaluation, and Metrics (GEM) is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics. GEM aims to: measure NLG progress across 13 datasets spanning many NLG tasks and languages.\n\nAvailable Datasets: A: Conceptual Captions B: FarsTail C: COCO D: GEM",
"role": "user"
},
{
"content": "COCO",
"role": "assistant"
}
] |
science.drug_combo_extraction_re | science.drug_combo_extraction_re.1227 | [
{
"content": "Analyze the given paragraph from a biomedical study. Focus on identifying drug pairings within a specified sentence. Your goal is to determine the nature of these pairings.\n\nPresent your findings as a JSON array. Each element should be an array containing the drug names and their interaction type. Use the exact names from the sentence.\n\nOnly consider the specified sentence for your analysis. The paragraph is for context only.\n\nReturn solely the JSON array. If no pairings are found, return an empty array.\n\nContext Paragraph: Venetoclax Plus Rituximab in Relapsed Chronic Lymphocytic Leukemia: 4-Year Results and Evaluation of Impact of Genomic Complexity and Gene Mutations From the MURANO Phase III Study. In previous analyses of the MURANO study , fixed-duration venetoclax plus rituximab ( VenR ) resulted in improved progression-free survival ( PFS ) compared with bendamustine plus rituximab ( BR ) in patients with relapsed or refractory chronic lymphocytic leukemia ( CLL ) . At the 4-year follow-up, we report long-term outcomes, response to subsequent therapies, and the predictive value of molecular and genetic characteristics. ### Patients And Methods Patients with CLL were randomly assigned to 2 years of venetoclax (VenR for the first six cycles) or six cycles of BR. PFS, overall survival (OS), peripheral-blood minimal residual disease (MRD) status, genomic complexity (GC), and gene mutations were assessed. ### results Of 389 patients, 194 were assigned to VenR and 195 to BR. Four-year PFS and OS rates were higher with VenR than BR, at 57.3% and 4.6% (hazard ratio [HR], 0.19; 95% CI, 0.14 to 0.25), and 85.3% and 66.8% (HR, 0.41; 95% CI, 0.26 to 0.65), respectively. Undetectable MRD (uMRD) at end of combination therapy (EOCT) was associated with superior PFS compared with low MRD positivity (HR, 0.50) and high MRD positivity (HR, 0.15). Patients in the VenR arm who received ibrutinib as their first therapy after progression (n = 12) had a reported response rate of 100% (10 of 10 evaluable patients); patients subsequently treated with a venetoclax-based regimen (n = 14) had a reported response rate of 55% (six of 11 evaluable patients). With VenR, the uMRD rate at end of treatment (EOT) was lower in patients with GC than in those without GC ( ### conclusion Efficacy benefits with fixed-duration VenR are sustained and particularly durable in patients who achieve uMRD. Salvage therapy with ibrutinib after VenR achieved high response rates. Genetic mutations and GC affected MRD rates and PFS.\n\nTarget Sentence: In previous analyses of the MURANO study , fixed-duration venetoclax plus rituximab ( VenR ) resulted in improved progression-free survival ( PFS ) compared with bendamustine plus rituximab ( BR ) in patients with relapsed or refractory chronic lymphocytic leukemia ( CLL ) .",
"role": "user"
},
{
"content": "[[\"venetoclax\", \"rituximab\", \"POS\"], [\"bendamustine\", \"rituximab\", \"COMB\"]]",
"role": "assistant"
}
] |
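The drug_combo row above encodes each combination as an array of drug names closed by an interaction category. A minimal parser for that shape follows; "POS" and "COMB" appear in the gold answer, while "NEG" is assumed from the task's usual three-way scheme:

```python
import json

ALLOWED = {"POS", "NEG", "COMB"}  # NEG assumed; only POS and COMB attested above

def parse_combinations(raw: str) -> list:
    """Parse [drug_1, ..., drug_n, category] triples like the gold answer."""
    combos = json.loads(raw)
    for combo in combos:
        *drugs, category = combo  # last element is the interaction category
        if category not in ALLOWED:
            raise ValueError(f"unknown category {category!r} in {combo}")
        if len(drugs) < 2:
            raise ValueError(f"need at least two drugs, got {combo}")
    return combos

gold = '[["venetoclax", "rituximab", "POS"], ["bendamustine", "rituximab", "COMB"]]'
print(parse_combinations(gold))
```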
science.chemprot_ner | science.chemprot_ner.635 | [
{
"content": "Analyze the following text to identify specific entities. Focus on the categories provided: [\"Chemical\", \"Protein\"].\n\nDeliver your findings as a JSON structure: {\"Chemical\": [\"ExampleChemical\", ...], \"Protein\": [\"ExampleProtein\", ...]}. Each key represents an entity type, and the associated list contains the entities found. \n\nEnsure the response is strictly the JSON format without any extra text.\n\nSource:\n\nEffect of galantamine on acetylcholinesterase and butyrylcholinesterase activities in the presence of L-carnitine in rat selected brain and peripheral tissues.\nOBJECTIVES: The alkaloid galantamine (GAL), which exhibits a combined anticholinesterase and direct parasympathomimetic mechanism of action, is employed in conjunction with therapeutic interventions in the stimulation of central cholinergic transfer in cognitive diseases. We attempted to achieve pharmacologically-induced enhancement of the parasympathomimetic activity of GAL in the key areas of rat brain, using an interactive combination of the alkaloid with the transmembrane enhancer L-carnitine (CAR). METHODS: We investigated activities of acetylcholinesterase (AChE) in brain areas (frontal cortex, basal ganglia, septum and hippocampus) and the hypophysis, and that of butyrylcholinesterase (BuChE) in plasma and liver. RESULTS: Following administration of the highest of the GAL doses used (2.5; 5; 10 mg/kg i.m.), AChE activity decreased mainly in the frontal cortex, hippocampus and hypophysis. In the interaction of GAL and CAR, AChE inhibition was stronger but without any statistical significance. The peripheral inhibition of BuChE was found to be dose-dependent. Premedication by CAR led to a slight change in the values of the activities monitored. CONCLUSIONS: CAR in terms of positive modulation of GAL targeting to the central nervous system had no statistically significant effect.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"CAR\", \"L-carnitine\", \"galantamine\", \"GAL\"], \"Protein\": [\"BuChE\", \"acetylcholinesterase\", \"AChE\", \"CAR\", \"butyrylcholinesterase\", \"anticholinesterase\"]}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.363 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nTworecentpapersprovidenewevidencerelevanttotheroleofthebreastcancersusceptibilitygeneBRCA2inDNArepair.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.1936 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Natural CMT2 Variation Is Associated With Genome-Wide Methylation Changes and Temperature Seasonality\nContent:\nAbstract:\nAs Arabidopsis thaliana has colonized a wide range of habitats across the world it is an attractive model for studying the genetic mechanisms underlying environmental adaptation. Here, we used public data from two collections of A. thaliana accessions to associate genetic variability at individual loci with differences in climates at the sampling sites. We use a novel method to screen the genome for plastic alleles that tolerate a broader climate range than the major allele. This approach reduces confounding with population structure and increases power compared to standard genome-wide association methods. Sixteen novel loci were found, including an association between Chromomethylase 2( CMT2) and temperature seasonality where the genome-wide CHH methylation was different for the group of accessions carrying the plastic allele. Cmt2 mutants were shown to be more tolerant to heat-stress, suggesting genetic regulation of epigenetic modifications as a likely mechanism underlying natural adaptation to variable temperatures, potentially through differential allelic plasticity to temperature-stress.\nIntroduction:\nArabidopsis thaliana has colonized a wide range of habitats across the world and it is therefore an attractive model for studying the genetic mechanisms underlying environmental adaptation[1]. Several large collections of A. thaliana accessions have either been whole-genome re-sequenced or high-density SNP genotyped[1]–[7]. The included accessions have adapted to a wide range of different climatic conditions and therefore loci involved in climate adaptation will display genotype by climate-at-sampling-site correlations in these populations. Genome-wide association or selective-sweep analyses can therefore potentially identify signals of natural selection involved in environmental adaptation, if those can be disentangled from the effects of other population genetic forces acting to change the allele frequencies. Selective-sweep studies are inherently sensitive to population-structure and, if present, the false-positive rates will be high as the available statistical methods are unable to handle this situation properly. Further experimental validation of inferred sweeps( e. g.[1],[8]) is hence necessary to suggest them as adaptive. In GWAS, kinship correction is now a standard approach to account for population structure that properly controls the false discovery rate. Unfortunately, correcting for genomic kinship often decreases the power to detect individual adaptive loci, which is likely the reason that no genome-wide significant associations to climate conditions were found in earlier GWAS analyses[1],[8]. Nevertheless, a number of candidate adaptive loci could despite this be identified using extensive experimental validation[1],[2],[8], showing how valuable these populations are as a resource for finding the genomic footprint of climate adaptation. Genome-wide association( GWA) datasets based on natural collections of A. thaliana accessions, such as the RegMap collection, are often genetically stratified. This is primarily due to the close relationships between accessions sampled at nearby locations. 
Furthermore, as the climate measurements used as phenotypes for the accessions are values representative for the sampling locations of the individual accessions, these measurements will be confounded with the general genetic relationship[9]. Unless properly controlled for, this confounding might lead to excessive false-positive signals in the association analysis; this as the differences in allele-frequencies between loci in locations that differ in climate, and at the same time are geographically distant, will create an association between the genotype and the trait. However, this association could also be due to other forces than selection. In traditional GWA analyses, mixed-model based approaches are commonly used to control for population-stratification. The downside of this approach is that it, in practice, will remove many true genetic signals coming from local adaptation due to the inherent confounding between local genotype and adaptive phenotype. Instead, the primary signals from such analyses will be due to effects of alleles that exist in, and have similar effects across, the entire studied population. In general, studies into the contributions of genetic variance-heterogeneity to the phenotypic variability in complex traits is a novel and useful approach with great potential[10]. Here, we have developed and used a new approach that combines a linear mixed model and a variance-heterogeneity test, which addresses these initial concerns and shown that it is possible to infer statistically robust results of genetically regulated phenotypic variability in GWA data from natural populations. This study describes the results from a re-analysis of data from the RegMap collection to find loci contributing to climate adaptation through an alternative mechanism: genetic control of plasticity. Such loci are unlikely to be detected with standard GWAS or selective-sweep analyses as they have a different genomic signature of selection and distribution across climate envelopes. The reason for this difference is that plastic alleles are less likely to be driven to fixation by directional selection, but rather that multiple alleles remain in the population under extended periods of time by balancing selection[11]. To facilitate the detection of such loci, we extend and utilize an approach[12],[13] that instead of mapping loci by differences in allele-frequencies between local environments, which is highly confounded by population structure, infer adaptive loci using a heterogeneity-of-variance test. This identifies loci where the minor allele is associated with a broader range of climate conditions than the major allele[12]. As such widely distributed alleles will be present across the entire population, they are less confounded with population structure and detectable in our GWAS analysis that utilizes kinship correction to account for population stratification.\nDiscussion:\nA major challenge in attempts to identify individual loci involved in climate adaptation is the strong confounding between geographic location, climate and population structure in the natural A. thaliana population. Earlier genome-wide association analyses in large collections of natural accessions experienced a lack of statistical power when correcting for population-structure[1],[8]. We used an alternative GWAS approach[12] to test for a variance-heterogeneity, instead of a mean difference, between genotypes. This analysis identifies loci where the minor allele is more plastic( i. e. 
exist across a broader climatic range) than the major allele. As it has low power to detect cases where the minor allele is associated with a lower variance( here with local environments), it will not map private alleles in local environments in a genome-wide analysis[12],[30]. In contrast, a standard GWAS map loci where the allele-frequencies follow the climatic cline. Although plastic alleles might be less frequent in the genome, they are easier to detect in this data due to their lower confounding with population-structure. This overall increase in power is also apparent when comparing the signals that reach a lower, sub-GWAS significance level( S40 Figure–S44 Figure). Several novel genome-wide significant associations were found to the tested climate variables, and a locus containing VEL1 was associated to both day length and relative humidity in the spring. A thaliana is a facultative photoperiodic flowering plant and hence non-inductive photoperiods will delay, but not abolish, flowering. A genetic control of this phenotypic plasticity is thus potentially an adaptive mechanism. VEL1 regulates the epigenetic silencing of genes in the FLC-pathway in response to vernalization[19] and photoperiod length[20] resulting in an acceleration of flowering under non-inductive photoperiods. Our results suggest that genetically plastic regulation of flowering, via the high-variance VEL1 allele, might be beneficial under short-day conditions where both accelerated and delayed flowering is allowed. In long-daytime areas, accelerated flowering is potentially detrimental hence the wild-type allele has the highest adaptive value. It can be speculated whether this is connected to the fact that day-length follows a latitudinal cline, where early flowering might be detrimental in northern areas where accelerated flowering, when the day-length is short, could lead to excessive exposure to cold temperatures in the early spring and hence a lower fitness. A particularly interesting finding in our vGWAS was the strong association between the CMT2-locus and temperature seasonality. Here the allele associated with higher temperature seasonality( i. e the plastic allele) had an altered genome-wide CHH methylation pattern where some accessions displayed a TE-body CHH methylation pattern similar to that of cmt2 mutant plants. Interestingly, a recent study by Dubin et al.[31] in a collection of Swedish A. thaliana accessions report that CHH methylation is temperature sensitive, and that the CMT2-locus is a major trans-acting controller of the observed variation in genome-wide CHH-methylation between the accessions. These findings, together with our experimental work showing that cmt2 mutants were more tolerant to both mild and severe heat-stress, strongly implicate CMT2 as an adaptive locus and clearly illustrate the potential of our method as a useful approach to identify novel associations of functional importance. It is not clear via which mechanism CMT2-dependent CHH methylation might affect plant heat tolerance. Although our results show that the CMT2STOP allele is present across regions with both low and high temperature seasonality, it remains to be shown whether this is due to this allele being generally more adaptable across all environments, or whether the CMT2WT allele is beneficial in environments with stable temperature and the CMT2STOP in high temperature seasonality areas. 
Regardless, we consider it most likely that the effect will be mediated by TEs in the immediate neighborhood of protein-coding genes. Heterochromatic states at TEs can affect activity of nearby genes and thus potentially plant fitness[32]. Consistent with a repressive role of CMT2 on heat stress responses, CMT2 expression is reduced by several abiotic stresses including heat[33]. Because global depletion of methylation has been shown to enhance resistance to biotic stress[29], it is possible that DNA-methylation has a broader function in shaping stress responses than currently thought. Our results show that CMT2STOP accessions have more heterogeneous CHH methylation patterns than CMT2WT accessions. The CMT2STOP polymorphism is predicted to lead to a non-functional CMT2 protein, and hence a genome-wide CHH-methylation profile resembling that of a complete cmt2 mutant[24]. Although some of the accessions carrying the CMT2STOP allele displayed this pattern with a lower CHH-methylation inside TE-bodies, most of these accessions did not have any major loss of genome-wide CHH methylation. Such heterogeneity might indicate the presence of compensatory mechanisms and hence that the effects of altered CMT2 function could be dependent on the genetic-background. This is an interesting finding that deserves further investigation, although such work is beyond the scope of the current study. Our interpretation of the available results is that our findings reflect the genetic heterogeneity among the natural accessions studied. In light of the recent report by[25], who showed a role also of CMT3 in TE-body CHH methylation, it is not unlikely that the regulation of CHH methylation may result from the action and interaction of several genes. We identified several alleles associated with a broader range of climates across the native range of A. thaliana, suggesting that a genetically mediated plastic response might of important for climate adaptation. Using publicly available data from several earlier studies, we were able to show that an allele at the CMT2 locus displays an altered genome-wide CHH-methylation pattern was strongly associated with temperature seasonality. Using additional experiments, we also found that cmt2 mutant plants tolerated heat-stress better than wild-type plants. Together, these findings suggest this genetically determined epigenetic variability as a likely mechanism contributing to a plastic response to the environment that has been of adaptive advantage in natural environments.",
"role": "user"
},
{
"content": "A central problem when studying adaptation to a new environment is the interplay between genetic variation and phenotypic plasticity . Arabidopsis thaliana has colonized a wide range of habitats across the world and it is therefore an attractive model for studying the genetic mechanisms underlying environmental adaptation . Here , we study two collections of A . thaliana accessions from across Eurasia to identify loci associated with differences in climates at the sampling sites . A new genome-wide association analysis method was developed to detect adaptive loci where the alleles tolerate different climate ranges . Sixteen novel such loci were found including a strong association between Chromomethylase 2 ( CMT2 ) and temperature seasonality . The reference allele dominated in areas with less seasonal variability in temperature , and the alternative allele existed in both stable and variable regions . Our results thus link natural variation in CMT2 and epigenetic changes to temperature adaptation . We showed experimentally that plants with a defective CMT2 gene tolerate heat-stress better than plants with a functional gene . Together this strongly suggests a role for genetic regulation of epigenetic modifications in natural adaptation to temperature and illustrates the importance of re-analyses of existing data using new analytical methods to obtain deeper insights into the underlying biology from available data .",
"role": "assistant"
}
] |
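The paper summarised in the row above replaces the usual mean-difference GWAS test with a variance-heterogeneity test: at an adaptive locus, accessions carrying the plastic allele should span a wider range of the climate variable. A toy illustration with a plain Brown–Forsythe test follows; the study itself couples the variance test with a linear mixed model for kinship, which this sketch deliberately omits, and all numbers are synthetic:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
# Synthetic marker: same mean climate, but the minor (plastic) allele
# tolerates a broader range -> larger variance.
climate_major = rng.normal(0.0, 1.0, size=400)  # accessions with major allele
climate_minor = rng.normal(0.0, 2.5, size=60)   # accessions with minor allele

# center="median" turns Levene's test into the Brown-Forsythe variant.
stat, p = levene(climate_major, climate_minor, center="median")
print(f"Brown-Forsythe W = {stat:.2f}, p = {p:.2e}")
```

A plain mean-difference test on the same data would see nothing, which is the paper's argument for why such plastic loci escape standard GWAS.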
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.2714 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nEvaluation of the Interaction between Phosphohistidine Analogues and Phosphotyrosine Binding Domains\n\nContent Extracts:\n\n<p>Phosphorylation of proteins is well established as a dominant mechanism of signal transduction in eukaryotic cells.[1] Phosphorylation occurs in response to extracellular and intracellular signals, and the modified protein can then be sensed by sequence-specific modular binding domains.[2] These include SH2 and PTB domains, which bind phosphotyrosine,[3] and 14-3-3 and WW domains, which bind phosphothreonine and phosphoserine.[4] In contrast, it is still unknown how, or indeed whether, other phosphorylated amino acids such as phosphohistidine are sensed. The lability of this latter modification means that it has only been directly observed in a limited subset of eukaryotic proteins, including the cystic fibrosis transmembrane receptor (CFTR),[5] histone H4[6] and the potassium-activated calcium channel KCa3.1.[7] In the first and last cases, histidine phosphorylation directly activates the channel; however, in the case of histone H4 and many others, the function has not been determined. Although proteomic methods[8] are increasingly able to identify this modification in proteins, no general function for it has been identified.</p><p>We recently reported the synthesis of a triazolyl analogue, 3 a, of τ-phosphohistidine (1, Scheme 1), which is fully compatible with the Fmoc-protection strategy for solid-phase synthesis.[9] Independently, Kee et al. have demonstrated that, following Boc-peptide synthesis, it was possible to use analogue 2 as a hapten to identify antibodies capable of identifying phosphohistidine in a site-specific fashion.[10] More recently a truncated form of 2 has been used to generate a polyclonal antibody that is able to identify 1 in a variety of proteins.[11]</p><p>Structures of τ-phosphohistidine (1) and the corresponding triazole-phosphonate mimic, 2, which can be incorporated into peptides by using precursors 3 a and 3 b.</p><p>The study of phosphoprotein interactions in vitro requires a source of phosphorylated protein. There are well known techniques for isolating phosphate ester-containing proteins, but maintaining phosphorylated histidine residues during protein isolation is notoriously difficult,[12] even though histidine can be chemically phosphorylated in vitro.[13] An alternative approach is the incorporation of phosphoamino acids (or precursors) into recombinant proteins. This approach was used by Serwa et al. to generate an analogue of phosphotyrosine 4 by Staudinger phosphite reaction on a genetically incorporated p-azidophenylalanine to form phosphoramidate 5 (Scheme 2).[14] A further phosphotyrosine analogue, p-carboxymethylphenylalanine (6),[15] has also been incorporated by an amber-suppression approach; direct genetic incorporation of phosphoserine has also been achieved.[16]</p><p>Structures of phosphotyrosine 4 and analogues 5 and 6, which have previously been incorporated into recombinant proteins.</p><p>We hypothesised that we could generate a protein containing our phosphohistidine analogue by first producing a protein with an azidoalanine residue and subsequent reaction with a suitably protected alkynyl phosphonate 8. 
Unfortunately l-azidoalanine cannot be incorporated by amber suppression and is only poorly incorporated as a methionine surrogate.[17] However, the homologous compound, l-azidohomoalanine (7), is well accepted as a methionine surrogate[18] and reaction of this residue with alkynyl phosphonate 8 would ultimately produce homotriazole 10 (Scheme 3). Though this triazole could perhaps no longer mimic phosphohistidine, it occurred to us that 10 could present the phosphoryl group in a similar orientation to phosphotyrosine (due to the extra flexibility bestowed by the γ-methylene group) and thus constitute a novel analogue of phosphotyrosine 4. Though not our primary goal, investigation of this seemed prudent as, if a suitable mimic, 10 would be a more accessible phosphotyrosine analogue than the current alternatives.</p><p>Proposed route to generate a protein containing the putative phosphotyrosine analogue 10.</p><p>We envisaged producing suitable peptides by Fmoc-SPPS and investigating the interaction with proteins known to bind to phosphotyrosine. Our initial test system was the prototypical SH2 phosphotyrosine binding domain from Grb2.[19] This domain has been well characterised, binding with a high affinity (Kd=200 nm) to the Shc-derived phosphotyrosine-containing hexapeptide 11 (Scheme 4).[20] We generated the required Fmoc-/benzyl-protected amino acid 14 from Fmoc-protected homoazidoalanine 12 and dibenzyl phosphonylacetylene 13 through a [3+2] cycloaddition in an analogous manner to our previously reported synthesis of Fmoc-/benzyl-protected 3 a (Scheme 4).[9] Fmoc-SPPS was then used to generate peptides 11 and 15, which contain phosphotyrosine (pY) 4 and homotriazole (phTz) 10, respectively. We also investigated using an Fmoc-/ethyl-protected derivative of 14 but encountered deprotection difficulties (as reported previously[21]), so this approach was not pursued further.</p><p>Synthesis of Fmoc-/benzyl-protected homotriazole 14 from Fmoc-azidohomoalanine 12 and alkyne 13, as well as peptides 11, 15 and 16, which were used in binding experiments. For full experimental procedures see the Supporting information.</p><p>For our initial assays of ligand–protein interactions, we used full-length Grb2 that was purified as a glutathione S-transferase (GST) fusion. Titration of the pY control peptide 11 into the protein yielded a sigmoidal binding curve (see the Supporting Information) corresponding to an affinity of (196±21) nm, consistent with literature reports. The titration of phTz peptide 15 into the protein was inconclusive; although the interaction was clearly weak it was not possible to rule out an interaction. Protein aggregation prevented experiments at higher concentrations, and attempts to cleave the GST-tag enzymatically were unsuccessful. To overcome this hurdle, the SH2 domain was subcloned into pET28a to obtain an N-terminally His-tagged isolated SH2 domain. The protein overexpressed well in the insoluble fraction, which was solubilised in 8 m urea and purified by immobilised metal-affinity chromatography. On-column refolding then yielded pure soluble protein, but size-exclusion analysis of this protein revealed two distinct non-interconverting species with identical appearance when analysed by SDS-PAGE (Figure S4 in the Supporting Information). ITC assay of peptide 11 binding to the two protein fractions revealed that the larger (earlier-eluting) species had a significantly reduced affinity (4.8±1.2) μm versus (344±42) nm for the smaller species (Figure S8). 
These data suggested that the early peak represents a domain-swapped dimer (as previously observed by Benfield et al.)[22] and so the later peak, corresponding to monomeric His6-(Grb2-SH2) was used in all subsequent assays, this allowed experiments to be conducted at significantly higher (more than tenfold) protein concentrations.</p><p>The interaction of phTz peptide 15 with Grb2-SH2 was analysed by ITC. Initial experiments under our standard conditions (final ratio of ligand to protein 2:1) were inconclusive, thus suggesting a possible poorly defined weak binding (not shown). The experiment was therefore repeated under low c-value conditions[23] with a final ratio of ligand to protein of 8:1 (Figure 1 B). This titration was indistinguishable from a titration of the peptide into buffer alone, conclusively confirming no binding. This was somewhat disappointing and surprising given that recent work by Hofmann et al. revealed that the Src kinase-SH2 domain can bind to phosphoarginine.[24] However, this led us to consider that the Grb2-SH2 domain might bind to other proteinogenic phosphoamino acids. To determine if τ-phosphohistidine 1 would bind, we synthesised a further peptide, 16, that contained our phosphotriazole (pTz) analogue 2. Again, an initial ITC experiment under standard conditions was inconclusive (data not shown), but titration to a final ligand to protein ratio of 8:1 (Figure 1 C) yielded a clear binding interaction. This was readily distinguishable from the buffer dilution and corresponded to 1:1 binding with Kd=(719±28) μm (Figure 1). It has been shown that a 20-fold change in affinity can be attributed to a change in protonation state.[25] The relatively low affinity of the peptide 16:Grb2-SH2 interaction observed in our study might be due to a difference in protonation states between the phosphate of 4 and the phosphonate of 2. To investigate this, we conducted NMR titrations (see the Supporting Information) that revealed the pKa of 4 to be 5.8 and 2 to be 5.95. Hence, under the conditions of our experiments (pH 7.4) both phosphoryl groups should be di-anionic, so it is unlikely that the difference in affinity is due to differing protonation states.</p><p>Isothermal titration calorimetry of peptides A) 11, B) 15 and C) 16 into the Grb2 SH2 domain reveals equilibrium binding constants of (385±41) nm for 11 and (719±28) μm for 16. D) A competitive fluorescence polarisation assay indicates an IC50 of (442±80) nm for pTyr peptide 11 (▪) and (363±15) μm for pTz peptide 16 (□). See the Supporting Information for control dilution experiments and full fitting of polarisation data.</p><p>To investigate the binding of other phosphoamino acids and confirm that peptides 11 and 16 were binding at the same site, we developed a competition fluorescence anisotropy assay. Titration of a phosphotyrosine-containing peptide with an N-terminal fluorescein, 17, with purified His6-(Grb2-SH2) (see Figure S11) yielded a binding constant of (283±18) nm; this is consistent with the Kd previously measured by ITC. We then conducted competition experiments on all three peptides used for ITC together with peptides 18–21 (AcHN-Ser-Xaa-Val-Asn-Val-Gln-NH2 Xaa=Tyr, His, pSer or pThr, see the Supporting Information). Binding (displacement of 17) was observed only for peptides 11 and 16, which contain pTyr and pTz, respectively (Figure 1 D). 
A slightly lower than expected IC50 of 360 μm was observed for the pTz peptide; this suggests that the ITC analysis might have underestimated the affinity of this peptide. No evidence for binding was observed for the other peptides (Figure S11); this suggested that the Grb2-SH2 domain will not simply bind to a phosphoryl group presented in the correct peptide context, or to an aromatic ring.</p><p>We have shown that a proven τ-phosphohistidine mimic is capable of sequence-specific binding to a canonical phosphotyrosine-binding domain. Phosphotyrosine antibodies have previously been shown to bind to phosphohistidine,[8b] but our findings contradict previous studies of phosphohistidine-containing peptides binding to SH2 domains (however the conditions used in these studies precluded the identification of low-affinity binding such as we have observed and used a different SH2 domain, from phospholipase C-γ1).[26] Phosphotyrosine binding protein modules are a common feature of human signalling proteins and, in many cases, the sequence specificity of these domains has not been fully elucidated. The affinities of known peptides for their target proteins also vary hugely, ranging from low nanomolar to high micromolar.[27] Low affinity therefore does not mean low significance; it is possible that a subset of assigned phosphotyrosine binding modules could additionally/alternatively interact with phosphohistidine-containing proteins. Further study to establish the generality of this observation is on-going.</p>\n\nExperimental Section\n<p>Experimental details are given in the Supporting Information.</p>",
"role": "user"
},
{
"content": "We have investigated the interaction of peptides containing phosphohistidine analogues and their homologues with the prototypical phosphotyrosine binding SH2 domain from the eukaryotic cell signalling protein Grb2 by using a combination of isothermal titration calorimetry and a fluorescence anisotropy competition assay. These investigations demonstrated that the triazole class of phosphohistidine analogues are capable of binding too, suggesting that phosphohistidine could potentially be detected by this class of proteins in vivo.",
"role": "assistant"
}
] |
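The row above reports dissociation constants of (385±41) nM for the phosphotyrosine peptide 11 and (719±28) μM for the phosphotriazole peptide 16. For readers who prefer free energies, the standard relation ΔG° = RT ln Kd puts those affinities on a common scale; 298 K is assumed below, as the extract does not state the ITC temperature:

```python
import math

R = 8.314      # J / (mol*K)
T = 298.15     # K; assumed, not stated in the extract

def binding_free_energy_kj(kd_molar: float) -> float:
    """Standard binding free energy from a dissociation constant (dG = RT ln Kd)."""
    return R * T * math.log(kd_molar) / 1000.0

for name, kd in [("pTyr peptide 11", 385e-9), ("pTz peptide 16", 719e-6)]:
    print(f"{name}: dG ~ {binding_free_energy_kj(kd):.1f} kJ/mol")
```

The roughly -37 vs. -18 kJ/mol gap makes concrete how much weaker, yet still genuine, the phosphohistidine-mimic interaction is.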
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.1469 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nApproximately 56 million school-aged children (aged 5-17 years) resumed education in the United States in fall 2020.* Analysis of demographic characteristics, underlying conditions, clinical outcomes, and trends in weekly coronavirus disease 2019 (COVID-19) incidence during March 1-September 19, 2020 among 277,285 laboratory-confirmed cases in school-aged children in the United States might inform decisions about in-person learning and the timing and scaling of community mitigation measures. During May-September 2020, average weekly incidence (cases per 100,000 children) among adolescents aged 12-17 years (37.4) was approximately twice that of children aged 5-11 years (19.0). In addition, among school-aged children, COVID-19 indicators peaked during July 2020: weekly percentage of positive SARS-CoV-2 test results increased from 10% on May 31 to 14% on July 5; SARS-CoV-2 test volume increased from 100,081 tests on May 31 to 322,227 on July 12, and COVID-19 incidence increased from 13.8 per 100,000 on May 31 to 37.9 on July 19. During July and August, test volume and incidence decreased then plateaued; incidence decreased further during early September and might be increasing. Percentage of positive test results decreased during August and plateaued during September. Underlying conditions were more common among school-aged children with severe outcomes related to COVID-19: among school-aged children who were hospitalized, admitted to an intensive care unit (ICU), or who died, 16%, 27%, and 28%, respectively, had at least one underlying medical condition. Schools and communities can implement multiple, concurrent mitigation strategies and tailor communications to promote mitigation strategies to prevent COVID-19 spread. These results can provide a baseline for monitoring trends and evaluating mitigation strategies.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.4508 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nnatural killer / t - cell ( nk / t - cell ) lymphoma is a rare fast - growing lymphoma arising from the lining of the nose or upper airway .\na 26-year - old young female presented with nasal obstruction , bloody discharge and swelling of the nasal skin and right lower eyelid . biopsy from the soft tissue seen in the nasal cavity revealed nk / t - cell lymphoma ( nasal type ) .\npatient was referred to a whole body 18 fluoride ( 18f)-fluoro - deoxy glucose ( fdg ) positron emission tomography - computed tomography ( pet - ct ) .\n18f - fdg pet - ct performed showed intense fdg uptake in heterogeneously enhancing soft tissue in the right nasal cavity with extension into the skin of the cheek and right lower eyelid [ figure 1 : a - maximum - intensity projection , b - axial ct , c - fused pet - ct ] .\nintense fdg uptake was also noted in enhancing soft tissue bilateral adrenal masses [ d - axial ct ; e - fused pet - ct ] indicating disease involvement . a diagnosis of bilateral adrenal involvement by nk / t - cell lymphoma was made .\n18 fluoride - fluoro - deoxy glucose positron emission tomography - computed tomography ( 18f - fdg pet - ct ) performed showed intense fdg uptake in heterogeneously enhancing soft tissue in the right nasal cavity with extension into the skin of the cheek and right lower eye lid ( a - maximum - intensity projection , b - axial ct , c - fused pet - ct ) .\nintense fdg uptake was also noted in enhancing soft tissue bilateral adrenal masses ( d - axial ct ; e - fused pet - ct ) indicating disease involvement\nmorphologically , mature nk cells are large granular lymphoid cells , they are negative for surface cd3 but characteristically express cytoplasmic cd3 epsilon ( ) , cd56 , and cytotoxic molecules .\nthere are two main categories of nk cell - derived neoplasms , namely , aggressive nk cell leukemia and extranodal nk / t - cell lymphoma , nasal type .\nnk / t - cell lymphoma of the nasal type accounts for 710% of all non - hodgkin lymphomas in asia and latin america , but only 1% of that amongst caucasians .\nthe leukemic counterpart of nk / t - cell lymphoma , aggressive nk - cell leukemia is an uncommon disease .\nclinically , it is useful to classify nk / t - cell malignancies into two categories , namely , nasal nk - cell lymphomas , and nonnasal or extranasal nk / t - cell lymphomas , depending on the site of the lesions .\nnasal nk / t - cell lymphomas occur in the nose and the upper aerodigestive tract .\nbone marrow involvement occurs in < 10% of patients , and distant metastasis is unusual .\nextranasal or nonnasal nk / t - cell lymphomas occur outside the typical nasal region , and the common primary sites involved include the skin , gastrointestinal tract , salivary glands , spleen , and testis .\nunusual sites of involvement such as the muscle , the adrenal gland , and the female genital tract ( ovaries and uterus ) have been reported .\noccult nasal involvement should be ruled out with panendoscopy and multiple random biopsies in patients presenting with extranasal nk / t - cell lymphoma .\ndisseminated nodal nhl can involve the adrenals in up to 25% of the cases . 
however , primary adrenal lymphoma is extremely rare with majority of the cases being b cell type .\nnatural killer - cell lymphomas are fdg - avid and pet - ct offers more accurate definition of the extent of involvement by distinguishing lymphoma involvement from inflammatory masses .\n18-fdg pet - ct is increasingly used both at diagnosis and for monitoring of response to therapy .\npet changes the staging in up to 21% of the patients and alters the planned management strategy in 44% of the patients .\npet parameters such as total lesion glycolysis , in addition to the international prognostic index , is useful in predicting prognosis of these patients . for the stage ,\ni / ii nasal nk / t - cell lymphomas , radiotherapy is an important modality of treatment .\nrecent evidence indicates radiation therapy alone is insufficient , and chemotherapy is also needed in these patients .\nnonnasal nk cell lymphomas tend to have advanced or disseminated disease at presentation , and the role of radiotherapy is often limited . for the stage iii / iv nasal nk / t - cell lymphomas , extranasal nk / t - cell lymphomas and aggressive nk - cell lymphoma / leukemia , chemotherapy is the primary treatment .\nl - asparaginase - containing regimens have shown promise as the tumor cells lack l - asparagine synthetase and are susceptible to l - asparaginase , which depletes l - asparagines in nk lymphoma cells .",
"role": "user"
},
{
"content": "natural killer / t - cell ( nk / t - cell ) lymphoma is a rare condition , which presents as necrotic , granulomatous lesions involving the nose and the upper respiratory tract . \n the condition usually has an aggressive clinical course . \n the predominant subtype of nk / t - cell lymphoma noted in asian population is the nasal type . \n we describe a case of biopsy - proven nk / t - cell lymphoma with bilateral adrenal involvement . adrenal involvement by lymphoma is usually of b - cell type and occurs in disseminated disease and often unilateral . bilateral adrenal involvement by t - cell lymphoma is extremely rare .",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1942 | [
{
"content": "Analyze the provided clinical trial criteria. Identify and extract distinct entities from the text.\n\nReturn the results as a JSON structure. Ensure the output is strictly in JSON format without any additional commentary.\n\nCriteria Text:\n\nForm of diagnosed psoriasis other than chronic plaque psoriasis (i.e. guttate, erythrodermic, pustular)\nDiagnosis of other active, ongoing skin diseases or skin infections that may interfere with examination of psoriasis lesions\nOngoing use of other psoriasis treatment including but not limited to topical or systemic corticosteroids, other topical medications (i.e. coal tar), oral or biologic medications for the treatment of psoriasis, and UV therapy. The following washout periods will be required: 2 weeks for topical therapy; 2 weeks for phototherapy; 12 weeks for biologic or targeted therapies; 4 weeks for other systemic therapies\nUse of oral estrogen therapy, excluding oral contraceptive pills\nWomen who are pregnant, nursing, or of child-bearing potential who are unwilling to use appropriate method(s) of contraception.\nPatients unwilling to limit exposure to UV light\nCurrent significant medical problems that, in the discretion of the investigator, would put the patient at significant risk\nPatients with disorders of calcium metabolism and/or hypercalcemia\nUse of any investigational drug within 4 weeks prior to randomization, or 5 pharmacokinetic/pharmacodynamics half-lives, if known (whichever is longer)\nHistory of allergy to any component of the IP",
"role": "user"
},
{
"content": "{\"Condition\": [\"hypercalcemia\", \"skin diseases\", \"skin infections\", \"disorders of calcium metabolism\", \"psoriasis\", \"allergy\", \"chronic plaque psoriasis\"], \"Device\": [], \"Drug\": [\"biologic medications\", \"topical medications\", \"al o dications f\", \"systemic corticosteroids\", \"pical o rticosteroids,\", \"oral contraceptive pills\", \"any component of the IP\", \"coal tar\"], \"Measurement\": [], \"Mood\": [\"unwilling\"], \"Multiplier\": [], \"Negation\": [\"other than\", \"excluding\"], \"Observation\": [], \"Person\": [], \"Procedure\": [\"oral estrogen therapy\", \"treatment\", \"UV therapy\", \"limit exposure to UV light\"], \"Qualifier\": [\"pustular\", \"erythrodermic\", \"guttate\", \"active\"], \"ReferencePoint\": [], \"Scope\": [\"topical or systemic corticosteroids, other topical medications (i.e. coal tar), oral or biologic medications for the treatment of psoriasis, and UV therapy\", \"other than chronic plaque psoriasis\", \"skin diseases or skin infections\", \"guttate, erythrodermic, pustular\"], \"Temporal\": [\"ongoing\", \"Ongoing\"], \"Value\": [], \"Visit\": []}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.4281 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nSequencesofmouse-specificprobesandprimersusedforreal-timeRT-PCR.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.1837 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEvaluation of electrofunctional data following argon-laser trabeculoplasty in primary open-angle glaucoma . To establish whether or not glaucomatous damage is reversible , we obtained pattern-reversal electroretinograms ( PERGs ) and visual evoked potentials ( VEPs ) in 25 eyes of 25 patients suffering from bilateral primary open-angle glaucoma ( POAG ) before and after argon-laser trabeculoplasty . The laser treatment was carried out in only one eye chosen at random , and the fellow eye was used as a control . In the present study we intended to verify the possibility of using electrofunctional techniques to determine the two distinct and , probably , consecutive glaucomatous alterations occurring in ganglion cells : functional ( reversible ) and anatomical ( irreversible ) . The results obtained indicate that glaucomatous damage is irreversible . We propose that such alterations differ very slightly and that the current electrofunctional techniques may not be sufficiently sophisticated to distinguish between them .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"argon-laser trabeculoplasty\"], \"Outcome\": [\"glaucomatous damage\", \"visual evoked potentials ( VEPs )\", \"pattern-reversal electroretinograms ( PERGs )\", \"glaucomatous alterations occurring in ganglion cells : functional ( reversible ) and anatomical ( irreversible ) .\"], \"Participant\": [\"25 eyes of 25 patients suffering from bilateral primary open-angle glaucoma ( POAG ) before and after argon-laser trabeculoplasty .\"]}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2981 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nAftermath of a clinical trial : evaluating the sustainability of a medical device intervention in Ghana . A randomized controlled trial recently demonstrated that continuous positive airway pressure ( CPAP ) effectively decreases respiratory rate in children presenting to Ghanaian district hospitals with respiratory distress . A follow-up study 16 months later evaluated the extent to which the skills and equipment necessary for CPAP use have been maintained . Seven of eight CPAP machines were functional , but five of eight oxygen concentrators and three of four electric generators were non-functional . Nurses trained by US study personnel ( first-generation ) and nurses trained by Ghanaian nurses after the study ( second-generation ) were evaluated on CPAP knowledge and skills . Twenty-eight nurses participated in the study , 9 first-generation and 19 second-generation . First-generation trainees scored significantly higher than second-generation trainees on both skills and knowledge assessments ( p = 0.003 ) . Appropriate technical support and training must be ensured to address equipment maintenance . Protocolization of the training program , in conjunction with skills and knowledge assessment , may improve acquisition and retention among second- and future-generation trainees .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"medical device intervention\", \"continuous positive airway pressure\"], \"Outcome\": [\"respiratory rate\", \"skills and knowledge assessments\", \"acquisition and retention\"], \"Participant\": [\"respiratory distress\", \"9\", \"19\", \"Twenty-eight\", \"children\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.4345 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nMobile edge computing (MEC) is a promising technique for providing low-latency access to services at the network edge. The services are hosted at various types of edge nodes with both computation and communication capabilities. Due to the heterogeneity of edge node characteristics and user locations, the performance of MEC varies depending on where the service is hosted. In this paper, we consider such a heterogeneous MEC system, and focus on the problem of placing multiple services in the system to maximize the total reward. We show that the problem is NP-hard via reduction from the set cover problem, and propose a deterministic approximation algorithm to solve the problem, which has an approximation ratio that is not worse than @math . The proposed algorithm is based on two sub-routines that are suitable for small and arbitrarily sized services, respectively. The algorithm is designed using a novel way of partitioning each edge node into multiple slots, where each slot contains one service. The approximation guarantee is obtained via a specialization of the method of conditional expectations, which uses a randomized procedure as an intermediate step. In addition to theoretical guarantees, simulation results also show that the proposed algorithm outperforms other state-of-the-art approaches.\n\nCited Works Summary:\n[1]: We give new approximation algorithms for packing integer programs (PIPs) by employing the method of randomized rounding combined with alterations. Our first result is a simpler approximation algorithm for general PIPs which matches the best known bounds, and which admits an efficient parallel implementation. We also extend these results to a multi-criteria version of PIPs. Our second result is for the class of packing integer programs (PIPs) that are column sparse, i. e., where there is a specified upper bound k on the number of constraints that each variable appears in. We give an (ek+ o(k))-approximation algorithm for k-column sparse PIPs, improving over previously known O(k 2 )-approximation ratios. We also generalize our result to the case of maximizing non-negative monotone submodular functions over k-column sparse packing constraints, and obtain an e 2k",
"role": "user"
},
{
"content": "Our problem is a special case of the @math -column sparse packing problem studied in [1] , where a randomized approximation algorithm is proposed. Applying the approach in [1] to our case, one can get an algorithm with an approximation ratio of @math (see [Appendix ] techReportThisPaper for details). (see Appendix for details). We improve on this result in two ways. First, we have a deterministic algorithm, which is more preferred due to its predictable behavior. Second, we have an approximation ratio that is at least @math , which is better.",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.1584 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nU.K. Controlled trial of intrapleural streptokinase for pleural infection.\nReport Abstract 1:\nIntrapleural fibrinolytic agents are used in the drainage of infected pleural-fluid collections. This use is based on small trials that did not have the statistical power to evaluate accurately important clinical outcomes, including safety. We conducted a trial to clarify the therapeutic role of intrapleural streptokinase.\n In this double-blind trial, 454 patients with pleural infection (defined by the presence of purulent pleural fluid or pleural fluid with a pH below 7.2 with signs of infection or by proven bacterial invasion of the pleural space) were randomly assigned to receive either intrapleural streptokinase (250,000 IU twice daily for three days) or placebo. Patients received antibiotics and underwent chest-tube drainage, surgery, and other treatment as part of routine care. The number of patients in the two groups who had died or needed surgical drainage at three months was compared (the primary end point); secondary end points were the rates of death and of surgery (analyzed separately), the radiographic outcome, and the length of the hospital stay.\n The groups were well matched at baseline. Among the 427 patients who received streptokinase or placebo, there was no significant difference between the groups in the proportion of patients who died or needed surgery (with streptokinase: 64 of 206 patients [31 percent]; with placebo: 60 of 221 [27 percent]; relative risk, 1.14 [95 percent confidence interval, 0.85 to 1.54; P=0.43), a result that excluded a clinically significant benefit of streptokinase. There was no benefit to streptokinase in terms of mortality, rate of surgery, radiographic outcomes, or length of the hospital stay. Serious adverse events (chest pain, fever, or allergy) were more common with streptokinase (7 percent, vs. 3 percent with placebo; relative risk, 2.49 [95 percent confidence interval, 0.98 to 6.36]; P=0.08).\n The intrapleural administration of streptokinase does not improve mortality, the rate of surgery, or the length of the hospital stay among patients with pleural infection.\n Copyright 2005 Massachusetts Medical Society.\n\n\nReport Title 2:\nIntrapleural urokinase in the management of parapneumonic empyema: a randomised controlled trial.\nReport Abstract 2:\nWe compared the effectiveness of intrapleural urokinase versus normal saline via a thoracostomy tube in the treatment of parapneumonic empyema in a randomised controlled study. Forty-nine patients with parapneumonic empyema were randomly assigned to receive either intrapleural urokinase or normal saline treatment. The daily volume instilled through a chest tube was 100 ml in both groups. Urokinase (100,000 IU/day) was diluted in normal saline before instillation. The mean duration for defervescence was shorter (7 +/- 3 vs 13 +/- 5 days, p<0.01) and the mean volume of drained fluid during the five-day treatment period was significantly greater in the urokinase group (1.8 +/- 1.5 vs 0.8 +/- 0.8 litres, p<0.001) than in the control group. The subsequent decortication rate was 60% and 29.1%, respectively (p<0.001). 
The duration of hospitalisation was also shorter in the urokinase group (14 +/- 4 vs 21 +/- 4 days, p<0.001) than in the saline group. We conclude that intrapleural instillation of urokinase in the management of parapneumonic empyema provides a better outcome and reduces the need for decortication.\n\n\nReport Title 3:\nRandomised controlled trial of intrapleural streptokinase in community acquired pleural infection.\nReport Abstract 3:\nStandard treatment for pleural infection includes catheter drainage and antibiotics. Tube drainage often fails if the fluid is loculated by fibrinous adhesions when surgical drainage is needed. Streptokinase may aid the process of pleural drainage, but there have been no controlled trials to assess its efficacy.\n Twenty four patients with infected community acquired parapneumonic effusions were studied. All had either frankly purulent/culture or Gram stain positive pleural fluid (13 cases; 54%) or fluid which fulfilled the biochemical criteria for pleural infection. Fluid was drained with a 14F catheter. The antibiotics used were cefuroxime and metronidazole or were guided by culture. Subjects were randomly assigned to receive intrapleural streptokinase, 250,000 i.u. daily, or control saline flushes for three days. The primary end points related to the efficacy of pleural drainage--namely, the volume of pleural fluid drained and the chest radiographic response to treatment. Other end points were the number of pleural procedures needed and blood indices of inflammation.\n The streptokinase group drained more pleural fluid both during the days of streptokinase/control treatment (mean (SD) 391 (200) ml versus 124 (44) ml; difference 267 ml, 95% confidence interval (CI) 144 to 390; p < .001) and overall (2564 (1663) ml, 95% CI 465 to 2545; p < 0.01). They showed greater improvement on the chest radiograph at discharge, measured as the fall in the maximum dimension of the pleural collection (6.0 (2.7) cm versus 3.4 (2.7) cm; difference 2.9 cm, 95% CI 0.3 to 4.4; p < 0.05) and the overall reduction in pleural fluid collection size (p < 0.05, two-tailed Fisher's exact test). Systemic fibrinolysis and bleeding complications did not occur. Surgery was required by three control patients but none in the streptokinase group.\n Intrapleural streptokinase probably aids the treatment of pleural infections by improving pleural drainage without causing systemic fibrinolysis or local haemorrhage.\n\n\nReport Title 4:\nIntrapleural urokinase versus normal saline in the treatment of complicated parapneumonic effusions and empyema. A randomized, double-blind study.\nReport Abstract 4:\nIntrapleural administration of fibrinolytic agents has been shown to be effective and safe in the treatment of loculated parapneumonic pleural effusions. However, controlled studies of the possible role of the activity of urokinase (UK) through the volume effect are lacking. We therefore investigated the hypothesis that UK is effective through the lysis of pleural adhesions and not through the volume effect. Thirty-one consecutive patients with multiloculated pleural effusions were randomly assigned to receive either intrapleural UK (15 patients) or normal saline (NS) (16 patients) for 3 d, in a double-blind manner. All patients had inadequate drainage through a chest tube (< 70 ml/24 h). UK was given daily through the chest tube in a dose of 100.000 IU diluted in 100 ml of NS. Controls were given the same volume of NS intrapleurally. 
Response was assessed by clinical outcome, fluid drainage, chest radiography, pleural ultrasonography (US) and/or computed tomography (CT). Clinical and radiographic improvement was noted in all but two patients in the UK group but in only four in the control group. The net mean volume drained during the 3-d treatment period was significantly greater in the UK group (970 +/- 75 ml versus 280 +/- 55 ml, p < 0.001). Pleural fluid drainage was complete in 13 (86.5%) patients in the UK group (two patients were treated through video-assisted thoracoscopy) but in only four (25%) in the control group. Twelve patients in the control group were subsequently treated with UK and six of them had complete drainage; the remaining six patients had complete drainage after video-assisted thoracoscopy. Our results suggest that UK is effective in the treatment of loculated pleural effusions through the lysis of pleural adhesions and not through the volume effect.\n\n\nReport Title 5:\nIntrapleural streptokinase for empyema and complicated parapneumonic effusions.\nReport Abstract 5:\nWe conducted a single-center, randomized, placebo-controlled trial to determine whether streptokinase instillations adjunctive to chest tube drainage reduce the need for surgery and improve outcome in patients with pleural empyema. Fifty-three patients (frank pus aspirated, 81%; microbiological agent cultured, 62%; mean effusion pH, 6.6 +/- 0.4) received antibiotic treatment, chest tube drainage, and once-daily pleural rinses with either normal saline or normal saline with streptokinase (250,000 IU). Nine patients were excluded for various reasons before pleural rinses were started. Streptokinase (n = 22) was instilled over 4.5 +/- 2 days and saline (n = 22) was instilled over 3 +/- 1.3 days. One patient in each group died during treatment. Clinical treatment success and need for referral to surgery were the main outcome measures. No difference was observed after 3 days. After 7 days, streptokinase-treated patients had a higher clinical success rate (82 vs. 48%, p = 0.01) and fewer referrals for surgery (45 vs. 9%, p = 0.02). No significant radiologic or functional differences were observed between groups during follow-up over 6 months. We conclude that intrapleural streptokinase adjunctive to chest tube drainage reduces the need for surgery and improves the clinical treatment success in patients with pleural empyema.\n\n\nReport Title 6:\nEarly use of intrapleural fibrinolytics in the management of postpneumonic empyema. A prospective study.\nReport Abstract 6:\nA prospective randomized study was conducted in order to analyze the role of fibrinolytics in the treatment of complicated parapneumonic effusion.\n From 2001 to 2004, 127 consecutive patients were managed for thoracic empyema. In all cases the cause was bacterial pneumonia. Seventy patients were managed with sole tube thoracostomy (group A) and 57 with combination of tube thoracostomy and streptokinase instillation (group B). Groups were statistically compared for the age, gender, duration of symptoms, quality of pleural fluid, chest imaging, complete drainage, length of hospital stay and mortality. Multivariate analysis was used in order to define the factors that affect outcome.\n Tube thoracostomy was successful in 47 (67.1%) cases (group A), while fibrinolysis led to a favorable outcome in 50 cases (87.7%) (P<0.05). The length of stay in thoracic surgical department was significantly longer for group A (P<0.001). 
Mortality rate in group A was significantly higher (P<0.001). Multiple regression analysis disclosed as sole independent favorable factor for pleural drainage, the use of fibrinolysis during the course of chest tube drainage (P=0.006, odds ratio 4.29, 95% CI 1.51-12.14).\n Fibrinolytic agents are a useful adjunct in the management of complicated parapneumonic effusions. Intrapleural fibrinolytics, if used early in the fibrinopurulent stage of a parapneumonic effusion, decrease the rate of surgical interventions (VATS or open decortication) and the length of hospital stay with minor associated morbidity.\n\n\nReport Title 7:\nIntrapleural streptokinase versus urokinase in the treatment of complicated parapneumonic effusions: a prospective, double-blind study.\nReport Abstract 7:\nIntrapleural administration of fibrinolytics has been shown in small numbers of patients with complicated parapneumonic effusions (CPE) and pleural empyema to be effective and relatively safe. Although streptokinase (SK) is recommended as the fibrinolytic of choice, there are no comparative studies among fibrinolytics. We therefore compared the efficacy, safety, and the cost of treatment of two of the most used thrombolytics, SK and UK. Fifty consecutive patients with CPE or empyema were randomly allocated to receive either SK (25 patients) or UK, in a double-blind fashion. All patients had inadequate drainage through chest tube (< 70 ml/24 h). Both drugs were diluted in 100 ml normal saline and were infused intrapleurally through the chest tube in a daily dose of 250,000 IU of SK or 100,000 IU of UK. The chest tube was clamped for 3 h after instillation. Response was assessed by clinical outcome, fluid drainage, chest radiography, pleural ultrasound, and/or computed tomography. Clinical and radiologic improvement was noted in all but two patients in each group, who required surgical intervention. The mean volume drained during the first 24 h after instillation was significantly increased; 380 +/- 99 ml for the SK group (p < 0.001) and 420.8 +/- 110 ml for the UK group (p < 0.001).
"role": "user"
},
{
"content": "Intrapleural fibrinolytic therapy confers significant benefit in reducing the requirement for surgical intervention for patients in the early studies included in this review but not in the more recently published Maskell study. The reasons for this difference are uncertain. Separate subgroup analysis of proven loculated/septated effusions from the available data in our meta-analysis suggests a potential overall treatment benefit with fibrinolytics, but these results should be treated with caution as the data are incomplete and the benefit is not significant in the subgroup of high quality trials (Cochrane Grade A). Intrapleural fibrinolytics have not been shown to significantly increase adverse events, but the confidence interval is too wide to firmly exclude this possibility.",
"role": "assistant"
}
] |
science.ncbi_ner | science.ncbi_ner.531 | [
{
"content": "Examine the text below and extract disease-related entities. Present your findings in a structured JSON format.\n\nYour response should be a JSON object with keys as entity types and values as lists of entities. If no entities are found, return an empty list for that type.\n\nContent:\n\nNovel mutations in the Wiskott-Aldrich syndrome protein gene and their effects on transcriptional, translational, and clinical phenotypes. Wiskott-Aldrich syndrome (WAS) is an X-linked recessive immunodeficiency characterized by thrombocytopenia, eczema, and recurrent infections, and caused by mutations in the WAS protein (WASP) gene. WASP contains several functional domains through which it interacts with proteins involved in intracellular signaling and regulation of the actin cytoskeleton. In this report, 17 WASP gene mutations were identified, 12 of which are novel. DNA of affected males and obligate carriers was PCR amplified and analyzed by SSCA, heteroduplex analysis, and direct sequencing. The effects of the mutations at the mRNA and protein level were ascertained by RT-PCR and Western blot analyses. All missense mutations were located in exons 1-4. Most of the nonsense, frameshift and splice site mutations were found in exons 6-11. Mutations that alter splice sites led to the synthesis of several types of mRNAs, a fraction of which represented the normally spliced product. The presence of normally spliced transcripts was correlated with a milder phenotype. When one such case was studied by Western blotting, reduced amounts of normal-size WASP were present. In other cases as well, a correlation was found between the amount of normal or mutant WASP present and the phenotypes of the affected individuals. No protein was detected in two individuals with severe WAS. Reduced levels of a normal-size WASP with a missense mutation were seen in two individuals with XLT. It is concluded that mutation analysis at the DNA level is not sufficient for predicting clinical course. Studies at the transcript and protein level are needed for a better assessment..",
"role": "user"
},
{
"content": "{\"Disease\": [\"X-linked recessive immunodeficiency\", \"eczema\", \"thrombocytopenia\", \"XLT\", \"Wiskott-Aldrich syndrome\", \"WAS\"]}",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1032 | [
{
"content": "Analyze the provided clinical trial criteria. Identify and extract distinct entities from the text.\n\nReturn the results as a JSON structure. Ensure the output is strictly in JSON format without any additional commentary.\n\nCriteria Text:\n\nFemale subjects who are pregnant or breast-feeding or considering becoming pregnant during the study\nPatients aged <18 years of age\nPatients who cannot give informed consent,\nPregnant patients or those who are breastfeeding will be deemed ineligible.\nPrior treatment with any anti-TNF agent\nContra-indication to use of GLM (Hypersensitivity to the active substance or to any of the excipients; Active tuberculosis (TB), acute or chronic Hepatitis B infection or other severe infections such as sepsis and/or opportunistic infections including HIV infection; Moderate or severe heart failure (NYHA class III/IV)\nHave symptoms or signs suggestive of current active or latent TB upon medical history, physical examination and/or chest radiograph, or positive Mycobacterium tuberculosis antigen-specific interferon-gamma release assay (IGRA)\nPatients with a history of, or at imminent risk for, colectomy; who required gastrointestinal surgery within 2 months before screening;\nHistory of colonic mucosal dysplasia or adenomatous colonic polyps that were not removed\nScreening stool study positive for enteric pathogens or Clostridium difficile toxin.\nOral corticosteroids at a dose >40 mg prednisone or its equivalent per day; receipt of cyclosporine, tacrolimus, sirolimus, or mycophenolate mofetil within 8 weeks before the first study agent injection; or use of an investigational agent within 5 half-lives of that agent before the first study agent injection.\nPatients in recent receipt of live vaccinations within 4 weeks prior to enrolment",
"role": "user"
},
{
"content": "{\"Condition\": [\"opportunistic infections\", \"colonic mucosal dysplasia\", \"Hepatitis B infection\", \"Hypersensitivity\", \"sepsis\", \"Contra-indication\", \"HIV infection\", \"severe infections\", \"TB\", \"heart failure\", \"adenomatous colonic polyps\", \"tuberculosis (TB)\"], \"Device\": [], \"Drug\": [\"tacrolimus\", \"live vaccinations\", \"excipients\", \"anti-TNF agent\", \"cyclosporine\", \"Oral corticosteroids\", \"GLM\", \"active substance\", \"mycophenolate mofetil\", \"sirolimus\", \"investigational agent\"], \"Measurement\": [\"Mycobacterium tuberculosis antigen-specific interferon-gamma release assay (IGRA)\", \"NYHA\"], \"Mood\": [\"imminent risk for\"], \"Multiplier\": [\"dnisone or its eq eceipt\"], \"Negation\": [\"not\"], \"Observation\": [\"history of\", \"medical history\"], \"Person\": [\"aged\"], \"Procedure\": [\"treatment\", \"colectomy\", \"physical examination\", \"removed\", \"gastrointestinal surgery\", \"stool study\", \"chest radiograph\"], \"Qualifier\": [\"chronic\", \"Moderate\", \"severe\", \"enteric pathogens\", \"acute\", \"latent\", \"active\", \"Clostridium difficile toxin\"], \"ReferencePoint\": [], \"Scope\": [\"current active or laten\", \"Contra-indication to use of GLM\", \"Hypersensitivity to the active substance or to any of the excipients; Active tuberculosis (TB), acute or chronic Hepatitis B infection or other severe infections such as sepsis and/or opportunistic infections including HIV infection; Moderate or severe heart failure (NYHA class III/IV\", \"active substance or to any of the excipients\", \"acute or chronic\", \"cyclosporine, tacrolimus, sirolimus, or mycophenolate mofetil\", \"enteric pathogens or Clostridium difficile toxin\", \"medical history, physical examination and/or chest radiograph\", \"history of, or at imminent risk for\", \"Moderate or severe\"], \"Temporal\": [\"within 5 half-lives\", \"within 8 weeks before the first study agent injection\", \"Active\", \"before the first study agent injection\", \"within 4 weeks prior to enrolment\", \"Prior\", \"current\", \"within 2 months before screening\"], \"Value\": [\"\\u003c18 years of age\", \"class III/IV\", \"positive\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.1808 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nNurse-Community Health Worker Team Improves Diabetes Care in American Samoa: Results of a randomized controlled trial. OBJECTIVETo evaluate the effectiveness of a culturally adapted, primary care-based nurse-community health worker (CHW) team intervention to support diabetes self-management on diabetes control and other biologic measures.RESEARCH DESIGN AND METHODSTwo hundred sixty-eight Samoan participants with type 2 diabetes were recruited from a community health center in American Samoa and were randomly assigned by village clusters to the nurse-CHW team intervention or to a wait-list control group that received usual care.RESULTSParticipants had a mean age of 55 years, 62% were female, mean years of education were 12.5 years, 41% were employed, and mean HbA(1c) was 9.8% at baseline. At 12 months, mean HbA(1c) was significantly lower among CHW participants, compared with usual care, after adjusting for confounders (b = -0.53; SE = 0.21; P = 0.03). The odds of making a clinically significant improvement in HbA(1c) of at least 0.5% in the CHW group was twice the odds in the usual care group after controlling for confounders (P = 0.05). There were no significant differences in blood pressure, weight, or waist circumference at 12 months between groups.CONCLUSIONSA culturally adapted nurse-CHW team intervention was able to significantly improve diabetes control in the U.S. Territory of American Samoa. This represents an important translation of an evidence-based model to a high-risk population and a resource-poor setting.",
"role": "user"
},
{
"content": "{\"Chemical\": []}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.3200 | [
{
"content": "Transform the provided sections of a biomedical research paper into a digestible summary for a general audience. Focus on simplifying complex ideas while retaining key scientific insights.\n\nResearch Title: Kinetic competition during the transcription cycle results in stochastic RNA processing\nContent Overview:\nAbstract:\nSynthesis of mRNA in eukaryotes involves the coordinated action of many enzymatic processes, including initiation, elongation, splicing, and cleavage. Kinetic competition between these processes has been proposed to determine RNA fate, yet such coupling has never been observed in vivo on single transcripts. In this study, we use dual-color single-molecule RNA imaging in living human cells to construct a complete kinetic profile of transcription and splicing of the β-globin gene. We find that kinetic competition results in multiple competing pathways for pre-mRNA splicing. Splicing of the terminal intron occurs stochastically both before and after transcript release, indicating there is not a strict quality control checkpoint. The majority of pre-mRNAs are spliced after release, while diffusing away from the site of transcription. A single missense point mutation( S34F) in the essential splicing factor U2AF1 which occurs in human cancers perturbs this kinetic balance and defers splicing to occur entirely post-release.\nIntroduction:\nCo-transcriptional processing of nascent pre-mRNA is a central mechanism for gene regulation in eukaryotes and requires temporal coordination between transcription initiation, elongation, splicing, and cleavage. Each of these processes is carried out by megadalton macromolecular complexes acting at a single genetic locus, and kinetic competition between these processes has been proposed to determine RNA fate( Bentley, 2014). Genome-wide studies across organisms indicate heterogeneous distributions of both RNA polymerase and nascent RNA along the gene, suggesting that kinetic checkpoints exist throughout the gene, including at promoter-proximal sites, translation start sites, intron–exon boundaries, and at the 3′ end of genes( Core et al., 2008; Nechaev et al., 2010; Churchman and Weissman, 2011; Hah et al., 2011; Larson et al., 2014). However, population studies reflect the balance of kinetic rates and are unable to resolve the multiple competing processes occurring at a single gene. Moreover, genome-wide measurements lack the time-resolution which might provide mechanistic clues about the underlying enzymatic processes. The hypothesis of kinetic competition is that a fast process will out-compete a process which may in fact be more energetically preferred. Kinetic competition during the transcription cycle has been shown to influence splice site selection during alternative splicing( de la Mata et al., 2003), recruitment of factors which release promoter-proximal pausing( Li et al., 2013), and even RNAi-mediated genome defense( Dumesic et al., 2013). Since these processes occur within the dynamic milieu of the nucleus, the stochastic interactions between macromolecules may result in a range of possible outcomes for the nascent RNA. Stochastic RNA synthesis—the phenomenon whereby the inherently stochastic nature of bio-molecular encounters and reactions leads to a non-deterministic production of transcripts—has been directly visualized in multiple organisms( Golding et al., 2005; Chubb et al., 2006; Yunger et al., 2010; Larson et al., 2011, 2013). 
Yet, stochastic RNA processing—the possibility that stochastic bio-molecular reactions might lead to non-deterministic pathways/outcomes in the making and maturation of an RNA—has never been directly observed, and the potential consequences for gene regulation are largely unexplored. Alternatively, regulatory checkpoints have been proposed which safeguard against such stochastic RNA processing events, providing a level of quality control. For example, the model of exon definition requires that splicing of the terminal intron relies on synergy between 3′ end formation, nascent RNA cleavage, and intron excision( Berget et al., 1977; Niwa et al., 1990). Similarly, multiple studies indicate an increased density of nascent RNA present at the 3′ end of genes or in the chromatin-bound fraction, suggesting that nascent RNA is retained at the site of transcription to ensure correct processing( Glover-Cutter et al., 2008; Brody et al., 2011; Carrillo Oesterreich et al., 2010; Bhatt et al., 2012). In both the competition model and the checkpoint model, kinetics plays a prominent role, but in the latter case, the cell has developed additional safeguard mechanisms. In this article, we use an in vivo single-molecule RNA imaging approach to directly measure kinetic coupling between transcription and splicing of a human β-globin reporter gene. The approach is based on simultaneous dual-color imaging of both the intron and exon of the same pre-mRNA using both PP7 and MS2 stem loops( Bertrand et al., 1998; Chao et al., 2008). We find that kinetic competition results in multiple competing pathways for pre-mRNA splicing. Splicing of the terminal intron occurs stochastically both before and after transcript release, indicating there is not a strict quality control checkpoint. Post-release splicing occurs from freely diffusing transcripts in the nucleus and is an order of magnitude faster than splicing at the site of transcription. A single missense mutation( Ser34Phe) in the zinc finger domain of the conserved splicing factor U2AF1 which is recurrent in multiple cancers( Yoshida et al., 2011; Graubert et al., 2012; Waterfall et al., 2014) changes the balance, making all splicing post-release. This same effect can also be observed on the endogenous, un-modified fragile X mental retardation mRNA( FXR1). Our results show that kinetic competition governs the stochastic balance between multiple competing pathways for RNA synthesis and processing and that this balance is perturbed by oncogenic mutations.\nDiscussion:\nThe picture that emerges from this study is one in which the β-globin-terminal intron can be spliced during multiple steps of the transcription cycle( Figure 5). A minority of transcripts are spliced while retained at the 3′ end of the gene, but the dominant pathway is the one in which splicing is contemporaneous with release or occurs shortly thereafter. These results are consistent with several studies which suggested that intron removal is enhanced upon cleavage( Baurén et al., 1998; Bird et al., 2005). In fact, we find that splicing is 10-fold faster on freely diffusing transcripts than on chromatin-bound transcripts. Our data are consistent with a model where commitment to splicing of the terminal β-globin intron occurs co-transcriptionally but suggest there may be a high energy barrier to completion of intron removal while transcripts are still tethered to chromatin, possibly due to steric constraints. 
Thus, post-release splicing may be the energetically favored process, but there is a long kinetic window in which the less-favorable pre-release intron removal can occur. The time required to remove an intron therefore becomes a central parameter in our understanding of RNA processing, with implications for both constitutive and alternative splicing( Bentley, 2014). This fundamental kinetic quantity has been elusive. Population-based measurements suggest a splicing time for several endogenous, un-modified introns of 5–10 min( Singh and Padgett, 2009). Bulk in vitro measurements on β-globin indicate a 40- to 50-min timescale. A single-molecule in vitro study on yeast transcripts measured ∼10-min splicing time. On the opposite end of the spectrum, previous live-cell imaging approaches relying on both direct and indirect measurements indicate splicing times in the order of ∼30 s( Huranová et al., 2010; Martin et al., 2013). One of the primary experimental advances in our study is the ability to observe the transcription and splicing process with a time resolution that spans three orders of magnitude. Since the time to splice is a distributed quantity, and splicing times vary from transcript to transcript, the variation in previously reported splicing rates may be strongly influenced by the temporal dynamic range of the method. Biologically, this variability in splicing rate may provide regulatory potential, as we discuss below. One potential criticism of the live-cell imaging approach is that the stem-loops and the coat protein may perturb kinetics. Several arguments stand against this view. First, our splicing kinetics are consistent with both population measurements( Singh and Padgett, 2009) and in vitro single-molecule measurements( Hoskins et al., 2011). Second, the splicing efficiency of the reporter is high( Figure 1—figure supplement 1C), suggesting there are no dead-end intermediates. Third, endogenous metazoan RNAs are decorated with RNA-binding proteins from their inception( Castello et al., 2012), suggesting that the spliceosome is well-equipped to handle bulky messages. Finally, since we can never exclude the possibility that a synthetic reporter may be missing features found in an endogenous gene, we have also recapitulated the results on the un-modified endogenous FXR1 message. Importantly, our single-molecule study reveals the existence of multiple pathways, indicating the absence of a strict checkpoint for intron removal( Bird et al., 2005; Alexander et al., 2010; Brody et al., 2011; Carrillo Oesterreich et al., 2010; Bhatt et al., 2012; Pandya-Jones et al., 2013). The presence of a delay at the 3′ end may be interpreted as a checkpoint mechanism to ensure that splicing takes place before transcript release, but there are several reasons to reject this interpretation of our data. First, none of the models which assume dependence between splicing and elongation/release fit better than one that assumes independence. Second, both splicing and elongation inhibition experiments support the view of the two processes being kinetically independent. Abolishing splicing with SSA does not lead to an increase in release times. Conversely, reducing elongation speed with CPT does not slow down splicing. Instead, it leads to an increase in pre-release splicing( Figure 2E), as expected if the two processes are independent. Thus, splicing and release can happen in either order, with the order of events determined by kinetic competition. 
It is this competition which results in stochastic outcomes for the RNA. What are the physiological implications of this stochastic outcome? Does the cell utilize changes in kinetic balance to alter gene expression? Moreover, is the timing of intron removal of secondary importance to the timing of splicing commitment? While, single-cell variation in alternative splicing has been observed, the mechanism behind this variability has remained elusive( Waks et al., 2011; Lee et al., 2014). We have found that a single point mutation in U2AF1 that is recurrent in multiple human cancers alters this kinetic balance to favor post-transcriptional splicing of both the β-globin reporter and also endogenous FXR1 mRNAs. Interestingly, single-molecule measurements on fixed cells suggest that post-release splicing may be the preferred pathway for alternatively spliced transcripts( Vargas et al., 2011). Our studies on the mutant U2AF1 provide a mechanistic basis for this observation and suggest a role for mutations in the core splicing machinery for the increased levels of ‘noisy splicing’ which are observed in cancer( Pickrell et al., 2010; Chen et al., 2011). Furthermore, our time-resolved results indicate that post-release splicing is more efficient than pre-release splicing, which may explain how the timing of intron removal might lead to different outcomes for the message. Such a model relies on a degree of plasticity in spliceosome assembly and function, which has been suggested by in vitro single-molecule measurements( Hoskins et al., 2011; Shcherbakova et al., 2013). We speculate that the kinetic delay induced by U2AF1-S34F allows either for alternate pairing between 5′ and 3′ss during transcription or for post-release reconfiguration of pre-mRNA which is not yet committed to intron excision in the spliceosome. Other pathological RNA processing defects may also originate from a similar kinetic imbalance. In summary, the single-molecule approach developed here provides a blueprint for dissecting the many competing processes which take place at the earliest stages of gene expression.",
"role": "user"
},
{
"content": "To make a protein , part of a DNA sequence is copied to make a messenger RNA ( or mRNA ) molecule in a process known as transcription . The enzyme that builds an mRNA molecule first binds to a start point on a DNA strand , and then uses the DNA sequence to build a ‘pre-mRNA’ molecule until a stop signal is reached . To make the final mRNA molecule , sections called introns are removed from the pre-mRNA molecules , and the parts left behind—known as exons—are then joined together . This process is called splicing . However , it is not fully understood how the splicing process is coordinated with the other stages of transcription . For example , does splicing occur after the pre-mRNA molecule is completed or while it is still being built ? And what controls the order in which these processes occur ? One theory about how the different mRNA-making processes are coordinated is called kinetic competition . This theory states that the fastest process is the most likely to occur , even if the other processes use less energy and so might be expected to be preferred . Alternatively , the different steps may be started and stopped by ‘checkpoints’ that cause the different processes to follow on from each other in a set order . Coulon et al . used fluorescence microscopy to investigate how mRNA molecules are made during the transcription of a human gene that makes a hemoglobin protein . To make the RNA visible , two different fluorescent markers were introduced into the pre-mRNA that cause different regions of the mRNA to glow in different colors . Coulon et al . made the introns fluoresce red and the exons glow green . Unspliced pre-mRNA molecules contain both introns and exons and so fluoresce in both colors , whereas spliced mRNA molecules contain only exons and so only glow with a green color . By looking at both the red and green fluorescence signals at the same time , Coulon et al . could see when an intron was spliced out of the pre-mRNA . This revealed that in normal cells , splicing can occur either before or after the RNA is released from where it is transcribed . Thus , splicing and transcription does not follow a set pattern , suggesting that checkpoints do not control the sequence of events . Instead , the fact that a spliced mRNA molecule can be formed in different ways suggests kinetic competition controls the process . In some cancer cells , there are defects in the cellular machinery that controls splicing . When looking at cells with such a defect , Coulon et al . found that splicing only occurred after transcription was completed . This study thus provides insight into the complex workings of mRNA synthesis and establishes a blueprint for understanding how splicing is impaired in diseases such as cancer .",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.337 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nEyesfromatleast2miceofthelistedstrainswerefixed(4%paraformaldehydeorFekete'sacid-alcohol-formalinfixative)processed,paraffinembeddedandsectionedaspreviouslyreported[15,79],exceptthattheparaformaldehydewasbufferedwith0.1Mphosphatebuffer.",
"role": "user"
},
{
"content": "[\"acid\", \"alcohol\", \"buffered\", \"phosphate\"]",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.110 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nAnesthesiaprotocolavoidsIOPalterationandallowsdetectionofdiurnaldifferences",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.qasa_abstractive_qa | science.qasa_abstractive_qa.1424 | [
{
"content": "Review the following segment from a research document in AI. Answer the subsequent query based solely on the provided information. Limit your response to the answer itself.\n\nResearch Segment: We notice that state-of-the-art basic architectures such as Xception [3] and ResNeXt [40] become less efficient in extremely small networks because of the costly dense 1\\times 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1\\times 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like [30, 9, 40], for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. In addition, in ShuffleNet depthwise convolution only performs on bottleneck feature maps. Even though depthwise convolution usually has very low theoretical complexity, we find it difficult to efficiently implement on low-power mobile devices, which may result from a worse computation/memory access ratio compared with other dense operations. Such drawback is also referred in [3], which has a runtime library based on TensorFlow [1]. In ShuffleNet units, we intentionally use depthwise convolution only on bottleneck in order to prevent overhead as much as possible. Recent leading convolutional units in VGG [30], ResNet [9], GoogleNet [33], ResNeXt [40] and Xception [3] have pursued state-of-the-art results with large models (e.g. \\geq 1GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint. Recently Howard et al. have proposed MobileNets [12] which mainly focus on efficient network architecture for mobile devices. MobileNet takes the idea of depthwise separable convolution from [3] and achieves state-of-the-art results on small models. The concept of group convolution, which was first introduced in AlexNet [21] for distributing the model over two GPUs, has been well demonstrated its effectiveness in ResNeXt [40]. Depthwise separable convolution proposed in Xception [3] generalizes the ideas of separable convolutions in Inception series [34, 32]. Recently, MobileNet [12] utilizes the depthwise separable convolutions and gains state-of-the-art results among lightweight models. Our work generalizes group convolution and depthwise separable convolution in a novel form. Modern convolutional neural networks [30, 33, 34, 32, 9, 10] usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception [3] and ResNeXt [40] introduce efficient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1\\times 1 convolutions (also called pointwise convolutions in [12]) into account, which require considerable complexity. For example, in ResNeXt [40] only 3\\times 3 layers are equipped with group convolutions. 
As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% of the multiplication-adds (cardinality = 32, as suggested in [40]). In tiny networks, expensive pointwise convolutions result in a limited number of channels to meet the complexity constraint, which might significantly damage the accuracy.\n\nInquiry: Which networks introduced efficient depthwise separable convolution into the building blocks of a state-of-the-art network?",
"role": "user"
},
{
"content": "Although AlexNet introduced the idea of group convolutions, the Xception and ResNeXt generalized depthwise separable convolutions and achieved state-of-the-art results under large computationl budget (~1 GFLOPs).",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.722 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: COVID-19: Novel Coronavirus May Lead to A Cardiac Arrest or heart attack\n\nResearch Abstract:\nThe presence of cardiovascular co-morbidities and the known effects of coronaviruses on the cardiovascular system have called attention to the potential implications for patients with cardiovascular risk factors.\nThis evidence-based viewpoint will address two questions: (a) are individuals with underlying cardiovascular risk factors (e.g. high blood pressure or diabetes) or overt disease (e.g. coronary heart disease, heart failure, kidney disease) more likely to develop severe Covid-19 and to die than those without underlying conditions?\n(b) does the regular use of angiotensin-converting enzyme inhibitors (ACE-i) or angiotensin-receptor blockers (ARB) make patients more likely to get infected and to die of Covid-19?\nWith a necessary cautionary note that the evidence around the links between Covid-19 and cardiovascular disease is accruing at a fast pace, to date we can conclude that: (a) the greater susceptibility of individuals with underlying cardiovascular conditions to develop more severe Covid-19 with higher mortality rate is likely to be confounded, in part, by age and the type of co-morbidities.\nPatients with heart failure or chronic kidney disease might show an excess risk; (b) neither ACE-i nor ARB are associated with greater risk of SARS-Cov2 infection, or severity or risk of death in patients with Covid-19.\nPatients on these drugs should not stop them, unless under strict medical supervision and with the addition of a suitable replacement medicine.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.2232 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: No, apple cider vinegar does not 'kill' coronavirus\n\nResearch Abstract:\nPURPOSE: SARS-CoV-2 is a new pandemic influenza caused by a coronavirus which main route of transmission is through exhaled droplets that primarily infect the nose and the nasopharynx.\nThe aim of this paper is to evaluate the effect of acetic acid, the active component of vinegar, as a potential disinfectant agent for upper airways.\nMETHODS: Twenty-nine patients were enrolled and divided into two groups: group 1 (14 patients) was composed of patients treated with off-label hydroxychloroquine and lopinavir/ritonavir, whereas group 2 (15 patients) was composed of patients treated with hydroxychloroquine only, combined with the inhalation of acetic acid disinfectant at a 0.34% concentration.\nA questionnaire-based evaluation of symptoms was performed after 15 days in both groups.\nRESULTS: It appears that the number of patients treated with acetic acid (group 2) that experienced improvement in individual symptoms was double that of the other group of patients (group 1), although numbers are too small for robust statistical analysis.\nCONCLUSIONS: Considering its potential benefits and high availability, acetic acid disinfection appears to be a promising adjunctive therapy in cases of non-severe COVID-19 and deserves further investigation.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"CONCLUSIONS: Considering its potential benefits and high availability, acetic acid disinfection appears to be a promising adjunctive therapy in cases of non-severe COVID-19 and deserves further investigation.\"]}",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.35655 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nImplications of the Fractional Charge of Hydroxide at the Electrochemical Interface\n\nContent Extracts:\nIntroduction\n<p>The electrochemical conversion of chemicals into value added-commodity products has been a growing interest to the scientific community in the past several decades, owing largely to the rapidly decreasing cost of renewable electricity from solar photovoltaics. 1,2 The rapid decrease in the cost of electricity allows processes such as the reduction of CO 2 and N 2 closer to be increasingly commercially viable despite significant limitations by poor catalytic activities of current catalysts [3][4][5][6][7][8] In particular, CO 2 and N 2 reduction catalysts are both limited by activity and selectivity. Ultimately, the rational design of catalysts for these and other related processes will rely on a detailed, atomic-scale understanding of the electrochemical interface. [9][10][11][12] Computational efforts in this space are dominated by density functional theory (DFT) methods at the generalized gradient approximation (GGA) level, since these offer a good balance between cost and accuracy. [13][14][15] However, GGA-DFT methods have long been known to poorly predict the band gaps of semiconducting materials, 16 barriers of chemical reactions, energies of dissociating molecular ions, and charge transfer excitation energies all due to a well-know delocalization error. [17][18][19][20] The same challenges with band gap prediction can lead to spurious charge transfer between the electrolyte and metal slab in electrochemical systems if the electrolyte band gap does not properly straddle the metal Fermi level, commonly referred to as spurious charge transfer (SCT). 17,21 In a recent study, 22 we showed using higher level hybrid DFT methods that the observed partial charge on hydronium is not caused by these delocalization errors in GGA-DFT.</p><p>Rather, it is caused by a physical electron charge density overlap between the ion and the nearby metal surface. The extension of this argument to the case of hydroxide is technically complicated by the aforementioned issues of SCT, where the highest occupied molecular orbital (HOMO) of the hydroxide ion often aligns with the metal Fermi level for metals with a relatively large work function such as Pt. In this work, we show that the challenge of SCT can be effectively mitigated by the placement of an excess surface charge density, with countercharges described by, e.g., the linearized Poisson-Boltzmann equation. When the electrolyte bands properly straddle the metal Fermi level, i.e. the Fermi level lies within the electrolyte bandgap without any alignment with either the HOMO or the lowest unoccupied molecular orbital (LUMO) bands, we show that the partial charge observed on hydroxide is also not caused by delocalization errors. The fractional charge of hydroxide has implications for kinetic studies of alkaline reactions, where hydroxide at the interface plays a critical role.</p>\n\nTheoretical methods and models\n<p>This work uses the Vienna ab-initio Software Package [23][24][25] in conjunction with the added functionality of implicit solvation provided by VASPsol. 26,27 Core electrons are described as projector augmented wave pseudopotentials, 28 while valence electrons are expanded as planewaves up to a kinetic energy cutoff of 400 eV. 
Electron exchange and correlation interactions are accounted for using the PBE functional [29] at the GGA level and with the HSE06 [30] functional for hybrid-level calculations. When optimizing bulk platinum to determine the appropriate lattice constant, the Brillouin zone is sampled with a 12 × 12 × 12 γ-centered Monkhorst-Pack [31] k-point mesh. The optimized lattice constant is determined to be 3.968 Å for the PBE functional, which is very close to the experimental value of 3.912 Å.</p><p>A 3×3 supercell of the Pt (111) surface with a single bilayer of water is used as the model system. A slab thickness of 3 layers, with the bottom two layers being fixed at the bulk lattice constant value, is used to describe the electrode. A complete monolayer of hydrogen is placed on the surface to facilitate stabilization of the hydroxide ion. The geometries are optimized in the case of a hydronium and hydroxide in the water layer, separately. The forces are minimized until the maximum force on all unconstrained atoms is less than 0.03 eV Å⁻¹. The solvent bilayer is systematically moved away from the surface to probe the effect of distance from the surface on the calculated ion charge. For these calculations, single-point energy calculations are performed to prevent water reorganization from complicating the calculated electron distribution or density of states (DOS). The computational and structural data are also available online at Catalysis-hub.org [32,33]. VASPsol treats the electrolyte at the electrochemical interface as a polarizable continuum, placing point counter-charges via the linearized Poisson-Boltzmann equation. A Debye screening length of 3 Å is chosen, which corresponds to a bulk electrolyte concentration of 1 Molar. As described in a recent work [34], the non-electrostatic coefficient is set to zero to avoid numerical instabilities in the electrolyte region. Details regarding the implementation of VASPsol can be found in its documentation [26,27]. Ionic charges are determined through the Bader charge partitioning scheme [35][36][37], which is used to calculate the charge on each atom by subtracting the expected number of electrons on the neutral atom. Due to the ambiguity intrinsic to charge partitioning (i.e. shared electrons between covalently bonded atoms), when calculating the ionic charge, we sum the charge of every atom in the solvent layer.</p>\n\nResults and discussion\n<p>We begin by showing that the previously reported spurious charge transfer associated with DFT-calculated band misalignment [16,21], exacerbated by solvated ions at the interface, can be mitigated through the use of continuum charging. This is similar to the homogeneously charged background approach previously used [21]. By charging the surface, the HOMO can be shifted relative to the Fermi level sufficiently to avoid SCT. This charging method allows us to investigate the behavior of anions at the interface where previously SCT made this impractical. We then show that the fractional charge of ions near the charged electrochemical interface is not due to GGA-level delocalization errors by comparing to hybrid-level (HSE06) [30,38] calculations. Previous analysis of the hydronium ion rationalized the partial charge by the asymmetric overlap of electron charge density between the metal slab and the ion in solution. 
We show here that this analysis cannot be extended to hydroxide, but the non-unity charge is nonetheless not caused by DFT delocalization errors.</p><p>Recently, the well-known challenge of DFT systematically under-predicting band gaps [16] was shown to occasionally cause spurious charge transfer at the metal/electrolyte interface [21], by either the HOMO or LUMO of the electrolyte not appropriately straddling the metal Fermi level. This challenge, commonly referred to as SCT, is in practice exacerbated by ions (especially anions) near the electrochemical interface.</p><p>Figure 1: Charge density difference between hybrid and GGA level DFT calculations corresponding to (a) a case with a hydronium ion near the surface, not showing SCT, and (b) a case with a hydroxide ion near the surface, showing SCT. The isosurface values of ±2.5•10⁻³ e Å⁻³ correspond to higher electron density (orange, more negatively charged) and lower electron density (blue, more positively charged). Illustrated here is the spurious transfer of charge from the electrolyte to the metal surface, totalling about 0.2 electrons. Panels (c) and (d) show the projected density of states (pDOS) at the GGA level for hydronium and hydroxide, respectively, where the incorrect extension of the hydroxide HOMO to above the Fermi level is clearly seen in panel (d). Finally, panels (e) and (f) show the pDOS at the hybrid level for hydronium and hydroxide, respectively; with the hybrid functional, the HOMO of hydroxide is shifted to be correctly below the Fermi level.</p><p>The lower edge of the LUMO for cations tends to be positioned below the Fermi level, while the upper edge of the HOMO for anions often extends above the Fermi level. This band misalignment results in the spurious transfer of electrons from the metal to the ion in the former case, and from ion to metal in the latter case. To illustrate this effect of GGA-DFT overdelocalizing electrons compared to hybrid-DFT, we calculate a charge density difference ρ_diff as ρ_diff = ρ_GGA − ρ_HSE06.</p><p>Here ρ_diff is the charge density difference, ρ_GGA is the charge density calculated at the GGA level using PBE as the exchange-correlation functional, and ρ_HSE06 is the charge density calculated using a single-point calculation on the GGA-level geometry. In both cases, the electrolyte has no states at the Fermi level, indicating that the system is not exhibiting SCT. In these systems, the ionic charge was determined to be 1.0e at both the GGA and hybrid level. The charge here is not fractional since the bilayer has been shifted away from the surface. This offset effectively removes the electron density overlap effect previously reported for acidic conditions [22]. A similar analysis is shown for the case of a hydroxide ion in the outer Helmholtz plane in Figure 1 (d) and (f), at the GGA and hybrid levels, respectively. Here the hydroxide ion shows a clear case of SCT at the GGA level, with the hydroxide HOMO states spilling over the metal Fermi level. This results in spurious charge transfer from the electrolyte to the metal, causing the predicted ionic charge to differ: a charge of 0.81e at the GGA level, and a charge of 1.0e at the hybrid level.</p>\n\nSCT can be mitigated by increasing the electron density on the metal surface. We demonstrate this here by charging the surface, with the placement of counter-charges determined by solving the linearized Poisson-Boltzmann equation as implemented in VASPsol, thereby retaining overall charge neutrality of the system. 
In Figure 2, we plot the pDOS of the water and hydroxide ion at the GGA level for three different surface charges σ, and at the hybrid level in a simulation without excess charging of the surface. As the surface charge density is systematically increased, the hydroxide HOMO is shifted negatively, until there is no spillover to above the Fermi level at a charge of σ = 0.75 excess electrons, i.e. the integral of hydroxide pDOS states above the Fermi level is equal to zero.</p>\n\nImplications of the fractional charge of hydroxide\n<p>With the ability to mitigate SCT by charging the surface established, we now turn our attention to rationalizing the observed fractional charge of the hydroxide ion near the metal surface. We follow a similar approach to our previous work [22], and measure the charge on different parts of the system as the ion is systematically moved from its equilibrium position at the interface. Single point calculations are performed at each position to prevent water structure reorientation from conflating the resulting charge measurement, illustrated in Figure 3. The ions are stabilized far from the surface with continuum solvation through VASPsol.</p><p>To determine the charge on the ion at a given distance from equilibrium, the Bader charge partitioning scheme [35][36][37] is used to determine the charge on each atom in the simulation.</p><p>Following the approach used previously [22], the total ionic charge q_ion is given by q_ion = Σ_{i=1}^{n_solv.} q_i.</p><p>Here n_solv. is the number of atoms in the electrolyte, and q_i is the Bader charge on the given atom in the electrolyte. In other words, the ionic charge is taken to be the sum of all charges on atoms in the solvent bilayer. We show the above ionic charge q_ion for both hydronium and hydroxide as a function of their position relative to equilibrium in Figure 3, at both the GGA and hybrid level with varying total numbers of electrons in the system.</p><p>Figure 3(a) shows the hydronium ionic charge as the solvent bilayer is moved away from the surface at both the GGA and hybrid levels. This analysis was carried out both with and without additional continuum charging. As we reported previously, the agreement between the GGA and hybrid level is quite good [22], with less than 0.05 e difference at all points. The addition of excess surface electron density on the surface changes the measured ionic charge near the surface. Excess surface charge changes the electron density decay behavior of the surface, thereby affecting the resulting ionic charge by about 0.06 e in the case of hydronium when 0.75 excess electrons are put into the simulation.</p><p>Unlike the case of hydronium, the partial charge on hydroxide is not easily rationalized from a charge density overlap analysis, as we show in Supplementary Info Note 1. Furthermore, a direct comparison of the GGA and hybrid level is complicated by SCT with no excess surface charge, shown in Figure 3(b). As the hydroxide ion is moved away from the surface with σ = 0.0, the calculated ionic charge exhibits instability, fluctuating between 0.7 and 0.8e, largely due to SCT. When excess surface charge is introduced, pushing the system out of SCT, the measured ionic charge stabilizes and agrees quite well with the hybrid results.</p><p>We note that, as seen in the case of hydronium, Figure 3(a), the excess surface charge can affect the calculated ionic charge near the surface. 
Since this perturbation resulted in less than 0.1e change in ionic charge in the case of hydronium, we conclude that the hybrid and GGA levels agree in the case of hydroxide.</p><p>The partial ion charge has implications for the potential dependence of reactions involving the ion; for example, coupled proton-electron transfer reactions which can utilize both hydronium and hydroxide as a donor. As we outlined in recent works [39,40],</p>\n\nConclusions\n<p>To summarize, we have shown here that due to the well-known failure of DFT to accurately describe band levels, the treatment of ions at the interface with GGA-level DFT can be technically challenging, especially in the case of anions. However, these issues can be mitigated by introducing excess surface charge, with countercharge placed by (for instance) solving the linearized Poisson-Boltzmann equation as implemented in VASPsol. We then show that hydroxide ions in the outer Helmholtz plane have a fractional charge of −0.8 to −0.7 e at both the GGA and hybrid levels of DFT. Although this fractional charge cannot easily be explained by a simple spillover analysis as in the case of hydronium, we show that the fractional charge is not a result of the DFT delocalization error, since both the GGA and hybrid levels predict a charge of unity as the ion moves away from the interface. This work has implications for electrochemical kinetics involving reactions in basic media, since the calculation of electrochemical barriers typically involves either the initial or final state with an ion in the outer Helmholtz plane. Whereas many previous works using the computational</p>
"role": "user"
},
{
"content": "Rational design of materials that efficiently convert electrical energy into chemical bonds will ultimately depend on a thorough understanding of the electrochemical interface at the atomic level. Towards this goal, the use of density functional theory (DFT) at the generalized gradient approximation (GGA) level has been applied widely in the past 15 years. In the calculation of electrochemical reaction energetics using GGA-DFT, it is frequently implicitly assumed that ions in the Helmholtz plane have unit charge.However, the ion charge is observed to be fractional near the interface through both a capacitor model and through Bader charge partitioning. In this work, we show that this spurious charge transfer can be effectively mitigated by continuum charging of the electrolyte. We then show that, similar to hydronium, the observed fractional charge of hydroxide is not due to a GGA level self-interaction error, as the partial charge is observed even when using hybrid level exchange-correlation functionals.",
"role": "assistant"
}
] |
science.covid_deepset_qa | science.covid_deepset_qa.1450 | [
{
"content": "Analyze the following excerpt from a biomedical text. Identify the relevant information needed to respond to the inquiry. Provide only the necessary details in your answer.\n\nContext: The contaminated area is connected to the fever clinic and\nincludes the fever accessway, the CT examination room, and the DR examination room for\n6\nconfirmed and suspected cases. One CT scanner and one DR system closest to the emergency\ndepartment are designated the fever-CT and fever-DR to examine patients with suspected and\nconfirmed COVID-19. There is a separate dedicated access between the contaminated area and\nthe fever screening tents. The semicontaminated area includes the fever-CT control room,\nfever-DR control room, and other patient examination access areas. The buffer zone includes\naccess areas for medical personnel and a dressing area for technologists. T he clean area\nincludes the administrative office and the diagnostic room. \nThe contaminated area was isolated from other areas using physical barricades. Directional signs were newly installed to guide patients and staff. Personal protection and training of staff\nFor providing care for patients with confirmed and suspected COVID-19, all hospital staff\nare required to wear complete personal protective equipment [7]: medical protective clothing,\nsurgical cap, N95 mask, gloves, face shields, and goggles. Wearing and removing of the\nequipment must be performed in accordance with the procedures and under the supervision of\nthe infection control nurse. Because staff members working in the contaminated area are under much situational\npressure, periodically taking time off could lower their physical and mental stress levels. The\ntechnologists on fever-CT duty shifts are provided a break once a week for four hours. In\naddition, the health of staff in the contaminated area must be monitored closely for the\nsymptoms of COVID-19. Pregnant staff must be assigned to the clean area. 7\nThe EMICT formulates and continually updates guidelines and educates all staff for West\nChina Hospital of Sichuan University. The EMICT training for staff is mainly involves\ndocuments regarding infection control and CT findings of COVID-19 and maintains an EMICT\nWeChat group for West China Hospital of Sichuan University. WeChat is the most widely used\nsocial media app in China. The EMICT releases the latest national and hospital-based\ninformation regarding COVID-19, guidance documents, and other notices from the hospital\nand radiology department in the WeChat group on a daily basis. Staff can also report to the\nEMICT in the WeChat group any time. Protocols for each modality and infection control\ninstructions are posted on the walls in all examination rooms. The EMICT periodically reminds\nstaff to undertake personal measures to reduce infection, such as wearing masks at all instances\nin the radiology department and N95 masks if working in the contaminated area; not touching\nthe mask and the eyes; practicing hand hygiene; facing away from colleagues when eating,\ndrinking, and talking; and not using personal cell phones while on duty. In addition, the chief thoracic radiologist provided lectures on all radiologists and\ntechnologists on typical CT findings of COVID-19 infection using materials developed in\nWuhan, the epicenter of the outbreak in China. CT examination procedures\nThere are two sets of procedures for CT examination: the fever-CT procedure and routine CT\nprocedure for those not suspected of COVID-19. 
The fever-CT procedure for suspected or confirmed COVID-19 (Figure 2)\nBefore the fever-CT technologist operates the equipment, he or she should wear personal\nprotective equipment according to the three-level protection standard [8]. Before the CT\nexamination of patients with suspected and confirmed COVID-19 begins, the fever tent or\nisolation ward notifies the radiologist in advance. The fever-CT technologist checks the\nequipment and prepares to disinfect the imaging equipment immediately after the examination. The patient enters the fever-CT waiting area through the fever access area. If the patient\ncan get onto and off the examination table by themselves, the patient is allowed to do so. If the\npatient cannot get onto or off the examination table independently, the person accompanying\nthe patient assists the patient, rather than the technologist. The technologist checks the patient\ninformation and, using an intercom system in the examination room, asks the patient to remove\nany metal ornaments on the neck and chest. Also, by intercom, the technologist trains the\npatient to hold his or her breath during the examination. The technologist uses a low-dose chest CT protocol to scan the patient. After scanning, the\noriginal images are reconstructed as 1 mm-thick layers. The technologist browses the images to\nensure that their quality meets the diagnostic requirements and then guides the patient to leave\nthrough the fever access area. The disposable sheets for patient examination are changed after\neach patient. The equipment is disinfected according to the procedure below. To protect themselves, the technologists assigned to the fever-CT wear N95 masks and\nother personal protection as established by the EMICT. The CT procedure for regular patients (Figure 3)\nSome patients with COVID-19 have no symptoms, and they may call at the general clinic for\nother reasons. The following CT procedure is applicable under these circumstances:\nWhen the patient makes an appointment for examination, the staff asks the patient about\ntheir epidemiological history, symptoms, and signs. If suspected criteria are met, the patient\nwill be sent to the fever tent for further screening. When a patient presents to the radiology\ndepartment entrance, his/her temperature is measured. If the temperature is higher than 37.2℃,\nthe patient is sent to the fever tent for further investigation. Those with no exposure history, suspicious symptoms or fever are screened in one of the\nnon-contaminated CT scanners. The technologists assigned to these scanners wear surgical\nmasks. All patients and the person accompanying them are required to wear surgical masks. After the CT examination, the technologist browses the images quickly. If the CT appearance is\ntypical of lung infection, the technologist immediately reports it to the chest radiologist on duty\nand asks the patient to wait in the CT examination room. If the chest radiologist does not\nsuspect COVID-19 infection, the patient can leave the CT examination room. If the chest\nradiologist does suspect COVID-19 infection, the technologist immediately reports it to the\nEMICT and sends the patient to the fever tent. The floor and equipment in the CT examination\nroom are disinfected according to regulations, and air disinfection is conducted for 30 min\nbefore examining other patients. These CT scanners are considered noncontaminated (not\nfever-CTs) after these sterilization procedures. 
Fever-DR examination procedure\nThe COVID-19 guideline of the NHC does not recommend chest DR because its ability in\ndiagnosing COVID-19 is limited. At our hospital, we only use mobile DR units to provide\nbedside examination for critically ill patients. The technologist operating the mobile DR\nwears personal protective equipment according to the three-level protection standard and\nsterilizes the mobile DR according to the ward management requirements as described below. Equipment and environment disinfection procedures\nRoutine disinfection procedure [9]\n1) Object surface disinfection: Object surfaces are wiped with 1000mg/L chlorine-containing\ndisinfectant, or wiped twice with 75% ethanol for non-corrosion-resistant items, once/4 hours. 2) Equipment disinfection: The equipment in the contaminated area is wiped with\n2000mg/L chlorine-containing disinfectant. The DR and CT gantry in the contaminated\narea are wiped with 75% ethanol. The equipment in the buffer area is wiped with\n500-1000mg/L chlorine-containing disinfectant or alcohol-containing disposable\ndisinfectant wipes twice a day. 3) Air disinfection: All central air conditioners are turned off to prevent cross-contamination of air. Polluted area: open the door for ventilation, each time more than 30 minutes,\nonce/4 hours; the air sterilizer runs continuously, or ultraviolet light is\nused continuously in the unmanned state for 60 minutes, four times a day, remembering to\nclose the inner shielding door during air disinfection. Other ambient air is sprayed with\n1000mg/L chlorine-containing disinfectant and ventilated twice a day.\n4) Ground disinfection: The ground is wiped with 1000mg/L chlorine-containing\ndisinfectant, once/4 hours. 5) When contaminated, disinfect at any time. In case of visible contamination, disposable\nabsorbent materials should be used first to completely remove the pollutants, and then a\ncloth soaked with 2000mg/L chlorine-containing disinfectant should be used for 30\nminutes before wiping.\nFever-CT disinfection procedures after examination\nIn addition to the above, disinfect the examination bed and ground with chlorinated disinfectant\ncontaining 2000mg/L [10]. Noncontaminated CT disinfection procedures after suspected COVID-19 case examination\nIn addition to the above routine disinfection procedure, air disinfection is conducted for 30 min\nbefore examining other patients. Results\nFrom January 21, 2020, when screening for epidemiological history or symptoms\nsuspicious for COVID-19 began, to March 9, 2020, our hospital screened a total of 7,203 individuals\nand confirmed 24 cases of COVID-19. Of these, 3,083 people underwent fever-CT\nexaminations. Including the initial examination and reexamination, the total number of fever-CT\nexaminations was 3,340. The fever-CT scanned a patient approximately every 21.5\nminutes. As a result of our precautions, none of the staff of the radiology department developed\nsymptoms suspicious for COVID-19. The fever-CT technologist, with the highest probability\nof exposure, remains PCR negative. Discussion\nIt has been 17 years since the severe acute respiratory syndrome (SARS) epidemic, the last\nnational spread of severe infectious disease, broke out. Currently, the Chinese people are\npanicking again. The speed and extent by which COVID-19 has spread in 2 months are\nunprecedented, beyond those of SARS, and this has been aided by its contagious nature and\nrapid spread via droplets and contact. 
The droplet mode of transmission means that a person can\nbe infected easily by means of casual contact or even fomites on contaminated environmental\nsurfaces. Another theory has yet to be proved: aerosol propagation. How radiology departments respond to any infectious disease outbreak is determined\nprimarily by the estimated risk of cross-infection to the staff and other patients. Appropriate\nprecautions taken only by staff in direct contact with patients may be adequate when the risk is\nlow. The strongest measures need to be implemented to limit the spread of the disease when the\nrisk is high. With severe infectious diseases such as COVID-19, the highest level of infection\ncontrol measures must be implemented; these include providing adequate standard protective\nequipment, training staff, and instituting proper emergency plans. Once a contagious infectious disease has been identified, the EMICT must consider four\nmain areas of response: data gathering, collaboration, needs assessment, and expert advice [10]. Data gathering includes dissemination of up-to-date case definitions and information about\nconfirmatory tests to all staff with direct patient contact to allow appropriate barrier precautions\nto be taken. All typical and atypical imaging features of the disease should be made known to\nall radiologists to assist in recognition of the disease on images and to allow accurate reporting\nof these findings. We have stored images of all probable cases of COVID-19 in the PACS so\nthat these images are readily available for any radiologist to review, and images from\nprevious imaging studies are also available for comparison. Collaboration with the radiology departments of other hospitals is very important because\npatients may initially present to different centers, depending on geographic location and travel\ndistance. These patients may be few in number at a single hospital, but if data from patients at\nseveral hospitals are available, a more accurate overall understanding of both imaging features\nand epidemiology can be achieved. Dissemination of this information to all healthcare facilities\nwill also lead to early recognition of the disease, and appropriate isolation measures may be\ninstituted. The Internet and social media apps, especially WeChat, have been used for distribution of\nmedical information, and because the exchange of information regarding infectious disease\noutbreaks is almost instantaneous, it is an indispensable tool for radiologists. In fact, within a\nmonth of the outbreak, the hospital that received the most infected patients from the source of\nthe outbreak made a PowerPoint presentation of the CT manifestations of COVID-19, which\nwas shared via WeChat and disseminated across the country in a very short time. Subsequently,\nCOVID-19-teaching PowerPoint presentations from various hospitals appeared and were\nquickly shared via WeChat. Our diagnostic process is limited as chest CT alone is not diagnostic of COVID-19\nbecause of lack of imaging specificity. But when combined with other epidemiological,\nclinical, laboratory and virus nucleic acid information, typical chest CT imaging findings are\nhelpful for making the diagnosis. In our opinion, the major role of chest CT is to understand the\nextent and dynamic evolution of lung lesions induced by COVID-19. The reasons why we\nadopted the low-dose chest CT scan protocol are as follows: low-dose chest CT has been\nwidely used in the screening of early lung cancer. 
It is well known that many early lung cancers\nare ground-glass opacities (GGO), so we believe that low-dose screening is also applicable for\nCOVID-19. In addition, considering the rapid development of COVID-19, many CT\nexaminations may be conducted in the same individual to monitor disease progress. Low-dose\nscanning can reduce the radiation damage to patients. Although the processes we established minimized the exposure of hospital staff, ancillary\npersonnel and other patients, it remains limited as follows. Sichuan province is not the center of\nthe epidemic. The number of patients with COVID-19 whom we have treated has not been\nhigh, and most cases are from other provinces of China. However, we believe that our\nexperience in management, the reconfiguration of our radiology department, and the workflow\nchanges implemented in the current COVID-19 situation are useful for other radiology\ndepartments that must prepare for dealing with patients with COVID-19. While no radiology\npersonnel developed symptoms suspicious for or were confirmed as having COVID-19, there\nmay be asymptomatic personnel. REFERENCES\n1. National Health Commission of the People's Republic of China. (2020). March 12: Daily briefing\non novel coronavirus cases in China. Retrieved from\nhttp://en.nhc.gov.cn/2020-03/12/c_77618.htm. Accessed March 11, 2020. 2. World Health Organization. (2020). Coronavirus disease 2019 (COVID-19) Situation Report-52. Retrieved from\nhttps://www.who.int/docs/default-source/coronaviruse/20200312-sitrep-52-covid-19.pdf?sfvrsn=e2bfc9c0_2.\n\nInquiry: What does the clean area include?",
"role": "user"
},
{
"content": "he clean area\nincludes the administrative office and the diagnostic room.",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.1327 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nRationale and design of a multicenter study of selegiline and alpha-tocopherol in the treatment of Alzheimer disease using novel clinical outcomes. Alzheimer's Disease Cooperative Study.\nReport Abstract 1:\nThis report describes the rationale and design of a clinical trial using selegiline (10 mg/day) and alpha-tocopherol (2,000 IU/day) to slow the progression of dementia in Alzheimer disease (AD). This study was developed by the Alzheimer's Disease Cooperative Study (ADCS), a consortium of clinical research centers actively involved in AD research. The major goal of the consortium is to design and conduct clinical investigations leading to the development of treatments for AD. This study uses a randomized double-blind, placebo-controlled, 2 x 2 factorial, parallel group design to test two drugs for the treatment of AD. The primary outcome of the study is the time to reach any one of the following four endpoints: death, institutionalization, loss of two of three basic activities of daily living, and progression of Clinical Dementia Rating (CDR) stage from 2 to 3. Patients with moderately severe disease (CDR = 2) were enrolled and evaluated 10 times over a period of 2 years to determine if these agents reduce the time to reach any endpoint. A database from the Consortium to Establish a Registry for Alzheimer's Disease indicated adequate power analyses to observe a treatment effect on this clinically meaningful outcome measure. Recruitment and baseline characteristics of the population are provided. The rationale for the choice of a factorial design, the use of a novel, clinically meaningful endpoint, and the selection of a cohort of patients with AD of moderate severity are discussed.\n\n\nReport Title 2:\nVitamin E paradox in Alzheimer's disease: it does not prevent loss of cognition and may even be detrimental.\nReport Abstract 2:\nThere is controversy as to whether vitamin E is beneficial in Alzheimer's disease (AD). In this study, we tested if vitamin E prevents oxidative stress and loss of cognition in AD. Fifty-seven AD patients were recruited and divided in two groups: placebo or treated with 800 IU of vitamin E per day for six months. Of these 57 patients, only 33 finished the study. We measured blood oxidized glutathione (GSSG) and used the following cognitive tests: Mini-Mental State Examination, Blessed-Dementia Scale, and Clock Drawing Test. Of those patients treated with vitamin E, we found two groups. In the first group, \"respondents\" to vitamin E, GSSG levels were lower after the treatment and scores on the cognitive tests were maintained. The second group, \"non-respondents\", consisted of patients in which vitamin E was not effective in preventing oxidative stress. In these patients, cognition decreased sharply, to levels even lower than those of patients taking placebo. Based on our findings, it appears that vitamin E lowers oxidative stress in some AD patients and maintains cognitive status, however, in those in which vitamin E does not prevent oxidative stress, it is detrimental in terms of cognition. Therefore, supplementation of AD patients with vitamin E cannot be recommended without determination of its antioxidant effect in each patient.",
"role": "user"
},
{
"content": "No convincing evidence that vitamin E is of benefit in the treatment of AD or MCI. Future trials assessing vitamin E treatment in AD should not be restricted to alpha-tocopherol.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.1046 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Characterization of TCF21 Downstream Target Regions Identifies a Transcriptional Network Linking Multiple Independent Coronary Artery Disease Loci\nContent:\nAbstract:\nTo functionally link coronary artery disease( CAD) causal genes identified by genome wide association studies( GWAS), and to investigate the cellular and molecular mechanisms of atherosclerosis, we have used chromatin immunoprecipitation sequencing( ChIP-Seq) with the CAD associated transcription factor TCF21 in human coronary artery smooth muscle cells( HCASMC). Analysis of identified TCF21 target genes for enrichment of molecular and cellular annotation terms identified processes relevant to CAD pathophysiology, including “growth factor binding, ” “matrix interaction, ” and “smooth muscle contraction. ” We characterized the canonical binding sequence for TCF21 as CAGCTG, identified AP-1 binding sites in TCF21 peaks, and by conducting ChIP-Seq for JUN and JUND in HCASMC confirmed that there is significant overlap between TCF21 and AP-1 binding loci in this cell type. Expression quantitative trait variation mapped to target genes of TCF21 was significantly enriched among variants with low P-values in the GWAS analyses, suggesting a possible functional interaction between TCF21 binding and causal variants in other CAD disease loci. Separate enrichment analyses found over-representation of TCF21 target genes among CAD associated genes, and linkage disequilibrium between TCF21 peak variation and that found in GWAS loci, consistent with the hypothesis that TCF21 may affect disease risk through interaction with other disease associated loci. Interestingly, enrichment for TCF21 target genes was also found among other genome wide association phenotypes, including height and inflammatory bowel disease, suggesting a functional profile important for basic cellular processes in non-vascular tissues. Thus, data and analyses presented here suggest that study of GWAS transcription factors may be a highly useful approach to identifying disease gene interactions and thus pathways that may be relevant to complex disease etiology.\nIntroduction:\nRecent large-scale GWAS have identified 46 genome-wide significant CAD loci and a further 104 independent variants associated at a 5% false discovery rate( FDR), yet the biological and disease-relevant mechanisms for these associations remain largely unknown[1]. It is estimated that at least two-thirds of the disease loci contain causal genes that are not related to known cardiovascular risk factors such as diabetes and lipid metabolism, suggesting that they are involved in disease promoting processes in the blood vessel wall. Thus the great promise of these genetic findings is the elucidation of atherosclerosis disease pathways, and further investigation of mechanisms by which genes in disease loci work together to regulate cellular and molecular functions that are involved in disease risk is sorely needed. Among the significant loci, a small subset of genes encode transcription factors( TFs) which are likely to impact disease risk by regulating disease relevant genes and possibly other CAD associated genes. 
Further study of downstream targets of these TFs, employing well established genome-wide methods, would be expected to provide biological insights through links to established pathways and to identify informative relationships among other apparently independent causal CAD loci. Tcf21 is a member of the basic helix-loop-helix( bHLH) TF family and is critical for the development of a number of cell types during embryogenesis of the heart, lung, kidney, and spleen[2–5]. Tcf21 is expressed in mesodermal cells in the proepicardial organ that give rise to coronary artery smooth muscle cells( SMC) and loss of Tcf21 results in increased expression of smooth muscle markers by cells on the heart surface consistent with premature SMC differentiation[6]. Knockout animals also exhibit a dramatic failure of cardiac fibroblast development suggesting a role for Tcf21 in the fate decisions of a precursor cell for SMC and cardiac fibroblast lineages[2, 6]. These data are consistent with the hypothesis that early expression of Tcf21 is important for expansion of the SMC compartment of the coronary circulation, with persistent Tcf21 expression being required for cardiac fibroblast development[2, 6]. To better understand the cellular functions of TCF21 in the SMC lineage, and to gain insights into how such functions might contribute to CAD risk, we performed chromatin immunoprecipitation coupled with high throughput sequencing( ChIP-Seq), examined the downstream target loci and genes that harbor TCF21 binding sites, and employed bioinformatic and experimental approaches to investigate how the target genes work together to mediate the risk of CAD. Pathway analysis of downstream target genes revealed that TCF21 regulates cell-cell and cell-matrix interactions as well as growth factor signaling pathways. Also, we found TCF21 target regions to be over-represented among CAD associated loci, and that genes in these regions assemble into pathways that mediate fundamental processes such as cell cycle, chromatin remodeling, and growth factor signaling. Taken together, these studies elucidate disease-associated genes and pathways that lie downstream of TCF21, and show how SMC related processes may be responsible for a substantial portion of the genetic risk for CAD.\nDiscussion:\nWe focused ChIP-Seq experiments on smooth muscle cells based on previous findings demonstrating the role of TCF21 in coronary vascular development during embryogenesis and its relevance for the CAD phenotype. During development, TCF21 is expressed in SMC progenitors but not in endothelial or other cell types present in vascular disease lesions[2, 6]. Vascular SMC response to growth factor signaling is increasingly recognized as an important component of the vascular response to injury where atherosclerotic disease progression is characterized by SMC undergoing phenotypic modulation, changing from a quiescent cell expressing contractile lineage markers to a proliferative, migratory, matrix synthesizing cell that is characteristic of the disease state[7]. In order to better understand the role of TCF21 in CAD progression, we utilized an in vitro model of undifferentiated SMC phenotype based on serum stimulation for the studies presented here[36]. Analysis of TCF21 peaks revealed enrichment for a number of pathways related to the known vascular developmental role of TCF21, as well as growth factor stimulation and signaling, cell-matrix interactions, matrix composition, and actomyosin contractile function. 
The query of relevant mouse databases revealed enrichment within terms related to vascular phenotypes, including “abnormal blood vessel morphology” and “abnormal blood circulation. ” These data suggest that TCF21 directly regulates genes centrally involved in pathways that mediate pathologic SMC processes. Given the association of TCF21 with CAD, our findings thus provide novel evidence supporting the role of SMC in the etiology of disease processes within the vascular wall. We provide evidence for cooperation of TCF21 with other transcription factors through the finding of other TF binding motifs in the ChIP-Seq peak sequences. We found evidence for enrichment of JUN( AP-1) and several other bZip factor binding motifs, including CEBP and ATF1, in the TCF21 peaks. There was also enrichment for binding sites recognizing the TEAD family of transcription factors that have been linked to signaling through the Hippo pathway that regulates developmental and cancer related cellular processes[37]. We found the identification of JUN related binding sites particularly interesting, but given that these AP-1 motifs are among those that have been found to be commonly enriched in datasets for which they were not the targeted transcription factor[38], we were compelled to perform ChIP-Seq studies for JUN and JUND in the HCASMC. These studies confirmed the in silico prediction, revealing significant enrichment of binding sites found for both of these TFs in genes that also contain TCF21 peaks. For a number of genes, TCF21 and JUN factors were found to bind in close proximity with evidence for JUN factor binding flanking the TCF21 binding site. As noted above, the pathway analysis of TCF21 target genes has identified enrichment for signaling through growth factor binding pathways, including PDGF. PDGF signaling related to smooth muscle cell proliferation is known to involve JUN TFs[39], and our previous studies have shown that PDGF and JUN are upstream regulators of TCF21 expression[27]. These findings, when coupled with the observation that TCF21 promotes cellular proliferation and dedifferentiation in skeletal muscle cells[40], provide new insight into the relationship between TCF21 and certain growth factor pathways involving AP-1, and suggest that TCF21 is both regulated by and cooperates with this TF family. These ChIP-Seq studies have identified a number of other CAD associated loci and genes that are bound by TCF21, identifying a disease related transcriptional network. To investigate the hypothesis that TCF21 targets a greater than expected number of CAD-related genes we performed enrichment analysis at both the gene and SNP level with GWAS data. These studies identified a greater than expected number of TCF21 targets among CAD GWAS genes, and showed that TCF21 peak region SNPs are in greater linkage disequilibrium with CAD GWAS SNPS than expected. Since the ChIPseq data was generated in HCASMC, a cell type found in the coronary artery, the results of CAD loci enrichment is consistent with the hypothesis that TCF21 regulates a greater than expected number of CAD genes in the vessel wall and that its effect on disease risk may be due in part to its regulation of the network of TCF21 target CAD associated genes. 
Consistent with TCF21 CAD target genes being active primarily in the vascular wall, we saw no difference in enrichment between the CAD analyses with and without lipid genes or SNPs being included in the analysis, and no enrichment was found for vascular risk factor traits “LDL cholesterol” and “Total cholesterol. ” The enrichment for “Platelets” is not surprising given the known association of platelet number with vascular disease events, and this association could explain some portion of the TCF21 risk for CAD[41–43]. Human genetic epistasis studies are required to validate these putative gene-gene interactions between TCF21 and the other CAD genes in the network, but further study of the functional links among validated TCF21 target CAD genes should provide an opportunity for further dissecting the molecular basis of disease risk. Surprisingly, both the gene- and SNP-level enrichment GWAS analyses also found TCF21 target genes to be over-represented among those associated with height and immune related IBD. Although SMC would seem unlikely to be involved in the height and IBD phenotypes, we speculate that enrichment of TCF21 target genes within these other phenotypes is real and reflects overlapping biology between the physiology/pathophysiology of these phenotypes and that of CAD. For instance, short stature has been repeatedly linked to an elevated risk for CAD, suggesting the presence of a substantial overlap in the biological pathways that determine height as well as the risk of CAD[44–46]. In support of a causal link between shorter stature and higher risk of CAD, a recent Mendelian Randomization study found a graded relationship between a genetic risk score of 180 height raising alleles identified by the GIANT consortium and the risk of CAD[47, 48]. Furthermore, pathway analysis of genes near or within the height loci identified enrichment of several canonical pathways involved in both growth-development and atherosclerosis[48]. It is also not surprising to observe enrichment for immune related pathways that mediate IBD given inflammation has been extensively linked to atherosclerosis at an experimental level[49–51]. These findings are also consistent with our pathway analyses of TCF21 related CAD genes( S7 Table). We note that the genes that contribute to the enrichment in these phenotypes are unlikely to be regulated by TCF21 given TCF21 is not associated with these phenotypes in GWAS. Instead, we suspect this enrichment reflects functional roles of related networks in traits that involve multiple other tissues. Our analyses provide multiple lines of circumstantial evidence supporting a role for the identified TCF21 related transcriptional network in the etiology of CAD. First, the TCF21 target genes are highly enriched for molecular processes with established relationships to CAD, such as growth factor binding, cell adhesion, vascular development, inflammation, and PDGF signaling. Second, the TCF21 target genes are enriched for established CAD genes identified through GWAS. Third, the functional SNPs of TCF21 target genes are enriched for, and tend to be in significant LD with, SNPs associated with CAD in GWAS. Fourth, the finding that the TCF21 transcriptional network is enriched in other traits is consistent with a biological program that contributes to a number of disease-relevant tissue-specific cellular processes. However, further work is required to firmly establish our hypothesis that the TCF21 transcriptional gene network contributes to CAD risk. 
If TCF21 binding to other CAD loci is important for the disease risk mechanism, one may be able to identify epistatic effects between polymorphisms in or near TCF21 and other CAD SNPs in the TCF21 target regions. At an experimental level it will be important to link TCF21 binding sites in target loci to function of the causal variant and the causal gene, and to investigate how TCF21 binding may regulate the causal mechanisms at other CAD loci. Finally, it will be important to functionally assess the predicted TCF21 gene network by perturbation in vitro and in vivo with experimental cell culture and animal models. Taken together, these approaches may provide significant new insights into how a disease associated transcription factor may work through other disease-associated loci to modulate disease risk.",
"role": "user"
},
{
"content": "While coronary artery disease ( CAD ) is due in part to environmental and metabolic factors , about half of the risk is genetically predetermined . Genome-wide association studies in human populations have identified approximately 150 sites in the genome that appear to be associated with CAD . The mechanisms by which mutations in these regions are responsible for predisposition to CAD remain largely unknown . To begin to explore how disease-specific gene sequences and disease gene function promotes pathology , we have mapped the loci and genes that are downstream of the transcription factor TCF21 , which is strongly associated with CAD . By identifying genes that are regulated by TCF21 we have been able to link together multiple other CAD associated genes and begin to identify the critical molecular processes that mediate atherosclerosis in the blood vessel wall and contribute to the genesis of ischemic cardiovascular events .",
"role": "assistant"
}
] |
science.drug_combo_extraction_re | science.drug_combo_extraction_re.954 | [
{
"content": "Analyze the given paragraph from a biomedical study. Focus on identifying drug pairings within a specified sentence. Your goal is to determine the nature of these pairings.\n\nPresent your findings as a JSON array. Each element should be an array containing the drug names and their interaction type. Use the exact names from the sentence.\n\nOnly consider the specified sentence for your analysis. The paragraph is for context only.\n\nReturn solely the JSON array. If no pairings are found, return an empty array.\n\nContext Paragraph: Unusual diversity of acquired β-lactamases in multidrug-resistant Pseudomonas aeruginosa isolates in a Mexican hospital. To investigate the presence of extended spectrum and metallo β-lactamases ( MBLs ) in Pseudomonas aeruginosa isolates which are resistant to imipenem and ceftazidime that were isolated in a hospital in Mexico . ### results Pulsed-field gel electrophoresis (PFGE) revealed the presence of four clonal types among the 14 isolates. All these genes were found either alone or simultaneously in the P. aeruginosa strains in the following five different arrangements: ; ; ; ; and . Class 1 integrons were detected and contained the cassettes bla(GES-5) and bla(OXA-2), but not that of bla(VIM). bla(VIM) genes occurred only in the chromosome, while bla(GES-5) was located in the chromosome and in the plasmids. ### conclusions To our knowledge, this is the first description of P. aeruginosa strains simultaneously producing the VIM-2 and VIM-11 variants, and the combination of GES-5 and MBL carbapenemases, which determines a major challenge for the clinical microbiology laboratory and a remarkable epidemiological risk for the nosocomial spread of multidrug-resistant determinants.\n\nTarget Sentence: To investigate the presence of extended spectrum and metallo β-lactamases ( MBLs ) in Pseudomonas aeruginosa isolates which are resistant to imipenem and ceftazidime that were isolated in a hospital in Mexico .",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.12940 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: The management and outcome of lacerations in urban children.\nSummary: We prospect ively studied the management and outcome of 2,834 children , aged 1 month to 18 years , who presented to the emergency department of the Children 's Hospital of Philadelphia for laceration repair . Patients with bite wounds were excluded from the study . Eight percent ( 239 ) of all patients had complications on initial evaluation ; the most common was the presence of a foreign body ( 55 ) . Infection on presentation was diagnosed in 22 cases ( 0.8 % ) . All of these patients had delayed their initial care beyond 18 hours ( range , 18 to 288 hours ; mean , 18 hours ) . Other factors significantly associated with infection on presentation included occurrence of the injury outdoors ( 16 ; P less than .001 ) , injury due to broken \" street \" glass ( seven ; P less than .02 ) , and injury of an extremity ( 18 ; P less than .01 ) . The rate of prerepair infection was not influenced by the size of the wound . Infections developed subsequent to initial repair in 34 cases ( 1.2 % ) . Factors associated with development of subsequent infection included use of prophylactic antibiotics , use of subcutaneous sutures , laceration length of more than 5.0 cm , glass or ice as a causative agent , and upper- or lower-extremity involvement . The majority of injuries were repaired by ED personnel without surgical consultation . Postrepair infection rates were not influenced by the specialty of the physician managing the case . Although our study was not design ed to specifically test the issue , prophylactic antibiotics were of no proven benefit in reducing infection rates in any group of patients analyzed\n\n\nReport 2:\nTitle: Wound registry: development and validation.\nSummary: STUDY OBJECTIVE More than 11 million patients with traumatic wounds are seen annually in emergency departments . We developed and vali date d a data registry for traumatic wounds treated in the ED . DESIGN Prospect i ve , consecutive patient enrollment with a validation cohort of a convenience sample of 100 patients . SETTING University-affiliated hospital ED . PARTICIPANTS For all patients with traumatic wounds requiring sutures , wound registry data sheets were completed at the time of initial visit using a closed- question format . Data recorded included demographic characteristics , time from injury to evaluation , pertinent medical history , wound characteristics , type of anesthesia , details of wound-cleansing methods , details of wound closure , and postoperative care . We devised a follow-up tool to evaluate for the presence of infection and short-term cosmetic appearance . Interphysician reliability was assessed for wound description , presence of infection , and cosmetic appearance by use of the kappa statistic . RESULTS A wound registry data collection instrument that takes less than 1 minute to complete and enables the collection of most wound management techniques used by emergency physicians was found to have substantial interobserver concordance for wound description ( kappa range , .55 to .97 ) , wound infection ( kappa = 1.0 ) and overall cosmetic appearance ( kappa = .61 ) . 
CONCLUSION The wound registry is a reliable data collection instrument that is easy to use . It may be useful as a continuous quality -improvement tool or for st and ardization of wound surveillance and treatment data to facilitate future prospect i ve studies in wound management\n\n\nReport 3:\nTitle: The delayed treatment of wounds of the hand and forearm under antibiotic cover\nSummary: In wounds of the h and and forearm treated within 4 h of injury , prophylactic clindamycin , in a dose of 150 mg 6‐hourly for 5 days , did not reduce the infection rate . In wounds treated after 4 h the infection rate was reduced , but not significantly so . There was a significant reduction in the infection rate comparing all wounds treated within 4 h with those treated after 4 h. Early treatment of wounds is more important than prophylactic antibiotics , although these may have a part to play where late treatment is unavoidable\n\n\nReport 4:\nTitle: Comparison of delayed and primary wound closure in the treatment of open tibial fractures\nSummary: Introduction Primary wound closure in the management of open tibial fractures has generally been discouraged . Several prior studies suggest that infections are not caused by the initial contamination , but are instead the result of organisms acquired in the hospital . Primary wound closure after adequate wound care and fracture stabilisation could therefore be considered a reasonable option . Material s and methods We analysed 95 patients with open tibial fractures ( Gustilo– And erson type 1 to 3A ) treated with primary fracture stabilisation and either delayed wound closure ( group I ) or primary wound closure ( group II ) , with a minimum follow-up of 12 months . Results Group I included 46 patients with a mean age of 30.2 years ( 16–56 ) , and a mean follow-up of 13.5 months ( 12–18 ) . Group II included 49 patients with a mean age of 33.4 ( 18–69 ) , and a mean follow up of 13.7 months ( 12–16 ) . One infection developed in group I ( 2 % ) , and two infections developed in group II ( 4 % ) . This difference was not found to have any statistical significance . Conclusion Our results support other recent reports that the infection rate is not increased following primary wound closure after thorough debridement of less severe open fractures . The length of stay following primary closure ( group II ) was significantly shorter , and that should result in substantially more cost effective care of these serious injuries . We conclude that primary wound closure is a safe option in properly selected cases . Prospect i ve multi-centre studies are needed to further evaluate the safety and efficacy of this treatment alternative\n\n\nReport 5:\nTitle: Evaluation of the 'golden period' for wound repair: 204 cases from a Third World emergency department.\nSummary: Uncertainty about the existence and duration of a \" golden period \" for suture repair of simple wounds led us to evaluate prospect ively the consequences of delayed primary closure on wound healing . Wounds were eligible for study if they were not grossly infected , and had no associated injuries to nerves , blood vessels , tendons , or bone . Three hundred seventy-two patients underwent suture repair ; 204 ( 54.8 % ) returned for review seven days later . The mean time from wounding to repair for all patients was 24.2 + /- 18.8 hours . 
Wounds closed at up to 19 hours after wounding had a significantly higher rate of healing than those closed later : 82 of 89 ( 92.1 % ) compared with 89 of 115 ( 77.4 % ) ( P less than .01 ) . Of 23 wounds sutured 48 or more hours ( mean , 65.3 ) after wounding , 18 ( 78.3 % ) were healing at follow-up . In contrast to wounds involving other body areas , the healing of head wounds was virtually independent of time from injury to repair : 42 of 44 ( 95.5 % ) wounds involving the head and repaired later than 19 hours after injury were healing , compared with 47 of 71 ( 66.2 % ) of all other wounds ( P less than .001 ) . On the basis of these data we conclude that there is a 19-hour \" golden period \" for repair of simple wounds involving body areas other than the head , after which sutured wounds are significantly less likely to heal , and the healing of clean , simple wounds involving the head is unaffected by the interval between injury and repair\n\n\nReport 6:\nTitle: Wound healing assessment using 20 MHz ultrasound and photography.\nSummary: BACKGROUND / AIMS To compare two non-invasive techniques of assessing wound healing , photography and high resolution ultrasound ( HRUS ) scanning , in experimentally induced full-thickness human skin wounds . METHODS Punch biopsy wounds , 4 mm in diameter , were made aseptically through locally anaesthetised skin on the anterior ( volar ) surface of the non-dominant forearm , 3 cm below the base of the cubital fossa , of 20 human participants . The wounds were treated with a topical antibiotic and covered for 3 days with Mepore sterile dressings . Wound healing was assessed on post-operative days 3 , 7 , 14 and 21 from photographs and HRUS B-scans . All photographs were taken of the wound site and adjacent intact skin under st and ardised conditions . The prints obtained were examined visually and digitised . Digital HRUS B-scans were taken through the centre of the wound bed and the adjacent intact skin parallel to the epidermis . Using the scanner 's calibrated linear measurement capability , the wound width was measured adjacent to the deep surface of the scab , at the base of the wound , and midway between these two levels . RESULTS The wound margins were more clearly defined in the HRUS scans than in the photographs of the wounds ; in some of the latter the scab masked the wound margins . Changes in the surface width of the wound were affected by the time of scab dehiscence , which varied between volunteers . There was less individual variation in the width of the base of the wound , as measured from the HRUS scans . CONCLUSIONS In contrast to photography , which allows recording of changes in the superficial aspect of the wound only , HRUS scanning permits the quantitative assessment of structural changes deep within the wound . Temporal changes in the width of the base of the wound can be used as an indication of the progress of repair",
"role": "user"
},
{
"content": "There is currently no systematic evidence to guide clinical decision-making regarding the timing for closure of traumatic wounds .",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2174 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nGraphene-Bonded and -Encapsulated Si Nanoparticles for Lithium Ion Battery Anodes. Silicon (Si) has been considered a very promising anode material for lithium ion batteries due to its high theoretical capacity. However, high-capacity Si nanoparticles usually suffer from low electronic conductivity, large volume change, and severe aggregation problems during lithiation and delithiation. In this paper, a unique nanostructured anode with Si nanoparticles bonded and wrapped by graphene is synthesized by a one-step aerosol spraying of surface-modified Si nanoparticles and graphene oxide suspension. The functional groups on the surface of Si nanoparticles (50-100 nm) not only react with graphene oxide and bind Si nanoparticles to the graphene oxide shell, but also prevent Si nanoparticles from aggregation, thus contributing to a uniform Si suspension. A homogeneous graphene-encapsulated Si nanoparticle morphology forms during the aerosol spraying process. The open-ended graphene shell with defects allows fast electrochemical lithiation/delithiation, and the void space inside the graphene shell accompanied by its strong mechanical strength can effectively accommodate the volume expansion of Si upon lithiation. The graphene shell provides good electronic conductivity for Si nanoparticles and prevents them from aggregating during charge/discharge cycles. The functionalized Si encapsulated by graphene sample exhibits a capacity of 2250 mAh g(-1) (based on the total mass of graphene and Si) at 0.1C and 1000 mAh g(-1) at 10C, and retains 85% of its initial capacity even after 120 charge/discharge cycles. The exceptional performance of graphene-encapsulated Si anodes combined with the scalable and one-step aerosol synthesis technique makes this material very promising for lithium ion batteries.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"graphene oxide\", \"Silicon\", \"lithium\", \"Si\", \"Graphene\", \"Lithium\", \"graphene\"]}",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.1066 | [
{
"content": "Analyze the provided text excerpt from a scientific document. Identify and extract distinct entities based on predefined categories.\n\nDeliver the results as a JSON structure where each key represents an entity category, and the associated value is a list of identified entities. If no entities are found for a category, return an empty list.\n\nText Excerpt:\n\nOverexpression of HSF2-beta inhibits hemin-induced heat shock gene expression and erythroid differentiation in K562 cells. Acquisition of heat shock factor 2 (HSF2) DNA binding activity is accompanied by induced transcription of heat shock genes in hemin-treated K562 cells undergoing erythroid differentiation. Previous studies revealed that HSF2 consists of two alternatively spliced isoforms, HSF2-alpha and HSF2-beta, whose relative abundance is developmentally regulated and varies between different tissues. To investigate whether the molar ratio of HSF2-alpha and HSF2-beta isoforms is crucial for the activation of HSF2 and whether the HSF2 isoforms play functionally distinct roles during the hemin-mediated erythroid differentiation, we generated cell clones expressing different levels of HSF2-alpha and HSF2-beta. We show that in parental K562 cells, the HSF2-alpha isoform is predominantly expressed and HSF2 can be activated upon hemin treatment. In contrast, when HSF2-beta is expressed at levels exceeding those of endogenous HSF2-alpha, the hemin-induced DNA binding activity and transcription of heat shock genes are repressed, whereas overexpression of HSF2-alpha results in an enhanced hemin response. Furthermore, the hemin-induced accumulation of globin, known as a marker of erythroid differentiation, is decreased in cells overexpressing HSF2-beta. We suggest that HSF2-beta acts as a negative regulator of HSF2 activity during hemin-mediated erythroid differentiation of K562 cells.",
"role": "user"
},
{
"content": "{\"CellLine\": [\"K562 cells\"], \"CellType\": [], \"DNA\": [\"heat shock genes\"], \"Protein\": [\"HSF2\", \"HSF2 - alpha\", \"HSF2 - beta\", \"endogenous HSF2 - alpha\", \"HSF2 - alpha isoform\", \"the HSF2 - alpha isoform\", \"globin\", \"hemin\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.2233 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nWild-typeintestineincubatedwithSCpre-immuneserum(panele),orSCanti-Abcg5/sterolin-1antibodypre-incubatedwiththeblockingpeptide(panelf)showednospecificsignals.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.3309 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nTwo new compounds from Schisandra glaucescens. One new lignan (7S,8R,7'R,8'R)-7-(3,4-methylenedioxyphenyl)-8,8'-dimethyl-8'-hydroxyl-7'-methoxyl-7'-(3',4'-methylenedioxyphenyl)-tetrahydrofuran (1), one new sesquiterpene 2-hydroxy-11,12-dehydrocalamenene (2), one new natural product erythro-1-(3,4-dimethoxyphenyl)-4-(3,4-methylenedioxyphenyl)-2,3-dimethyl-butane (3), and two known lignans (+)-anwulignan(erythro-1-(4-hydroxy-3-methoxyphenyl)-4-(3,4-methylenedioxyphenyl)-2,3-dimethyl-butane) (4) and ( - )-zuonin-A (5) were isolated from the stems of Schisandra glaucescens Diels. Their structures were elucidated by spectroscopic methods. The cytotoxicity of compounds 1 and 2 was assayed.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"(+)-anwulignan(erythro-1-(4-hydroxy-3-methoxyphenyl)-4-(3,4-methylenedioxyphenyl)-2,3-dimethyl-butane)\", \"(\\u00a0-\\u00a0)-zuonin-A\", \"lignans\", \"2-hydroxy-11,12-dehydrocalamenene\", \"(7S,8R,7\\u0027R,8\\u0027R)-7-(3,4-methylenedioxyphenyl)-8,8\\u0027-dimethyl-8\\u0027-hydroxyl-7\\u0027-methoxyl-7\\u0027-(3\\u0027,4\\u0027-methylenedioxyphenyl)-tetrahydrofuran\", \"erythro-1-(3,4-dimethoxyphenyl)-4-(3,4-methylenedioxyphenyl)-2,3-dimethyl-butane\"]}",
"role": "assistant"
}
] |
science.scierc_re | science.scierc_re.107 | [
{
"content": "Examine the provided abstract from a computer science paper. Your objective is to identify and extract relationships between entities. Present these relationships in a structured JSON format.\n\nInput Text:\n\nMany description logics (DLs) combine knowledge representation on an abstract, logical level with an interface to 'concrete'domains such as numbers and strings. We propose to extend such DLs with key constraints that allow the expression of statements like'US citizens are uniquely identified by their social security number'. Based on this idea, we introduce a number of natural description logics and present (un) decidability results and tight NEx-PTlME complexity bounds.",
"role": "user"
},
{
"content": "[[\"knowledge representation\", \"description logics (DLs)\", \"CONJUNCTION\"]]",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.2766 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: A Survey of Big Data Analytics for Smart Forestry \nChapter Title: 1) CLASSIFICATION AND CHANGE DETECTION OF FOREST VEGETATION \n1. Abstract of Cited Paper (BIB001): Our work addresses the problem of extracting and classifying tree species from mobile LiDAR data. The work includes tree preprocessing and tree classification. In tree preprocessing, voxel-based upward-growing filtering is proposed to remove ground points from the mobile LiDAR data, followed by a tree segmentation that extracts individual trees via Euclidean distance clustering and voxel-based normalized cut segmentation. In tree classification, first, a waveform representation is developed to model geometric structures of trees. Then, deep learning techniques are used to generate high-level feature abstractions of the trees’ waveform representations. Quantitative analysis shows that our algorithm achieves an overall accuracy of 86.1% and a kappa coefficient of 0.8 in classifying urban tree species using mobile LiDAR data. Comparative experiments demonstrate that the uses of waveform representation and deep Boltzmann machines contribute to the improvement of classification accuracies of tree species. \n2. Abstract of Cited Paper (BIB002): Abstract Spatially explicit information on tree species composition of managed and natural forests, plantations and urban vegetation provides valuable information for nature conservationists as well as for forest and urban managers and is frequently required over large spatial extents. Over the last four decades, advances in remote sensing technology have enabled the classification of tree species from several sensor types. While studies using remote sensing data to classify and map tree species reach back several decades, a recent review on the status, potentials, challenges and outlooks in this realm is missing. Here, we search for major trends in remote sensing techniques for tree species classification and discuss the effectiveness of different sensors and algorithms based on a literature review. This review demonstrates that the number of studies focusing on tree species classification has increased constantly over the last four decades and promising local scale approaches have been presented for several sensor types. However, there are few examples for tree species classifications over large geographic extents, and bridging the gap between current approaches and tree species inventories over large geographic extents is still one of the biggest challenges of this research field. Furthermore, we found only few studies which systematically described and examined the traits that drive the observed variance in the remote sensing signal and thereby enable or hamper species classifications. Most studies followed data-driven approaches and pursued an optimization of classification accuracy, while a concrete hypothesis or a targeted application was missing in all but a few exceptional studies. We recommend that future research efforts focus stronger on the causal understanding of why tree species classification approaches work under certain conditions or – maybe even more important - why they do not work in other cases. This might require more complex field acquisitions than those typically used in the reviewed studies. 
At the same time, we recommend reducing the number of purely data-driven studies and algorithm-benchmarking studies as these studies are of limited value, especially if the experimental design is limited, e.g. the tree population is not representative and only a few sensors or acquisition settings are simultaneously investigated.\n\nReview Chapter:",
"role": "user"
},
{
"content": "A major basis for forest vegetation classification based on remote sensing images is ''same objects with same spectrums''. However, the problems of ''same object with different spectrums'' and ''different objects with same spectrums'' caused by the increasing in resolution have a great influence on the classification accuracy. For this problem, we can combine spectral information with features such as texture and terrain to accurately classify vegetation cover. The forestry vegetation classification uses data collected by multiple sensors with multiple temporal characteristics, and the classification methods are gradually diversified. For instance, combining Landsat-8 images with the topographic map data, sample plot data, and the forest manager survey data to classify tree species based on textures , and using MODIS data and DEM data to classify phenological attributes of tree species. It is also possible to extract forest vegetation types based on environmental satellite imagery (HJ-1 CCD) and MODIS time series data , or to classify tree species fine by fusing SAR or LiDAR data with hyperspectral remote sensing data BIB003 . At present, the data is mainly processed through various big data and machine learning technologies, including deep learning, support vector machines, and so on. For example, deep learning can be used to extract and classify tree species from LiDAR data BIB002 . Facing the huge challenge of computing performance due to the increasing data volume, Li et al. BIB001 parallelize the ISODATA clustering algorithm based on MapReduce, which is easy to use. Chebbi et al. implement a satellite remote sensing classification method based on Hadoop, which includes two parts: data storage and processing. In the storage part, the system stores heterogeneous data through HDFS. In the processing part it uses the MapReduce programming paradigm to implement the K-means algorithm. The system can improve the classification accuracy, and also make full use of the integrated configuration resources. Based on the classification technology, the change detection of remote sensing images can obtain timely and consistent forest interference information, so that forest resources can be fully utilized and effectively protected. Based on Hadoop, Dan Hammer and others implement a high-confidence forest disturbance warning system, which establishes appropriate geolocation classification rules and enforces local forest conservation efforts.",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.22669 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nUsing UPLC-QTOF-MS Method to Analyse the Spot Constituents of the Thin Layer Chromatograms for Chuzhou Stemona sessilifolia (Miq.) Miq.\n\nContent Extracts:\n1. Introduction\n<p>Stemona Root is the dried root tuber of Stemona sessilifolia (Miq.) Miq. (S. sessilifolia), Stemona japonica (Bl.) Miq., or Stemona tuberose Lour. (Fam. Stemonaceae). The herbal drug is collected in summer or autumn and then removed of rootlet, washed clean, treated with boiling water for a moment or steamed until the center of the cut section surface is devoid of a white core, and finally dried in the sun [1].</p><p>S. sessilifolia is one of the three largest sources of Stemona Root. Its flavor is mild warm, sweet, and bitter. It belongs to lung meridian. The therapeutic action of S. sessilifolia includes moistening the lung, direct qi downward to suppress cough, and killing worms and lice [2–8].</p><p>S. sessilifolia is planted in several provinces such as Anhui, Henan, and Jiangsu in China [9–11]. However, the genuine production district of S. sessilifolia is regarded to be Chuzhou county of Anhui Province [12–14] and is thus known as Chuzhou S. sessilifolia[7, 15].</p><p>We collected 10 batches of S. sessilifolia samples from districts all over the country of China, with seven batches from Chuzhou county.</p><p>On the basis of a large number of references, we have carried out a lot of preliminary research work on S. sessilifolia [16–18] and established the thin layer chromatography method used to confirm that protostemonine is just the index chemical composition.</p><p>From the thin layer chromatogram, we found the identical spots which exist in the products from major producing areas of Chuzhou county and is, called by us, the geographical indication spot. Herein, we used UPLC-QTOF-MS method to identify the constituents of the geographical indication spot and other three clear spots of Chuzhou S. sessilifolia. Moreover, we inferred the possible molecular formula and structures of the constituents of the four spots.</p>\n\n2.1. Instruments\n<p>The instruments used for this study include Automatic TLC Sampler 4 (Switzerland CAMAG company), Camag TLC Visualizer (Switzerland CAMAG company), Waters Acquity UPLC™, Xevo G2 QTOF mass spectrometer, Waters Marker Lynx 4.1 chromatographic workstation, Electronic analytical balance (BS210S, Sartorius), Numerical control ultrasonic cleaner (Kunshan Ultrasonic Instrument Co., Ltd.), Merck SG60 F254 prefabricated slab, and twin trough glass chamber.</p>\n\n2.2. Chemicals and Reagents\n<p>Methanol and acetonitrile of HPLC-grade were purchased from Fisher Chemical. Ether, ethanol, cyclohexane, ethyl acetate, acetone, and ammonium hydroxide solution (25%, v/v) were of analytical purity and purchased from Sinopharm Chemical Reagent Co., Ltd.</p><p>Ammonium carbonate, guaranteed reagent, was purchased from Sinopharm Chemical Reagent Co., Ltd. Ultrapure water was self-made with the Milli-Q water purification system (Millipore, Milford, MA).</p><p>And 5% (w/v) sodium nitrite in 70% (v/v) ethanol solution, the dilute bismuth potassium iodide solution, and the reference substance of protostemonine (purity >98%) were prepared in our lab.</p><p>A total of 10 batches of S. sessilifolia samples were collected from Anhui and Henan provinces and authenticated by Prof. 
Zhang Yazhong (Department of Chinese Medicine Tests, Anhui Institute for Food and Drug Control, China).</p>\n\n2.3. UPLC-QTOF-MS Conditions\n<p>Chromatographic analyses were performed on a Waters Acquity UPLC™.</p><p>Chromatographic separation was achieved on an Acquity UPLC BEH Shield RP C18 column (2.1 × 150 mm, 1.7 μm) at 30°C with a flow rate of 0.4 ml/min. Mobile phases A and B were acetonitrile and 0.25 mM ammonium carbonate solution, respectively. The gradient elution program was as follows: 0–10 min, 85–20% B; 10–10.50 min, 20–85% B; and 10.5–12.00 min, 85% B. The detector was a PDA, and the injection volume was 5 μl.</p><p>Mass spectrometric analysis was performed on a Waters Xevo G2 QTOF/MS, operating in the positive electrospray ionization mode and collecting data in the full-scan mode over the mass range m/z 80–1000 with a 0.2 s scan time.</p><p>The other optimal parameters were as follows: capillary voltage, 3.0 kV; cone voltage, 20–60 V; source temperature, 150°C; desolvation temperature, 450°C; desolvation gas flow, 600 L/h. All data were recorded and processed with the Waters Marker Lynx 4.1 chromatographic workstation.</p>\n\n2.4. Sample Preparation\n<p>Pretreatment of the powdered sample with reagent and preparation of the sample solution were carried out according to a previous report [19].</p>\n\n2.5. Reference Standard Preparation\n<p>Protostemonine was dissolved in ethanol [19].</p>\n\n2.6. Thin Layer Chromatography\n<p>Based on preliminary exploration, the developing solvent system, TLC plate, mobile phase, sample volume, spraying reagent, and observation conditions were established [19].</p>\n\n2.7. Pretreatment of the Thin Layer Plate Spot\n<p>The positions of the four spots are shown in Figure 2. The four corresponding spots were scraped out with a knife and dissolved in 5 ml of methanol, followed by sonication (360 W) for 10 min and then filtration to obtain the sample solutions. As a control, a blank region of the silica gel G plate was also scraped out and treated in the same way as the sample solutions.</p>\n\n3.1. Thin Layer Chromatogram of Samples\n<p>The mobile phase was cyclohexane-ethyl acetate-acetone-ammonium hydroxide solution (the upper layer) (6:4:4:1), and spotting was carried out with a 5 μl sample volume.</p><p>Under the above method, the 10 batches of samples were separated without tailing or diffuseness (Figure 1).</p><p>The thin layer chromatographic behaviors of the 10 batches of samples were similar, indicating that the alkaloid species in samples from different producing areas were consistent. Samples numbered 3,4,5,6,10,11,12 showed one more spot than those numbered 2,8,9, as marked in red circles. Since samples numbered 3,4,5,6,10,11,12 were from Chuzhou county, we inferred that the extra spot was the geographical indication spot of Chuzhou S. sessilifolia.</p>\n\n3.2. Selection of Thin Layer Chromatography Spots\n<p>We selected four spots from the sample of Chuzhou S. sessilifolia, including the geographical indication spot corresponding to protostemonine (Figure 2).</p>\n\n3.3. Composition of Spots Analysis with UPLC-QTOF-MS Method\n<p>Injection of 5 μl of the blank or sample solution into the UPLC-QTOF-MS system yielded the total ion chromatograms of the four spots, as shown in Figure 3.
Comparing the retention times, one characteristic chromatographic peak was detected in the total ion chromatogram of each thin layer chromatography spot, as marked by the red circles.</p><p>We then analyzed the fragment ion spectra of the four spots.</p><p>The detailed mass spectra are shown in Figure 4.</p><p>The main constituents of the four spots were tentatively identified by comparing their mass spectra with those of the available reference standards. The detailed structural information for the four spots is listed in Table 1.</p>\n\n4. Conclusion\n<p>In conclusion, we identified the geographical indication spot of Chuzhou S. sessilifolia by establishing the thin layer chromatogram of S. sessilifolia. In addition, a reliable and sensitive UPLC-QTOF-MS method in positive ion mode was used to analyze and infer the compounds of the geographical indication spot and three other clear spots of Chuzhou S. sessilifolia. This may provide a basis for demonstrating the particular geoherbalism of Chuzhou S. sessilifolia.</p>",
"role": "user"
},
{
"content": "A simple, rapid, and highly sensitive analytical method was established for identification of constituents in the spot of the thin layer chromatogram of Chuzhou Stemona sessilifolia (Miq.) Miq. (Chuzhou S. sessilifolia) with ultra performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOF-MS). The technology was applied to systematically analyze and detect the targeted spots. Compared with the fragmentation behaviors of more than thirty reference constituents, the possibly existing compounds of the target spots were identified or tentatively identified by their exact masses and diagnostic fragment ions. Finally, the four clear spots of the thin layer chromatograms of Chuzhou S. sessilifolia were screened and identified of possible molecular formula and structures.",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.25700 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nIntramolecular substitution uncages fluorogenic probes for detection of metallo-carbapenemase-expressing bacteria\n\nContent Extracts:\nIntroduction\n<p>Fluorescent probes have been widely used for bioanalytical and bioimaging applications because of high sensitivity, ease of use and low expense. [1][2][3][4] Among them uorogenic probes offer the feature of signal \"turn-on\" in contact with the analyte without the need to remove or wash off the unreacted probes, and are thus particularly attractive. 5 A number of strategies have been reported to develop uorogenic probes, such as the popular uorescence resonance energy transfer (FRET) mechanism, [6][7][8][9] the modulation of the pi-conjugation [10][11][12] or conformation 13 of the uorophore, and photo electron transfer. [14][15][16] Another common strategy is to chemically cage the uorophore with a group that can be specically removed by the analyte (Fig. 1). 17 The caging group may be directly coupled to the uorophore and provide the \"off-state\" probe. When this direct caging strategy cannot be implemented or a uorogenic probe with the direct caging performs poorly, for example, slow kinetics due to the steric effect of the bulky uorophore, a linker is oen designed to connect the two units to facilitate the design. Such linkers are designed to be immolative to enable the generation of the free uorophore and restoration of the uorescence emission. In this work, we report a new linker design for indirect caging of uorophore: instead of self-immolation, the uorophore regeneration proceeds through intramolecular substitution reaction aer uncaging. In demonstrating this design, we successfully applied it to develop turn-on uorogenic probes for detecting b-lactamases.</p><p>b-Lactamases are a class of bacterial hydrolase responsible for the microbial resistance to lactam antibiotics through enzymatic degradation of the lactam rings. 18 Since the rst blactamase was discovered in 1940, 19 hundreds of b-lactamases have been identied. Among them, carbapenemases 20 can hydrolyse broad-spectrum carbapenem antibiotics that are oen considered the last option for effective therapy of infections with resistant organisms, and are responsible for the emergence of carbapenem-resistant Enterobacteriaceae (CRE) 21 over the past decade. CRE infection has le clinicians with few treatment options, and the global spread of CRE has created a serious threat to public health.</p><p>Carbapenemases have been classied into two major groups based on their active sites: (i) serine carbapenemases belonging to the class A penicillinases and class D oxacillinases, which possess a serine residue active-site as a nucleophile to cleave the b-lactam ring; and (ii) metallo-b-lactamases (MBLs) belonging to the class B carbapenemases, which require a Zn 2+ -OH for nucleophilic attack on the b-lactam bond. [21][22][23] Among the members of the metallo-carbapenemase family, Verona integron-encoded MBLs (VIM), \"active on imipenem\" (IMP), and New Delhi metallo-b-lactamase-1 (NDM-1) are the most common. 21,24 Antibiotic resistance due to acquired metallocarbapenemase-expressing bacteria is considered to be more serious because of their strong hydrolysis ability and rapid dissemination. 
25 Inexpensive and simple operational uorescent carbapenemase imaging assay is thus requisite to monitor the occurrence of carbapenemase-expressing bacteria.</p><p>There have been a number of uorescent probes reported for the detection of b-lactamases based on cephalosporin where uorophores are oen attached to the 3 0 carbon position. [26][27][28][29][30][31][32][33] The hydrolysis of the lactam ring by b-lactamases results in the release of attached uorophores through fragmentation. In designing specic carbapenemase-responsive probes, carbapenem has been used as recognition moiety and conjugated a uorophore BODIPY via an alkenyl linkage, though it is unclear whether this approach can be generalized for other uorophores. 34 We have previously converted the R conguration of C7 in the cephalosporin to S, 35 and directly attach a blue uorophore coumarin to the 3 0 position. Herein, we conceptualized an indirect caging strategy using an intramolecular substitution and designed a probe specic for metallocarbapenemases like IMP-1, VIM-27 and NDM-1 with green uorescent emission.</p>\n\nResults and discussion\n<p>We began our caging strategy design from the observations that many of the uorophores carry a phenolic group and that caged phenolic groups lead to the decrease, sometimes even loss, of uorescence emission. The direct coupling of the phenolic group of the uorophore to the 3 0 position of cephalosporin affords an allyl phenyl ether structure, which is prone to spontaneous hydrolysis in water. In comparison, allyl phenyl thioether would offer better stability. However, in order to take advantage of the stability of allyl phenyl thioether in the linker, a uorophore-releasing strategy would have to be in place since the thiophenyl group is generally not present in uorophores. We envision a design that the released thiophenyl group can act through an intramolecular substitution reaction to produce uorescence activation. In this design, the uorophore caging takes place through a separate group, unlike in the direct caging strategy that uses the linker to cage the uorophore. Fig. 2 shows examples of such a design based on cephalosporin with a thiophenyl linker to a green uorescence emission uorophore Tokyo green caged by dinitrophenyl arylation. The choice of dinitrophenyl caging is motivated by reported high efficiency in uorescence quenching and stability in water toward endogenous nucleophiles such as cysteine and glutathione (GSH). [36][37][38] Cephalosporin-arylated Tokyo green (CAT) probes (Fig. 2) were prepared by conjugating arylated Tokyo green to cephalosporin through a thiophenyl linker (Scheme 1). Probe CAT-1 was obtained by direct conjugation of thiophenylacetic acid to arylated Tokyo green and cephalosporin. As expected, CAT-1 was slowly responsive upon treatment with 20 nM or even 200 nM BlaC to produce only 2-3-fold increase in uorescence emission in the phosphate-buffered saline (PBS, pH ¼ 7.4), likely due to the short linker that prevents intramolecular substitution from occurring (Fig. 3 and S1 †). To our delight, the kinetics was signicantly improved when probe CAT-2 bearing a longer linkage was tested under the same condition to give a 19-fold of increase in uorescence emission within 2 h (Fig. 3 and S1 †).</p><p>We further examined the correlation between enzymatic reaction rate and the linker length and synthesized probes CAT-3 to CAT-6. It was found that probes bearing a longer linker generally showed faster uorescence enhancement upon the treatment of b-lactamase (Fig. 
3 and S1 †). For example, treatment of CAT-6 with 20 nM BlaC in PBS (pH ¼ 7.4) at 22 C yielded a maximal uorescence increase within 20 min. CAT-5 was slightly slower in reaching the maximal uorescence increase than CAT-6, and both were much faster than CAT-4 under the same condition. These observations were conrmed by the catalytic efficiency data of the CAT probes summarized in Table S1. † Moreover, these cephalosporin-arylated Tokyo green scaffolds were proved to be stable in PBS (pH ¼ 7.4) at 22 C within 24 h (k uncat z 10 À8 s À1 ).</p><p>Based on these results, we selected CAT-6 as the design for metallo-carbapenemase detection. In addition to the conversion of the R conguration of C7 in the cephalosporin to S, an isopropyloxy substitution was incorporated to the 7-position to produce metallo-carbapenemase specic probe CAT-7 (Fig. 2). The enzymatic activation process of CAT-7 by metallocarbapenemase IMP-1 was analysed by plate reader, uorometer and HPLC (Fig. 4-6), revealing the formation of the expected product CAT-7-P conrmed by HRMS (Fig. S2 †). Probe CAT-7 showed a typical broad absorption band at 460 nm; upon treatment with IMP-1, the absorption peak shied to 490 nm, and its uorescent intensity around 510 nm increased by 37fold in PBS (pH ¼ 7.4) at 22 C within 1 h (Fig. 6).</p><p>To further validate the intramolecular substitution mechanism, we replaced the dinitrophenyl caging group by a mononitrophenyl group to make probe CAT-8. This substitution would expect to affect the intramolecular reaction due to the enhanced stability of the mono-nitrophenyl ether. As shown in Fig. 4, CAT-8 displayed a much lower uorescent response than CAT-7 and yielded just 3-fold increase in uorescence emission within 2 h.</p><p>The specicity of CAT-7 was investigated by measuring the uorescence response aer exposure to a variety of b-lactamases in PBS (pH ¼ 7.4) at 22 C. As shown in Fig. 7, there is a concentration-dependent uorescent response to metallocarbapenemases VIM-27, IMP-1 and NDM-1 but not other blactamases. These results demonstrated the excellent selectivity of CAT-7 for metallo-carbapenemases over other b-lactamases. Treatment of CAT-7 with 5 nM IMP-1 in aqueous buffer (PBS, pH 7.4) resulted in a 77-fold increase in uorescence over 2 h (Fig. 7a). The kinetic parameters of CAT-7 upon incubation with IMP-1, VIM-27 and NMD-1 were determined (Table 1). In comparison to the coumarin based CRE probe (S)-CC-3, 35 the kinetic efficiency for IMP-1, VIM-27 and NDM-1 is similar, but the chemical stability of CAT-7 was signicantly improved (the spontaneous hydrolysis rate decreased by 30-fold). To conrm that the uorescence enhancement was dependent on the metallo-carbapenemase catalysed process, EDTA was added to inhibit the enzyme activity (Fig. 7c). No detectable signal increase was observed aer 2 h, conrming that the uorescence increase is directly related to the activity of metallo-carbapenemases. Furthermore, the inhibition of IMP-1 with a thiol analogue captopril (100 mM) was also investigated and conrmed using probe CAT-7, as shown in Fig. S4. † Finally, we evaluated if CAT-7 could detect metallocarbapenemases in clinical bacteria samples. As shown in Fig. 
8, a variety of bacterial lysates (4 Â 10 5 cfu in 100 mL PBS, pH ¼ 7.4) was incubated with CAT-7 for 2 hours, and bacteria samples VIM-27, IMP-1, and NDM-1 expressing showed a greater enhancement in uorescence upon incubation with over control b-lactamases, demonstrating the ability of CAT-7 to detect metallo-carbapenemases in clinical CRE strains. As a comparison, commercially available uorogenic probe Fluorocillin showed uorescence enhancement when incubated with all bacterial strains without any specicity (Fig. S5 †).</p>\n\nConclusions\n<p>In summary, a novel platform for caging uorophores and designing uorogenic probes is presented, which uses a thiophenyl linker connected to a uorophore caged by a good leaving group-dinitrophenyl. Upon the reaction with the analyte, the thiophenyl group is released and intramolecularly substitutes the dinitrophenyl caging group on the uorophore. While we demonstrated this caging platform for designing Tokyo green (CAT) based uorogenic probes to detect metallocarbapenemases in carbapenem-resistant Enterobacteriaceae (CRE) lysates, we expect that it can be generally applied to other analytes and uorophores.</p>\n\nConflicts of interest\n<p>There are no conicts to declare.</p>",
"role": "user"
},
{
"content": "This work reports a novel caging strategy for designing fluorogenic probes to detect the activity of b-lactamases. The caging strategy uses a thiophenyl linker connected to a fluorophore caged by a good leaving group-dinitrophenyl. The uncaging proceeds in two steps through the sulfa-releasing and subsequent intramolecular substitution. The length of the linker has been examined and optimized to maximize the rate of intramolecular reaction and thus the rate of fluorescence activation. Finally based on this strategy, we prepared a green fluorogenic probe CAT-7 and validated its selectivity for detecting metallo-carbapenemases (VIM-27, IMP-1, NDM-1) in carbapenem-resistant Enterobacteriaceae (CRE) lysates.",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.1211 | [
{
"content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: I want to develop an unsupervised method for discovering new categories of objects in images.\n\nKey Terms: novel class discovery images\n\nDataset Summaries:\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nRADIATE: RADIATE (RAdar Dataset In Adverse weaThEr) is new automotive dataset created by Heriot-Watt University which includes Radar, Lidar, Stereo Camera and GPS/IMU. The data is collected in different weather scenarios (sunny, overcast, night, fog, rain and snow) to help the research community to develop new methods of vehicle perception.\nSyn2Real: Syn2Real, a synthetic-to-real visual domain adaptation benchmark meant to encourage further development of robust domain transfer methods. The goal is to train a model on a synthetic \"source\" domain and then update it so that its performance improves on a real \"target\" domain, without using any target annotations.\nDOTA: DOTA is a large-scale dataset for object detection in aerial images. It can be used to develop and evaluate object detectors in aerial images.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\n\nDataset Options: A: CIFAR-10 B: RADIATE C: Syn2Real D: DOTA E: ImageNet",
"role": "user"
},
{
"content": "[\"CIFAR-10\", \"ImageNet\"]",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.871 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Kernels for Vector-Valued Functions: A Review \nChapter Title: A Connection Between Bayesian and Regularization Point of Views \n1. Abstract of Cited Paper (BIB001): Kernel methods provide a powerful and unified framework for pattern discovery, motivating algorithms that can act on general types of data (e.g. strings, vectors or text) and look for general types of relations (e.g. rankings, classifications, regressions, clusters). The application areas range from neural networks and pattern recognition to machine learning and data mining. This book, developed from lectures and tutorials, fulfils two major roles: firstly it provides practitioners with a large toolkit of algorithms, kernels and solutions ready to use for standard pattern discovery problems in fields such as bioinformatics, text analysis, image analysis. Secondly it provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so. \n2. Abstract of Cited Paper (BIB002): Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Connections between regularization theory and Gaussian process prediction or Bayesian models for prediction have been pointed out elsewhere BIB001 BIB003 . Here we just give a very brief sketch of the argument. We restrict ourselves to finite dimensional RKHS. Under this assumption one can show that every RKHS can be described in terms of a feature map , that is a map Φ : X → R p , such that The solid line corresponds to the predictive mean, the shaded region corresponds to two standard deviations of the prediction. Dots are values of the output function Y. We have also included some samples from the posterior distribution, shown as dashed lines. In fact in this case one can show that functions in the RKHS with kernel k can be written as Then we can build a Gaussian process by assuming the coefficient w = w 1 , . . . , w p to be distributed according to a multivariate Gaussian distribution. Roughly speaking, in this case the assumption f * ∼ GP(0, k) becomes As we noted before if we assume a Gaussian likelihood we have Then the posterior distribution is proportional to e and we see that a maximum a posteriori estimate will in turn give the minimization problem defining Tikhonov regularization , where the regularization parameter is now related to the noise variance. We note that in regularization the squared error is often replaced by a more general error term 1 . In a regularization perspective, the loss function ℓ : R × R → R + measure the error we incur when predicting f (x) in place of y. The choice of the loss function is problem dependent. Often used examples are the square loss, the logistic loss or the hinge loss used in support vector machines (see ). The choice of a loss function in a regularization setting can be contrasted to the choice of the likelihood in a Bayesian setting. In this context, the likelihood function models how the observations deviate from the assumed true model in the generative process. The notion of a loss function is philosophically different. It represents the cost we pay for making errors. In Bayesian modeling decision making is separated from inference. In the inference stage the posterior distributions are computed evaluating the uncertainty in the model. The loss function appears only at the second stage of the analysis, known as the decision stage, and weighs how incorrect decisions are penalized given the current uncertainty. However, whilst the two notions are philosophically very different, we can see that, due to the formulation of the frameworks, the loss function and the log likelihood provide the same role mathematically. The discussion in the previous sections shows that the notion of a kernel plays a crucial role in statistical modeling both in the Bayesian perspective (as the covariance function of a GP) and the regularization perspective (as a reproducing kernel). Indeed, for scalar valued problems there is a rich literature on the design of kernels (see for example BIB002 BIB003 and references therein). In the next sections we show how the concept of a kernel can be used in multi-output learning problems. Before doing that, we describe how the concepts of RKHSs and GPs translate to the setting of vector valued learning.",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.3814 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: An overview of centralised middleware components for sensor networks \nChapter Title: Application-driven components \n1. Abstract of Cited Paper (BIB001): We propose an adaptive middleware design for context-aware applications that abstracts the applications from the sensors that provide context. Further, we use application-specific utility functions to choose, given multiple alternatives for providing a specific context, one alternative at any time that provides the context for all applications, whilst maximising the applications' total \"satisfaction\" with the quality of context from the chosen provider. Our middleware also implements autonomic properties, such as self-configuration and resilience to failures, in the provision of context information to context-aware applications.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Whereas the middleware components in the previous categories were grouped by an architectural similarity, the components in this category have a common goal: optimising Quality of Service (QoS). There is no formal definition of QoS, so the various QoS aspects are typically application-specific. This is why this category is named 'application-driven', although perhaps 'QoS-driven' is a more appropriate name. An example of an application-driven middleware component is MiLAN (Middleware Linking Applications and Networks, by BIB001 ), which considers both the QoS requirements of an application and the QoS capabilities of the available sensors. These requirements and capabilities : Agent-based middleware components equip each sensor with an execution environment (ee) allowing agents to, e.g., migrate to other sensors are then matched to select the sensors involved in the application. In MiLAN, the QoS levels reflect uncertainty in sensor measurements, i.e., if faced with a choice between using a sensor with level 0.7 and a sensor with level 0.8, the latter is preferred due to its higher reliability. Besides MiLAN, MidFusion (Alex et al. (2008)) and the adaptive middleware component (AMC) by BIB002 also belong this category.",
"role": "assistant"
}
] |
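The utility-maximizing provider selection described in this row (the AMC abstract's choice, among alternative context providers, of the one maximizing the applications' total satisfaction) fits in a few lines. All names, quality levels, and utility functions below are hypothetical placeholders, not taken from the cited papers:

    def pick_provider(providers, apps):
        """Return the provider whose quality maximizes total application utility."""
        def total_utility(quality):
            return sum(utility(quality) for utility in apps.values())
        return max(providers, key=lambda name: total_utility(providers[name]))

    # hypothetical context providers with QoS levels, as in the MiLAN example
    providers = {"sensor_a": 0.7, "sensor_b": 0.8}
    # per-application utility functions mapping quality -> satisfaction
    apps = {"navigation": lambda q: q ** 2, "logging": lambda q: q}
    print(pick_provider(providers, apps))  # 'sensor_b', the more reliable option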
science.nlmgene_ner | science.nlmgene_ner.59 | [
{
"content": "Analyze the provided text to identify gene-related entities. \n\nFormat your findings as a JSON object: {\"Gene\": [\"ExampleGene\", ...]}. Use the entity type as the key, and list the entities under it. If no entities are found, use an empty list.\n\nFocus solely on the JSON output.\n\nText:\n\nThe leaves of Diospyros kaki exert beneficial effects on a benzalkonium chloride-induced murine dry eye model. PURPOSE: In this study, the beneficial effects of the oral administration of ethanol extract of Diospyros kaki (EEDK) were tested on a mouse dry eye model induced by benzalkonium chloride (BAC). METHODS: A solution of 0.2% BAC was administered topically to mouse eyes for 14 days, twice daily, to induce dry eye. Various concentrations of EEDK were administrated daily by oral gavage for 14 days after BAC treatment. Preservative-free eye drops were instilled in the positive-control group. The tear secretion volume (Schirmer's test), tear break-up time (BUT), and fluorescein score were measured on the ocular surface. BAC-induced corneal damage was tested with hematoxylin-eosin staining. Moreover, apoptotic cell death in the corneal epithelial layer was investigated with terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) staining. The protein expression level of interleukin-1alpha (IL-1α), IL-1β, IL-6, tumor necrosis factor- alpha (TNF-α), and monocyte chemotactic protein-1 (MCP-1) was determined with western blot analysis. Furthermore, squamous metaplasia in the corneal epithelial layer was detected with immunofluorescent staining for cytokeratine-10. The cellular proliferation in the cornea was examined with immunohistochemical staining for Ki-67. RESULTS: EEDK treatment resulted in prolonged BUT, decreased fluorescein score, increased tear volume, and smoother epithelial cells compared with BAC treatment alone in the cornea. Moreover, EEDK treatment inhibited the inflammatory response and corneal epithelial cell death in a BAC-induced murine dry eye model, and changes in squamous cells were inhibited. Proliferative activity in the corneal epithelium cells was improved with EEDK. CONCLUSIONS: EEDK could be a potential therapeutic agent in the clinical treatment of dry eye.",
"role": "user"
},
{
"content": "{\"Gene\": [\"IL-1\\u03b1\", \"Ki-67\", \"TNF-\\u03b1\", \"monocyte chemotactic protein-1\", \"IL-6\", \"interleukin-1alpha\", \"tumor necrosis factor- alpha\", \"cytokeratine-10\", \"IL-1\\u03b2\", \"MCP-1\"]}",
"role": "assistant"
}
] |
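Rows like the one above pair an abstract with a JSON answer, which suggests a simple sanity check: parse the answer and confirm each extracted entity occurs verbatim in the text. A minimal sketch — the truncated abstract and the short entity list are stand-ins for the full row:

    import json

    abstract = ("The protein expression level of interleukin-1alpha (IL-1\u03b1), "
                "IL-1\u03b2, IL-6, tumor necrosis factor- alpha (TNF-\u03b1) ...")
    answer = json.loads('{"Gene": ["IL-6", "TNF-\\u03b1", "interleukin-1alpha"]}')

    missing = [g for g in answer["Gene"] if g not in abstract]
    print(missing)  # [] -- every listed gene appears verbatim in the abstract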
science.pico_ner | science.pico_ner.3799 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nSucralfate overcomes adverse effect of cigarette smoking on duodenal ulcer healing and prolongs subsequent remission . A unicenter , single-blind , randomized study was conducted on 283 patients with active duodenal ulcer to compare possible factors that may affect healing and relapse in patients treated with a potent antisecretory agent , cimetidine , or a site-protective and cytoprotective agent , sucralfate . The endoscopic healing rates at 4 wk were 76 % and 79 % , respectively , and cross-over treatment of the failures for a further 4 wk resulted in 68 % healing with cimetidine and 69 % healing with sucralfate , both differences being not statistically different . Unlike cimetidine , healing by sucralfate was unaffected by cigarette smoking , reluctance to give up smoking , habitual use of alcohol , high maximal acid output , and large ulcer diameter . In particular , the healing rate of smokers treated with sucralfate ( 82 % ) was significantly greater than that of smokers treated with cimetidine ( 63 % ) . Duodenal bulb deformity significantly affected healing in both groups , and was the only offsetting factor identifiable for sucralfate out of 46 factors examined . Of the patients with healed ulcers , 238 participated in a 24-mo follow-up study consisting of interviews at 2-mo intervals and endoscopy at 4-mo intervals or whenever symptoms recurred . The cumulative relapse rate was significantly ( p less than 0.007 ) greater in patients healed with cimetidine than with sucralfate , 50 % relapse occurring at 6 and 12 mo , respectively . In both , the cumulative relapse rate was significantly greater in cigarette smokers than in nonsmokers , but smokers and nonsmokers treated with cimetidine relapsed ( 50 % at 4 and 8 mo , respectively ) faster than the corresponding smokers and nonsmokers treated with sucralfate ( 50 % at 8 and 18 mo , respectively ) . Furthermore , in cimetidine- but not sucralfate-healed patients , early ulcer relapse ( within 6 mo ) was associated with short duration of illness , short remission period , long symptomatic spell , and reluctance to give up smoking . We conclude that smoking adversely affects duodenal ulcer healing by cimetidine and hastens subsequent relapse , and that sucralfate overcomes the adverse effect of smoking on healing as encountered with cimetidine , and results in a subsequent remission period double that of cimetidine .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"sucralfate\", \"sucralfate-healed\", \"Sucralfate\", \"cimetidine\", \"antisecretory agent\", \"cimetidine-\"], \"Outcome\": [\"ulcer healing\", \"remission period\", \"duration of illness\", \"relapsed\", \"healing rate\", \"endoscopic healing rates\", \"cumulative relapse rate\", \"long symptomatic spell\", \"reluctance to give up smoking\"], \"Participant\": [\"ulcers\", \"238\", \"antisecretory agent , cimetidine ,\", \"283\", \"duodenal ulcer\", \"site-protective and cytoprotective agent\"]}",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.29 | [
{
"content": "Analyze the provided text excerpt from a scientific document. Identify and extract distinct entities based on predefined categories.\n\nDeliver the results as a JSON structure where each key represents an entity category, and the associated value is a list of identified entities. If no entities are found for a category, return an empty list.\n\nText Excerpt:\n\nExpression of the nucleoside diphosphate kinase in human skin cancers: an immunohistochemical study. Expression of nucleoside diphosphate (NDP) kinase, which is homologous to the nm23 gene product in a variety of species, has been found to be inversely associated with metastatic potential. However, the relationship remains controversial according to the tumor cell types and experimental system, with conflicting results from different research groups. In order to determine whether NDP kinase expression serves as a marker for metastatic potential in human skin cancer, we assessed the levels of NDP kinase expression in 9 keratoacanthomas (KAs), 26 squamous cell carcinomas (SCCs), and 25 basal cell carcinomas (BCCs) using immunohistochemistry. The expression of NDP kinase was intense in KA and SCC compared with BCC. However, the difference of NDP kinase expression between KA and SCC was not statistically significant. And there was no statistically significant difference in NDP kinase expression between SCC with metastasis and SCC without metastasis. Our results contradict the hypothesis concerning the possible role of nm23 gene as a metastatic suppressor gene in human skin cancer. The mechanism of overexpression in various tumor cell types and its biological significance in cutaneous carcinogenesis remain to be determined.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"CellType\": [\"tumor cell types\"], \"DNA\": [\"nm23 gene\"], \"Protein\": [\"NDP kinase\", \"nucleoside diphosphate kinase\", \"nucleoside diphosphate (NDP) kinase\", \"nm23 gene product\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.16342 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Drosophila CaV2 channels harboring human migraine mutations cause synapse hyperexcitability that can be suppressed by inhibition of a Ca2+ store release pathway\nContent:\nAbstract:\nGain-of-function mutations in the human CaV2. 1 gene CACNA1A cause familial hemiplegic migraine type 1( FHM1). To characterize cellular problems potentially triggered by CaV2. 1 gains of function, we engineered mutations encoding FHM1 amino-acid substitutions S218L( SL) and R192Q( RQ) into transgenes of Drosophila melanogaster CaV2/cacophony. We expressed the transgenes pan-neuronally. Phenotypes were mild for RQ-expressing animals. By contrast, single mutant SL- and complex allele RQ, SL-expressing animals showed overt phenotypes, including sharply decreased viability. By electrophysiology, SL- and RQ, SL-expressing neuromuscular junctions( NMJs) exhibited enhanced evoked discharges, supernumerary discharges, and an increase in the amplitudes and frequencies of spontaneous events. Some spontaneous events were gigantic( 10–40 mV), multi-quantal events. Gigantic spontaneous events were eliminated by application of TTX–or by lowered or chelated Ca2+–suggesting that gigantic events were elicited by spontaneous nerve firing. A follow-up genetic approach revealed that some neuronal hyperexcitability phenotypes were reversed after knockdown or mutation of Drosophila homologs of phospholipase Cβ( PLCβ), IP3 receptor, or ryanodine receptor( RyR) –all factors known to mediate Ca2+ release from intracellular stores. Pharmacological inhibitors of intracellular Ca2+ store release produced similar effects. Interestingly, however, the decreased viability phenotype was not reversed by genetic impairment of intracellular Ca2+ release factors. On a cellular level, our data suggest inhibition of signaling that triggers intracellular Ca2+ release could counteract hyperexcitability induced by gains of CaV2. 1 function.\nIntroduction:\nEpisodic neurological disorders like migraine, epilepsy, and ataxia can result from underlying ion channel dysfunctions[1–3]. For many such disorders, little is known about how aberrant channel functions affect neuronal signaling paradigms. Cell-based and model organism-based examinations of disease-causing mutations could offer insights into disease-relevant biological processes. One Mendelian form of migraine–familial hemiplegic migraine type 1( FHM1) –results from gain-of-function missense mutations in human CACNA1A, which encodes the α1 subunit of CaV2. 1( P/Q)-type calcium channels[4]. Two FHM1-causing amino-acid substitutions alter highly conserved CaV2. 1 α1 amino-acid residues, R192 and S218[4, 5]. The R192Q amino-acid substitution( RQ) causes “pure” FHM1, while the S218L substitution( SL) causes a severe combination of FHM1, seizures, and susceptibility to edema following head injury[4, 5]. These two FHM1-causing amino-acid substitutions have been studied intensely[6], most notably in knock-in mouse models of FHM1[7–9]. FHM1 knock-in mice display gain-of-function CaV2. 1 phenotypes at neurons and synapses. Model synapses studied include the diaphragm neuromuscular junction( NMJ)[10, 11], the calyx of Held[12–14], the trigeminal sensory neuron pathway[15–17], and cortical neurons[18, 19]. At the mouse NMJ, both RQ and SL increase the frequency of spontaneous excitatory potentials[10, 11]. 
These increases in quantal frequency are dependent on mutation dose and are more pronounced in SL versus RQ. SL also elicits broadening of evoked end-plate potentials at the mouse NMJ[11]. At the calyx of Held, both substitutions result in enhanced excitatory postsynaptic currents( EPSCs)[12–14], and it has been reported that SL causes an increase in the resting intracellular neuronal calcium, which could be responsible for some potentiation of synapse function[12]. It was recently reported that 2, 5′-di( tertbutyl)-1, 4,-benzohydroquinone( BHQ) reverses aspects of SL-induced gating dysfunction and short-term plasticity[20]. As part of that study, we found that BHQ also restores short-term synaptic plasticity to NMJs in fruit fly larvae expressing a transgene that encodes an S161L amino-acid substitution in Drosophila CaV2/Cacophony–the functional equivalent of human CaV2. 1 S218L[20]. Independent follow-up work in the mouse S218L model demonstrated that BHQ application also blunts cortical spreading depression susceptibility[21]. Given these collective results, a further examination of fruit fly synapses could be valuable for uncovering relevant molecular and electrophysiological consequences of CaV2. 1 gains of function. For the present study, we characterized the fruit fly as a way to model neuronal effects of FHM1-causing mutations. We neuronally expressed cacophony transgenes harboring the Drosophila melanogaster equivalents of RQ or SL–or both RQ and SL concurrently( denoted as “RQ, SL”). On the organismal level, neuronal expression of SL or RQ, SL transgenes drastically impaired overall health. On the synapse level, SL and RQ, SL transgenes markedly enhanced aspects of evoked and spontaneous neurotransmission, consistent with prior studies in mice. Through a combination of genetics, RNA interference, pharmacology, and electrophysiology, we uncovered evidence that impairment of a conserved intracellular signaling pathway that triggers store Ca2+ release reverses hyperexcitability phenotypes in the context of gain-of-function Drosophila CaV2.\nDiscussion:\nOur data suggest that a fly model could uncover molecules that could be targeted to mitigate effects of gain-of-function calcium channel activity associated with migraine. A novel and intriguing finding of our study is factors controlling intracellular calcium store release can be targeted to mitigate FHM1-like hyperexcitability( Figs 8 and 9). Indeed, the RyR channel blocker dantrolene has established uses in the clinic[85, 86]. Moreover, significant evidence indicates that blockade of RyR by dantrolene could have neuroprotective applications[87]. In the context of FHM1, store operated calcium release would be a novel pathway to consider. Furthermore, lithium( Fig 9) has been employed in treating migraine, but only in limited cases. One caveat to our findings is that impairing intracellular calcium release signaling pathways did not reverse all phenotypes associated with SL- and RQ, SL-expressing NMJs. For instance, in the case of increased mEPSP frequency, there is no significant reversal( Table 3). Another caveat is our fly lethality data( Table 4), which suggest that the excitability of single nerve terminals or circuits is not be the only factor to consider. In the case of our fly model, global impairment of factors controlling calcium store release dampened hyperexcitability but enhanced lethality( Table 4). 
These results point to the fact that gain-of-function CaV2 substitutions may cause multiple, separable, pleiotropic effects. It is possible that the neuronal hyperexcitability phenotypes are somehow protective for fruit fly viability or health–or are a reflection of a protective process that gets blunted when PLCβ and RyR are diminished. Similar considerations could be important in the context of any human migraine treatment. There is no single, gold-standard pharmaceutical treatment for forms of hemiplegic migraine[88]. Several treatments have been employed in clinical settings[89, 90], each with serious drawbacks. Some agents employed to treat hemiplegic migraine include calcium channel blockers like flunarizine[91, 92] and verapamil( CaV1-blocking and potentially CaV2-blocking at higher doses)[93, 94]. Blocking of voltage-gated calcium channels would seem to be an intuitive way to counter gain-of-function CACNA1A mutations; yet there would be obvious side effects of interfering with CaV2. 1 function globally. Other agents reported to be effective in treating hemiplegic migraine are lamotrigine( targeting NaV and CaV2 channels), sodium valproate( several targets including CaV3 channels, resulting in increased inhibitory signaling), and acetazolamide( a pH modulator via carbonic anhydrase inhibition)( see for detailed review[89, 95]). Finally, in cases where hemiplegic migraine attacks are frequent, prophylactic use of triptans has been employed[88, 89]. Triptans are a standard treatment for generalized migraine attacks, but since they are vasoconstrictors, there has been some thought that they may not be appropriate for hemiplegic migraine. Why might a new model be useful, specifically for FHM? FHM is unlike other chronic migraine conditions due its underpinning in central and cortical hyperexcitability and susceptibility to cortical spreading depression[96]. In recent years, calcitonin gene-related peptide( CGRP)-based and peripheral approaches have been the focus of generalized migraine treatment. From recent work in mice, there is evidence that CGRP induces migraine-reminiscent photophobia both peripherally and centrally[97]. Yet it is uncertain whether CGRP-based therapies would be effective for FHM. For one consideration, CGRP injections do not induce migraine in individuals with FHM in the same manner that it does for other chronic migraineurs sensitive to CGRP levels[98, 99]. Recent clinical trials support the use of anti-CGRP receptor antibodies for migraine prophylaxis[100–102], and the Food and Drug Administration( FDA) of the United States has recently approved the anti-CGRP receptor antibody drug Erenumab as a therapeutic[103]. Yet the supporting studies did not use individuals with a history of hemiplegic migraine and the antibodies likely act by peripheral mechanisms because they cannot readily cross the blood/brain barrier[104–106]. Finally, triptan-based treatments act via reduction in CGRP release and act peripherally where they reverse the effects of CGRP on vasculature[107–109]. Given these facts, a new model by which to screen for pharmaco-genetic targets of FHM-causing mutations–such as use of coarse phenotypes of electrophysiological phenotypes in flies–may be valuable. One strength of Drosophila is the power of genetic manipulation. The genetic toolkit afforded to Drosophila neuroscience makes the NMJ a useful model synapse. 
One caution regarding the model we generated for this study is that it utilizes over-expression of wild-type or mutant UAS-cacophony transgenes. The wild-type version of this transgene recapitulates wild-type cac function without generating hyper-excitability phenotypes[22, 28], and we also controlled for potential overexpression phenotypes( Fig 2). Nevertheless, downstream analyses can be obscured by the need to separate overexpression( hypermorphic) gain-of-function versus mutant( neomorphic) gain-of-function analyses. Other methods, such as CRISPR-based knock-in mutations or expression of a genomic cac construct( as employed in[110]) could yield expression levels more similar to endogenous cac. Although mutations in the endogenous cac locus would be advantageous, we do not expect that these particular limitations detract from our core findings. We have shown that genetic or pharmacological impairment of an intracellular calcium release signaling pathway suppresses some gain-of-function CaV2 electrophysiological phenotypes. Yet the precise mechanism and sequence of events underlying hyperexcitability suppression we observe are unclear. Potentiation of the baseline activities of the IP3R and RyR channels by mutant CaV2 channels is one possibility[111–114]. In principle, potentiated RyR or IP3R activity could feedback to and further potentiate CaV2 channels. Another possibility is that these gain of function mutations result in chronically increased of intracellular[Ca2+]( as in[12]), which could then be reversed indirectly by targeting store pathways. Yet another possibility is that impairment of Ca2+ store-release mechanisms somehow dampens CaV2 gating functions–effectively reversing gating gains of function that result from FHM1-causing mutations. Many future directions are possible, utilizing reagents that exist for Drosophila work. A mechanistic refinement could be aided by visual data– for instance by combining inhibition of Ca2+ store release along with visual analyses of action potential waveforms via voltage imaging[115] and measurements of CaV2-mediated influx of Ca2+ via genetically-encoded indicators tethered to active zone sites[116]–and separately by examining Ca2+ dynamics at the stores themselves by using an ER-localizing, genetically encoded calcium sensor developed for Drosophila[117]. The implication of PLCβ activity and intracellular calcium in hyperexcitability is novel within the context of FHM1 mutations, but in hindsight, it also fits with results of prior studies. One recent RNA profiling analysis of the cerebellum of SL knock-in mice revealed an overrepresentation of several signaling components, including PLCβ[118]. Moreover, PLCβ and the release of calcium from intracellular stores have been implicated in signaling by CGRP[119–121], whose levels are correlated with generalized migraine[122–124]. Beyond work in the Drosophila model, further investigation will be needed to establish whether there is actually a causative link between the action of intracellular calcium stores either in inducing migraine or in precipitating neurological events that precede some forms of migraine, like aura and cortical spreading depression.",
"role": "user"
},
{
"content": "Prior research has demonstrated that gain-of-function mutations in a gene important for neurotransmission ( CACNA1A ) cause migraine in humans . We attempted to mimic some of those gain-of-function mutations in a simple genetic model organism and to examine neurotransmission by electrophysiology . Our findings yield potential clues as to how particular migraine-causing mutations may impact neurophysiology on a cellular level . We used the fruit fly Drosophila melanogaster and its model synapse , the neuromuscular junction ( NMJ ) to perform our studies . We document three main advances: 1 ) characterization of fruit fly models harboring gain-of-function calcium channel alterations known to cause human familial hemiplegic migraine type 1 ( FHM1 ) ; 2 ) characterization of hyperactive neurotransmission caused by one of these alterations; and 3 ) an ability to quell hyperactive neurotransmission by impairing intracellular Ca2+ store release , through both genetic and pharmacological means . Our work contributes to a broader understanding of how pathological mutations could impact cellular physiology . More generally , the utilization of genetic model organisms promises to uncover potential ways to reverse those impacts .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2080 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffects of 10 Hz repetitive transcranial magnetic stimulation ( rTMS ) on clinical global impression in chronic schizophrenia . UNLABELLED We conducted a randomized , sham-controlled repetitive transcranial magnetic stimulation ( rTMS ) study in chronic schizophrenia in-patients ( n=35 ) to evaluate the therapeutic efficacy of 10 Hz stimulation . Patients , who were on stable antipsychotic treatment , were randomly assigned to the active or sham condition . In the active rTMS group , ten sessions with a total of 10,000 stimuli were applied over the left dorsolateral prefrontal cortex at 110 % of motor threshold . The sham group received corresponding sham stimulation . Clinical improvement was measured by the Clinical Global Impression scale ( primary outcome measure ) , the Global Assessment of Functioning Scale ( GAF ) and the Positive and Negative Symptom Scale ( PANSS ; secondary outcome measures ) . Between-group comparisons revealed no significant differences in clinical outcome variables . Only a subgroup of patients with pronounced negative symptoms developed some clinical improvement as indicated by significant changes in the GAF-scale . Besides there is some evidence for a more favourable clinical outcome within this subgroup after rTMS in the CGI-S and PANSS negative scale , too . In line with earlier investigations , our results suggest a moderate - potentially clinically relevant - treatment effect of prefrontal 10 Hz rTMS stimulation in chronic patients . However , in our study this beneficial effect was restricted to subjects with pronounced negative symptoms . CLINICAL TRIAL REGISTRATION INFORMATION ClinicalTrial.gov Identifier : NCT00169689 , http : //www.clinicaltrials.gov .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"repetitive transcranial magnetic stimulation ( rTMS\", \"repetitive transcranial magnetic stimulation ( rTMS )\", \"sham stimulation\", \"10 Hz\", \"prefrontal 10 Hz rTMS\", \"rTMS\"], \"Outcome\": [\"of\", \"Only a subgroup of patients with pronounced negative symptoms developed some\", \"within this subgroup after rTMS in the CGI-S and PANSS negative scale , too\", \"moderate - potentially clinically relevant - treatment effect\", \"no significant differences in clinical outcome variables\", \"clinical improvement as indicated by significant changes in the GAF-scale\", \"there is some evidence for a\", \"in chronic patients\", \"results suggest a\", \"Between-group comparisons revealed\", \"prefrontal 10 Hz rTMS stimulation\", \"more favourable clinical outcome\"], \"Participant\": [\"chronic schizophrenia\", \"stable antipsychotic treatment\", \"n=35\", \"chronic\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3002 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nDeep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e. , measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated to asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.\n\nCited Works Summary:\n[1]: Multi-robot teams are useful in a variety of task allocation domains such as warehouse automation and surveillance. Robots in such domains perform tasks at given locations and specific times, and are allocated tasks to optimize given team objectives. We propose an efficient, satisficing and centralized Monte Carlo Tree Search based algorithm exploiting branch and bound paradigm to solve the multi-robot task allocation problem with spatial, temporal and other side constraints. Unlike previous heuristics proposed for this problem, our approach offers theoretical guarantees and finds optimal solutions for some non-trivial data sets.\n\n[2]: This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2–9 grasps and outperformed a greedy baseline.\n\n[3]: \n\n[4]: Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved wide-spread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which the traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. 
Our study promotes a unified view of learning, planning, and search.\n\n[5]: A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to minmax as the number of simulations grows. This approach provides a finegrained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9 × 9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament.",
"role": "user"
},
{
"content": "Monte Carlo Tree Search (see Figure ) is a best first search algorithm that gained traction after its breakthrough performance in Go [5] . Other than for game playing agents, MCTS has been employed for a variety of domains such as robotics [1] [2] and Sokoban puzzle generation [3] . A recent work [4] provided an excellent unification of MCTS and RL.",
"role": "assistant"
}
] |
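The best-first search this row summarizes expands the tree by repeatedly descending to the most promising child; the standard UCT rule scores each child by its mean return plus an exploration bonus. A generic sketch — the Node fields and the constant c are conventional choices, not details from the cited papers:

    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        visits: int = 0
        value: float = 0.0   # sum of rollout returns backed up through this node

    def uct_select(children, c=1.4):
        """Pick the child maximizing mean return + exploration bonus."""
        total = sum(ch.visits for ch in children) or 1
        def score(ch):
            if ch.visits == 0:
                return float("inf")      # visit unexplored children first
            return ch.value / ch.visits + c * math.sqrt(math.log(total) / ch.visits)
        return max(children, key=score)

    kids = [Node(10, 6.0), Node(3, 2.5), Node(0, 0.0)]
    print(uct_select(kids))  # the unvisited child is selected first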
science.pico_ner | science.pico_ner.1500 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nMidazolam as a main anesthesia induction agent -- a comparison with thiopental and diazepam . A clinical randomized investigation was undertaken to determine the value of midazolam as a narcotic adjuvant for anesthetic induction . Thirty ASA I-II adult patients undergoing selective surgery were allocated randomly into two groups to receive one of the following agents : midazolam 0.3 mg/kg , thiopental 5 mg/kg , or diazepam 0.4 mg/kg . The induction time as measured from the onset of injection to loss of the eyelash reflex was shortest in the case of thiopental ; while a lower frequency of apnea , lesser suppression of circulation , and lack of venous irritation were points favoring midazolam . However , further study is needed to clarify the moderate cardiovascular response seen during intubation in some cases as well as to elucidate any patient population differences .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"midazolam 0.3 mg/kg , thiopental 5 mg/kg , or diazepam 0.4 mg/kg .\", \"Midazolam\", \"thiopental and diazepam .\", \"midazolam\"], \"Outcome\": [\"lower frequency of apnea , lesser suppression of circulation\", \"moderate cardiovascular response\", \"induction time as measured from the onset of injection to loss of the eyelash reflex\", \"value\", \"lack of venous irritation\"], \"Participant\": [\"ASA I-II\", \"Thirty\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.1428 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: An Integrated Regulatory Network Reveals Pervasive Cross-Regulation among Transcription and Splicing Factors\nContent:\nAbstract:\nTraditionally the gene expression pathway has been regarded as being comprised of independent steps, from RNA transcription to protein translation. To date there is increasing evidence of coupling between the different processes of the pathway, specifically between transcription and splicing. To study the interplay between these processes we derived a transcription-splicing integrated network. The nodes of the network included experimentally verified human proteins belonging to three groups of regulators: transcription factors, splicing factors and kinases. The nodes were wired by instances of predicted transcriptional and alternative splicing regulation. Analysis of the network indicated a pervasive cross-regulation among the nodes; specifically, splicing factors are significantly more connected by alternative splicing regulatory edges relative to the two other subgroups, while transcription factors are more extensively controlled by transcriptional regulation. Furthermore, we found that splicing factors are the most regulated of the three regulatory groups and are subject to extensive combinatorial control by alternative splicing and transcriptional regulation. Consistent with the network results, our bioinformatics analyses showed that the subgroup of kinases have the highest density of predicted phosphorylation sites. Overall, our systematic study reveals that an organizing principle in the logic of integrated networks favor the regulation of regulatory proteins by the specific regulation they conduct. Based on these results, we propose a new regulatory paradigm postulating that gene expression regulation of the master regulators in the cell is predominantly achieved by cross-regulation.\nIntroduction:\nThe operation of a functioning living cell depends on its ability to tightly regulate its different pathways. Most of this regulation is done by proteins that control the function of many other genes( or themselves in the case of autoregulation). Transcription factors( TFs) are the most abundant regulators in eukaryotic cells, controlling transcription of genes and playing a key role in many important cell functions[1]. Transcriptional regulation is usually a combinatorial effect of multiple TFs binding to regulatory elements in promoter or enhancer regions[2]. Splicing regulation is coordinated mainly by splicing factors( SFs) that bind to short regulatory motifs on the pre-mRNA, called splicing factor binding sites( SFBS), usually located in close proximity to the splice sites[3]. Over the past decade, there has been growing evidence of coupling and interconnectivity between the different steps of the gene expression pathway, specifically between RNA transcription and RNA processing[4]–[6]. The physical coupling between the different steps is known to be mediated by the CTD( C-Terminal Domain) of the largest subunit of RNA polymerase II that is recruited to the transcription complex by specific TFs[7]. This coupling is required both for efficient gene expression in higher eukaryotes and for enabling rapid response to diverse signaling events in the cell[8]. 
Alternative splicing( AS) events are known to play an important role in modulating the activity of TFs[9]. In a recent study, it was shown that an AS event within a TF mRNA encoding a DNA-binding protein alters the transcription regulatory network controlling the transition between pluripotency and differentiation in embryonic stem cells[10]. In another study, changes in AS patterns of TFs triggered by the activation of signal transduction pathways were shown to play an important role in development. In the latter study, the authors found that 40% of the genes that underwent AS changes also showed changes in transcription, supporting extensive cross-talk between the processes[11]. While the gene expression pathway is largely regulated by TFs and SFs, their activity is modulated by, among other things, post-translational modifications( PTMs). PTMs such as phosphorylation can switch the function of TFs, as was recently shown for CEBPB[12]. PTMs have also been shown to influence splice site selection, changing the spliceosome composition and changing the sub-cellular localization of regulatory proteins[13]. Since AS can remove or insert short fragments in a protein, it may also alter the phosphorylation pattern of the protein, thus suggesting another important role for AS in modulating the gene expression pathway. Most recent knowledge from high-throughput experiments on transcriptional and splicing regulation provides a pair-wise relationship between a specific regulatory factor and its targets[14]–[16]. However, the complex interaction between the genes and the environment governing the cellular response cannot be understood at the level of individual interactions, but could rather emerge through the intricate interplay between the different regulators and their target genes. Understanding the complex interactions between the diverse regulators in the cell is crucial for unraveling the gene regulatory network in multicellular organisms, such as humans, as well as for helping to reveal the causes that render disease states. In recent years, many regulatory networks have been reconstructed to study this complex interplay between gene expression regulations. Most of the work in this direction has focused on transcription regulation in single-cell organisms, such as E. coli[17],[18] and S. cerevisiae[19]–[22]. In addition, several attempts have been made to integrate transcription networks into other regulatory networks. This approach has revealed elements of integration between a transcription regulatory network and splicing regulatory networks during the meiotic gene expression program in S. cerevisiae[23]. In a recent systematic study integrating transcription and phosphorylation networks in different species, the authors suggest a positive correlation between the species' complexity and the degree of cooperation in the network[24]. The complexity of the human regulatory network and a lack of experimental data explain why only a few studies to date have attempted to systematically explore regulatory networks in humans. One such study is the TF-microRNA network[25] based on predictions of transcription regulation and microRNA target regulation. This study revealed a scale-free behavior in which a small number of microRNA-TF pairs regulate large sets of common targets. In this study, we focus on an integrated network of transcriptional and splicing regulation in humans. Our results show extensive wiring of the regulatory genes, specifically by AS regulation. 
Most strikingly, the network reveals that the subgroup of SFs has significantly higher density of splicing inedges( predicted alternative splicing regulatory interactions) compared to the subgroup of TFs, while transcriptional regulation is much more dense towards the TFs. Consistent with the network results, we found that the subgroup of kinases has significantly higher density of predicted phosphorylation sites relative to TFs and SFs. Taken together, our results indicate that cross-regulation within functional groups is significantly more prevalent than cross-talk regulation between groups, supporting the hypothesis that these functional groups are consistently under similar regulatory constraints. This new regulatory paradigm may point to a more general principle whereby a biological process is controlled predominantly by the entities that compose it.\nDiscussion:\nRecent high-throughput experiments and genome-scale analyses have greatly increased our understanding of the interplay between different steps of the gene expression pathway, revealing extensive coupling and coordination between transcriptional and post-transcriptional regulation[2]. Studying the cross-talk between transcriptional and splicing regulation is thus crucial for unraveling the complex gene expression regulation in higher eukaryotic organisms. The most apparent observation from the human integrated regulatory network we reconstructed in this study is the noticeable preference of regulatory proteins to be regulated via the specific regulation they conduct, namely cross-regulation. Specifically, we observed that transcription inedges were significantly denser towards the subgroup of TFs compared to the transcription inedge density towards SFs and kinases, while the splicing inedges were much denser toward the subgroup of SFs compared to TFs and kinases. These results suggest that cross-regulation among regulatory factors predominates over the regulatory interactions between the different functional groups( cross-talk). SFs have been previously shown to autoregulate the expression of their own transcripts via splicing regulation, as well as to be cross-regulated by AS[46],[47]. The most well-known example is the autoregulation of Sxl involved in sex-fate decisions in Drosophila[48]. Among the splicing regulation interactions in our integrated network, we identified many experimentally verified autoregulations of SFs such as for SC35[49], SRp20[50], 9G8[51], Htra2-beta[52], PTB[53] and NOVA[54]. We also identified putative interactions, which, to the best of our knowledge have not yet been reported, such as the predicted autoregulation of QKI. In addition, we detected many known interactions between different SR proteins, for example, the interactions between SF2/ASF and SRp20 that have been shown to antagonize the autoregulation of SRp20[50], as well as interactions between SFs belonging to different protein families, such as the validated interaction between hnRNPH/F and SC35[55] and between QKI and SF2/ASF[56]. Based on the relatively high number of AS events in gene coding for SR proteins and the extremely high conservation of their alternative exons, it has been previously suggested that AS plays a critical role in the regulation of SR protein transcripts across multiple eukaryotic lineages[57]–[59]. 
While many studies have pointed to the general tendency of SFs to regulate other SFs[47], our study is the first comprehensive analysis showing the significant preference of AS regulation towards SFs compared to other regulatory proteins. The prominent mode of regulation for SFs to regulate genes involved in splicing is also supported by RNA-binding data from recent CLIP/PAR-CLIP experiments conducted in human cell lines in which we found a significant enrichment of splicing-related GO annotations among the targets of four different SFs. This is consistent with recent high-throughput RNA-binding studies that noticed overrepresentation of RNA processing factors among the targets of SFs( as, for example, SF2/ASF[60]). Furthermore, indirect evidence of the tendency of SFs to regulate splicing-related genes has been found in other species. For example, in S. cerevisiae it has been shown that the knockdown of SFs predominantly downregulated the expression of splicing-related genes[5]. In addition to the noticeably higher inedge splicing density of the SF subgroup compared to TFs and kinases in our integrated network, our data suggest that SFs as a group are generally more regulated, both independently and via combinatorial regulation. The high density of inedges towards SFs in the network is also supported by the greater number of exons in the genes within this subgroup and their high disorder propensity. Moreover, we observed a strong preference of pairs of SFs and TFs to be connected to other regulatory proteins in a coordinated manner. These results are again in agreement with many recent studies suggesting an important role played by coordinated binding of transcription[17],[24],[61] and splicing factors[62]–[64] on their mutual targets. Combinatorial regulation may offer elegant solutions for a quick cellular response when cell conditions change or for the integration of different signals. In addition, combinatorial binding can contribute to expanding the functional diversity achieved by AS[65]). Here, we propose that combinatorial regulation by SFs is specifically widespread among regulatory proteins. More so, our results support that the SFs themselves are significantly more controlled by combinatorial regulation in comparison to other groups of regulatory factors. We postulate that SFs tend to tightly control regulatory genes at the post-transcriptional level in a coordinated manner as a possible mechanism for their role in ‘fine-tuning’ the gene expression regulation. Overall, consistent with many examples of feedback regulation in the gene expression pathway( such as in the sxl example[48]), our data suggest that cross-regulation among the master regulators of the pathway is highly predominant. This phenomenon was also strengthened by phosphorylation site prediction analyses we conducted on the proteins( nodes) belonging to the different subgroups in the network, demonstrating that kinases as a group are more tightly regulated by phosphorylation in comparison to transcription and splicing factors. These latter results are in agreement with the well-known knowledge that kinases self-modulate each other's function and activity through phosphorylation events[66] and are consistent with recent large-scale proteomic analyses showing significant enrichment of kinases in the human kinome[67]. 
The prevalent cross-regulation within the functional groups observed in our integrated network can explain recent findings showing distinctive functional characteristics( mRNA and protein half-lives) for each of the regulatory groups in the network; proteins involved in transcriptional regulation having unstable mRNA and unstable proteins, proteins regulating RNA splicing having unstable mRNA and stable proteins; and proteins involved in phosphorylation having stable mRNA and unstable proteins[68]. Our network results showing that the different members within each group tend to be regulated by the same cohort of regulators is consistent with the experimental observations that they all tend to have the same expression pattern( i. e., mRNA stability and protein levels). Taken together, the network results and the experimental observations from the transcriptomic and proteomic data support the hypothesis that these regulatory protein groups are consistently under similar regulatory constraints. Notwithstanding, in addition to the tendency for extensive cross-regulation within each subgroup, we observed a significant number of interactions between factors( i. e., SF regulating TF via alternative splicing and vice versa). Among these interactions, we observed a putative splicing regulation between the SF SRp55 and the TF Pax6 known to regulate eye development in vertebrates. An interaction between the D. melanogaster SR protein B52/SRp55 and eyeless( the Drosophila homolog of Pax6) has been previously shown to control eye organogenesis and size in Drosophila[69]. Interestingly, based on our network, we predict that the human Pax6 gene is also regulated by the SR protein SF2/ASF while Fic et al. could not confirm the homologous interaction in Drosophila[69]. Overall, we predict many putative interactions in the network between SFs and TFs, arguing that this type of cross-talk regulation may play a unique role in the gene expression pathway, for example, in directing stem cell pluripotency[10] or deriving a specific developmental program[11]. While cross-talk interactions were clearly less abundant in our network, we postulate that they may be key players in tissue specificity and development. Clearly, modeling and testing other integrated networks of regulatory factors in different human tissues and other species will be required to better understand the relative contribution of cross-regulation and cross-talk interactions to modulating gene expression in high eukaryotic systems.",
"role": "user"
},
{
"content": "The operation of a living cell depends on its ability to regulate its different functions . The master regulators in the cell are proteins , which control the function of many other genes by several mechanisms . Transcription factors can differentially activate or repress the transcription of genes by binding to their regulatory elements . A second major mechanism of gene expression regulation occurs at the level of alternative splicing . Alternative splicing is regulated by splicing factors that bind to short regulatory motifs on the RNA and dictate the final gene architecture . To date there is increasing evidence of coupling between transcription and splicing . In this study , we modeled a network integrating the two regulations . Analysis of the network indicated that splicing factors were more often regulated by alternative splicing while transcription factors were more extensively controlled by transcriptional regulation . Overall , we postulate that regulatory proteins in the cell are controlled preferentially by the specific regulation they conduct .",
"role": "assistant"
}
] |
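The group-wise in-edge densities this row reports (splicing in-edges concentrated on SFs, transcription in-edges on TFs) reduce to a simple count over a typed directed graph. A toy sketch with made-up nodes and edge labels, only to illustrate the computation:

    import networkx as nx

    G = nx.DiGraph()
    for name, grp in [("TF1", "TF"), ("TF2", "TF"), ("SF1", "SF"), ("K1", "kinase")]:
        G.add_node(name, grp=grp)
    G.add_edge("TF1", "TF2", kind="tx")   # transcriptional regulation
    G.add_edge("SF1", "SF1", kind="as")   # splicing autoregulation
    G.add_edge("TF1", "SF1", kind="as")

    def inedge_density(G, grp, kind):
        members = [n for n, d in G.nodes(data=True) if d["grp"] == grp]
        hits = sum(1 for _, _, d in G.in_edges(members, data=True) if d["kind"] == kind)
        return hits / len(members)

    print(inedge_density(G, "SF", "as"))  # 2.0: two splicing in-edges per SF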
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4614 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nWe ran a simulation comparing 3 methods to calculate case-fatality risk for coronavirus disease using parameters described in previous studies. Case-fatality risk calculated from these methods all are biased at the early stage of the epidemic. When comparing real-time case-fatality risk, the current trajectory of the epidemic should be considered.",
"role": "user"
},
{
"content": "[\"Epidemic Forecasting\"]",
"role": "assistant"
}
] |
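The early-epidemic bias this row refers to comes from cases whose outcome is still unknown. With made-up snapshot numbers, the two most common crude estimators bracket the eventual case-fatality risk (a hedged illustration, not the simulation from the paper):

    # hypothetical cumulative counts early in an epidemic
    deaths, recovered, confirmed = 30, 120, 1000

    cfr_naive = deaths / confirmed                # tends to underestimate: many cases unresolved
    cfr_resolved = deaths / (deaths + recovered)  # tends to overestimate if deaths resolve sooner

    print(f"naive={cfr_naive:.3f}  resolved-only={cfr_resolved:.3f}")  # 0.030 vs 0.200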
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.7307 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nhip fractures are a disabling event and an important cause of morbidity and mortality in elderly people ( 1 - 4 ) .\nthe majority of patients following hip fracture are treated operatively to obtain pain relief , hasten mobility , and minimize complications .\nthe indications of conservative treatment after hip fracture in previous reported studies were patients with impacted femoral neck fracture , minimally symptomatic following late diagnosis of hip fracture , already bed - ridden , and significant medical co - morbidity ( 5 - 8 ) . in these patients ,\nhigh mortality rates in cases with conservative treatment were reported because of poor medical comorbidity ( 7 - 9 ) . however , little information is available concerning the prognosis of elderly patients who need surgical intervention but who are discharged without surgery after hip fracture because of economic burden .\nthe purpose of this study was to determine the causal factors of non - operative treatment following hip fracture and mortality rate and functional recovery according to the causal factors compared to that of patients who underwent surgery .\nbetween january 2009 and december 2011 , 451 patients were diagnosed as femoral neck or intertrochanteric fractures . of these , we identified 32 elderly patients ( patients over 65-yr - of - age , 32 hips ) with hip fracture who refused surgical treatment and received non - operative procedure at the author 's hospital .\nthe exclusion criteria were impacted femoral neck fracture , isolated trochanteric fracture , high - energy injury such as traffic accident , and metastatic pathological femur fractures .\nwe excluded four patients from enrollment : two patients with pathological fracture and two patients with traumatic trochanteric fracture of the femur .\nfifty six age- , gender- , american society of anesthesiologist ( asa ) score- ( 10 ) , and diagnosis - matched patients with hip fracture who had undergone surgical treatment at our hospital were used as the control group ( group ii ) in a two - to - one ratio using the propensity scoring method .\ndemographic data including gender , age , initial diagnosis , asa score , and pre - fracture ambulatory status by koval 's categories ( 11 ) were obtained by reviewing medical records ( table 1 ) .\nthe causes of operation refusal were obtained by reviewing medical records and interview with patients or family members .\nchange of patient activity and mortality were compared within a minimum of one year between the two groups .\nactivity levels were defined as follows : i , independent community ambulatory ; ii , community ambulatory with cane ; iii , community ambulatory with walker / crutches ; iv , independent household ambulator ; v , household ambulatory with cane ; vi , household ambulatory with walker / crutches , and vii , nonfunctional ambulatory ( 11 ) . in the analysis , koval 's grade i , ii , and iii cases were also classified as ambulatory outdoors , whereas koval 's grade iv , v , vi , and vii cases were classified as shut - in patients .\npatients unable to attend follow - up evaluations were interviewed by telephone . 
during the follow - up evaluations , care was taken to interview the caregiver previously interviewed during the patient 's hospitalization .\nthe propensity score matching method using age , gender , asa score , and diagnosis was used for a comparable control group retrospectively .\ncumulative crude mortality rate were calculated at 3 , 6 , 12 , and 24 , months and compared between the two groups .\nkaplan - meier survival analysis was performed on both of the groups with a minimum 12 months follow - up using mortality as a primary end point .\nthe correlation of survival rate in both groups was tested by the log - rank test .\nthe student 's t - test was used to analyze age , and the mann - whitney to analyze asa score and koval 's grade .\nthe chi square test was used to analyze gender , diagnosis , and cumulative mortality .\nthe analyses were carried out using spss version 18.0 ( chicago , il , usa ) .\nthe design and protocol of this retrospective study were approved by the institutional review board of our hospital ( cauh - irb no c2013039 ) .\nall patients were informed that their medical data could be used in a scientific study and provided their consent .\nthe design and protocol of this retrospective study were approved by the institutional review board of our hospital ( cauh - irb no c2013039 ) .\nall patients were informed that their medical data could be used in a scientific study and provided their consent .\nof 28 patients who could not receive operative treatment , the reason in 10 patients ( 36% ) was medical problems with high - risk of surgery .\nthe reason for the remaining 18 ( 64% ) was economic burden , even if surgery is quite possible .\nthe cumulative mortality rate over 3 , 6 , and 12 months post - fracture was 54% , 61% , and 64% in group i and 9% , 11% , and 14% in group ii , respectively .\nthe cumulative mortality rate was 82% ( 23/28 ) at 24 months in group i and 21% ( 12/56 ) at 24 months in group ii ( table 2 ) .\nwith regard to the cause - specific mortality in group i , the cumulative mortality rate over the 3 , 6 , and 12 months post - fracture was consistently 80% in 10 patients who refused surgery for medical problems and 39% , 50% , and 56% in 18 patients who refused surgery because of economic burdens .\nthe cumulative mortality rate at the 24-month follow - up was 90% ( 9/10 ) and 78% ( 14/18 ) , respectively ( table 3 ) .\nthe cause of death in group i by economic reasons was pneumonia ( n=8 ) , congestive heart failure ( n=1 ) , chronic liver disease ( n=1 ) , and unknown reasons ( n=4 ) .\nkaplan - meier survivorship analysis with mortality as the end point in group i and group ii estimated a 57.1% ( 95% confidence interval , 38.7% to 75.5% ) chance of survival during 12 months and 82.1% ( 95% confidence interval , 72.1% to 92.1% ) chance of survival during 12 months , respectively ( fig .\ni , the survival rate was 20.0% ( 95% confidence interval , 0% to 44.7% ) with mortality as the end point at 12 months in the medical reason group and 38.1% ( 95% confidence interval , 15.4% to 60.8% ) with mortality as the end point at 12 months in the economic reason group , respectively .\nof the group i patients at final follow - up , five patients were alive and all were in a nonfunctional ambulatory state ( koval 's grade vii ) ( table 2 ) .\nof the group ii patients , 44 patients were alive , and 16 patients were outdoor ambulatory and 28 patients were shut - in patients .\nonly seven patients were in a nonfunctional ambulatory state ( koval 's grade 
vii ) .\nalthough many studies have examined mortality and morbidity of conservative treatment after hip fracture in elderly patients , the mortality of conservative treatment that has been done because of economic burden ( i.e. , inability to pay the costs of surgery ) after hip fracture is unclear .\nto the best of our knowledge , this study is the first study of mortality of conservative treatment after hip fracture in patients who refused surgery because of an economic burden . during the 3-yr study period , 18 of 28 patients who were diagnosed with displaced hip fracture and required surgery refused surgery because of an economic burden and instead received conservative treatment . at a minimum 12-month follow - up , mortality was 56% and all patients were bed - ridden .\nseveral recent observational studies addressing several different indications of conservative treatment after hip fracture in elderly patients reported favorable results .\n( 8) conducted a case - control study with 25 patients who were treated non - operatively and 22 patients treated surgically over the same time period .\nthe authors reported that non - surgical treatment with early mobilization did not result in a statistically significant difference in functional outcome compared to surgically treated patients .\ngregory et al . ( 7 ) evaluated 102 patients diagnosed with a displaced intracapsular femoral neck fracture .\neighty of these patients underwent hemiarthroplasty and 22 were managed non - operatively because of an unacceptably high risk of death within the perioperative period despite medical optimization .\nthe authors reported that the 30-day mortality in the hemiarthroplasty group was 4% ( 3/82 ) compared with 34% ( 7/22 ) in the non - operative group .\npatients surviving 30 days after fracture had similar mortality rates at 1 yr , regardless of their treatment ( 27% non - operative vs 25% hemiarthroplasty ) .\nraaymakers and marti ( 12 ) performed a prospective observation study with 170 patients with impacted femoral neck fractures treated with early mobilization .\nof these , 16% had died within 1 yr and 86% had achieved union . although the impacted femoral neck fracture has been described as one of the possible indications of non - operative treatment in some articles , it should be treated surgically because more than 15% of them eventually displaced .\nin addition , these findings were not consistent with other previous studies ( 13 ) .\nions and stevens ( 13 ) performed a prospective observation study with 158 patients who sustained hip fracture ( 135 in the operation group and 22 in the conservative treatment group ) .\nthe authors reported that the 6-month mortality rate of the operation group was 9.6% ( 13 of 135 ) and that of the non - operated group who had severe medical comorbidity was 60.8% ( 14 of 23 ) .\nin this study , the cumulative mortality rate at 6 months follow - up was 61% in group i and 11% in group ii .\nin addition , subgroup analysis in the conservative treatment group revealed 80% mortality in patients who had severe medical comorbidity and 50% mortality in patients who had refused surgery because of economic burden .\nat the latest follow - up , all five surviving patients in group i were bed - ridden , but more than half of the remaining 44 patients in group ii were restored to the same pre - injury or grade 1 reduced pre - injury activity levels .
therefore , conservative treatment after hip fracture might lead to serious dependency .\nthis study has several limitations .\nfirst , the study was retrospective and was performed in a cohort of prospectively followed patients .\nhowever , the study could not be a prospectively randomized one because of the high mortality of non - operative treatment .\nsecond , the study design included a small cohort of patients and so might not have had sufficient statistical power to be conclusive .\nhowever , the calculated sample size of only 20 patients was deemed sufficient because of the high mortality rate , 4-times that of the surgical control group .\nthird , to make a matched case - control study by using a propensity scoring method , we used the asa score as the medical status indicating the risk of surgery .\nwhite et al . ( 10 ) reported that the asa score is a reliable predictor of mortality and surgical risk .\nfourth , the patients who refused surgery because of economic reasons were not universally followed - up .\nwe persuaded the patients and their family and could finish the interviews and get information with one orthopedic surgeon and two nurses .\nin conclusion , this study demonstrates that conservative treatment following hip fracture due to economic burden is associated with substantially higher mortality and serious functional loss compared with surgical treatment .\nthis finding should be important for health affiliations and health care providers , and health policy decision makers should take action to reduce mortality and maintain functional activity in elderly patients who refuse surgical treatment .
"role": "user"
},
{
"content": "although many studies have assessed mortality and morbidity of conservative treatment after hip fracture in elderly patients , the mortality of conservative treatment done because of economic burden is unclear . among 451 patients diagnosed with \n displaced hip fracture during 3 yr , 28 patients ( group i ) were enrolled as conservative treatment . \n fifty - six patients matched in age , gender , asa score , and diagnosis ( group ii ) who had undergone surgical treatment were used as the control group . \n the causal factors of non - operative treatment and mortality rate and functional recovery were evaluated according to the causal factors of patients with surgical procedure . \n ten patients ( 36% ) in group i involved medical problems and 18 ( 64% ) by economic burdens . \n the cumulative mortality rate over 3 , 6 , 12 , and 24 months was 54% , 61% , 64% , and 82% in group i and 9% , 11% , 14% , and 21% in group ii , respectively . at the latest follow - up \n , all five patients in group i displayed a nonfunctional ambulatory state , whereas only seven of 44 patients in group ii were in a nonfunctional ambulatory state . \n non - surgical treatment following hip fracture that is done because of the economic burden is associated with substantially high mortality and serious functional loss .",
"role": "assistant"
}
] |
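The methods text in the row above names a concrete analysis pipeline: Kaplan-Meier estimation with mortality as the end point, compared across the two groups with a log-rank test. The following Python sketch illustrates only that pipeline using the lifelines library; every number is a fabricated placeholder, since the row reports the study's data in aggregate and the patient-level values are not available.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)

    # Hypothetical follow-up times (months) and death indicators (True = died).
    # Group sizes mirror the row (28 conservative vs. 56 surgical patients),
    # but the values themselves are synthetic.
    t_cons = rng.uniform(1, 24, size=28)
    d_cons = rng.random(28) < 0.8   # high mortality, as reported for group i
    t_surg = rng.uniform(1, 24, size=56)
    d_surg = rng.random(56) < 0.2   # lower mortality, as reported for group ii

    kmf = KaplanMeierFitter()
    kmf.fit(t_cons, event_observed=d_cons, label="conservative")
    print(kmf.survival_function_.tail())
    kmf.fit(t_surg, event_observed=d_surg, label="surgical")
    print(kmf.survival_function_.tail())

    # Log-rank test for a difference between the two survival curves.
    result = logrank_test(t_cons, t_surg,
                          event_observed_A=d_cons, event_observed_B=d_surg)
    print(f"log-rank p-value: {result.p_value:.4f}")

The same fitter also exposes kmf.confidence_interval_, which corresponds to the 95% confidence intervals quoted in the row's results.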
science.qasper_extractive_qa | science.qasper_extractive_qa.748.2 | [
{
"content": "Analyze the given research paper excerpts and address the question provided. Extract the necessary information directly from the text. Your answer should be formatted as a JSON object with the following structure:\n\n\"answer\": A list of text segments from the paper that directly respond to the question.\n\"evidence\": A list of text segments from the paper where the answer can be found.\n\nIf the question cannot be answered with the given text, return \"null\".\n\nDocument: Contextual Recurrent Units for Cloze-style Reading Comprehension\n\nRecurrent Neural Networks (RNN) are known as powerful models for handling sequential data, and especially widely utilized in various natural language processing tasks. In this paper, we propose Contextual Recurrent Units (CRU) for enhancing local contextual representations in neural networks. The proposed CRU injects convolutional neural networks (CNN) into the recurrent units to enhance the ability to model the local context and reducing word ambiguities even in bi-directional RNNs. We tested our CRU model on sentence-level and document-level modeling NLP tasks: sentiment classification and reading comprehension. Experimental results show that the proposed CRU model could give significant improvements over traditional CNN or RNN models, including bidirectional conditions, as well as various state-of-the-art systems on both tasks, showing its promising future of extensibility to other NLP tasks as well.\n\nIntroduction\nNeural network based approaches have become popular frameworks in many machine learning research fields, showing its advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN).\nRNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. The recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and Gated Recurrent Unit (GRU) BIBREF0, which solve the gradient vanishing problems in vanilla RNNs.\nCompared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification BIBREF13, etc. However, different from RNN, CNN sets a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability of modeling local context.\nAs both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion.\nTo verify the effectiveness of our CRU model, we utilize it into two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling, and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit by our CRU model. 
To further demonstrate the effectiveness of our model, we also tested our CRU in reading comprehension tasks with a strengthened baseline system originating from Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin, and set up new state-of-the-art performances on related datasets. The main contributions of our work are listed as follows.\nWe propose a novel neural recurrent unit called Contextual Recurrent Unit (CRU), which effectively incorporates the advantages of CNN and RNN. Different from previous works, our CRU model shows the same excellent flexibility as GRU and provides better performance.\nThe CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.\nThe CRU could also give substantial improvements in the cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features which will enrich the representations of unknown words and make the texts more readable to the machine.\n\nOur approach ::: Contextual Recurrent Unit\nModeling only word-level representations may have drawbacks in representing a word that has different meanings when the context varies. Here is an example that shows this problem.\nThere are many fan mails in the mailbox.\nThere are many fan makers in the factory.\nAs we can see, though the two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different when we meet the following words mails and makers. The first fan means “a person that has strong interests in a person or thing”, and the second one means “a machine with rotating blades for ventilation”. However, the embedding of the word fan does not discriminate according to the context. Also, as the two sentences have the same beginning, when we apply a recurrent operation (such as GRU) up to the word fan, the output of the GRU does not change, though they have entirely different meanings when we see the following words.\nTo enrich the word representation with local contextual information and diminish the word ambiguities, we propose a model as an extension to the GRU, called Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and recurrent neural network, where the former is good at modeling local information, and the latter is capable of capturing long-term dependencies. Moreover, in the experiment part, we will also show that our bidirectional CRU could also significantly outperform the bidirectional GRU model.\nIn this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.\n\nOur approach ::: Contextual Recurrent Unit ::: Deep Fusion\nThe contextual information that flows into the update gate and reset gate of the GRU is identical in shallow fusion. In order to let the model adaptively control the amount of information that flows into these gates, we can embed CNN into GRU in a deep manner. We can rewrite Equations 1 to 3 of the GRU as follows,\nwhere $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared.
When the weights are shared across these CNNs, the deep fusion will be degraded to shallow fusion.\n\nOur approach ::: Contextual Recurrent Unit ::: Deep-Enhanced Fusion\nIn shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback of these is that the original word embedding might be blurred by blending the words around it, i.e., by applying the convolutional operation on its context.\nTo better model the original word and its context, we enhanced the deep fusion model with original word embedding information, with an intuition of “enriching word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivations.\nFormally, Equations 9 to 11 can be further rewritten as follows,\nwhere we add the original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information that has been learned from the CNNs.\n\nApplications ::: Reading Comprehension ::: Task Description\nCloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query and $\mathcal {A}$ the answer. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and the query.\n\nExperiments: Sentiment Classification ::: Experimental Setups\nIn the sentiment classification task, we tried our model on the following public datasets.\nMR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.\nIMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.\nSUBJ$^1$ Movie review labeled with subjective or objective BIBREF20.\nThe statistics and hyper-parameter settings of these datasets are listed in Table TABREF33.\nAs these datasets are quite small and overfit easily, we employed $l_2$-regularization of 0.0001 to the embedding layer in all datasets. Also, we applied dropout BIBREF21 to the output of the embedding layer and the fully connected layer. The fully connected layer has a dimension of 1024. In MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B tokens) BIBREF22 and fine-tuned during the training process. In the IMDB condition, the vocabulary is truncated by descending word frequency order. We adopt a batched training strategy of 32 samples with the ADAM optimizer BIBREF23, and clipped gradient to 5 BIBREF24. Unless indicated, the convolutional filter length is set to 3, and ReLU is used as the non-linear function of the CNN in all experiments. We use 10-fold cross-validation (CV) in the dataset that has no train/valid/test division.\n\nExperiments: Reading Comprehension ::: Experimental Setups\nWe also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25. The CRU model used in these experiments is the deep-enhanced type with the convolutional filter length of 3. In the re-ranking step, we also utilized three features: Global LM, Local LM, Word-class LM, as proposed by BIBREF10, and all LMs are 8-gram trained by SRILM toolkit BIBREF27.
For other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setups as BIBREF10 to make the experiments more comparable.\n\nExperiments: Reading Comprehension ::: Results\nThe overall experimental results are given in Table TABREF38. As we can see, our proposed models can substantially outperform various state-of-the-art systems by a large margin.\nOverall, our final model (M-AoA Reader + CRU + Re-ranking) could give significant improvements over the previous state-of-the-art systems by 2.1% and 1.4% in test sets, while re-ranking and ensemble bring further improvements.\nWhen comparing M-AoA Reader to the original AoA Reader, 1.8% and 0.4% improvements can be observed, suggesting that incorporating additional features into the embedding can enrich the power of word representation. Incorporating more additional features in the word embeddings would give another boost in the results, but we leave this to future work.\nReplacing GRU with our CRU could significantly improve the performance, where 1.6% and 1.5% gains can be obtained when compared to M-AoA Reader. This demonstrates that incorporating contextual information when modeling the sentence could enrich the representations. Also, when modeling an unknown word, except for its randomly initialized word embedding, the contextual information could give a possible guess of the unknown word, making the text more readable to the neural networks.\nThe re-ranking strategy is an effective approach in this task. We observed that the gains in the common noun category are significantly greater than in the named entity category. One possible reason is that the language model is much more beneficial to CN than to NE, because it is much more likely to meet a new named entity that is not covered in the training data than a common noun.\n\nQualitative Analysis\nIn this section, we will give a qualitative analysis of our proposed CRU model in the sentiment classification task. We focus on two categories of movie reviews which are quite hard for the model to judge the correct sentiment on. The first one is the movie review that contains negation terms, such as “not”. The second type is the one that contains sentiment transition, such as “clever but not compelling”. We manually select 50 samples of each category in the MR dataset, forming a total of 100 samples to see if our CRU model is superior in handling these movie reviews. The results are shown in Table TABREF45. As we can see, our CRU model is better at both categories of movie review classification, demonstrating its effectiveness.\nAmong these samples, we select an intuitive example in which the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, which is shown in Table TABREF46.\nRegarding the first and second sentences, both models give correct sentiment prediction. When introducing the third sentence, the GRU baseline model failed to recognize this review as a positive sentiment because there are many negation terms in the sentence.
However, our CRU model could capture the local context while recurrently modeling the sentence, and phrases such as “not making fun” and “not laughing at” could be correctly noted as positive sentiment, which corrects the sentiment category of the full review, suggesting that our model is superior at modeling local context and gives a much more accurate meaning.\n\nTable 2: Results on MR, IMDB and SUBJ sentiment classification tasks. Best previous results are marked in italics, and overall best results are marked in bold face. Multi-channel CNN (Kim, 2014): A CNN architecture with static and non-static word embeddings. HRL (Wang and Tian, 2016): A hybrid residual LSTM architecture. Multitask arc-II (Liu et al., 2016): A deep architecture with shared local-global hybrid memory for multi-task learning. CNN-GRU-word2vec (Wang et al., 2016): An architecture that combines CNN and GRU models with word embeddings pre-trained by word2vec. DSCNN-Pretrain (Zhang et al., 2016): Dependency sensitive convolutional neural networks with pretrained sequence autoencoders. AC-BLSTM (Liang and Zhang, 2016): Asymmetric convolutional bidirectional LSTM networks.\n\nFigure 3: Trends of MR test set accuracy with the increasing convolutional filter length.\n\nTable 4: Number of correctly classified samples.\n\nInquiry: What datasets are used for testing sentiment classification and reading comprehension?",
"role": "user"
},
{
"content": "{\"answer\": [\"CBT NE/CN\", \"MR Movie reviews\", \"IMDB Movie reviews\", \"SUBJ\"], \"evidence\": [\"In the sentiment classification task, we tried our model on the following public datasets.\", \"MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.\", \"IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.\", \"SUBJ$^1$ Movie review labeled with subjective or objective BIBREF20.\", \"We also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25. The CRU model used in these experiments is the deep-enhanced type with the convolutional filter length of 3. In the re-ranking step, we also utilized three features: Global LM, Local LM, Word-class LM, as proposed by BIBREF10, and all LMs are 8-gram trained by SRILM toolkit BIBREF27. For other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setups as BIBREF10 to make the experiments more comparable.\"]}",
"role": "assistant"
}
] |
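The paper excerpt in the row above describes the deep-enhanced fusion CRU in prose, but its display equations (Equations 1 to 3 and 9 to 11) were dropped during text extraction. The PyTorch sketch below is therefore a reconstruction, not the authors' code: it follows the stated recipe of three unshared convolutions phi_z, phi_r, phi feeding the GRU gates, with the original embedding e_t added back after each convolution, combined with the standard GRU update. The class and parameter names are invented for illustration.

    import torch
    import torch.nn as nn

    class CRU(nn.Module):
        """Deep-enhanced fusion CRU, reconstructed from the prose description."""

        def __init__(self, emb_dim, hidden_dim, kernel=3):
            super().__init__()
            pad = kernel // 2
            # Three separate 1-D convolutions: weights are NOT shared.
            self.phi_z = nn.Conv1d(emb_dim, emb_dim, kernel, padding=pad)
            self.phi_r = nn.Conv1d(emb_dim, emb_dim, kernel, padding=pad)
            self.phi_h = nn.Conv1d(emb_dim, emb_dim, kernel, padding=pad)
            self.w_z = nn.Linear(emb_dim, hidden_dim)
            self.w_r = nn.Linear(emb_dim, hidden_dim)
            self.w_h = nn.Linear(emb_dim, hidden_dim)
            self.u_z = nn.Linear(hidden_dim, hidden_dim, bias=False)
            self.u_r = nn.Linear(hidden_dim, hidden_dim, bias=False)
            self.u_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
            self.hidden_dim = hidden_dim

        def forward(self, e):                  # e: [batch, seq_len, emb_dim]
            x = e.transpose(1, 2)              # Conv1d expects [batch, emb, seq]
            # Deep-enhanced fusion: local context summary plus the original e_t.
            cz = self.phi_z(x).transpose(1, 2) + e
            cr = self.phi_r(x).transpose(1, 2) + e
            ch = self.phi_h(x).transpose(1, 2) + e
            h = e.new_zeros(e.size(0), self.hidden_dim)
            outputs = []
            for t in range(e.size(1)):         # standard GRU recurrence
                z = torch.sigmoid(self.w_z(cz[:, t]) + self.u_z(h))
                r = torch.sigmoid(self.w_r(cr[:, t]) + self.u_r(h))
                h_new = torch.tanh(self.w_h(ch[:, t]) + self.u_h(r * h))
                h = (1 - z) * h + z * h_new
                outputs.append(h)
            return torch.stack(outputs, dim=1)  # [batch, seq_len, hidden_dim]

A bidirectional variant, as evaluated in the excerpt, would run a second such unit over the reversed sequence and concatenate the two hidden-state sequences.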
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4095 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nBACKGROUND: The mesenchymal stem cell (MSC), known to remodel in disease and have an extensive secretome, has recently been isolated from the human heart. However, the effects of normal and diseased cardiac MSCs on myocyte electrophysiology remain unclear. We hypothesize that in disease the inflammatory secretome of cardiac human MSCs (hMSCs) remodels and can regulate arrhythmia substrates. METHODS: hMSCs were isolated from patients with or without heart failure from tissue attached to extracted device leads and from samples taken from explanted/donor hearts. Failing hMSCs or nonfailing hMSCs were cocultured with normal human cardiac myocytes derived from induced pluripotent stem cells. Using fluorescent indicators, action potential duration, Ca2+ alternans, and spontaneous calcium release (SCR) incidence were determined. RESULTS: Failing and nonfailing hMSCs from both sources exhibited similar trilineage differentiation potential and cell surface marker expression as bone marrow hMSCs. Compared with nonfailing hMSCs, failing hMSCs prolonged action potential duration by 24% (P<0.001, n=15), increased Ca2+ alternans by 300% (P<0.001, n=18), and promoted spontaneous calcium release activity (n=14, P<0.013) in human cardiac myocytes derived from induced pluripotent stem cells. Failing hMSCs exhibited increased secretion of inflammatory cytokines IL (interleukin)-1beta (98%, P<0.0001) and IL-6 (460%, P<0.02) compared with nonfailing hMSCs. IL-1beta or IL-6 in the absence of hMSCs prolonged action potential duration but only IL-6 increased Ca2+ alternans and promoted spontaneous calcium release activity in human cardiac myocytes derived from induced pluripotent stem cells, replicating the effects of failing hMSCs. In contrast, nonfailing hMSCs prevented Ca2+ alternans in human cardiac myocytes derived from induced pluripotent stem cells during oxidative stress. Finally, nonfailing hMSCs exhibited >25x higher secretion of IGF (insulin-like growth factor)-1 compared with failing hMSCs. Importantly, IGF-1 supplementation or anti-IL-6 treatment rescued the arrhythmia substrates induced by failing hMSCs. CONCLUSIONS: We identified device leads as a novel source of cardiac hMSCs. Our findings show that cardiac hMSCs can regulate arrhythmia substrates by remodeling their secretome in disease. Importantly, therapy inhibiting (anti-IL-6) or mimicking (IGF-1) the cardiac hMSC secretome can rescue arrhythmia substrates.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.2599 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nLive video-streaming platforms such as Twitch enable top content creators to reap significant profits and influence. To that effect, various behavioral norms are recommended to new entrants and those seeking to increase their popularity and success. Chiefly among them are to simply put in the effort and promote on social media outlets such as Twitter, Instagram, and the like. But does following these behaviors indeed have a relationship with eventual popularity? In this paper, we collect a corpus of Twitch streamer popularity measures --- spanning social and financial measures --- and their behavior data on Twitch and third party platform. We also compile a set of community-defined behavioral norms. We then perform temporal analysis to identify the increased predictive value that a streamer's future behavior contributes to predicting future popularity. At the population level, we find that behavioral information improves the prediction of relative growth that exceeds the median streamer. At the individual level, we find that although it is difficult to quickly become successful in absolute terms, streamers that put in considerable effort are more successful than the rest, and that creating social media accounts to promote oneself is effective irrespective of when the accounts are created. Ultimately, we find that studying the popularity and success of content creators in the long term is a promising and rich research area.\n\nCited Works Summary:\n[1]: Pinterest is a popular social curation site where people collect, organize, and share pictures of items. We studied a fundamental issue for such sites: what patterns of activity attract attention (audience and content reposting)-- We organized our studies around two key factors: the extent to which users specialize in particular topics, and homophily among users. We also considered the existence of differences between female and male users. We found: (a) women and men differed in the types of content they collected and the degree to which they specialized; male Pinterest users were not particularly interested in stereotypically male topics; (b) sharing diverse types of content increases your following, but only up to a certain point; (c) homophily drives repinning: people repin content from other users who share their interests; homophily also affects following, but to a lesser extent. Our findings suggest strategies both for users (e.g., strategies to attract an audience) and maintainers (e.g., content recommendation methods) of social curation sites.\n\n[2]: Follower count is important to Twitter users: it can indicate popularity and prestige. Yet, holistically, little is understood about what factors -- like social behavior, message content, and network structure - lead to more followers. Such information could help technologists design and build tools that help users grow their audiences. In this paper, we study 507 Twitter users and a half-million of their tweets over 15 months. Marrying a longitudinal approach with a negative binomial auto-regression model, we find that variables for message content, social behavior, and network structure should be given equal consideration when predicting link formations on Twitter. 
To our knowledge, this is the first longitudinal study of follow predictors, and the first to show that the relative contributions of social behavior and message content are just as impactful as factors related to social network structure for predicting growth of online social networks. We conclude with practical and theoretical implications for designing social media technologies.",
"role": "user"
},
{
"content": "Unlike predicting the popularity of content, which focuses on predicting popularity in the near future, our goal is to study the process of on a social network by observing behavioral characteristics over long spans of time. To the best of our knowledge, there are many community-based anecdotes about effective behavior, and relatively few quantitative or longitudinal studies. suggest that behavior may be a factor in growing social network influence; find that how Twitter users interact with their social network affects their follower count [2] ; find that diverse content can help increase followings on Pinterest [1] . We extend these ideas by examining a broad set of behaviors derived from the Twitch community and quantitatively studying their ability to predict future popularity growth for varying time ranges.",
"role": "assistant"
}
] |
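Cited work [2] in the row above models Twitter follower growth with a negative binomial auto-regression over behavior, content, and network features. The sketch below shows only the model family in statsmodels, fit on fabricated stand-in features and without the auto-regressive lag terms; the variable names are placeholders, not the cited paper's predictors.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    hours_streamed = rng.gamma(2.0, 10.0, n)     # stand-in "effort" feature
    tweets_per_week = rng.poisson(5, n)          # stand-in "promotion" feature
    X = sm.add_constant(np.column_stack([hours_streamed, tweets_per_week]))

    # Simulate over-dispersed counts of new followers with mean mu.
    mu = np.exp(0.5 + 0.02 * hours_streamed + 0.05 * tweets_per_week)
    followers_gained = rng.negative_binomial(n=2, p=2 / (2 + mu))

    model = sm.GLM(followers_gained, X, family=sm.families.NegativeBinomial())
    print(model.fit().summary())

A longitudinal version in the spirit of [2] would add lagged outcome columns to X, one per prior observation window.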
science.anat_em_ner | science.anat_em_ner.9 | [
{
"content": "Analyze the provided text from a biomedical study. Your task is to detect and list entities based on predefined categories.\n\nPresent your results in a JSON format: {\"AnatomicalEntity\": [\"entity1\", \"entity2\", ...]}. Each category should have a corresponding list of identified entities. \n\nText:\n\nResults\nOFTT was given to 33 men with MetS (defined by the Adult Treatment Panel III), 17 HTN and 14 healthy men. The MetS group was further divided according to fasting triglycerides (TG) into TG >= 150 [MetS+TG, (n = 22)] or <150 mg/dl [MetS-TG (n = 11)], and into those with or without hypertension [MetS+HTN (n = 24), MetS-HTN (n = 9), respectively]. TG concentrations were measured before and at 4, 6 and 8 h after OFTT and the postprandial response was quantified using the area under the curve (AUC) for TG.\nThe postprandial response was significantly higher in MetS compared to HTN and healthy men [AUC (SD) in mg/dl/h; 2534 +/- 1016 vs. 1620 +/- 494 and 1019 +/- 280, respectively, p <= 0.001]. The TG levels were increased significantly in MetS+TG compared to MetS-TG subjects at 4 (p = 0.022), 6 (p < 0.001) and 8 hours (p < 0.001). The TG were increased significantly in MetS-TG compared to healthy subjects at 4 (p = 0.011), 6 (p = 0.001) and 8 hours (p = 0.015). In linear regression analysis only fasting TG levels were a significant predictor of the AUC (Coefficient B = 8.462, p < 0.001).",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": []}",
"role": "assistant"
}
] |
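Several rows in this dump (the NER and classification tasks) require the answer to be strict JSON keyed by entity category, as in the {"AnatomicalEntity": []} output just above. A small Python validator for that output contract follows; the expected-keys argument is whatever categories a given row's prompt names, and the function itself is an illustration, not part of any dataset.

    import json

    def validate_ner_output(raw, expected_keys):
        """Check that `raw` is a JSON object mapping each expected
        category to a list of strings, and nothing else."""
        parsed = json.loads(raw)                   # must parse as JSON
        if not isinstance(parsed, dict):
            raise ValueError("output must be a JSON object")
        if set(parsed) != set(expected_keys):
            raise ValueError(f"keys {sorted(parsed)} != {sorted(expected_keys)}")
        for key, entities in parsed.items():
            if not isinstance(entities, list) or not all(
                    isinstance(e, str) for e in entities):
                raise ValueError(f"{key} must map to a list of strings")
        return parsed

    # The row above: an empty but well-formed extraction.
    print(validate_ner_output('{"AnatomicalEntity": []}', ["AnatomicalEntity"]))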
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.1611 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY \nChapter Title: Introduction \n1. Abstract of Cited Paper (BIB001): We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. \n2. Abstract of Cited Paper (BIB002): We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as 'red', 'striped', or 'spotted'. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. ::: ::: As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images. \n3. Abstract of Cited Paper (BIB003): The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set). Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%. \n4. Abstract of Cited Paper (BIB004): This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). 
SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision. \n5. Abstract of Cited Paper (BIB005): We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework. \n6. Abstract of Cited Paper (BIB006): We present a method to learn visual attributes (eg.“red”, “metal”, “spotted”) and object classes (eg. “car”, “dress”, “umbrella”) together. We assume images are labeled with category, but not location, of an instance. We estimate models with an iterative procedure: the current model is used to produce a saliency score, which, together with a homogeneity cue, identifies likely locations for the object (resp. attribute); then those locations are used to produce better models with multiple instance learning. Crucially, the object and attribute models must agree on the potential locations of an object. This means that the more accurate of the two models can guide the improvement of the less accurate model. Our method is evaluated on two data sets of images of real scenes, one in which the attribute is color and the other in which it is material. We show that our joint learning produces improved detectors. We demonstrate generalization by detecting attribute-object pairs which do not appear in our training data. The iteration gives significant improvement in performance. \n7. Abstract of Cited Paper (BIB007): We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). 
Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance. \n8. Abstract of Cited Paper (BIB008): Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge. \n9. Abstract of Cited Paper (BIB009): With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple, yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. 
These cues or high level describable image attributes fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query specific “interestingness”. \n10. Abstract of Cited Paper (BIB011): The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Attributes are the visual properties of objects. They are mainly related to the appearance and geometrical structures of the objects BIB003 . Visual attributes used by humans to describe an object are called semantic attributes. They are used to represent parts, shape and materials of objects BIB006 . Humans have the ability to describe unfamiliar objects up to some extent by using the semantic attributes. An Object category recognition algorithm based on training images is likely to fail when presented with an image from a particular category for which no images are included in the training data set. It can be observed in the real world that the objects share visual and semantic attributes. Researchers have taken the advantage of the sharing nature of the attributes and have shown that attributes work well in situations where the images from a particular object category are absent in the training data set. Attributes have been used to identify an object or at least parts of an unknown object BIB006 [3] BIB007 . Apart from object category recognition, attributes can be used for image retrieval in a way that is natural and favorable for humans. Current methods of image retrieval use local or global features of images which are more related to the rigid structure of objects being queried. Semantic attributes can be used to describe an object due which they present a natural choice for image retrieval. Attributes have also been used for face verification BIB008 , describing aesthetics and interestingness of images BIB010 and generation of sentences from images as well. The low level features are extracted from annotated training images and then quantized. The attributes classifiers are then learnt based on these quantized low-level features. As stated earlier, attributes are used to represent materials, parts and shapes. Color and texture are used for materials, visual words are used for parts, and edges are used for shapes BIB006 . Low-level feature used for these attributes are texture descriptors extracted with a texton filter bank, HOG BIB002 descriptor for the construction of visual words, and canny edge detectors BIB001 is used for edge detection. Apart from these, SIFT BIB011 , rgSIFT BIB009 , PHOG BIB004 , SURF BIB005 and self-symmetry Histograms are also used as low-level features. In most of the works, a classifier is trained for each attribute based on the quantized low-level features. The learned parameters from these classifiers are then used to predict the presence or absence of an attribute in a given image. Most of the algorithms learn these parameters by localizing the objects of interest in a bounding box that is pre-detected thus making the problem as \"what is this\" instead of \"where is this\" BIB006 . Some of the algorithms combine the localization as well as . Attributes are the mid-level features which can be named and described making them more efficient than low level features for problems like object category recognition. They can efficiently model the visual characteristics of objects and their spatial relationships in an image. Attributes can also represent image properties and concepts. They have been shown to give better performance than low-level features in face verification BIB008 and image aesthetics prediction BIB010 . These results give the motivation to switch from solutions based on low-level features to attributes based solutions. 
Therefore an overview of the attributes and their applications becomes a pre-requisite before one can apply them to the solution of a problem. This article will give the required overview. Following are the contributions of this article. • An overview of the attributes based solutions of object classification and description, image retrieval and image aesthetics prediction. • Potential future directions. The rest of the paper is organized as follows. Section 2 summarizes the techniques that use attributes for object classification and description; an overview of the attributes used for image aesthetics prediction is given in section 3; section 4 gives an overview of attributes based image retrieval methods; the attributes data sets and future directions to apply attributes to some potential problems are given in section 6; and section 7 concludes the article.",
"role": "assistant"
}
] |
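The review chapter in the row above repeatedly describes one pipeline: extract low-level features such as HOG, then train one classifier per attribute on them. A minimal Python sketch of that pipeline with scikit-image and scikit-learn follows; the random arrays stand in for annotated images, and the single "striped" attribute label is an invented example.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    images = rng.random((40, 64, 64))      # stand-ins for grayscale images
    labels = rng.integers(0, 2, 40)        # stand-in "striped" attribute labels

    features = np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

    # One binary classifier per attribute, as in the surveyed approaches.
    striped_clf = LinearSVC().fit(features, labels)
    print("predicted:", striped_clf.predict(features[:5]))

Repeating the fit once per attribute yields the bank of attribute classifiers whose outputs form the mid-level representation the chapter discusses.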
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.2546 | [
{
"content": "Transform the provided sections of a biomedical research paper into a digestible summary for a general audience. Focus on simplifying complex ideas while retaining key scientific insights.\n\nResearch Title: Localized hypoxia within the subgranular zone determines the early survival of newborn hippocampal granule cells\nContent Overview:\nAbstract:\nThe majority of adult hippocampal newborn cells die during early differentiation from intermediate progenitors( IPCs) to immature neurons. Neural stem cells in vivo are located in a relative hypoxic environment, and hypoxia enhances their survival, proliferation and stemness in vitro. Thus, we hypothesized that migration of IPCs away from hypoxic zones within the SGZ might result in oxidative damage, thus triggering cell death. Hypoxic niches were observed along the SGZ, composed of adult NSCs and early IPCs, and oxidative byproducts were present in adjacent late IPCs and neuroblasts. Stabilizing hypoxia inducible factor-1α with dimethyloxallyl glycine increased early survival, but not proliferation or differentiation, in neurospheres in vitro and in newly born SGZ cells in vivo. Rescue experiments in Baxfl/fl mutants supported these results. We propose that localized hypoxia within the SGZ contributes to the neurogenic microenvironment and determines the early, activity-independent survival of adult hippocampal newborn cells.\nIntroduction:\nIn the subgranular zone( SGZ) of the adult hippocampal dentate gyrus( DG), one of the two neurogenic niches in the adult mammalian brain, new neurons are continuously generated throughout adulthood( Taupin and Gage, 2002; van Praag et al., 2002). Adult hippocampal neurogenesis is a highly dynamic and regulated process that evolves slowly over several weeks( Zhao et al., 2008). Adult neural stem/ radial glia-like cells( Type I cells) reside at the interface between the SGZ and the hilus, and give rise to rapidly dividing intermediate progenitors( Type 2 cells). These cells migrate a short distance into the SGZ and progressively( via Type 2a and Type 2b intermediate progenitors) differentiate into neuroblasts( Type 3 cells)( Zhao et al., 2008; Fuentealba et al., 2012; Bonaguidi et al., 2012). Neuroblasts then exit the cell cycle and become immature granule cell neurons, which in turn migrate into the granule cell layer and incorporate into the pre-existing functional hippocampal circuits( Toni et al., 2008). In the young adult mouse DG as many as 4000 new cells are born daily, but only a subset( ~30%) survive at 4 weeks post-mitosis to become mature granule cell neurons( Dayer et al., 2003; Kempermann, 2003; Kempermann et al., 2006; Sierra et al., 2010). The survival of adult-generated granule cells exhibits two critical periods; an early one during the transition from transient amplifying progenitors to neuroblasts and a later one during the integration of the immature neurons( Dayer et al., 2003; Kempermann, 2003; Sierra et al., 2010; Mandyam et al., 2007). The early phase is associated with phagocytosis of apoptotic cells by microglia( Sierra et al., 2010). GABA-mediated-depolarization and NMDA-receptor-mediated neuronal activity regulate the later phase of survival, at two and three weeks after mitosis respectively( Jagasia et al., 2009; Tashiro et al., 2006). However, less is known about the mechanisms responsible for the early death of newborn granule cells which occurs during the first days post mitosis, as cells exit the specialized microenvironment( NSC niche) of the adult SGZ. 
There is increasing evidence that differential oxygen tensions may be an important component of the NSC niche and that oxygen sensing can regulate the proliferation, survival and differentiation of neural stem cells and progenitors( Panchision, 2009; de Filippis and Delia, 2011; Mazumdar et al., 2010). Thus, we investigated hypoxia within the SGZ, and whether stabilizing Hypoxia Inducible Factor-1 α( HIF-1α), the master regulator of oxygen homeostasis, influences the early survival of the adult hippocampal newborn cells.\nDiscussion:\nThe partial oxygen pressure( pO2) and concentration in brain tissue is well below the 40 mm Hg in venous blood exiting the brain and is also heterogeneous( Erecińska and Silver, 2001). In rat hippocampus the pO2 has been estimated to be 20.3 mmHg( Erecińska and Silver, 2001), whereas even more hypoxic regions have been detected in the SGZ of the adult mouse DG( Mazumdar et al., 2010). Using the hypoxia marker, pimonidazole hydrochloride, which labels cells with oxygen levels <10 mm Hg( 1.3%), we found that SGZ stem-cell-like radial glia and early intermediate progenitors lie within hypoxic zones. Interestingly, not all cells in the SGZ were labeled, but were localized to narrow zones of a few cell diameters wide along the hilar border of the SGZ. Only a fraction of GFAP+ or Tbr2+ cells were pimonidazole-positive whereas the hypoxic cell did not label with the proliferating marker Ki67( data not shown). Thus heterogeneous zones of tissue pO2 may reflect some degree of cell-state specificity as well as local metabolic demand that changes constantly even within a small specific area/volume of the brain. The two major determinants for tissue oxygen concentration are blood flow( oxygen supply) and oxygen consumption rate( cellularity). The latter may predominate in the SGZ with its high ratio of nuclei to blood vessels compared to other brain regions( Mazumdar et al., 2010). Although some neural stem cells are perivascular( Ottone et al., 2014; Tavazoie et al., 2008; Palmer et al., 2000), it remains to be determined whether these vessels carry highly oxygenated blood or how easily oxygen is distributed to the cell. Interestingly, a recent paper( Sun et al., 2015) revealed that within the SGZ zone only late amplifying progenitors( Tbr2+/DCX+) and horizontal early neuroblasts( Tbr2-/DCX+), among the entire population of SGZ progenitors are in direct contact with blood vessels. Numerous adult stem cells reside in hypoxic niches, where they maintain a quiescent state depending predominantly on anaerobic glycolysis( de Filippis and Delia, 2011; Mohyeldin et al., 2010; Platero-Luengo et al., 2014). Stem cells exhibit different metabolic properties than their differentiated progeny that may promote ‘stemness’. However, whether anaerobic metabolism is an adaptation to low oxygen levels in the specific niches in vivo or is an intrinsic stem cell property is still unclear. In either case, our results are consistent with two-photon phosphorescence direct in vivo measurements of local oxygen tension in the bone marrow, which revealed a hypoxic adult stem cell niche with high heterogeneity in local pO2( Spencer et al., 2014). Thus our data supports the existence of steep pO2 gradients between the neurogenic niche and the granule cell layer.
Although a microenvironment with low oxygen concentration confers resistance to reactive oxygen species and cytotoxic stressors, stem cells and their progeny are susceptible to changes in redox status upon migration away from hypoxic zones( Mohyeldin et al., 2010; Platero-Luengo et al., 2014; Suda et al., 2011). Interestingly, we detected oxidative byproducts in intermediate progenitors and neuroblasts located adjacent to the hypoxic niches. This correlation suggests that migration of intermediate progenitors away from hypoxic zones leads to oxidative damage, and thus triggers an early phase of apoptosis. In vitro, hypoxic conditions( defined as <5% O2) reduces apoptosis, promotes proliferation, and increases cultured embryonic, adult neural stem cells and neuronal progenitors( Sierra et al., 2010; Shingo et al., 2001; Studer et al., 2000; Felling, 2006; Gustafsson et al., 2005; Clarke and van der Kooy, 2009; Chen et al., 2007; Bürgers et al., 2008). Deletion of hypoxia-inducible factor-1 alpha subunit( HIF-1α) in vivo, a transcriptional activator mediating adaptive cellular responses to hypoxia, dramatically decreased adult hippocampal neurogenesis( Mazumdar et al., 2010). Additionally, conditional ablation of HIF1-α in adult mouse brain resulted in hydrocephalus, decreased neurogenesis( partly due to an increase in apoptosis) and deficits in spatial memory( Tomita et al., 2003). In our experiments, DMOG, an agent that stabilizes HIF1-α under normoxic conditions, reversed the early phase of cell death of newborn cells in the adult dentate gyrus. These results strongly support the idea that transition of progenitors from a hypoxic niche to normal tissue oxygen contributes to cell death. Prior studies using mitotic markers such as BrdU found that only 30–50% of adult-born newborn neurons survive by 30 days post-mitosis( Dayer et al., 2003; Kempermann, 2003; Mandyam et al., 2007; Jagasia et al., 2009; Tashiro et al., 2006), but these studies often did not investigate survival during the first few days post-mitosis. Thus the prevailing idea, until recently, had been that adult-generated hippocampal granule cell survival is regulated mostly at the immature neuron stage( 2–4 weeks post-mitosis), when is affected by neuronal activity and behavior( Jagasia et al., 2009; Tashiro et al., 2006; Gage et al., 1999). Our results support two critical periods for survival of newborn cells in the adult SGZ, early( 1–4 days post-mitosis) and late( 1–3 weeks post-mitosis) with most( 2/3rds) of cell death occurring in the first week( Dayer et al., 2003; Kempermann, 2003; Sierra et al., 2010; Mandyam et al., 2007; Jagasia et al., 2009; Tashiro et al., 2006; Mazumdar et al., 2010). The early apoptosis is likely executed by members of the p53 and Bcl-2 protein families that are key players regulating the apoptotic machinery of adult DG neural precursors and newborn neurons( Sun, 2004; Sahay et al., 2011; Cancino et al., 2013; Fatt et al., 2014). Although programmed neuronal death is often activity-regulated, whether there is synaptic innervation during the first few days post-mitosis has been unclear( Song et al., 2013; Esposito, 2005). Our results show that at 3 days post-mitosis, adult-born cells demonstrate no detectable neural activity. Rather, we propose that the metabolic milieu of the adult SGZ, a previously underexplored variable of this highly specialized neurogenic niche, plays a critical role in the early survival of adult generated hippocampal granule cells.",
"role": "user"
},
{
"content": "The hippocampus is a region in the mammalian brain that has been implicated in the formation of new memories . This process involves the birth of new neurons , which are created at a rate of ∼4000 a day in the hippocampus of a young adult mouse . Yet only a fraction of these cells survive to form mature neurons . These cells die in two main waves – the first occurs days after they form , and the second several weeks later when as immature neurons they integrate into the brain . During this later wave , new neurons become active and survive if they connect with other nerve cells and die if they don’t . But little is known about what causes the earlier wave of cell death . The tissues that contain the precursors of new neurons often have lower oxygen levels compared to other tissues . This means that when these cells start to become neurons and leave these sites , they have to face higher levels of oxygen and may undergo “oxidative” damage . This led Chatzi et al . to ask whether such oxidative damage might cause the early loss of new neurons in the hippocampus . First , the part of the hippocampus that contains the precursor cells ( called the subgranular zone or SGZ ) was found to have patchy areas of low oxygen . Further experiments then revealed that chemicals that may cause oxidative damage were present in the nearby cells that had already started on the path to become new neurons . Chatzi et al . then tested whether chemically stabilizing a protein called Hypoxia Inducible Factor-1α ( or HIF1α for short ) , which naturally helps cells to adapt to low oxygen environments , might increase the survival of the cells in the SGZ . Higher levels of HIF1α did indeed increase the survival of these cells . These findings suggest that newborn cells in the SGZ walk a tightrope between a low oxygen environment that supports the early precursors and the surrounding higher oxygen levels that can be toxic to those cells that start to become neurons . Further studies of the proteins and molecules that act downstream of HIF1α could shed light on ways to enhance the survival of these newly-generated neurons .",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.950 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nParticle shape optimization by changing from an isotropic to an anisotropic nanostructure: preparation of highly active and stable supported Pt catalysts in microemulsions. We recently introduced a new method to synthesize an active and stable Pt catalyst, namely thermo-destabilization of microemulsions (see R. Y. Parapat, V. Parwoto, M. Schwarze, B. Zhang, D. S. Su and R. Schomäcker, J. Mater. Chem., 2012, 22 (23), 11605-11614). We are able to produce Pt nanocrystals with a small size (2.5 nm) of an isotropic structure i.e. truncated octahedral and deposit them well on support materials. Although we have obtained good results, the performance of the catalyst still needed to be improved and optimized. We followed the strategy to retain the small size but change the shape to an anisotropic structure of Pt nanocrystals which produces more active sites by means of a weaker reducing agent. We found that our catalysts are more active than those we reported before and even show the potential to be applied in a challenging reaction such as hydrogenation of levulinic acid.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Pt\", \"levulinic acid\"]}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.372 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nThesequenceofthislargeprotein(3418aminoacids)offersverylittlecluetoitsfunction,althoughthereareeightrepeatedsegments(termedBRCrepeats)inthemiddleoftheproteinthatarehighlyconservedamongmammalianorthologs[7,8].",
"role": "user"
},
{
"content": "[\"protein\", \"amino acids\", \"protein\"]",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.8398 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: The Spatiotemporal Pattern of Src Activation at Lipid Rafts Revealed by Diffusion-Corrected FRET Imaging\nContent:\nAbstract:\nGenetically encoded biosensors based on fluorescence resonance energy transfer( FRET) have been widely applied to visualize the molecular activity in live cells with high spatiotemporal resolution. However, the rapid diffusion of biosensor proteins hinders a precise reconstruction of the actual molecular activation map. Based on fluorescence recovery after photobleaching( FRAP) experiments, we have developed a finite element( FE) method to analyze, simulate, and subtract the diffusion effect of mobile biosensors. This method has been applied to analyze the mobility of Src FRET biosensors engineered to reside at different subcompartments in live cells. The results indicate that the Src biosensor located in the cytoplasm moves 4–8 folds faster( 0. 93±0. 06 µm2/sec) than those anchored on different compartments in plasma membrane( at lipid raft: 0. 11±0. 01 µm2/sec and outside: 0. 18±0. 02 µm2/sec). The mobility of biosensor at lipid rafts is slower than that outside of lipid rafts and is dominated by two-dimensional diffusion. When this diffusion effect was subtracted from the FRET ratio images, high Src activity at lipid rafts was observed at clustered regions proximal to the cell periphery, which remained relatively stationary upon epidermal growth factor( EGF) stimulation. This result suggests that EGF induced a Src activation at lipid rafts with well-coordinated spatiotemporal patterns. Our FE-based method also provides an integrated platform of image analysis for studying molecular mobility and reconstructing the spatiotemporal activation maps of signaling molecules in live cells.\nIntroduction:\nSrc is a protein tyrosine kinase which plays crucial roles in cell adhesion, migration and cancer invasion[1]. In fact, epidermal growth factor( EGF) and its receptor EGFR has been well documented to couple with Src kinase to regulate cancer progression[2]. Before stimulation, Src is localized at microtubule-associated endosomes around the nucleus[3]–[7]. The SH3 and SH2 domains of Src kinase are coupled together by intramolecular interaction, and the catalytic kinase domain of Src is masked by the interaction with C-terminal tail, thus preventing its action on substrate molecules[8]. Upon EGF stimulation, Src can translocate to focal adhesion sites and associate with actin filaments at cell periphery[4],[5],[9]–[12], possibly through the Src N-terminal tail and SH3 domain, but not the catalytic domain[3],[10],[13]. Recent evidence indicates that EGF can enhance the Src localization and activation at lipid rafts to regulate cancer development[14]–[16]. However, the existence of the extremely small and dynamic lipid rafts, and the mechanism on how these lipid rafts function as docking sites to coordinate signaling molecules, remain controversial[17],[18]. It is also not clear how EGF activates Src spatially and temporally at lipid rafts to impact on cellular functions. Genetically encoded biosensors based on fluorescence resonance energy transfer( FRET) are powerful tools for live cell imaging[19],[20]. 
A variety of such biosensors utilizing cyan fluorescence protein( CFP) and yellow fluorescence protein( YFP) have been developed to visualize the activities of important kinases in live cells, including epithelial growth factor receptor( EGFR), Abl[21], protein kinase A[22], protein kinase B[23], protein kinase C[24], and insulin receptor[25]. We have also developed a genetically-encoded FRET biosensor for monitoring Src activity in live cells[21],[26]. The investigations based on these biosensors have provided invaluable information about the spatiotemporal activation pattern of the molecules studied[27],[28]. However, the observed FRET signal reported by these biosensors at any given spot represents the combined effect of two main factors:( 1) the local kinase activity acting on biosensors and( 2) the signal of activated biosensors moving in the cell among locations. The movement of these biosensors is not dependent on the motion of the targeting enzymes or their endogenous substrate molecules. Hence, the rapid motion of the biosensors can artificially dissipate the cumulative signals engendered by the in situ enzymatic activity. Therefore, it is essential to identify and subtract the effect of biosensor motility from the apparent FRET signals to allow an accurate reconstruction of the spatiotemporal activation map of the targeting kinase. The fluorescence recovery after photobleaching( FRAP) analysis has been widely used to estimate the apparent diffusion coefficients and characterize the motion of fluorescent molecules in live cells[29]–[32]. In classical FRAP analysis, the fluorescence recovery curve is obtained by monitoring the average fluorescence intensity in a small region after photobleaching. Based on the recovery curve, the apparent diffusion coefficient of fluorescent molecules can be estimated by parameter fitting[29]. However, this approach has specific requirements on the cell geometry, photobleached spot, and the actual photobleaching process[29]–[31],[33]. Most recently, FRAP analysis using numerical methods, such as the computational particle method, the finite difference method, and the Monte Carlo simulation, have been developed to address these limitations[34]–[40]. Results from FRAP analysis have revealed the characteristics of transport kinetics for many important molecules[41]–[43]. Nonetheless, there is a need to apply these methods to quantify and analyze live-cell FRET images. The finite element( FE) method is well known for its flexibility in resolving the complex geometry of tissue and cellular structures[44],[45]. It has been used to estimate the apparent diffusion constant in inhomogeneous tissues[46] and for modeling protein transport in single cells[47]. In this study, we have developed a new imaging analysis method based on FE and FRAP to evaluate the motility of different Src biosensors. The results revealed that the motility of biosensors tethered to lipid rafts is governed by 2D diffusion. After the effect of biosensor diffusion on FRET signals was subtracted from the apparent FRET images, the diffusion-corrected FRET signals revealed that, at lipid rafts, high Src activities upon EGF stimulation are concentrated at relatively stationary clusters around cell periphery. 
Our FE-based imaging analysis method, integrated with FRAP and FRET technologies, can also serve as a general method to study the spatiotemporal kinetics of other enzymatic activity in living cells.\nDiscussion:\nThe timing and localization of molecular activities are crucial for their proper functions. In this paper, we have integrated FE-based imaging analysis modeling, FRAP and FRET technologies, to reconstruct and visualize the spatiotemporal Src activity in lipid rafts upon EGF stimulation. The mobility of the Src biosensor tethered in the lipid rafts of plasma membrane was shown to be dominated by diffusion. The subtraction of this diffusion effect from FRET images has helped to reconstruct the Src activation map at lipid rafts, with high Src activity localized at stationary clusters proximal to cell edge. Given the important roles of Src and lipid rafts in mediating EGF/EGFR-regulated cancer development[2],[14], our results should shed new lights on how cells coordinate molecular activities in space and time to orchestrate pathophysiological responses upon external stimulation. The advantage of our live-cell imaging approach is further underscored by the controversial effect of non-ionic detergents used for isolating lipid rafts in traditional assays[17],[61]. Although the roles of Src in regulating downstream signaling pathways are well studied, the detailed mechanism of Src activation in response to EGF is not clearly elucidated[62]. It has been shown that growth factors can induce the translocation of Src from perinuclear regions to cell periphery through RhoB and actin cytoskeleton[5],[63]. Our results suggest that Src can be transported and activated at lipid rafts. The active Src molecules upon arrival at lipid rafts appear relatively stationary with sub-compartment localization since the activation pattern of Src biosensor showed clusters with increasing size, but little motion( Figure 11C). It has been shown that EGF can form complex with its receptor EGFR, which further binds to integrins[64]. Since integrins are anchored to immobile extracellular matrix and well documented to coordinate the localization of lipid rafts and its associated signaling molecules[65],[66], it is possible that EGF and its ligation with EGFR induce localized Src activation at lipid rafts via integrins. In fact, evidence has shown that integrin β3 can directly bind to Src through the interaction of β3 C-terminal tail and Src SH3 domain[67]. Some evidence has shown that EGFR did not colocalize with caveolae at rest state[68]. Hence it is also possible that either EGF receptor or Src is activated outside of lipid rafts and then sequestered inside lipid rafts. Further studies are warranted to elucidate the underlying mechanism for this localized and stationary Src activity at lipid rafts in response to EGF stimulation. The motility of the Lyn-Src biosensor is dominated by diffusion, as evidenced by the close match between experimental and simulated results, and by the strong linear correlation between WDLC and WCCT( Figures 7C and 10B_i). The mobility of the KRas-Src biosensor, however, displays some nonlinear features between WDLC and WCCT( Figure 10B_ii), suggesting that it is not completely governed by 2D diffusion. Intracellular molecule mobility is influenced by molecular interaction, diffusion, and active transportation[31],[33]. Hence, molecular interaction or active transportation may contribute to the motion of KRas-Src biosensor besides diffusion. 
The mobility difference between KRas- and Lyn-Src biosensors may be attributable to the tight membrane-binding of the Lyn tag through deep insertion of side chains into the bilayer interior and the fluctuating membrane-binding of the KRas tag through electrostatic switches[69]. Because the membrane-tethered biosensors extend appreciably into the cytoplasm, it is also possible that some of the restricted motion at the proximity of the plasma membrane may be due to the interaction of the biosensor with the cortical actin cytoskeletal network[70]. These interactions may have particularly contributed to the motion of KRas-Src biosensor, which is not dominated by random diffusion. Our estimated diffusion coefficient of the Cytosolic-Src biosensor is several-fold higher than those of the membrane-targeted versions. One of the possible reasons for the difference between the diffusion coefficients of the Cytosolic-Src and membrane-targeted biosensors may be the difference in the physicochemical properties of local environment, e. g. the diffusion of the Cytosolic-Src biosensors is 3D in nature whereas that of the membrane-targeted biosensors is 2D. While our diffusion model can be used to estimate the apparent diffusion coefficient and simulate diffusion process in principle, it cannot be directly applied to study the Cytosolic-Src biosensor. The low coefficient of determination( R2 = 0. 33±0. 1, n = 5) suggests that the mobility of a large portion of the Cytosolic-Src biosensors cannot be described by diffusion. This is possibly because the Cytosolic-Src biosensors reside in different sub-compartments of the cell, e. g., the nucleus vs. the cytoplasm, as shown in Figure 5B and evidenced by the results from our fluorescence loss in photobleaching( FLIP) experiments( data not shown). The movement of the Cytosolic-Src biosensor will likely be better described by a 3D and multi-compartment diffusion model. The approach of evaluating and subtracting diffusion based on FRAP and FRET video images can also be implemented by employing other numerical methods including finite difference method, computational particle method, and Monte Carlo simulation. We decided to choose the FE-based method because it has been well-established for modeling the diffusion processes with complex geometry in 2D and 3D[44],[71]. Since the FE methods have great flexibility in resolving the complex geometry of tissue and cellular structures[44]–[47], no specific requirement on the cell geometry, the bleaching light beam, or the photobleaching process is needed in our new FRAP analysis method. Further, efficient solvers[72] and parallel implementation on distributed computers have been extensively developed for FE methods[73]. Thus, with the integration of 3D imaging techniques, e. g. confocal microscopy, our system can be conveniently extended to 3D analysis and parallel computing environment. In summary, our FE-based method can successfully separate the effect of biosensor diffusion from the apparent FRET signals to reconstruct the diffusion-corrected spatiotemporal activation map of membrane-tethered Src kinase. The results suggest that the EGF-induced Src activation at lipid rafts has localized and stationary patterns clustered at cell periphery. This methodology can be conveniently utilized to reconstruct other molecular activation maps from those reported by indirect and diffusion-driven biosensors.",
"role": "user"
},
{
"content": "Fluorescence biosensors have been widely used to report the spatial and temporal activity of target molecules in live cells . However , biosensors can move independently of the target molecule and carry its signal to other subcellular locations . Therefore , the observed images appear to be the combination of the target molecular activity and the artifacts introduced by the movement of the biosensors ( mainly due to diffusion ) . The intriguing question is how to estimate and exclude the movement effect of biosensors from the observed fluorescent images and to reconstruct the real activity map of the target molecules . The Src molecule plays important roles in cell adhesion , migration , and cancer invasion . In this paper , we developed a novel computational method to analyze and simulate the movement of the Src biosensor , which was then subtracted from the original fluorescent images . With this computational method , we observed discrete clusters of high Src activity at relatively stationary locations on the plasma membrane . Therefore , our results highlight the coordination of molecular activities in space and time . In addition to Src , our computational method can be used to reconstruct the activity map of other signaling molecules .",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.407 | [
{
"content": "Analyze the provided text excerpt from a scientific document. Identify and extract distinct entities based on predefined categories.\n\nDeliver the results as a JSON structure where each key represents an entity category, and the associated value is a list of identified entities. If no entities are found for a category, return an empty list.\n\nText Excerpt:\n\nCloning and characterization of the beta subunit of human proximal sequence element-binding transcription factor and its involvement in transcription of small nuclear RNA genes by RNA polymerases II and III. The proximal sequence element (PSE) -binding transcription factor (PTF), which binds the PSE of both RNA polymerase II-and RNA polymerase III-transcribed mammalian small nuclear RNA (snRNA) genes, is essential for their transcription. We previously reported the purification of human PTF, a complex of four subunits, and the molecular cloning and characterization of PTF gamma and delta subunits. Here we describe the isolation and expression of a cDNA encoding PTF beta, as well as functional studies using anti-PTF beta antibodies. Native PTF beta, in either protein fractions or a PTF-Oct-1-DNA complex, can be recognized by polyclonal antibodies raised against recombinant PTF beta. Immunodepletion studies show that PTF beta is required for transcription of both classes of snRNA genes in vitro. In addition, immunoprecipitation analyses demonstrate that substantial and similar molar amounts of TATA-binding protein (TBP) and TFIIIB90 can weakly associate with PTF at low salt conditions, but this association is dramatically reduced at high salt concentrations. Along with our previous demonstration of both physical interactions between PTF gamma / PTF delta and TBP and the involvement of TFIIIB90 in the transcription of class III snRNA genes, these results are consistent with the notion that a TBP-containing complex related to TFIIIB is required for the transcription of class III snRNA genes, and acts through weak interaction with the four-subunit PTF.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"CellType\": [], \"DNA\": [\"PSE\", \"snRNA genes\", \"RNA polymerase II - and RNA polymerase III - transcribed mammalian small nuclear RNA (snRNA) genes\", \"human proximal sequence element\", \"cDNA\", \"small nuclear RNA genes\", \"mammalian small nuclear RNA (snRNA) genes\", \"class III snRNA genes\"], \"Protein\": [\"both RNA polymerase II - and RNA polymerase III - transcribed mammalian small nuclear RNA (snRNA) genes\", \"human proximal sequence element - binding transcription factor\", \"four - subunit PTF\", \"PTF gamma and delta subunits\", \"the four - subunit PTF\", \"polyclonal antibodies\", \"the PSE\", \"beta subunit\", \"RNA polymerases II and III\", \"four subunits\", \"TFIIIB90\", \"PTF gamma / PTF delta\", \"PTF beta\", \"TBP\", \"PTF - Oct - 1 - DNA complex\", \"RNA polymerase II\", \"RNA polymerase III\", \"TBP - containing complex\", \"PTF\", \"The proximal sequence element (PSE) -\", \"recombinant PTF beta\", \"anti - PTF beta antibodies\", \"human PTF\", \"The proximal sequence element (PSE) - binding transcription factor (PTF)\", \"proximal sequence element (PSE) - binding transcription factor\", \"TATA - binding protein\", \"TFIIIB\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.8967 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nthe presence of postoperative seromadesis is common , corresponding to the presence of serum in the subcutaneous tissue post a surgical event .\nwe present a case of a 63-year - old woman having undergone femoral prosthesis surgery and total hip replacement with a subfacial seroma without findings of infection , refractory to standard treatment of compression bandages , massage and cleaning surgery in two oportunities .\na literature review was undertaken to obtain the therapeutic alternatives where erythromycin seromadesis is chosen with excellent response .\nerythromycin sclerotherapy should be considered as an effective and safe option in the treatment of seroma in general surgery and traumatology .\nwe believe that this is the first study of use of erythromycin as sclerotherapy in a traumatology case .\na seroma is the collection of fat , liquid , serum and lymph in one area of the body following a traumatic or surgical event .\nusually , it is formed in a wound in the immediate postoperative period , usually in the first week , although it may be delayed .\ngenerally it is benign , but often annoying to the patient and has a moderate risk of infection .\nthe presence of a seroma may be asymptomatic , or may have swelling , pain and exude from the wound\n. the diagnosis can be by inspection and palpation of the wound , with computed tomography and the ultrasound beign useful for identifying fluid collection .\nseroma s have been seen in orthopaedic surgery as well , where they are likely to have serious repercussions .\nthis is true especially in prosthetic surgery , with greater chances of infection of prosthetic components and failure of the joint replacement associated with high costs , lengthy hospitalizations , and at times need for a subsequent surgery .\ntreatment options can be categorized as preventive or curative , xonsisting of intraoperative application of sclerosing chemicals as hypertonicsaline and talc in humans and gram positive anaerobe like corynebacterium parvum in rat models , tissue adhesives ( fibrin or platelet concentrates ) , use of negative pressure drainage , repeated percutaneous aspiration and surgical drainage with complete removal of seroma and its capsule .\nthe use of erythromycin as a sclerosing agent was described in animal studies , that was first assessed as a pleural sclerosant , proving effective and with fewer sideeffects than talc , tetracycline and diazepam .\nthis is used safely in humans in the management of breast surgeries , non neoplastic ovaran cysts and pleurodesis .\nthere are no reports of its use in orthopedics , and we believe that this is the first study of this kind .\na female patient of 63-year - old presented to our orthopaedic department with prosthetic loosening of the isoelastic stem ( fig .\n1 ) , also femur and acetabulum with massive osteolytic defect performed by modular aloprosthesis of proximal femur and acetabulum of tantalum , and metal on polyethylene pair of friction ( fig .\npatient had good clinical evolution with the drain removed at the first postoperative day with output of 100 cc .\nfive days later , patient was re - admitted to the emergency room for increased volume in relation to wound without infectious signs .\n3 ) . cemented total hip arthroplasty left hip , with extensive osteolysis in bone cement interface .\n. 
soft tissue ultrasonography , in hypodermic one hypoechoic collection of 10.5 4.0 4.6 cm . to avoid superinfection of a large extent surgery\n, it was decided to make an early cleaning surgery , in order to drain and take tissue samples and fluid culture .\na collection is located under muscle planes adjacent to the aloprotesis , with communication between up fascia lata muscleplane .\nafter the cleaning surgery and drainage , equally the seroma is maintained for 5 days with daily output of 200 cc of serous fluid .\nit was decided to perform a new scouring , with obliteration of third space , massage and compression bandages .\nnegative cultures were obtained . given the persistence of seroma , a literature review was undertaken to obtain therapeutic alternatives .\nfinally , surgical cleanliness was performed with resection of pseudocapsule seroma with curettage , making crops and application of 2 g. erythromycin diluted in 200 cc.of 0.9% saline solution .\ncustomary closing was performed flat leaving a aspiration drainage n 9 undert he fascia , which was measured daily .\nthe next day , the drainage measured serohematic liquid 100cc , and 60cc was measured on the 2nd day . on the 3rd day , 10 cc of liquid was measured , and the drainage was removed on the 4th day .\nsubsequently , the patient was discharged in good condition without signs of infection , tension , edema or collection in the thigh wound .\nit has been assessed that there are certain factors that could favor the formation of seroma , such as a high bodymass index , use of electrocautery to dissect , early withdrawal of drain ( within 24 hours ) , obliteration of dead space and use of drains under vacuum . in our case\n, it is highlighted that the drainage was removed on the first postoperative day , which could have favored the formation of seroma .\nalthough , maintaining a drainage for more number of days also increases the risk ofinfection .\nit is important to consider the differential diagnosis of surgical wound infection , hence blood inflammatory parameters were evaluated , and fluid cultures of the wound were taken . if an infection is confirmed , systemic antibiotic treatment should be given .\nin addition , open the wound , drain the liquid and perform irrigation and debridement folllowed by wound closure . in the search for bibliographical studies , with terms \nmesh in pubmed for seroma and erythromycin a single study was located , ali - khan et al .\n2009 , which refers to the use of erythromycin as sclerotherapy in the management of seroma in series of patients of plastic surgery , showing effectiveness and no serious complications .\nthe search for terms mesh in pubmed for seroma and sclerotherapy , located two investigations , throckmorton et al . in 2008 with povidoneiodine or ethyl alcohol in postmastectomy and moritz et al . in 2013 with polidocanol in varicose vein surgery .\nthe current evidence allows the use of this therapy in this patient , although there is no evidence in trauma surgery .\nit is necessary to conduct more studies to evaluate the effectiveness and reproducibility of this therapy .\nit is known that erythromycin as sclerosing product tends to be very effective with fewer adverse effects than other compounds used for the same condition .\nit has evaluated with satisfactory results in pleurodesis , non - neoplastic adnexalcyst , and plastic surgery .\nerythromycin use as sclerosing product in orthopedic surgery must be evaluated as a effective and safe therapeutic alternative .",
"role": "user"
},
{
"content": "introduction : the presence of postoperative seromadesis is common , corresponding to the presence of serum in the subcutaneous tissue post a surgical event . \n erythromycin has been reported as sclerosing , although not in orthopedic surgery . \n we report a case of erythromycin seromadesis in orthopedic surgery.case presentation : we present a case of a 63-year - old woman having undergone femoral prosthesis surgery and total hip replacement with a subfacial seroma without findings of infection , refractory to standard treatment of compression bandages , massage and cleaning surgery in two oportunities . \n a literature review was undertaken to obtain the therapeutic alternatives where erythromycin seromadesis is chosen with excellent response.conclusion:erythromycin sclerotherapy should be considered as an effective and safe option in the treatment of seroma in general surgery and traumatology . \n more studies are necessary to get a better evidence . \n we believe that this is the first study of use of erythromycin as sclerotherapy in a traumatology case .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.1423 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions \nChapter Title: II. RELEVANT COMPUTING PARADIGMS AND \n1. Abstract of Cited Paper (BIB001): The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction. \n2. Abstract of Cited Paper (BIB002): Abstract With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.\n\nReview Chapter:",
"role": "user"
},
{
"content": "REQUIREMENTS The edge computing paradigm is a computing technology which enables data to be processed almost exclusively on the \"edge\" of the network, which refers to locations between the end devices (like sensors, controllers, and actuators) and the centralized cloud servers. The rationale behind the development of this technology is that computations performed closer to the end devices will lead to a lower latency in the system. This is because the system does not need to transfer data between edge devices and central cloud servers as the computations have been offset to closer locations on the edge. Therefore, in edge computing systems, edge devices can not only request content and services from the cloud servers but can also perform computational offloading, caching, storage, and processing, thereby making the edge devices both data producers and consumers BIB002 . The fog computing paradigm can be understood as an extension of the traditional cloud computing model wherein additional computational, data handling, and networking resources (nodes) are placed at locations on the network which are in close proximity to the end devices BIB001 . The consequence of this extension is that processes involving data management, data processing, networking, and storage can occur not only on the centralized cloud servers, but also on the connections between end devices and the cloud servers BIB003 . Fog computing, therefore, can be extremely useful for low latency applications as well as applications that generate an enormous amount of data that cannot be practically transferred to cloud servers in real-time due to bandwidth constraints [20] . As discussed in the previous section, there are many requirements which cyber physical systems need to maintain so as to become a viable supplement for real-world operations and applications. These include the following: 1) Scalability which ensures that the increased data transfer between nodes does not degrade latency or response time. 2) Fault tolerance and reliability which guarantees that the system functions normally under variable external factors like under high load conditions. 3) Data security which ensures that the system is resistant to external attacks attempting to steal confidential information stored in the system or network.",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.156 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nEffects of biotin supplementation in the diet on insulin secretion, islet gene expression, glucose homeostasis and beta-cell proportion. Besides its role as a carboxylase cofactor, biotin has a wide repertoire of effects on gene expression, development and metabolism. Pharmacological concentrations of biotin enhance insulin secretion and the expression of genes and signaling pathways that favor islet function in vitro. However, the in vivo effects of biotin supplementation on pancreatic islet function are largely unknown. In the present study, we investigated whether in vivo biotin supplementation in the diet has positive effects in rodent pancreatic islets. Male BALB/cAnN Hsd mice were fed a control or a biotin-supplemented diet over 8 weeks postweaning and tested for glucose homeostasis, insulin secretion, islet gene expression and pancreatic morphometry. Insulin secretion increased from the islets of biotin-supplemented mice, together with the messenger RNA (mRNA) expression of several transcription factors regulating insulin expression and secretion, including forkhead box A2, pancreatic and duodenal homeobox 1 and hepatocyte nuclear factor 4α. The mRNA abundance of glucokinase, Cacna1d, acetyl-CoA carboxylase, and insulin also increased. Consistent with these effects, glucose tolerance improved, and glucose-stimulated serum insulin levels increased in biotin-supplemented mice, without changes in fasting glucose levels or insulin tolerance. Biotin supplementation augmented the proportion of beta cells by enlarging islet size and, unexpectedly, also increased the percentage of islets with alpha cells at the islet core. mRNA expression of neural cell adhesion molecule 1, an adhesion protein participating in the maintenance of islet architecture, decreased in biotin-supplemented islets. These findings provide, for the first time, insight into how biotin supplementation exerts its effects on function and proportion of beta cells, suggesting a role for biotin in the prevention and treatment of diabetes.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"biotin\", \"Biotin\", \"glucose\", \"acetyl-CoA\"]}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.3105 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nPCRanalysesidentifiedtransientSnailmRNAexpressionduringaperiodofskinembryogenesiswhenwavesofhairfolliclesareforming(unpublisheddata).TopinpointspecificallywhereSnailmRNAisexpressedinthedevelopingskin,weconductedinsituhybridizationusingacRNAprobeuniquetotheSnail3'untranslatedregion(UTR).",
"role": "user"
},
{
"content": "[\"mRNA\", \"mRNA\"]",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.11763 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: GrowMatch: An Automated Method for Reconciling In Silico/In Vivo Growth Predictions\nContent:\nAbstract:\nGenome-scale metabolic reconstructions are typically validated by comparing in silico growth predictions across different mutants utilizing different carbon sources with in vivo growth data. This comparison results in two types of model-prediction inconsistencies; either the model predicts growth when no growth is observed in the experiment( GNG inconsistencies) or the model predicts no growth when the experiment reveals growth( NGG inconsistencies). Here we propose an optimization-based framework, GrowMatch, to automatically reconcile GNG predictions( by suppressing functionalities in the model) and NGG predictions( by adding functionalities to the model). We use GrowMatch to resolve inconsistencies between the predictions of the latest in silico Escherichia coli( iAF1260) model and the in vivo data available in the Keio collection and improved the consistency of in silico with in vivo predictions from 90. 6% to 96. 7%. Specifically, we were able to suggest consistency-restoring hypotheses for 56/72 GNG mutants and 13/38 NGG mutants. GrowMatch resolved 18 GNG inconsistencies by suggesting suppressions in the mutant metabolic networks. Fifteen inconsistencies were resolved by suppressing isozymes in the metabolic network, and the remaining 23 GNG mutants corresponding to blocked genes were resolved by suitably modifying the biomass equation of iAF1260. GrowMatch suggested consistency-restoring hypotheses for five NGG mutants by adding functionalities to the model whereas the remaining eight inconsistencies were resolved by pinpointing possible alternate genes that carry out the function of the deleted gene. For many cases, GrowMatch identified fairly nonintuitive model modification hypotheses that would have been difficult to pinpoint through inspection alone. In addition, GrowMatch can be used during the construction phase of new, as opposed to existing, genome-scale metabolic models, leading to more expedient and accurate reconstructions.\nIntroduction:\nThere are currently 700 completely sequenced genomes along with extensive compilations of data[1] assembled after decades of experimental studies on the metabolic behavior of organisms. This has enabled the reconstruction of stoichiometric models of metabolism for about twenty[2] organisms. This process began with the metabolic characterization of prokaryotic organisms such as Escherichia coli[1], moved to the reconstruction of eukaryotic organisms such as Saccharomyces cerevisiae[3] and, more recently, to the first reconstruction of the more complex Homo Sapiens metabolic map[4]. The completeness and accuracy of microbial metabolic reconstructions are typically assessed by comparing the model growth predictions( i. e., presence or absence) of single and/or multiple knockout mutants for a variety of substrates against experimental data[5]–[7]. As shown in Figure 1, these comparisons lead to four possible outcomes: GG when both model and experimental point at growth, GNG when the model predicts growth but the experiment does not, NGG when the model fails to predict the experimentally observed growth, and finally NGNG when both model and experiment show no growth. 
Cases GG and NGNG are indicative of agreement between model predictions and experimental data whereas cases GNG and NGG signify disagreement. Specifically, in GNG cases the model over-predicts the metabolic capabilities of the organism due to the use of reactions that are absent in vivo, down-regulation or inhibition of genes/enzymes under the experimental conditions, or absence of biomass constituents from the in silico biomass description. Conversely in NGG cases, the model under-predicts the metabolic capabilities of the organism due to the absence of relevant functionalities/reactions in the model. In this study, we introduce optimization-based techniques to systematically suggest modifications( conditionally add/delete reactions, restrict/expand directionalities or add/suppress uptake/secretion mechanisms for NGG/GNG inconsistencies) in genome-scale metabolic reconstructions in order to reconcile experimental and computational growth predictions across different mutants. The proposed method makes use of gene essentiality data sets currently available for many microorganisms[8]–[17]. For example, the Keio collection[17] catalogues the optical density( OD), under different substrate conditions, of the single gene deletion mutants of all 3, 985 non essential genes in the E. coli K-12 BW25113. Several studies are already available that use gene essentiality data available at the Keio database and other sources to suggest targeted improvements in existing metabolic reconstructions[3],[5],[7],[18]–[20]. As seen in Figure 2, in these studies, in silico models of increasing complexity were successively contrasted against in vivo datasets of differing size to correct the predictive capabilities of the models. Recently, Joyce et al.[7] used the Keio mutant collection[17] to pinpoint conditionally essential genes in vivo in a glycerol supplemented minimal medium and then compared them with the corresponding in silico predictions to suggest improvements in the model[7]. In another study, Harrison and co-workers identified computationally predicted synthetic lethal gene deletion pairs in yeast and then proceeded to test the growth characteristics of these double deletion mutants in vivo[21]. While these studies have successfully used gene deletion datasets in many different contexts to pinpoint gaps in in silico models, the key step of resolving these gaps was performed manually. The need to develop automated procedures to improve the accuracy of existing metabolic reconstructions has been recognized and has led to the development of a number of computational procedures. To this end, Reed et al.[22] recently described a systems based approach to modify an existing genome-scale metabolic reconstruction of E. coli[1] by adding new reactions that ensured growth in NGG cases by enabling in silico growth consistent with in vivo data across various carbon/nitrogen substrates. Alternatively, methods to identify and fill gaps in metabolic models based on connectivity information have also been described and applied to the genome scale models of E. coli and S. cerevisiae[23]. These studies represent only the beginning of efforts geared towards methods that automatically resolve network inconsistencies using a variety of metrics[22]–[28] ranging from unreachable metabolites, DNA microarray data and gene essentiality data. It is becoming increasingly clear that it is necessary to bring to bear all types of experimental data to achieve the aim of a high quality metabolic model. 
In this paper, we supplement previous efforts[23] on identifying( i. e., GapFind) and filling( i. e., GapFill) gaps in metabolic reconstructions with an automated procedure for resolving growth prediction inconsistencies while minimally perturbing the original model. Briefly, we resolve GNG inconsistencies by converting them into NGNG one-by-one by identifying the minimal set of restrictions that need to be imposed( i. e., through reaction or transport mechanism suppression or reaction reversibility prohibition) on the model describing the GNG mutant so that biomass formation is negated( or reduced below a pre-specified cutoff). If a particular identified restriction does not invalidate any correct GG predictions then we refer to it as global suppression meaning that it can be imposed universally for all experimental perturbations( e. g., single gene deletion mutants and wild type). Alternatively, if an identified restriction clashes with one or more GG predictions then it is referred to as a conditional suppression meaning that it is imposed only in the mutant strain associated with the GNG prediction for which it is correcting. Similarly, NGG inconsistencies are corrected one-by-one to GG by identifying the minimal set of model modifications( i. e., through reaction or transport mechanism addition or reaction reversibility allowance) that enable biomass formation( above a pre-specified cutoff). If none of these modifications affect any of the consistent NGNG cases, we refer to them as global additions; otherwise, we refer to them as conditional additions. In the next section we discuss the results obtained by applying GrowMatch to the most recent genome-scale model of E. coli, iAF1260[20]. We note here that we can also use GrowMatch to reconcile growth prediction inconsistencies across different substrates. The E. coli reconstruction was chosen as the focus of this study to benchmark the ability of GrowMatch to identify model corrections even for a very well curated model. Using GrowMatch, we improved the growth prediction consistency of the iAF1260 model with the data available at the Keio database from 90. 6% to 94. 6% when considering only globally valid corrections and to 96. 7% when additionally considering conditional corrections.\nDiscussion:\nHere we have developed an automated procedure, GrowMatch, to resolve in silico/in vivo growth prediction inconsistencies in single gene-deletion mutants. In GNG mutants, GrowMatch restores consistency by suppressing reactions to prohibit growth. In NGG mutants, GrowMatch restores consistency by adding growth-enabling pathways. We demonstrated this procedure by reconciling the growth prediction inconsistencies between the most recent in silico model of E. coli, iAF1260[20], with the in vivo growth data available at the Keio mutant collection[17]. Using GrowMatch, we suggested consistency-restoring hypotheses for 56/72 GNG mutants and 13/38 NGG mutants. The inconsistencies in 26 GNG mutants were resolved by carrying out conditional suppressions. In the case of NGG mutants, all the suggested modifications were global modifications. By carrying out only global modifications in wild-type E. coli, we were able to improve the consistency from 90. 6% to 94. 6%. In addition, by carrying out conditional modifications in the specific mutants, we further improve the overall consistency in growth predictions to 96. 7%. 
Moreover, specificity has been recently proposed to be an important measure to determine the effectiveness of in silico simulations as a screen in computational gene essentiality predictions[35]. Notably, we improved the specificity from 67. 6% to 79. 3%( considering only global corrections) using GrowMatch. This value further improves to 92. 8% when we also consider conditional corrections. GrowMatch resolved eighteen GNG inconsistencies by suggesting suppressions in the mutant metabolic networks whereas fifteen inconsistencies were resolved by suppressing isozymes in the metabolic network. The remaining 23 inconsistencies corresponding to blocked genes were repaired by simply adding component( s) of the associated blocked reactions to the biomass equation( Table S4). GrowMatch suggested consistency-restoring hypotheses for five of the NGG mutants by adding functionalities to the model whereas eight inconsistencies were resolved by pinpointing alternate genes that have a high likelihood of carrying out the deleted function. Note that one NGG mutant( ΔluxS) had alternative means of consistency restoration. In this study, we were able to pinpoint missing functionalities that may have been overlooked during model reconstruction. In one such example, were able to resolve a NGG mutant by adding a reaction( i. e., sulfate adenylyltransferase) with documented evidence of its being present in E. coli but absent in the in silico model iAF1260[20]. Furthermore, when checking for alternative genes that restore consistency to NGG mutants, we identified possible alternative activities for aldA and epd that were not associated with them in the iAF1260 model( succinate semialdehyde dehydrogenase and glyceraldehyde-3-phosphate dehydrogenase, respectively). GrowMatch also resolved two NGG mutants by indirectly preventing the toxic accumulation of metabolites. Surprisingly, in the case of NGG mutants, none of the resolutions were achieved by allowing the reversibility of irreversible reactions in the model. This result is in contrast to previous results in which a large proportion of connectivity problems in the previous version of the E. coli genome-scale model were resolved by expanding reversibility of reactions in the model[23]. This finding may be due to the increased accuracy in the characterization of reversible reactions in the latest E. coli model[20] brought about by making use of ΔG values during the reconstruction process. In line with recent explanations for GNG inconsistencies in in silico models[35], we find that about 33% of the GNG mutants correspond to genes associated with blocked reactions in the metabolic network. Using GapFill, we were unable to identify any flow restoring hypotheses for blocked reactions corresponding to three NGG mutants using reactions from the multi-organism databases of MetaCyc and KEGG. Also, these databases of reactions were also unable to contribute growth-enabling functionalities in 25 NGG mutants, which is likely due to the recent systematic reconciliation of the latest reconstruction of E. coli with data available in the MetaCyc and EcoCyc databases[30],[33]. This motivates the need to further expand the size of catalogued functionalities( e. g., the increase of experimentally determined enzyme functionalities), and also to supplement these reaction compilations with hypothetical reactions that will serve as missing links to bridge pathway gaps. 
There is already a large body of research focusing on deriving hypothetical reactions by iteratively changing the substrate specificity or cofactor dependence of well-characterized enzymes[36]–[40]. It is important to note that GrowMatch makes use of parsimony criteria to prioritize alternative model correcting hypotheses. Therefore, biologically relevant hypotheses that involve more than the selected maximum allowed limit of model modifications will be missed. Also, using alternate cellular objectives such as MOMA[41] or ROOM[42] instead of maximizing biomass as the objective function may help correct some GNG mutants into NGNG mutants. A recent study by Motter et al.,[43] addresses this concern and defines the corresponding genes as suboptimally essential genes. It would be worthwhile to explore whether, in addition to model modifications, if more elaborate( re) definitions of objective functions[44] may be needed to improve consistency with experimental data. Furthermore, GrowMatch can also be used to reconcile growth prediction inconsistencies across various substrates. To this end, Biolog data[20] for substrate utilization( e. g., carbon, nitrogen, phosphorous and sulphur sources) can be used to propose model modifications that will ensure in silico growth prediction consistency with the available data. In summary, we believe that GrowMatch, in conjunction with GapFill, are useful model-refinement tools during the reconstruction of new metabolic models or testing/curation of existing ones. In addition to the use of GrowMatch to restore growth inconsistencies for the latest E. coli model presented here, our group has recently used it( Suthers 2008, accepted) during the construction phase of the genome-scale metabolic model of Mycoplasma genitalium iPS189.",
"role": "user"
},
{
"content": "Over the past decade , mathematical models of cellular metabolism have been constructed for describing existing metabolic processes . The gold standard for testing the accuracy and completeness of these models is to compare their cellular growth predictions ( i . e . , cell life/death ) across different scenarios with available experimental data . Although these comparisons have been used to suggest model modifications , the key step of identifying these modifications has often been performed manually . Here , we describe an automated procedure GrowMatch that addresses this challenge . When the model overpredicts the metabolic capabilities of the organism by predicting growth in contrast with experimental data , we use GrowMatch to restore consistency by suppressing growth enabling biotransformations in the model . Alternatively , when the model underpredicts the metabolic capabilities of the organism by predicting no growth ( i . e . , cell death ) in contrast with available data , we use GrowMatch to restore consistency by adding growth-enabling biotransformations to the model . We demonstrate the use of GrowMatch by reconciling growth prediction inconsistencies of the latest Escherichia coli model with data available at the Keio database . Despite the highly curated nature of the Escherichia coli model , GrowMatch identified and resolved a large number of model prediction inconsistencies by taking advantage of available compilations of experimental data .",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.2057 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: We propose to attack a machine learning model using the alternating direction method of\n\nKeywords: black-box adversarial attacks\n\nDataset Information:\nXCOPA: The Cross-lingual Choice of Plausible Alternatives (XCOPA) dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al.\nMNIST-1D: A minimalist, low-memory, and low-compute alternative to classic deep learning benchmarks. The training examples are 20 times smaller than MNIST examples yet they differentiate more clearly between linear, nonlinear, and convolutional models which attain 32, 68, and 94% accuracy respectively (these models obtain 94, 99+, and 99+% on MNIST).\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nWMCA: The Wide Multi Channel Presentation Attack (WMCA) database consists of 1941 short video recordings of both bonafide and presentation attacks from 72 different identities. The data is recorded from several channels including color, depth, infra-red, and thermal.\n\nAvailable Datasets: A: XCOPA B: MNIST-1D C: ImageNet D: WMCA",
"role": "user"
},
{
"content": "ImageNet",
"role": "assistant"
}
] |