Commit · d0798d1
1 Parent(s): 86b0483
Upload 77/76/methresdmie_NBK569735.jsonl with huggingface_hub
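For reference, a commit like this one can be produced programmatically with the huggingface_hub client. The sketch below is illustrative only: the local file name and repo_id are placeholders (the commit page does not show them); only path_in_repo and the commit message are taken from this commit.

from huggingface_hub import HfApi

# Minimal sketch of the upload recorded in this commit.
# The local path and "<namespace>/<dataset-name>" are assumptions.
api = HfApi()
api.upload_file(
    path_or_fileobj="methresdmie_NBK569735.jsonl",      # local file (assumed name)
    path_in_repo="77/76/methresdmie_NBK569735.jsonl",    # path shown in this commit
    repo_id="<namespace>/<dataset-name>",                # placeholder dataset repo
    repo_type="dataset",
    commit_message="Upload 77/76/methresdmie_NBK569735.jsonl with huggingface_hub",
)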
77/76/methresdmie_NBK569735.jsonl
ADDED
@@ -0,0 +1,6 @@
+{"file": "methresdmie_NBK569735/ch3.nxml", "text": "Literature Review\nThe initial literature search identified 1,458 articles for review; after title and abstract screening 208 references remained. After full-text screening, 73 articles were included in the final review. Of these, the workgroup members judged 44 to be relevant. The articles included methodology framework papers, systematic reviews, commentaries, opinion pieces, consensus documents, and qualitative studies. The reviewed documents are summarized in Appendix B. The relevant literature was classified into the following five categories:\nTypes and definitions of insufficient evidence (n=5): these papers differentiated insufficient evidence due to\nhaving no evidence;\nimprecision (small number of events and large variance);\nother reasons leading to an inability to make a decision despite being able to estimate effect size.\nFrameworks for rating insufficient evidence (n=2): these papers identified frameworks and definitions of insufficient evidence, such as the United States Preventive Services Task Force (USPSTF) approach.11\nDecisionmaking with insufficient evidence (n=27): these papers discussed\nchallenges of decisionmaking in light of insufficient evidence;\nthe difficulty of identifying relevant evidence to guide decisions in children, rare diseases, and in primary care settings;\npatients\u2019 dissatisfaction with uncertainty;\ndecisionmaking using local health system data (such as data from electronic medical records) when the evidence was insufficient;\nhealthcare providers\u2019 preference for having a recommendation about management even if the evidence was insufficient to support the recommendation.\nEvidence synthesis and insufficient evidence (n=1): this paper suggested providing additional interpretation and extrapolation by content experts, which was labeled on one occasion \u201crigorous speculation,\u201d to improve evidence uptake when the evidence is insufficient.4\nOther themes (n=9): these papers discussed\ninterpretation of p-values;\nvalue of cost effectiveness analysis;\nthe unique setting of coverage decisions;\nmaking guideline recommendations using GRADE that are strong recommendations based on weak evidence.12\nReview of Systematic Reviews\nNine EPCs analyzed 43 purposefully selected systematic reviews. These reviews are summarized in Appendix C. Of these, 29 (67%) were commissioned by the AHRQ EPC Program. Other entities that commissioned reviews included international not-for-profit organizations (e.g., the World Health Organization), state (e.g., State of Washington) and federal entities (e.g., Department of Defense, Veterans Administration), professional societies (e.g., the American College of Rheumatology, American Society of Hematology, Endocrine Society, and Kidney Disease: Improving Global Outcomes), and academic health systems. The American Academy of Child and Adolescent Psychiatry and the USPSTF were most commonly reported as end-users.\nThe reviews varied in terms of how the decisional dilemma was phrased. Many statements of the decisional dilemma were specific and included a clear statement of the population, intervention, and outcome (e.g., \u201cShould disulfiram be recommended to reduce alcohol use or increase abstinence in adolescents with alcohol use disorder?\u201d). A few reported a decisional dilemma but phrased it in broader terms (e.g., \u201cTo screen or not to screen\u201d).\nDefinition of the Term \u201cInsufficient\u201d\nIn the 43 systematic reviews, the use of the term \u201cinsufficient\u201d covered both instances of no evidence (i.e., no studies directly evaluated the population, intervention, or outcome of interest; or no studies directly evaluated the research question) and instances of evidence being present but insufficient to estimate an effect (or to make a conclusion). Although the reasons for rating evidence as insufficient frequently pertained to imprecision and lack of evidence, other reasons were high risk of bias in individual studies and inconsistency among evaluated studies.\nAdding Information Sources to Address the Decisional Dilemma in the Context of Insufficient Evidence\nTwenty-six (60%) of the reviews included specific solutions to help address the decisional dilemma in the context of insufficient evidence. In 21 (49%) reviews, the finding of insufficient evidence was anticipated by the review team (e.g., the systematic review was an update of a previous one, or scoping the literature indicated lack of evidence).
When authors found insufficient evidence for specific outcomes, one set of solutions included summarizing the evidence for ineligible populations, interventions (i.e., indirect evidence), and study designs in the discussion section of the report. A second solution involved providing information on other important factors (not addressed as Key Questions) that may be used to inform decisionmaking; these were addressed as contextual questions. A third solution involved revising the review criteria to include observational study designs.\nSystematic reviewers offered examples of how they summarized results outside the review parameters. For example, a review on screening and early treatment for asymptomatic peripheral artery disease with the ankle-brachial index included limited data on screening accuracy in their target population (asymptomatic general population).13 To address this, in the discussion section of the report, they supplemented the review with a summary of the larger evidence base on screening accuracy in populations symptomatic for peripheral arterial disease, recognizing that extrapolation may exaggerate the diagnostic accuracy.13 As an example of interventions outside the scope of the review, an evaluation of screening for atrial fibrillation with electrocardiograms included information on screening with electrocardiograms for atherosclerotic cardiovascular disease.14\nRegarding presenting contextual information, several examples came from reviews conducted for the USPSTF, which has an established process for summarizing and using contextual information for conceptual bounding of benefits and harms.11 For example, this process might involve presenting information on incidence of the harm in a narrower population, so that the incidence in the general population (with lower risk) can be inferred as very low. Other contextual information pertained to the \u201cdiagnostic odyssey of the families\u201d for whole exome sequencing15 and the \u201cpenetrance/prevalence\u201d of multigene panels for hereditary breast cancer risk assessment test.16 Some reviewers described compiling contextual information in a rigorous manner (relying on systematic reviews and targeted searches); two specifically mentioned clinical judgment and expert opinion.\nMaking Recommendations Despite Insufficient Evidence\nSeveral systematic reviews sponsored by guideline developers described their end-users as making recommendations in the context of insufficient evidence. In these instances, contextual information supported recommendations for or against the treatment. In a review of whole exome sequencing, the end-user \u201cacknowledged the limitations in the evidence, but in the end recognized the very specific contextual circumstances regarding use of this test, and felt that there was enough of a signal of potential benefit, without significant harms and with some evidence of cost-effectiveness that it could be reasonable to use by qualified professionals (i.e. medical geneticists) according to specified criteria for use.\u201d15 Another example was a recommendation against screening asymptomatic patients with sickle cell disease with pulmonary function tests.17 The evidence supporting benefits was deemed insufficient but considering the cost and inconvenience of pulmonary function testing, particularly in children, the panel made a recommendation against testing.\nAudit of Technical Briefs\nA total of 21 technical briefs were included in this audit. 
A full listing of the 21 reports included in this audit and a summary data abstraction table can be found in Appendix D. Sixteen (76%) discussed information from Key Informant (KI) interviews within the findings of their report, rather than using the interviews simply to assist topic scoping. KI interviews were used to summarize contextual or implementation factors and to develop conceptual frameworks or tables of theoretical advantages and disadvantages of an intervention. Two technical briefs reported using formal qualitative research methodology to analyze content from KI interviews.18,19\nMany technical brief questions focused on practical aspects of decisional dilemmas, such as the context in which interventions have been studied, implementation strategies for various interventions, or barriers and facilitators affecting implementation. Fifteen (71%) included a Guiding Question pertaining to contextual or implementation factors. For example, the report \u201cCharacteristics of Existing Asthma Self-Management Education Packages\u201d interviewed KIs to identify practical barriers to effective implementation of chronic asthma self-management/education packages.6 KIs revealed the need to consider factors well beyond the instructional content of these packages, such as patient demographics, health literacy levels, and access issues such as the availability of health insurance coverage. Another technical brief that assessed the available evidence regarding strategies for comprehensive health management for women with HIV/AIDS used input from clinicians and social workers to devise a series of clinical vignettes to illustrate complex psychosocial and other contextual issues commonly encountered in the care of these patients.20\nTechnical briefs also often include analyses of data sources beyond what is typically included in systematic reviews. For example, \u201cPharmacologic and Nonpharmacologic Treatments for Posttraumatic Stress Disorder: Groundwork for a Publicly Available Repository of Randomized Controlled Trial Data\u201d8 and \u201cTreatment for Acute Pain: An Evidence Map\u201d9 both included clinical practice guidelines. \u201cSkin Substitutes for Treating Chronic Wounds,\u201d7 \u201cCharacteristics of Existing Asthma Self-Management Education Packages,\u201d6 and \u201cDecision Aids for Advance Care Planning\u201d21 all abstracted characteristics of commercially available products from industry web pages. The review \u201cImpact of Community Health Worker Certification on Workforce and Service Delivery for Asthma and Other Selected Chronic Diseases\u201d10 included reports published by state and county health officials on certification requirements posted on state health department web pages.\nData sources for evaluating the representativeness of the initial decisional dilemma in report scoping and subsequent end-user satisfaction were not as readily available as initially anticipated. Additional efforts are being made outside of this workgroup by the SRC and AHRQ to more easily link topic nomination decisional dilemma details and feedback with eventual published reports. Published disposition of comments tables for technical briefs were the most readily available proxy measure, but they should be interpreted cautiously, more as clues for future improvements than as sources to identify problems with past methods.\nOf the 16 technical briefs that included KI information within the findings of their report, 8 had requests from peer reviewers suggesting that the samples may not have been representative of all relevant stakeholders (6 reports) or that not all relevant information came out of the interviews (2 reports), which could be related to an insufficiently representative sample of KIs. Formal qualitative methods rely on representative samples and utilize interview techniques to reach thematic saturation. Since technical briefs did not always reach thematic saturation, more formal qualitative methods may be useful in providing findings that include the content of importance to end-users. Peer reviewers asked for patient advocates or front-line clinicians to be included as KIs, and often questioned the selection process for KIs.\nPeer reviewers often asked for an expanded scope of the technical brief to include costs, additional populations, interventions, or settings.", "pairs": [], "interleaved": []}
+{"file": "methresdmie_NBK569735/ch2.nxml", "text": "A workgroup comprised members from 9 EPCs (RTI International-University of North Carolina, Mayo Clinic, ECRI Institute-Penn Medicine, University of Minnesota/Minneapolis VA, Brown University, Kaiser Permanente, Southern California/RAND, Johns Hopkins University, University of Connecticut), AHRQ, and the Scientific Resource Center (SRC). We met twice monthly for 10 months and gathered additional input from the wider EPC Program at two virtual meetings. This report draws on three sources of data: a literature review, a review of a purposive sample of EPC reports that identified insufficient evidence, and an audit of EPC technical briefs. Technical briefs were reviewed because they are often prepared for topics where there is anticipated to be a small body of direct evidence. Technical briefs may answer foundational and definitional questions in addition to (or instead of) questions of benefits and harms and may use a variety of information sources to guide report writing or support decision making. Strategies were subsequently identified and suggested based on the three data sources and iterative discussions among the workgroup members.\nLiterature Review\nThe SRC staff librarian conducted two literature searches (see Appendix A) to identify articles describing insufficient evidence in terms of: (1) how it was defined or acted on in decision making or guideline development and (2) how different audiences might react to the term \u201cinsufficient.\u201d The research librarian conducted a first-pass abstract and title screening to exclude irrelevant references. The two workgroup leads screened the remaining citations to select articles for inclusion. The workgroup members evaluated the included full-text papers for relevance and extracted pertinent information into a standardized form. The two workgroup leads categorized the articles into five topic areas: (1) types and definitions of insufficient evidence, (2) existing frameworks to rate insufficient evidence, (3) decision making in the face of insufficient evidence, (4) evidence synthesis and insufficient evidence, and (5) other miscellaneous themes. The workgroup organized results thematically and used these findings to identify potential recommendations.\nReview of Systematic Reviews\nTo uncover how EPCs currently classify and present insufficient evidence ratings, we reviewed a purposive sample of systematic reviews previously published by the EPCs. Twelve EPCs were asked to select at least five examples of systematic reviews, either AHRQ funded or non-AHRQ funded, that were completed during the last 5 years, in which at least one Key Question had insufficient evidence. Systematic reviews with a specific sponsor or stakeholder were prioritized because these reflect scenarios in which the review was most likely carried out to directly inform decisionmaking.\nFor each included review, workgroup members (or a volunteer from the EPC if the EPC was not already represented on the workgroup) extracted information pertaining to the decisional dilemma addressed by the review, whether the insufficient evidence rating was anticipated at the start of the review, reasons for insufficient evidence ratings, any approaches used to address the insufficient evidence and help decision makers act on the evidence, and whether the main stakeholder of the review made any recommendations based on the insufficient evidence.\nAudit of Technical Briefs\nMembers of the SRC extracted information from 21 technical briefs accepted for publication by the AHRQ EPC Program between May 2014 and February 2020. Technical briefs often combine multiple sources of information (Key Informant [KI] interviews, published literature, grey literature, audits of commercially available products) and they consider practical aspects of implementing various clinical or quality improvement interventions.\nAside from traditional evidence sources, such as systematic reviews and primary literature, many briefs also explored sources of data not typically used in systematic reviews, such as websites of commercially available products,5\u20137 clinical practice guidelines,8,9 and numerous sources of grey literature (like reports published by state and county health officials).10\nTherefore, we reviewed data sources and analytic approaches that have been used in technical briefs. Our review also reported the methods used in technical briefs when the questions summarized contextual and implementation information. As a proxy to determine whether the intended audience was satisfied with the report and methods used, the SRC examined peer and public review comments on the draft technical brief. The SRC also considered summaries from the topic nominator to better elucidate the decisional dilemma faced by the end-user prior to commissioning the technical brief. From the technical brief reports, peer and public comment summaries, and nomination summaries, the SRC extracted information pertaining to the reports\u2019 decisional dilemma, a subjective determination of how well the technical brief research questions directly addressed that dilemma, evidence synthesis methods, and whether peer and public reviewers recommended substantial changes to the synthesis, conduct, or framing of the report. The SRC and EPC workgroup then reviewed the extracted data to identify common themes.\nWorkgroup Discussion and Consensus Process\nThe method for determining consensus on these strategies was informal. We discussed issues until no one voiced disagreement.", "pairs": [], "interleaved": []}
+{"file": "methresdmie_NBK569735/ch1.nxml", "text": "Systematic reviewers synthesize a body of evidence and rate the strength of evidence available for each eligible outcome based on study limitations, consistency, directness, precision, and additional factors. When criteria are not adequately met, evidence may be rated as \u201cinsufficient.\u201d The phrase \u201cinsufficient strength of evidence\u201d is used by the Agency for Healthcare Research and Quality\u2019s (AHRQ) Evidence-based Practice Center (EPC) Program to indicate, \u201cWe have no evidence, we are unable to estimate an effect, or we have no confidence in the estimate of effect for this outcome. No evidence is available or the body of evidence has unacceptable deficiencies, precluding reaching a conclusion.\u201d1 By contrast, the lowest category of GRADE certainty of evidence ratings is \u201cvery low,\u201d which is defined as, \u201cWe have very little confidence in the effect estimate [for this outcome]. The true effect is likely to be substantially different from the estimate of effect.\u201d2\nThe term \u201cinsufficient\u201d may be interpreted differently by various end-users of the review and may refer to different limitations of a given literature base. In the absence of qualifiers, readers of systematic reviews may conflate the insufficiency of evidence about an effect (e.g., on benefits or harms for a particular population, intervention, comparison, or outcome) with the insufficiency of information to make a decision. Insufficient evidence does not necessarily mean that decisionmakers will not or should not act on the evidence that is available. In fact, healthcare decision makers consider evidence as one of many decisional factors, which may include patient and healthcare provider values and preferences, resources, feasibility, acceptability of the recommended actions,3 and concerns about inaction. When there is no evidence or insufficient evidence on benefits or harms, information on these other factors may be important to summarize for decision makers.4\nA workgroup from the AHRQ EPC Program sought to understand how systematic reviewers can support decision making in the face of insufficient evidence. The workgroup aimed to identify (1) the various ways in which the term \u201cinsufficient evidence\u201d has been used, defined, and understood in the literature; (2) published frameworks for decision-making based on insufficient evidence; and (3) strategies that can be adopted by systematic reviewers to provide additional information to support decision making when facing insufficient evidence. Finally, the workgroup provided recommendations for systematic reviewers on how to handle insufficient evidence during scoping of the topic, developing the protocol, and conducting and reporting the review.", "pairs": [], "interleaved": []}
+{"file": "methresdmie_NBK569735/ch5.nxml", "text": "EPC systematic reviews examine the available evidence on one of the main factors that inform decisions \u2013 that is, the evidence for benefit or harm. However, decision makers must also consider a range of other factors, such as costs, values, preferences, and equity, and the relative weight of these factors may vary depending on the topic or the availability of evidence for benefit or harm. This report summarizes the findings of a workgroup from the EPC Program that sought to understand how systematic reviewers can further support decision making in the face of insufficient evidence for benefits or harms. We identified the various ways in which the term insufficient evidence has been used, defined, and understood in the literature, and what additional strategies can be adopted by systematic reviewers to facilitate decisionmaking in the context of insufficient evidence.\nWe identified several strategies to augment decision making, such as summarizing indirect evidence, summarizing contextual and implementation evidence, modeling, and incorporating unpublished health system data in the review. One key challenge is that appropriate planning and budgeting of a review needs to be done early, alongside conversations with key informants about scope, whereas the determination of insufficient evidence may not be made until late in the review. As such, reviewers should consider options and strategies early and at each stage of the process: during scoping of the review, protocol development, conduct of the review, and reporting of the findings. Even when insufficient evidence is identified later in the review process, some of the proposed approaches described to supplement the review may still be feasible with appropriate protocol amendments.\nIt is important to acknowledge the limitations of the evidence even with the implementation of these strategies, as well as the limitations of our approach to identifying them. These strategies do not \u201cfix the problem\u201d of insufficient evidence, but rather facilitate decision making in the context of insufficient evidence. For example, adding unpublished health system data to a systematic review can improve precision of the estimates and may enhance applicability; however, such data are not peer reviewed and can suffer from various types of bias. We may not have included important strategies because our sample was limited to reports in which EPC investigators were involved, most of which are conducted for guideline groups or governmental agencies. Our evidence about the relative success of these strategies was also indirect and inferred from peer and public comments and not directly from end-users.\nThe examples, strategies, and recommendations in this document apply to the EPC Program and may not apply to other systematic reviewers. Although this project focuses on systematic reviews conducted following a traditional timeline, the approach can be applied to rapid reviews to address urgent clinical issues. The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic serves as an ongoing example of a public health crisis requiring rapid reviews; these reviews may conclude that the evidence is insufficient, yet still provide information to support decision making.32", "pairs": [], "interleaved": []}
+{"file": "methresdmie_NBK569735/gl1.nxml", "text": "AHRQ \u2013 Agency for Healthcare Research and Quality\nEPC \u2013 Evidence-based Practice Center\nGRADE \u2013 Grading of Recommendations Assessment, Development and Evaluation\nKI \u2013 Key Informant\nSARS-CoV-2 \u2013 Severe Acute Respiratory Syndrome Coronavirus 2\nSOE \u2013 Strength of evidence\nSRC \u2013 Scientific Resource Center\nUSPSTF \u2013 United States Preventive Services Task Force", "pairs": [], "interleaved": []}
+{"file": "methresdmie_NBK569735/ch6.nxml", "text": "Systematic reviews commonly examine the evidence on benefits and harms of interventions, but other factors are required for decisionmaking. When the strength of this evidence warrants an insufficient rating, information on these other factors can enhance the utility of systematic reviews for health systems and other stakeholders. We identified five potential strategies: broadening eligibility criteria to other study designs, summarizing indirect evidence, summarizing contextual and implementation evidence, modeling, and incorporating unpublished health system data in the review.", "pairs": [], "interleaved": []}
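Each of the six added lines is a self-contained JSON object with the keys "file", "text", "pairs", and "interleaved", as shown in the diff above. A minimal sketch of reading the file back, assuming only the layout visible in this commit (the local file name is a placeholder):

import json

# Read the JSONL file added in this commit; each non-empty line is one record.
records = []
with open("methresdmie_NBK569735.jsonl", encoding="utf-8") as fh:
    for line in fh:
        line = line.strip()
        if not line:
            continue
        records.append(json.loads(line))

for rec in records:
    # Keys observed in this commit: "file", "text", "pairs", "interleaved".
    print(rec["file"], len(rec["text"]), len(rec["pairs"]), len(rec["interleaved"]))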