title (stringlengths 1–827 ⌀) | uuid (stringlengths 36–36) | pmc_id (stringlengths 5–8) | search_term (stringclasses 18 values) | text (stringlengths 0–8.42M)
---|---|---|---|---|
Can internal medicine specialists diagnose functional somatic disorders (FSDs)? Training and comparison with FSD specialists | 53c27c41-49ca-444b-a819-6145b881a0af | 11244839 | Internal Medicine[mh] | Functional Somatic Disorders (FSDs) are common across medical settings , and recent studies have found a high prevalence of FSD (8–10%) in the general population . Patients with FSD predominantly present in non-psychiatric settings with multiple physical symptoms that may mimic various other physical diseases and therefore present complex differential diagnostic dilemmas. Furthermore, patients with FSD may also have physical and mental comorbidities, which increases the complexity of the diagnostic evaluation. Many patients with undiagnosed FSD are repeatedly referred for diagnostic evaluation by various specialists, leading to overutilization of diagnostic and treatment resources . Thus, some of these patients may be on a seemingly endless “odyssey” of repeated referrals for diagnostic evaluation in subspecialty clinics for years, potentially leading to more chronic disease, psychological distress, lower labor market participation, delay in treatment, and risk of iatrogenic harm from excessive diagnostic procedures/interventions . A wide range of terms have been employed to denote these disorders, such as medically unexplained symptoms, somatoform disorders, and functional somatic syndromes (FSS), including fibromyalgia, irritable bowel syndrome (IBS), chronic fatigue syndrome (CFS) /myalgic encephalopathy (ME), and multiple chemical sensitivity (MCS) . Based on empirical research, and equivalent to the bodily distress syndrome (BDS) research diagnosis, a functional somatic disorder phenotype has been identified . FSDs are characterized by identifiable, persistent and bothersome physical symptom patterns from one (single-organ FSD) or several (multi-organ FSD) of four organ groups: cardiopulmonary (CP), gastrointestinal (GI), musculoskeletal (MS), and general symptoms (GS) group . As with all other clinical diagnoses, relevant differential diagnoses must have been considered. General practitioners find that this patient group is among the most difficult to manage . Many patients with undiagnosed FSD are repeatedly referred for specialized diagnostic evaluation, leading to consternation among some clinicians . From a societal perspective, the FSD patient group is overly costly due to excess use of medical care, lower labor market participation, and lost working years . In Denmark, the Danish Health Authority (Sundhedsstyrelsen) has mandated that FSD cases of mild and moderate severity should be managed in primary care, and only severe FSD cases should be managed at specialized FSD clinics . This implies that most FSD patients have to be diagnosed and treated in primary care with the support of FSD specialists. Since 2008, all trainee General Practitioners (GPs) in Western Denmark have received basic training in diagnosing FSD and communicating/negotiating the diagnosis with FSD patients using The Extended Reattribution and Management (TERM) model . However, many GPs feel inadequate in managing the more complicated cases, especially at the early stage where the diagnosis is still uncertain and physical differential diagnoses have not been excluded. Furthermore, the current general medicine diagnostic centers still have inadequate knowledge of FSD and are thus unable to sufficiently assist GPs in obtaining diagnostic certainty for patients with mild or moderate FSD. 
To address this gap in diagnostic availability, it was decided to set up a Diagnostic Clinic for Functional Disorders (FSD clinic) at the general internal medicine Diagnostic Center (DC) at Silkeborg Regional Hospital. DC Silkeborg encompasses all internal medicine subspecialties, including radiology. As in other diagnostic centers, DC Silkeborg offers a patient-centric, multidisciplinary "same-day diagnosis" approach, where patients receive a comprehensive diagnostic evaluation and results within the same day in order to swiftly identify and address serious medical conditions . The new FSD Clinic at DC Silkeborg is being evaluated in a randomized clinical trial called the DISTRESS Trial . As a novelty at this clinic, internal medicine specialists, after being trained in FSD assessment and patient education for FSD by FSD experts, performed the FSD diagnostic work using a tailored version of the SCAN interview . This approach transfers knowledge from a specialized FSD department, traditionally staffed mainly by psychiatrists, to a physical diagnostic center setting staffed by internal medicine specialists. In this way, patients presenting with physical symptoms at a physical diagnostic center are assessed using a customized SCAN interview similar to those seen at the highly specialized FSD center. To our knowledge, this has never been attempted before, and thus there is a clear gap in our knowledge regarding the feasibility of internal medicine specialists (internists) to diagnose FSD in a clinical setting. In the present study, we aimed to evaluate the feasibility of training internists in carrying out clinical assessments of FSD as well as compare their diagnostic outcomes with those from gold standard interviews by experienced FSD clinicians (FSD specialists). Study population and study design Participants in the DISTRESS Trial were recruited from the Central Denmark Region, comprising ∼1.2 million people. To be included in the DISTRESS Trial, patients had to be referred by their GP for diagnostic evaluation due to symptoms presenting a diagnostic dilemma between well-defined physical diseases and a suspected FSD. Additionally, for inclusion, patients had to be between 18 and 60 years old, speak and read Danish fluently, and to have had suspected symptoms of FSD for at least 6 months and no more than 3 years. Patients were excluded if they had a pre-existing severe chronic physical disease that explained their reduction in level of functioning in daily life, or if they had previously been evaluated at a specialized FSD clinic. Patients were also excluded if they were pregnant, were abusing alcohol or non-prescription drugs, or had an acute or severe psychiatric disorder, such as psychosis, bipolar affective disorder, or severe depression with psychotic symptoms. After inclusion in the DISTRESS Trial, patients were randomized 1:1 to either the intervention group, and subsequently seen at the FSD Clinic at DC Silkeborg by our internists trained in FSD diagnostics and patient education, or to diagnostic as usual, where they were offered an alternative diagnostic evaluation at another pre-existing specialist clinic. The choice of the most relevant alternative diagnostic evaluation was determined by their GP. 
For the present study, 27 consecutive patients from the intervention group of the DISTRESS Trial were invited to a subsequent SCAN Gold Standard interview by an experienced FSD clinician from a specialized FSD clinic with substantial experience in diagnosing FSD including the use of the SCAN interview. The present study started recruitment on 13 May 2020 and was completed on 21 April 2022. Training the internists Seven internists received training from FSD specialists and then carried out clinical patient visits including a tailored SCAN and an FSD-focused interview at the new FSD Clinic at DC Silkeborg. The seven internists were from the internal medicine subspecialties: rheumatology (N = 4), endocrinology (N = 2), and gastrointestinal medicine (N = 1). None of the trainees had had previous training in FSD diagnostics and patient education. Training was carried out by three FSD specialists (psychiatrists) at the specialized department for Functional Disorders at Aarhus University Hospital. The training consisted of a four-day, 26 hour in-person initial residential training course in March 2019, followed by four biannual follow-up 1-day seminars that took place from October 2019 to November 2021 with regular supervision by an FSD specialist. The training schedule was somewhat extended due to the COVID-19 pandemic. Furthermore, the internists attended one SCAN interview performed by an FSD specialist. Additionally, bimonthly Q&A sessions with the principal investigator of this project were held. The training was a multifaceted educational program . The course content included: 1) The assessment, treatment, and management of FSD, i.e., the TERM model with hands-on role play including micro-skill training using professional actors, and 2) An introduction to the principles of SCAN Rating and hands-on training in a tailored version of SCAN. The internists were trained in the entire FSD diagnostic process, including biopsychosocial history-taking, the SCAN interview to identify the positive diagnostic criteria for FSD and to identify psychiatric or general medicine comorbid/differential diagnostic conditions, as well as in the FSD communication approach using TERM . The trainees received subsequent training and calibration through consensus conversations with FSD specialists as described below in the “SCAN re-interview procedure” section . Instruments Prior to randomization in the DISTRESS trial, participants filled out a questionnaire, including a battery of patient-reported outcomes . Specifically, the battery included the BDS Checklist , Whiteley-6R health anxiety index, and several symptom checklists: SCL-8 , SCL-4anx , and SCL-6dep . A tailored version of the semi-structured online-based computer-assisted SCAN 3.0 for use by internists for FSD diagnostics in a physical diagnostic setting was used. The SCAN version 2.1 is among the most widely used and broadly validated instruments for diagnosis in neuropsychiatry. The SCAN 3.0 is a new and updated version of SCAN and includes, i.a., a second section on functional disorders, physical disease, and health anxiety as well as on conversion and dissociative disorders that has been developed to replace the old section on somatoform and related conditions. The online-based electronic version was recently developed. The tailored version is a reduced version of the full SCAN 3.0, e.g., within affective disorders it only includes the key symptoms of depression and within anxiety only the screening questions, and psychotic disorders are not included. 
After symptom rating by the clinician rater, the SCAN 3.0 software algorithm identifies any FSD as well as FSD single- vs. multi-organ if present. The SCAN 3.0 software also identifies somatoform disorders as well as specialty-specific syndrome diagnoses (FSS) such as fibromyalgia, irritable bowel disease, and CFS/ME using various criteria including the CDC and Oxford criteria. SCAN re-interview procedure The reinterview, based on the gold standard SCAN interview, was conducted by an experienced FSD clinician (FSD specialist) from a highly specialized FSD center as soon as possible after the first interview. The FSD specialist (LG) used the full SCAN interview, excluding the chapters on psychosis. The FSD specialist was blinded to the evaluation of the internist interviews, and she received all relevant clinical data up to the time when the internist had seen the patient. The reinterview consultation lasted three hours and had almost the same format as the initial diagnostic visit with the internist: one hour for preparation including review of patient records, information from the GP, and the baseline questionnaire followed by the patient visit, in which one hour was dedicated to biopsychosocial history taking and one hour to the SCAN interview . As at the initial diagnostic visit, a diagnosis of FSD was made where applicable and documented based on the SCAN interview, informed by the SCAN software algorithm’s output. Subsequently, a consensus discussion was held between the “primary” SCAN interviewer and the FSD specialist. After the FSD specialist had documented and revealed her final SCAN interview conclusion, the evaluation including the conclusion from the internist SCAN interview was unblinded, and a consensus conversation was held between the FSD specialist and the internist. Statistical analysis The FSD specialist’s SCAN interview outcome was regarded as the gold standard, and comparisons were made examining the extent of agreement between the FSD specialist and the internists. A two-level hierarchy of diagnostic boundaries was applied for the evaluation of the accuracy of the diagnostic outcomes of the internists’ interviews. The first level was a two-fold classification: absence or presence of FSD, and the second level was single-organ vs. multi-organ FSD. At each level, 2x2 tables were constructed. We measured the agreement between the internists and the FSD specialist by inter-observer agreement as well as by Cohen’s Kappa . Landis and Koch’s division into classes of agreement was used for interpretation of interrater reliability : 0.00–0.20 slight agreement, 0.21–0.40 fair agreement, 0.41–0.60 moderate agreement, 0.61–0.80 substantial agreement, and 0.81–1.00 near perfect agreement. We also calculated the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the internist ratings versus the assessments by the FSD specialists. For all statistical measures calculated, 95% confidence intervals were constructed. All analyses were performed in Stata 17.0 for Windows (StataCorp LLC, College Station, USA). Trial registration and ethical approval The DISTRESS Trial has been registered at clinicaltrials.gov with ID NCT06025617 . The Scientific Ethics Committees for the Central Denmark Region concluded in November 2018 that the project did not require ethical approval from the committee. 
Regarding data protection, we have sought and received approval (case number 1-16-02-160-19) from the Data Protection Authority in the Central Denmark Region. Written informed consent for inclusion in the study was obtained from each patient prior to inclusion. 
Inclusion In total, 27 consecutive patients from the intervention group of the DISTRESS Trial were invited to participate in gold standard interviews. The median age of the 27 participants was 34 years (range 18–64), and 23 of the participants (85%) were female. All 27 agreed to participate, and hence all 27 participants completed interviews by both an internist and the FSD specialist. Evaluation of the SCAN interviews by the internists as a diagnostic test The results of the comparison of the 27 gold standard interviews with the primary SCAN interviews by the internists are shown in . The FSD specialist found that 24 out of 27 patients assessed had any FSD (positive rate 0.89), and 16 out of the 23 patients rated as having any FSD by both the internist and the FSD specialist had multi-organ FSD (positive rate 0.70). At the 1st diagnostic level of classification (any FSD vs. no FSD), the internists were very good at “detecting” FSD and quite good at “ruling out” FSD, having missed only one positive and one negative of the 27 patient cases compared with the gold standard interviews by the FSD specialist. At the 2nd diagnostic level of classification (single- vs. multi-organ FSD), the internists rated all the single-organ FSD cases as single-organ but misclassified three of the 16 multi-organ FSD cases as single-organ FSD compared with the gold standard interview. shows the interrater agreement, which we found to be substantial, with kappa values of 0.63 (0.15–1.00) and 0.73 (0.45–1.00) for any FSD vs. no FSD and single-organ vs. multi-organ FSD, respectively. However, the confidence intervals were quite wide (especially for the 1st diagnostic level). 
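For readers who wish to verify the reported figures, the sketch below re-derives the agreement statistics from the 2x2 counts implied by the text (any FSD: 23 concordant positives, 1 missed positive, 1 false positive, 2 concordant negatives; multi- vs. single-organ FSD among the 23 concordant FSD cases: 13 concordant multi-organ, 0 false multi-organ, 3 missed multi-organ, 7 concordant single-organ). This is illustrative Python, not the authors’ Stata code, and the simple large-sample confidence-interval approximation shown here may differ from the method used in the paper.

```python
# Illustrative re-computation of Cohen's kappa and the diagnostic accuracy
# measures reported above; counts are reconstructed from the text, not taken
# from the study tables.
import math

def kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 table.
    a = both raters positive, b = internist positive / specialist negative,
    c = internist negative / specialist positive, d = both raters negative."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    k = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / n) / (1 - pe)            # rough approximation
    return k, max(k - 1.96 * se, -1.0), min(k + 1.96 * se, 1.0)

def diagnostic_metrics(tp, fp, fn, tn):
    """Internist ratings as the index test, FSD specialist as gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Level 1: any FSD vs. no FSD
print(kappa_2x2(a=23, b=1, c=1, d=2))             # kappa ~= 0.63, as reported
print(diagnostic_metrics(tp=23, fp=1, fn=1, tn=2))

# Level 2: multi- vs. single-organ FSD among the 23 concordant FSD cases
print(kappa_2x2(a=13, b=0, c=3, d=7))             # kappa ~= 0.73, as reported
print(diagnostic_metrics(tp=13, fp=0, fn=3, tn=7))
```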
This feasibility study suggests that internists can indeed reliably diagnose FSD in a diagnostic center setting. A tailored SCAN interview was used to support the clinical diagnostic process and was taught in a 30-hour intensive training course followed by supervision by a specialist in FSD. We found that internists in a non-psychiatric diagnostic center setting, after receiving training in FSD diagnostics and patient education for FSD, were good at identifying FSD, with substantial agreement compared with FSD specialists. To our knowledge, this is the first time internists have been systematically trained in how to identify FSD using the SCAN interview as a diagnostic aid for clinical FSD diagnostics. The substantial interrater agreement is perhaps even more remarkable given that we not only transferred the ability to conduct SCAN from one group of specialists to another, but also from a highly specialized clinical setting to a general internal medicine diagnostic center in the secondary sector. There was a high positive rate of FSD among our interview subjects, which was expected, since patients were recruited from the DISTRESS Trial, in which subjects are included after being referred to our clinic by their GP on suspicion of FSD. Due to the low number of interviews and the high positive rate of FSD in our study sample, we obtained wide confidence intervals on our measurements of agreement; however, even considering this uncertainty, the interrater agreement between the trained internists and the FSD specialist was substantial. The extent of agreement achieved at both levels of the FSD diagnostic hierarchy is on par with that found in the WHO PSE-10 SCAN field trials. 
When considering any generalization to other diagnostic centers, we should highlight that the internists in this study had substantial time allocated to each clinical visit for the interviews (3–4 hours per patient), equivalent to the time dedicated at the highly specialized FSD department in Aarhus. Not one case of single-organ FSD was misclassified as multi-organ FSD; however, in three of 16 instances where the internist concluded single-organ FSD, the gold standard interviewer concluded multi-organ FSD. These misclassifications occurred throughout the study period and thus cannot be ascribed to a learning curve. The misclassifications were not associated with individual internist subspecialties, since symptoms from the cardiac symptom cluster were overlooked in two of the cases, whereas general symptoms were overlooked in the final case. A possible explanation could be that the internists were more prone to attribute specific symptoms to causes other than FSD, or that they were more likely to overlook some FSD symptoms. Furthermore, although the FSD specialist and the internists rated the same period in the SCAN interviews, patients’ responses may have differed between the interviews due to repeated questioning, increased symptom awareness, or recall bias arising from the time that passed between the two interviews. In any test/retest study, the amount of time elapsed between the two interviews can reduce concordance between the test and retest interview. The median time between the two interviews in our study was 42 days (IQR 23–52) due to scheduling constraints of the FSD specialist as well as interruptions caused by COVID restrictions during the study period. However, this time delay arguably lends more credence to our finding of substantial interrater agreement, since if the time delay did introduce a bias, it would be a bias towards greater disagreement between the two interview results. The strengths of this study include the prospective design, in which all invited patients agreed to participate and completed both interviews, increasing the validity of our findings. Furthermore, the internists’ diagnoses were compared to an established gold standard of FSD specialist assessments. Also, the study was undertaken in a non-psychiatric clinical diagnostic center setting and thus arguably has relevance to clinical practice in similar settings. A notable limitation of this study is the feasibility of dedicating 3–4 hours per patient in one day as practiced in our study. While this approach aligns with the same-day diagnostic models already established at DC Silkeborg and at other diagnostic centers, it may be difficult to implement in all clinical settings. However, the clinicians at the FSD clinic assessed that shortening the duration of each visit was not feasible and argued that the extensive time spent was entirely necessary. They noted that reducing the time per visit would require patients to attend multiple sessions, resulting in an equal or greater overall time commitment. The cost-effectiveness of this comprehensive visit model is currently being evaluated in the DISTRESS Trial; however, adapting this model to conventional practice elsewhere may require significant adjustments in scheduling and resource allocation. The present study is also limited by the small sample size (N = 27). Thus, to fully evaluate the validity of our approach, a larger study would be needed. 
Furthermore, since all patients in this study were recruited after being referred by their GP on suspicion of FSD, the pretest positive rate in the included population was high. In summary, based on these data we cannot draw conclusions about generalizability to other clinical settings or to patient populations not specifically referred by their GP on suspicion of FSD. The results suggest that with proper training and sufficient time per patient, internists can reliably diagnose FSD using a tailored version of the SCAN interview in a general physical diagnostic center setting. S1 Dataset (XLSX) |
Effects of an Exercise and Lifestyle Education Program in Brazilians living with prediabetes or diabetes: study protocol for a multicenter randomized controlled trial | 64ddc6f5-e4fc-4527-8f54-b03755ee646f | 11492483 | Patient Education as Topic[mh] | Note: the numbers in curly brackets in this protocol refer to SPIRIT checklist item numbers. The order of the items has been modified to group similar items (see http://www.equator-network.org/reporting-guidelines/spirit-2013-statement-defining-standard-protocol-items-for-clinical-trials/ ).
Background and rationale {6a} While the global prevalence of diabetes continues to rise, there are indications that some high-income countries (HIC) are experiencing a stabilization or even a decline in new cases . This trend is likely attributable to effective preventive measures and comprehensive public health education programs . In contrast, diabetes rates among adults in Brazil are on a steady upward trajectory . Additionally, the incidence of prediabetes is showing a marked increase on a global scale . This contrast underscores the critical importance of sustained efforts in public health strategies tailored to different socioeconomic contexts. The functional capacity and cardiac autonomic control are impaired in patients with diabetes , and the former is related to poor prognosis due to increased cardiac risk and mortality . The mechanisms that explain the impact of diabetes on functional capacity are not completely known . The harmful effects of hyperglycemia on muscle strength and resistance, as well as other factors such as impaired glucose metabolism, long-term complications, and comorbidities, could contribute to the reduction of functional capacity in patients with diabetes . Therefore, diabetes requires continuous and comprehensive medical care to prevent and manage its acute and long-term complications, which may affect the quality of life and patients’ daily physical capacity . Diabetes treatment involves the use of medications and lifestyle modifications, including self-management education and support, medical nutrition therapy, and physical exercise to achieve glycemic control . Although it is well known that physical exercise contributes to blood glucose control and the reduction of cardiovascular risk factors , compliance remains a challenge. It demands overcoming several barriers to engage in a healthy lifestyle, such as being active . According to data from the Surveillance System of Risk and Protection Factors for Chronic Diseases by Telephone Survey, adults with diabetes in Brazil do not get enough physical activity . In this context, patient education is a fundamental component of diabetes care due to the effectiveness of educational interventions in promoting lifestyle change , and the positive association between disease-related knowledge, treatment adherence, and a healthy lifestyle . Although diabetes guidelines recommend patient education as a component of diabetes care , educational interventions for individuals living with or at risk of this condition have been scantly investigated in low- and middle-income countries (LMIC) such as Brazil . The impact of patient education on behavior change has not been frequently considered in these settings, with most studies on this topic being carried out in high-income countries . Additionally, there is a lower adherence to self-care behaviors by patients with diabetes in LMIC compared to their peers in HIC which could be explained by the limited access to healthcare services . A patient education program tailored to Brazilians living with diabetes and prediabetes was developed drawing from the Diabetes College curriculum. Originally developed in English as part of the Diabetes Program at the Toronto Rehabilitation Institute in Canada, this curriculum has been validated by prior studies showing significant improvements in disease-related knowledge , physical activity , food intake , exercise self-efficacy , and health literacy . 
Given that physical exercise is recommended by the Brazilian guidelines to prevent and treat diabetes , the patient education program for Brazilians was combined with an exercise program (Exercise and Lifestyle Education Program) to investigate whether patient education can promote better outcomes than physical exercise alone. The feasibility, acceptability, and initial efficacy of the Exercise and Lifestyle Education Program were evaluated through a randomized pilot trial . Additionally, its feasibility for remote delivery in an internet-based format was demonstrated . Drawing from the results of the randomized pilot trial , we hypothesize that the Exercise and Lifestyle Education Program will yield significantly superior outcomes compared to exercise alone for individuals with prediabetes and diabetes in our setting. Objectives {7} The purpose of this multicenter randomized controlled trial is to pragmatically investigate the effects of an Exercise and Lifestyle Education (ExLE) program compared to an Exercise Program (Ex) on functional capacity, disease-related knowledge, health behavior, and cardiometabolic health parameters in individuals with prediabetes and diabetes living in Brazil. Furthermore, program adherence, satisfaction with the program, quality of life, depression, diet quality, and 6-month related diabetes morbidity will also be investigated. Trial design {8} This is a double-blinded (both outcomes’ assessors and data analysts) multicenter randomized controlled trial featuring two-arm parallel groups over 9 months (comprising a 12-week intervention period followed by a 6-month follow-up) with a 1:1 allocation ratio that follows the SPIRIT reporting guidelines .
Study setting {9} This study will be conducted in Juiz de Fora and Belo Horizonte, two cities in Minas Gerais, a state in southeastern Brazil. Belo Horizonte, the state capital and largest city, is located in its Central Region, while Juiz de Fora, although medium-sized, holds the distinction of being the largest city in the Mata Region of Minas Gerais. Eligibility criteria {10} Individuals of both sexes were eligible to participate in the study if they met all the following inclusion criteria and did not meet any exclusion criteria (Table ). Potential trial participants were invited to self-report on the inclusion criteria via a Google digital form. From the responses, those who appeared eligible were invited for the baseline assessment, during which all the eligibility criteria were confirmed by the research team before data collection started. Cognitive impairment was assessed on-site using the six-item screener prior to collecting signatures on the informed consent form. The six-item screener is scored by a simple summation of errors, with individuals making more than two errors (i.e., scoring fewer than four hits) identified as having cognitive impairment. Participants who present with cardiac conduction or rhythm disturbances (e.g., atrial or ventricular ectopy and atrioventricular or ventricular block) or who have changes in cardiovascular medication prescription during the study will be excluded from the cardiac autonomic control assessment. Informed consent {26a} Research team members, previously trained in good clinical practice and in the study protocol, screened the eligibility of potential trial participants, enrolled those with confirmed eligibility, explained the study procedures, and collected signatures on the informed consent form from each participant. Additional consent provisions for collection and use of participant data and biological specimens {26b} This trial will not collect biological specimens; therefore, no biological specimens will be stored or utilized for research purposes. All data collection procedures are outlined in the informed consent form.
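As a minimal illustration of the screening rule described above, the helper below scores the six-item screener by summing errors and flags more than two errors (i.e., fewer than four correct answers) as indicating cognitive impairment. The function name and input format are hypothetical and are not part of the study materials.

```python
def six_item_screener_flag(correct_answers):
    """Score the six-item screener from a list of six booleans
    (True = item answered correctly). Returns True if the respondent is
    flagged for cognitive impairment: more than two errors, i.e. fewer
    than four correct answers."""
    if len(correct_answers) != 6:
        raise ValueError("The screener has exactly six items.")
    errors = sum(1 for answered_correctly in correct_answers if not answered_correctly)
    return errors > 2  # equivalently: sum(correct_answers) < 4

# Example: four correct answers -> two errors -> not flagged
print(six_item_screener_flag([True, True, True, True, False, False]))  # False
```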
Explanation for the choice of comparators {6b} Given the established evidence demonstrating the beneficial effects of physical exercise in managing blood glucose levels and reducing cardiovascular risk factors for individuals with diabetes, as well as the national recommendations to exercise regularly to prevent and treat diabetes, this study will employ an exercise intervention for the control group. Intervention description {11a} The study interventions could be delivered on-site or remotely. Before randomization, the enrolled participants were screened for internet access and technology literacy using an instrument developed by the researchers based on best practices in digital health literacy (see Additional file 1). Participants who answered “yes” to all questions were able to choose their preferred format for receiving the intervention. Those without internet access and/or with limited technology literacy will receive the intervention on-site. Exercise Program The Ex program was developed based on the recommendations of the Brazilian Diabetes Society Guidelines. It will last 12 weeks and consist of aerobic exercise sessions designed to achieve a minimum of 150 min per week, with muscle-strengthening exercises incorporated two to three times per week starting from the fourth week of the intervention. Each exercise session will include a warm-up (stretching), aerobic exercise (moderate-to-vigorous intensity walking according to the modified Borg Rating of Perceived Exertion scale), and muscle-strengthening exercises using the participant’s own body weight and elastic bands (one to two sets of 10–12 repetitions of row, half-squat or leg extension, biceps curl, standing knee flexion, shoulder external rotation, heel raise, wall push-up, lying-down abdominal exercise, and elbow extension, plus 15 s of plank). For participants receiving the intervention on-site, 16 supervised 1-h exercise sessions will be delivered twice a week during the first 4 weeks and once a week from the fifth week onward. In addition, these participants will receive counseling to exercise in the community to accumulate 150 min per week of aerobic exercise and to perform muscle-strengthening exercises 2 to 3 times per week. Participants receiving the intervention remotely will first attend an on-site supervised exercise session to be instructed on the correct execution of aerobic and strengthening exercises to ensure proper performance. After this initial session, the exercise intervention will be delivered through a website specifically developed for the Ex program participants. Additionally, participants will receive weekly reminders about the exercise routine via WhatsApp messages from the research team. Participants will receive guidance on recognizing signs and symptoms of effort intolerance and instructions on how to respond in such situations. They will be asked to record their weekly exercise routine and any exercise side effects in an exercise diary. Additionally, they will be encouraged to self-monitor their heart rate by taking their pulse before and after exercise. Participants who use insulin or secretagogue medications will be instructed to measure capillary blood glucose before and after exercise. Participants with hypertension will be counseled to measure blood pressure before exercise during the first two sessions, or in subsequent sessions if symptoms suggestive of changing blood pressure levels are present. 
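For reference, the exercise prescription described above can be summarized as a simple configuration structure. The sketch below is a hypothetical planning aid written for this description; the field names are this sketch’s own assumptions, while the values are taken from the protocol text. It is not software used in the trial.

```python
# Hypothetical summary of the 12-week Ex program prescription described above.
EX_PROGRAM = {
    "duration_weeks": 12,
    "aerobic": {
        "modality": "moderate-to-vigorous walking (modified Borg RPE)",
        "weekly_minutes_target": 150,
    },
    "strengthening": {
        "starts_in_week": 4,
        "sessions_per_week": (2, 3),
        "sets": (1, 2),
        "repetitions": (10, 12),
        "equipment": ["own body weight", "elastic bands"],
    },
    "supervised_onsite_sessions": {
        "total": 16,
        "weeks_1_to_4": "twice a week",
        "week_5_onward": "once a week",
        "session_length_minutes": 60,
    },
    "self_monitoring": [
        "exercise diary (routine and side effects)",
        "pulse before and after exercise",
        "capillary blood glucose if using insulin or secretagogues",
        "blood pressure before exercise if hypertensive (first two sessions)",
    ],
}
```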
Exercise and Lifestyle Education Program The ExLE program combines the Ex program with a patient education program tailored for Brazilians living with diabetes and prediabetes, with detailed descriptions available in a previously published study. The ExLE program will last 12 weeks and consist of the same procedures described for the Ex program plus eighteen education classes following the schedule of educational sessions of the Diabetes College program in Brazil. For participants receiving the intervention on-site, eighteen 30-min education classes will be delivered twice a week during the first 4 weeks and once a week from the 5th week onward, before or after the supervised 1-h exercise sessions. Participants will receive a printed version of the Diabetes College patient guide. Participants receiving the intervention remotely will first attend an on-site supervised exercise session to be instructed on the correct execution of the aerobic and strengthening exercises to ensure proper performance. Additionally, they will receive orientation on navigating the ExLE program website to access and fill out the exercise and study diaries, as well as to access educational video lessons and other support materials. After this on-site session, both the exercise and patient education components of the program will be delivered through a website developed for this study specifically for ExLE program participants. The educational content will include: eighteen educational video lessons recorded by the research team, lasting approximately 20 min each and based on the Diabetes College program in Brazil; twelve videos related to weekly topics (THRiVE videos integrating chronic disease management and behavior change principles to help develop self-management skills through goal setting and action planning); and a printed version of the Diabetes College patient guide. Additionally, participants will receive weekly WhatsApp text messages from the research team to remind them about the exercise routine, the materials in their lesson plan, and the importance of tracking access to educational content in the study diary. Criteria for discontinuing or modifying allocated interventions {11b} Since study participation is voluntary, participants have the right to withdraw from the study at any time. Upon withdrawal, no further data will be collected from them, although any data previously collected will be preserved in accordance with the terms outlined in the informed consent form. It is important to note that once a participant withdraws from the study, they will not be permitted to re-enter at a later date. Strategies to improve adherence to interventions {11c} In the on-site delivery mode, participants who did not attend the intervention session for a given week will receive a WhatsApp message from the research team to remind them of the scheduled intervention for the following week. In the remote delivery mode, the research team will check participants’ completion of the exercise and/or study diaries to assess their engagement in the exercise and/or education lessons. Participants who show no engagement in the exercise and/or educational lessons for a given week will receive a WhatsApp message reminding them about the exercise and/or study routine. Relevant concomitant care permitted or prohibited during the trial {11d} All participants will receive counseling on exercising in the community to achieve the recommended 150 min per week of aerobic exercise and to perform muscle-strengthening exercises two to three times per week using the elastic bands provided during the intervention. 
Besides walking, participants may choose alternative aerobic exercise modalities such as swimming, running, or biking to meet the aerobic exercise recommendations. Provisions for post-trial care {30} No post-trial care provision will be offered, as no harm is anticipated from trial participation, and participants will not receive compensation. Outcomes {12} Primary outcomes The primary outcomes of this study will include functional capacity and disease-related knowledge. Functional capacity will be measured by the distance covered in meters during the incremental shuttle walk test (ISWT) , while disease-related knowledge will be assessed using the total score of the Brazilian Portuguese version of the DiAbeTes Education Questionnaire (DATE-Q) . The DATE-Q total score, ranging from 0 (indicating no disease-related knowledge) to 20 (indicating the highest level of disease-related knowledge) is obtained by a 20-item questionnaire with response options of true/false/do-not-know. The primary outcomes will be measured during baseline and outcomes assessment appointments. Secondary outcomes The secondary outcomes are divided in health behaviors (health literacy , physical activity level , exercise self-efficacy , adherence to Mediterranean diet , and medication adherence ) and cardiometabolic health parameters (glycemic control , anthropometric characteristics, and cardiac autonomic control ), as described in Table . Health literacy, exercise self-efficacy, adherence to the Mediterranean diet, medication adherence, and all cardiometabolic health parameters will be measured during both the baseline and outcomes assessment appointments. Physical activity levels will be measured (1) over the 7 days following the baseline assessment appointment; (2) over the 7 days leading up to the post-intervention assessment appointment; and (3) over the 7 days following the post-follow-up assessment appointment or seven days leading up to this appointment to those participants who kept with them the pedometer worn in the past outcome assessment appointment. Tertiary outcomes The tertiary outcomes included in this study will be program adherence, satisfaction with the program, diabetes-related morbidity, quality of life, depression, and diet quality. Program adherence will be measured by the attendance rate in the program for each participant. The attendance rate of participants in the on-site delivery mode will be calculated based on the number of education classes and/or exercise sessions attended, divided by the total number of education classes and/or exercise sessions offered. For participants in the remote delivery mode, the attendance rate will be determined by the number of weeks exercise and /or study diaries were filled out, divided by the program’s duration in weeks. Satisfaction with the program will be measured by questionnaires developed by the researchers. For participants in the on-site delivery mode, the level of satisfaction with the patient education program will be assessed by an 11-item questionnaire, and with the exercise program by a 4-item questionnaire. For participants in the remote delivery mode, the level of satisfaction with the patient education program will be assessed by a ten-item questionnaire and with the exercise program by a five-item questionnaire. Diabetes-related morbidity will be assessed through the number and description of acute complications and diagnoses of chronic complications of diabetes, along with the number of hospitalizations associated with diabetes. 
The information will be collected using a three-item tool during the 6-month follow-up period. Quality of life will be evaluated using the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) . The total scores range from 0 (indicating no quality of life) to 100 (indicating the highest quality of life). This is obtained through a 36-item tool distributed across various physical and mental health domains, including physical functioning, social functioning, role–physical, bodily pain, mental health, role–emotional, vitality, and general health. Depression will be measured by the Brazilian version of the Center for Epidemiological Scale—Depression (CESD) . The total score ranges from 0 (indicating no depressive symptoms) to 60 (indicating the highest level of depressive symptoms). This is obtained through a 20-item tool used to rate how often the interviewed experienced symptoms associated with depression over the past week, such as restless sleep, poor appetite, and feeling lonely. Response options range from 0 to 3 for each item (0 = rarely or none of the time, 1 = some or little of the time, 2 = moderately or much of the time, 3 = most or almost all the time). Diet quality will be measured through the consumption of macro and micronutrients. Food consumption will be assessed using an adapted version of a validated Food Frequency Questionnaire (FFQ) for Brazilian individuals with type 2 diabetes , evaluating consumption frequency (daily, weekly, monthly, or yearly), number of portions, and serving size (small, medium, large or extra large). The adaptation of the FFQ was necessary since the individuals in this research have different eating habits than in the validated FFQ study, so regional adaptations were necessary. This adaptation was made based on a food diary on three non-consecutive days, including a weekend day carried out with 65 volunteers with type 1 or type 2 diabetes . The final FFQ consists of 104 food items distributed across eight food groups (“cereals, tubers, roots, and derivatives”; “vegetables and legumes”; “fruits”; “beans”; “meat and eggs”; “milk and dairy products”; “oils and fats”; “sugars and sweets”), as well as 11 items investigating beverages, 4 items investigating oilseeds, and 1 item investigating food supplement. Participant timeline {13} All assessment appointments will be scheduled individually, based on participants’ and research team members’ schedules. After attending the baseline assessment appointment, the participant will be wearing a pedometer for 7 days before starting the intervention. To accommodate all participants enrolled at that time into the timetable of baseline assessment appointment and have their physical activity level measured before the intervention gets started, a period of up to 21 days including the 7-day wearing a pedometer will be allowed between the baseline assessment appointment and the first intervention session. Therefore, the baseline assessment could be carried out between one and 14 days previous to the first intervention session. After the intervention conclusion, the participant will be wearing a pedometer for 7 days before attending the post-intervention assessment appointment. To accommodate all participants into the post-intervention assessment appointment timetable and have their physical activity level measured before this appointment, a period of up to 21 days including the 7-day wearing a pedometer will be allowed between the intervention conclusion and post-intervention assessment appointment. 
Therefore, the post-intervention assessment could be carried out between 7 and 21 days after the intervention conclusion. Upon completing the 6-month follow-up, the participant will be wearing a pedometer for 7 days: (1) after attending the post-follow-up assessment appointment when this appointment is scheduled between 1 and 14 days after the 6-month follow-up completion or (2) before attending the post-follow-up assessment appointment when this appointment is scheduled between 7 and 21 days after 6-month follow-up completion. To accommodate all participants into the post-follow-up assessment appointment schedule and have their physical activity level measured after 6-month follow-up completion, a period of up to 21 days including the 7-day wearing a pedometer will be allowed between the 6-month follow-up completion and post-follow-up assessment appointment. Therefore, the post-follow-up assessment could be carried out between 1 and 14 days after the 6-month follow-up completion. Table shows the participant timeline. Sample size {14} The sample size was calculated considering the ISWT distance as the primary outcome, using parameters derived from a previous study involving educational intervention in cardiac rehabilitation . The calculation was performed using R Software version 3.4.3, with the following parameters: a moderate effect size ( d = 0.20), 80% statistical power, a 5% alpha level, one-sided test, two arms, and three measurements. A total sample size of 200 participants (100 per arm) was determined. Assuming a 20% attrition rate, based on a previous study , 120 participants would be enrolled in each arm to ensure that 200 participants complete the study protocol. Recruitment {15} The research team recruited potential participants through face-to-face interactions at health services in the two cities, as well as phone calls to individuals listed in a database from previous studies conducted by the research group , none of which involved exercise or educational interventions. In addition, social media advertisements were utilized, and the study was disseminated among local healthcare providers through face-to-face interactions and via email among employees of the Federal University of Juiz de Fora (UFJF) and the Federal University of Minas Gerais (UFMG), respectively.
Explanation for the choice of comparators {6b}
Given the established evidence demonstrating the beneficial effects of physical exercise in managing blood glucose levels and reducing cardiovascular risk factors for individuals with diabetes, as well as the national recommendations to exercise regularly to prevent and treat diabetes, this study will employ an exercise intervention for the control group.
Intervention description {11a}
The study interventions can be delivered on-site or remotely. Before randomization, enrolled participants were screened for internet access and technology literacy using an instrument developed by the researchers based on best practices in digital health literacy (see Additional file 1). Participants who answered "yes" to all questions were able to choose their preferred format for receiving the intervention. Those without internet access and/or with limited technology literacy will receive the intervention on-site.
Exercise Program
The Ex program was developed based on the recommendations of the Brazilian Diabetes Society Guidelines. It will last 12 weeks and consist of aerobic exercise sessions designed to achieve a minimum of 150 min per week, with muscle-strengthening exercises incorporated two to three times per week starting from the fourth week of the intervention. Each exercise session will include a warm-up (stretching), aerobic exercise (moderate-to-vigorous intensity walking according to the modified Borg Rating of Perceived Exertion scale), and muscle-strengthening exercises using body weight and elastic bands (one to two sets of 10–12 repetitions of row, half-squat or leg extension, biceps curl, standing knee flexion, shoulder external rotation, heel raise, wall push-up, lying abdominal exercise, and elbow extension, plus 15 s of plank). For participants receiving the intervention on-site, 16 supervised 1-h exercise sessions will be delivered twice a week during the first 4 weeks and once a week from the fifth week onward. In addition, these participants will receive counseling to exercise in the community so as to accumulate 150 min per week of aerobic exercise and to perform muscle-strengthening exercises two to three times per week. Participants receiving the intervention remotely will first attend an on-site supervised exercise session to be instructed on the correct execution of the aerobic and strengthening exercises and to ensure proper performance. After this initial session, the exercise intervention will be delivered through a website specifically developed for Ex program participants. Additionally, participants will receive weekly reminders about the exercise routine via WhatsApp messages from the research team. Participants will receive guidance on recognizing signs and symptoms of effort intolerance and instructions on how to respond in such situations. They will be asked to record their weekly exercise routine and any exercise side effects in an exercise diary. Additionally, they will be encouraged to self-monitor their heart rate by taking their pulse before and after exercise. Participants who use insulin or secretagogue medications will be instructed to measure capillary blood glucose before and after exercise. Participants with hypertension will be counseled to measure their blood pressure before exercise during the first two sessions, or in subsequent sessions if symptoms suggestive of altered blood pressure levels are present.
Exercise and Lifestyle Education Program
The ExLE program combines the Ex program with a patient education program tailored for Brazilians living with diabetes and prediabetes, with detailed descriptions available in a previously published study. The ExLE program will last 12 weeks and consist of the same procedures described for the Ex program, plus eighteen education classes following the schedule of educational sessions of the Diabetes College program in Brazil. For participants receiving the intervention on-site, the 18 30-min education classes will be delivered twice a week during the first 4 weeks and once a week from the 5th week onward, before or after the supervised 1-h exercise sessions. Participants will receive a printed version of the Diabetes College patient guide. Participants receiving the intervention remotely will first attend an on-site supervised exercise session to be instructed on the correct execution of the aerobic and strengthening exercises and to ensure proper performance. Additionally, they will receive orientation on navigating the ExLE program website to access and fill out the exercise and study diaries, as well as to access the educational video lessons and other support materials. After this on-site session, both the exercise and patient education components of the program will be delivered through a website specifically developed for ExLE program participants in this study. The educational content will include: eighteen educational video lessons, recorded by the research team, lasting approximately 20 min each and based on the Diabetes College program in Brazil; twelve videos related to weekly topics (THRiVE videos integrating chronic disease management and behavior change principles to help develop self-management skills through goal setting and action planning); and a printed version of the Diabetes College patient guide. Additionally, participants will receive weekly WhatsApp text messages from the research team to remind them about the exercise routine, the materials in their lesson plan, and the importance of tracking access to the educational content in the study diary.
Criteria for discontinuing or modifying allocated interventions {11b}
Since study participation is voluntary, participants have the right to withdraw from the study at any time. Upon withdrawal, no further data will be collected from them, although any data previously collected will be preserved in accordance with the terms outlined in the informed consent form. It is important to note that once a participant withdraws from the study, they will not be permitted to re-enter at a later date.
Strategies to improve adherence to interventions {11c}
In the on-site delivery mode, participants who do not attend the intervention session in a given week will receive a WhatsApp message from the research team reminding them of the scheduled intervention for the following week. In the remote delivery mode, the research team will check the completion of participants' exercise and/or study diaries to assess their engagement in the exercise and/or education lessons. Participants who show no engagement in the exercise and/or educational lessons in a given week will receive a WhatsApp message reminding them about the exercise and/or study routine.
Relevant concomitant care permitted or prohibited during the trial {11d}
All participants will receive counseling on exercising in the community to achieve the recommended 150 min per week of aerobic exercise and perform muscle-strengthening exercises two to three times per week using elastic bands provided during the intervention. Besides walking, participants may choose alternative aerobic exercise modalities such as swimming, running, or biking to meet the aerobic exercise recommendations.
Provisions for post-trial care {30}
No post-trial care provision will be offered, as no harm is anticipated from trial participation, and participants will not receive compensation.
Outcomes {12}
Primary outcomes
The primary outcomes of this study will be functional capacity and disease-related knowledge. Functional capacity will be measured by the distance covered in meters during the incremental shuttle walk test (ISWT), while disease-related knowledge will be assessed using the total score of the Brazilian Portuguese version of the DiAbeTes Education Questionnaire (DATE-Q). The DATE-Q total score, ranging from 0 (no disease-related knowledge) to 20 (the highest level of disease-related knowledge), is obtained from a 20-item questionnaire with true/false/do-not-know response options. The primary outcomes will be measured during the baseline and outcome assessment appointments.
Secondary outcomes
The secondary outcomes are divided into health behaviors (health literacy, physical activity level, exercise self-efficacy, adherence to the Mediterranean diet, and medication adherence) and cardiometabolic health parameters (glycemic control, anthropometric characteristics, and cardiac autonomic control), as described in Table . Health literacy, exercise self-efficacy, adherence to the Mediterranean diet, medication adherence, and all cardiometabolic health parameters will be measured during both the baseline and outcome assessment appointments. Physical activity levels will be measured (1) over the 7 days following the baseline assessment appointment; (2) over the 7 days leading up to the post-intervention assessment appointment; and (3) over the 7 days following the post-follow-up assessment appointment, or over the 7 days leading up to this appointment for participants who kept the pedometer worn for the previous outcome assessment.
Tertiary outcomes
The tertiary outcomes of this study will be program adherence, satisfaction with the program, diabetes-related morbidity, quality of life, depression, and diet quality. Program adherence will be measured by each participant's attendance rate in the program. For participants in the on-site delivery mode, the attendance rate will be calculated as the number of education classes and/or exercise sessions attended divided by the total number of education classes and/or exercise sessions offered. For participants in the remote delivery mode, the attendance rate will be determined as the number of weeks in which the exercise and/or study diaries were filled out divided by the program's duration in weeks. Satisfaction with the program will be measured by questionnaires developed by the researchers. For participants in the on-site delivery mode, satisfaction with the patient education program will be assessed with an 11-item questionnaire and satisfaction with the exercise program with a 4-item questionnaire. For participants in the remote delivery mode, satisfaction with the patient education program will be assessed with a 10-item questionnaire and satisfaction with the exercise program with a 5-item questionnaire. Diabetes-related morbidity will be assessed through the number and description of acute complications and diagnoses of chronic complications of diabetes, along with the number of diabetes-related hospitalizations. This information will be collected using a three-item tool during the 6-month follow-up period. Quality of life will be evaluated using the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36). Scores range from 0 (indicating no quality of life) to 100 (indicating the highest quality of life) and are obtained from a 36-item tool covering physical and mental health domains, including physical functioning, social functioning, role–physical, bodily pain, mental health, role–emotional, vitality, and general health. Depression will be measured with the Brazilian version of the Center for Epidemiological Studies Depression Scale (CESD). The total score ranges from 0 (no depressive symptoms) to 60 (the highest level of depressive symptoms) and is obtained from a 20-item tool rating how often the interviewee experienced symptoms associated with depression over the past week, such as restless sleep, poor appetite, and feeling lonely. Response options range from 0 to 3 for each item (0 = rarely or none of the time, 1 = some or a little of the time, 2 = moderately or much of the time, 3 = most or almost all of the time). Diet quality will be measured through the consumption of macro- and micronutrients. Food consumption will be assessed using an adapted version of a Food Frequency Questionnaire (FFQ) validated for Brazilian individuals with type 2 diabetes, evaluating consumption frequency (daily, weekly, monthly, or yearly), number of portions, and serving size (small, medium, large, or extra large). The FFQ was adapted because the individuals in this study have regional eating habits that differ from those in the original validation study. The adaptation was based on food diaries kept on three non-consecutive days, including one weekend day, by 65 volunteers with type 1 or type 2 diabetes.
The final FFQ consists of 104 food items distributed across eight food groups ("cereals, tubers, roots, and derivatives"; "vegetables and legumes"; "fruits"; "beans"; "meat and eggs"; "milk and dairy products"; "oils and fats"; "sugars and sweets"), as well as 11 items investigating beverages, 4 items investigating oilseeds, and 1 item investigating food supplements.
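Purely as an illustration of the attendance-rate calculations described above, a minimal sketch follows; the function and variable names, and the example figures, are ours and not part of the protocol.

```python
def onsite_attendance_rate(sessions_attended: int, sessions_offered: int) -> float:
    """On-site delivery: classes/sessions attended divided by those offered."""
    return sessions_attended / sessions_offered


def remote_attendance_rate(weeks_with_diary: int, program_weeks: int = 12) -> float:
    """Remote delivery: weeks with a completed exercise/study diary divided by program length."""
    return weeks_with_diary / program_weeks


# Hypothetical example: 14 of 16 supervised sessions attended on-site;
# diaries completed in 10 of the 12 program weeks remotely.
print(onsite_attendance_rate(14, 16))  # 0.875 (87.5%)
print(remote_attendance_rate(10))      # ~0.83 (10 of 12 weeks)
```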
Participant timeline {13}
All assessment appointments will be scheduled individually, based on the participants' and research team members' schedules.
After attending the baseline assessment appointment, the participant will wear a pedometer for 7 days before starting the intervention. To accommodate all participants enrolled at that time into the baseline assessment timetable and have their physical activity level measured before the intervention starts, a period of up to 21 days, including the 7 days of pedometer wear, will be allowed between the baseline assessment appointment and the first intervention session. Therefore, the baseline assessment could be carried out between 1 and 14 days before the first intervention session.
After the intervention has concluded, the participant will wear a pedometer for 7 days before attending the post-intervention assessment appointment. To accommodate all participants into the post-intervention assessment timetable and have their physical activity level measured before this appointment, a period of up to 21 days, including the 7 days of pedometer wear, will be allowed between the conclusion of the intervention and the post-intervention assessment appointment. Therefore, the post-intervention assessment could be carried out between 7 and 21 days after the intervention has concluded.
Upon completing the 6-month follow-up, the participant will wear a pedometer for 7 days: (1) after attending the post-follow-up assessment appointment, when this appointment is scheduled between 1 and 14 days after completion of the 6-month follow-up, or (2) before attending the post-follow-up assessment appointment, when this appointment is scheduled between 7 and 21 days after completion of the 6-month follow-up. To accommodate all participants into the post-follow-up assessment schedule and have their physical activity level measured after completion of the 6-month follow-up, a period of up to 21 days, including the 7 days of pedometer wear, will be allowed between completion of the 6-month follow-up and the post-follow-up assessment appointment. Therefore, the post-follow-up assessment could be carried out between 1 and 14 days after completion of the 6-month follow-up.
Table shows the participant timeline.
Sample size {14}
The sample size was calculated considering the ISWT distance as the primary outcome, using parameters derived from a previous study involving an educational intervention in cardiac rehabilitation. The calculation was performed using R software, version 3.4.3, with the following parameters: a moderate effect size (d = 0.20), 80% statistical power, a 5% alpha level, a one-sided test, two arms, and three measurements. A total sample size of 200 participants (100 per arm) was determined. Assuming a 20% attrition rate, based on a previous study, 120 participants will be enrolled in each arm to ensure that 200 participants complete the study protocol.
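For illustration only, the attrition adjustment described above can be reproduced with a short sketch. The power calculation itself was performed in R for a repeated-measures design and is not reproduced here, and the helper below is hypothetical rather than the authors' code.

```python
def enrollment_per_arm(completers_per_arm: int, expected_attrition: float) -> int:
    """Inflate the per-arm completer target by the expected attrition rate.

    Mirrors the protocol's stated figures: 100 completers per arm with 20%
    expected attrition -> 120 participants enrolled per arm (240 in total).
    """
    return int(round(completers_per_arm * (1 + expected_attrition)))


print(enrollment_per_arm(100, 0.20))  # 120
```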
Recruitment {15}
The research team recruited potential participants through face-to-face interactions at health services in the two cities, as well as through phone calls to individuals listed in a database from previous studies conducted by the research group, none of which involved exercise or educational interventions. In addition, social media advertisements were used, and the study was disseminated among local healthcare providers through face-to-face interactions and, via email, among employees of the Federal University of Juiz de Fora (UFJF) and the Federal University of Minas Gerais (UFMG), respectively.
Sequence generation {16a}
A block randomization sequence with a 1:1 allocation ratio was generated using the website www.randomization.com, assigning enrolled individuals to one of the two study arms within each research center.
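The sequence itself was produced with www.randomization.com; purely as an illustration of what a 1:1 block randomization per center involves, a hypothetical sketch follows. The block size of 4 and the seed are our assumptions and are not stated in the protocol.

```python
import random


def block_randomization(n_participants: int, block_size: int = 4, seed: int = 42) -> list[str]:
    """Generate a 1:1 allocation sequence ('Ex' vs 'ExLE') in shuffled blocks.

    Each block contains an equal number of assignments to the two arms, so the
    allocation stays balanced throughout recruitment at a given center.
    block_size is assumed to be even.
    """
    rng = random.Random(seed)
    sequence: list[str] = []
    while len(sequence) < n_participants:
        block = ["Ex", "ExLE"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]


# Example: allocation sequence for the first 12 enrollees at one center.
print(block_randomization(12))
```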
Concealment mechanism {16b}
The principal investigator maintained the allocation sequence for each center in two separate password-protected files. It was shared with the research center coordinators, who distributed the allocated program to the intervention team members only after conducting the participants' baseline assessment. This allocation information will not be accessible to the outcome assessors, to ensure allocation concealment.
Implementation {16c}
After the baseline assessment, the center research coordinator received the participant's allocation from the principal investigator. The allocation was disclosed to the participant and the intervention team members, but it remains concealed from the outcome assessors, who are blinded to the allocated intervention.
Who will be blinded {17a}
Due to the nature of the intervention, participants and the intervention team members cannot be blinded to the allocation. However, to ensure objectivity, two independent research team members, blinded to the allocation, will handle database management and conduct statistical analyses, respectively.
Procedure for unblinding if needed {17b}
Given that the trial does not involve drug testing, there will be no need for emergency unblinding.
Plans for assessment and collection of outcomes {18a}
All research team members will receive training specific to their assigned responsibilities in the research, whether these involve conducting baseline assessments or outcome assessments. Additionally, they will adhere to a research-specific manual tailored to their respective roles, to ensure that all procedures are carried out in accordance with the validation of the assessment tools or with any necessary training, especially for tools developed by the research team.
Plans to promote participant retention and complete follow-up {18b}
There are no specific plans to promote participant retention. However, to minimize dropouts during the 6-month follow-up, participants will receive monthly phone calls reminding them to fill out the diabetes-related morbidity tool and to attend the post-follow-up assessment appointment.
Data management {19}
All trial data will be entered into an Excel file by an independent research team member who is not an intervention team member, outcome assessor, or data analyst. Additionally, 30% of the entered data will be randomly checked by one of the research center coordinators.
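As a minimal, hypothetical sketch of how the 30% random data check described above could be implemented (the protocol does not specify the selection procedure, and the identifiers below are invented for illustration):

```python
import random


def select_records_for_verification(participant_ids: list[str], fraction: float = 0.30, seed: int = 7) -> list[str]:
    """Randomly select a fraction of entered records for double-checking by a center coordinator."""
    rng = random.Random(seed)
    n_checked = max(1, round(len(participant_ids) * fraction))
    return rng.sample(participant_ids, n_checked)


# Example with 240 hypothetical record IDs: 72 records (30%) are flagged for verification.
print(len(select_records_for_verification([f"ID{i:03d}" for i in range(1, 241)])))
```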
Confidentiality {27}
The collected data will be anonymized using individual participant codes to label the trial data. The link between identifiable personal data and the codes will be stored securely and separately from the trial data.
Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use {33}
No biological specimens will be collected in this trial.
Statistical methods for primary and secondary outcomes {20a}
The participants' characteristics will be presented as measures of central tendency and dispersion or as absolute values and percentages. These data will be compared using the independent t-test for continuous variables and the chi-square or Fisher's exact test for categorical variables.
Primary and secondary outcomes
Primary and secondary outcomes will be presented as measures of central tendency and dispersion. Data distribution will be analyzed using the Shapiro–Wilk test. A 2 × 2 analysis of variance (ANOVA) will compare data between baseline and post-intervention and between groups. An alpha of 5% will be considered for statistical significance, and post hoc comparisons will be performed to detect two-by-two differences. In case of missing data, the statistical procedures will follow the intention-to-treat principle.
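A hedged sketch of how the planned normality check and 2 × 2 (group × time) ANOVA could be run in Python. The protocol does not name the analysis software for this step, so the use of the pingouin package, the column names, and the file name below are all assumptions.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

# Assumed long-format data: one row per participant and time point, with columns
# 'id', 'group' ('Ex' or 'ExLE'), 'time' ('baseline' or 'post'), 'iswt_m' (ISWT distance in meters).
df = pd.read_csv("iswt_long.csv")  # hypothetical file name

# Intention-to-treat handling as stated in the protocol: missing values imputed with 0.
df["iswt_m"] = df["iswt_m"].fillna(0)

# Shapiro-Wilk normality check within each group x time cell.
for (group, time), cell in df.groupby(["group", "time"]):
    w, p = stats.shapiro(cell["iswt_m"])
    print(group, time, round(p, 3))

# 2 x 2 mixed ANOVA: between-subject factor 'group', within-subject factor 'time'.
anova = pg.mixed_anova(data=df, dv="iswt_m", within="time", subject="id", between="group")
print(anova)

# Post hoc two-by-two comparisons (e.g., pairwise tests) would follow a significant effect.
```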
Tertiary outcomes
Quality of life, depression, and diet quality will be presented and analyzed in the same way as the primary and secondary outcomes. Program adherence and diabetes-related morbidity data will be analyzed using the independent t-test. Satisfaction with the program will be analyzed descriptively, using absolute values and percentages.
Interim analysis {21b}
There is no planned interim analysis.
Methods for additional analysis {20b}
An interaction analysis to test the effect of the intervention delivery mode (on-site or remote) is planned.
Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c}
The value 0 (zero) will be imputed to replace missing data in the analyses, which will follow the intention-to-treat principle.
Plans to give access to the full protocol, participant-level data, and statistical code {31c}
The study plan is accessible to the public through the ClinicalTrials.gov register (NCT03914924). The data supporting the findings of this study will be available from the corresponding author upon reasonable request.
Composition of the coordinating center and trial steering committee {5d}
This study will be overseen by a principal investigator, who will be responsible for supervising the research center coordinators and all trial-related activities. The principal investigator will ensure that trial objectives and targets are met according to the established study schedule and protocol, as well as allocating financial resources for the trial's development. The research center coordinators, in turn, will be responsible for creating a schedule for recruitment, assessments, and interventions. They will also supervise the activities of all research team members, including the assessment and intervention teams. This trial does not have a trial steering committee with independent members, nor does it include stakeholder or public involvement groups.
Composition of the data monitoring committee, its role, and reporting structure {21a}
The present study does not establish a data and safety monitoring board, as both the exercise and the education interventions are considered low-risk interventions with no substantial safety issues.
Adverse event reporting and harms {22}
Due to the low-risk nature of the trial, no adverse events are anticipated. Any unintended effects of the exercise intervention will be reported to the Research Ethics Committees at the Federal University of Juiz de Fora or the Federal University of Minas Gerais.
Frequency and plans for auditing trial conduct {23}
A data monitoring committee was not deemed necessary for this randomized controlled trial due to the low risk of adverse events and the robustness of the study protocol in ensuring participant safety and data integrity.
Plans for communicating important protocol amendments to relevant parties (e.g., trial participants, ethical committees) {25}
Any revision to the trial protocol will be reported to the ethics committee. After the protocol revision has been approved, it will be communicated to the investigators.
Dissemination plans {31a}
The results of this trial will be disseminated through publications in peer-reviewed journals and presentations at scientific conferences.
Discussion
To our knowledge, this is the first randomized controlled trial that integrates exercise with patient education with the aim of surpassing the benefits of exercise alone for individuals with prediabetes or diabetes in middle-income countries. Our objectives include improving functional capacity, increasing disease-related knowledge, fostering healthier behaviors, optimizing cardiometabolic health parameters, enhancing quality of life, alleviating depression, improving diet quality, and reducing diabetes-related morbidity over 6 months. This study has the potential to make a significant contribution by assessing the effectiveness of a structured patient education program combined with an exercise program that could be implemented in both the public and private health systems to improve prognosis-related outcomes in individuals with diabetes and to prevent type 2 diabetes in individuals with prediabetes. Considering (1) the proven benefits of physical exercise for individuals with diabetes and prediabetes, (2) the role of health education in facilitating better diabetes management and prevention, and (3) its impact on mitigating complications and cardiovascular risk factors, these interventions could substantially improve the quality of life of this population.
Trial status
Recruitment began in November 2022 and ended in February 2024. The current protocol is version 9.0, dated 26 June 2024. It is anticipated that data collection will be completed by December 2024, following the last participant's completion of the study. Submitting the study protocol earlier was not feasible due to several challenges, including researcher turnover in the study team and multiple updates to the study protocol prompted by the COVID-19 pandemic and by the pilot and feasibility studies it motivated. Trial closure is scheduled for May 2025, following completion of the data analysis.
Additional file 1. Screener for internet access and technology literacy. Additional file 2.
The social role of pediatrics in the past and present times | b3955621-53d1-49d6-b092-d18a17d19432 | 8684095 | Pediatrics[mh] | Pediatrics and society are closely related. This link is as old as the history of Pediatrics, and dates to the second half of the eighteenth century. The vocation of the first European pediatric schools, indeed, was clinical and scientific, as well as social. The founding fathers of Pediatrics were scientists of great talent, and many of them benefactors and philanthropists. They spent their lives assisting the suffering childhood, and became promoters and organizers of social securities for the poorest and most vulnerable categories . The attention to the problems of abandonment was closely linked to study, prevention, and treatment of pathologies (especially infectious, deficiency and neurological ones) . The profile and activity of pediatricians grew in the following decades. The University institutions, then, contributed to provide a further impulse to childcare as well as cultural authority, also thanks to the foundation of the first chairs and scientific journals of Pediatrics. The relevance and prestige of the studies performed rapidly spread throughout Europe . All these achievements, in addition to the increasing economic well-being and the improvement of the social and hygienic conditions, led to a progressive and relevant growth in the quality of children’s care, and in the meantime to the decrease of neonatal and infant mortality rates. Here we trace the historical steps which led to birth and development of pediatrics as independent medical discipline, with intrinsic and unavoidable ethical and social vocation. The growth of this specialty is outlined, also analyzing its rise within the University institutions, with the foundation of the first chairs, and the contribution of the greatest European and Italian masters. In the second section of the text, the role which the pediatrician should currently play is described, especially his responsibilities beside institutions as well as in facing new growing health critical issues, related to the biological, cultural, and psychological changes of the patients of present days. Finally, the challenges which pediatricians are called to deal with are analyzed, oriented to guarantee to all children the best possible assistance, to fight against inequalities and poverty, as well as to protect their rights within families and their identity in the society in all its forms, personal, cultural, and religious. Pediatrics and society in the past A radical turning point about the attention given to children, within pedagogy, literature, and family, occurred between the 18th and nineteenth century. A new educational concern appeared thanks to the thought and work of Jean Jacques Rousseau, the father of modern pedagogy. Since then, the value of childhood shifted from an economic plan to that of feeling, regarded as affection and attention. The idea of the new Man introduced and promoted by the Enlightenment, opened to the consideration of infancy as the starting dimension of men. Therefore, doctors and pedagogists began to share a body’s philosophy, which allowed children to be protected. The techniques on the body started to join character education. Indeed, authoritative thinkers of the eighteenth century observed and controlled the “infantile bodies” of their children. They “looked with special attention at the physical life of their sons”. 
The "physical life of children" thus began to receive real attention, becoming the object of hospital care as well as of a dedicated medical (pediatric) culture, thanks also to a growing treatise literature. The first treatise on Pediatrics, De Infantum Aegritudinibus et Remediis, was written by the Italian Paolo Bagellardo in 1472. However, medical publishing on childhood was lacking in Italy during the seventeenth century. This was related to the political, religious, and cultural events that occurred between the end of the 16th and the seventeenth century. The Counter-Reformation (Council of Trent, 1545–1563) substantially hindered scientific progress in the Catholic countries through the imposition of Aristotelian orthodoxy. Consequently, cultural activities in the fields of natural sciences and medicine migrated from southern to northern Europe, especially to the Netherlands, England, and the Scandinavian countries. Indeed, among the most important pediatric treatises of the 17th and 18th centuries were those of the Englishmen William Harris (De Infantum Aegritudinibus... no, De Morbis Acutis Infantum, 1689), George Armstrong (1767), and Michael Underwood (1784), and that of the Swede Nils Rosen von Rosenstein, published in 1765. In 1794, Christoph Girtanner, a Swiss pediatrician, wrote another relevant text, which led the way to the great pediatric treatise literature that flourished during the nineteenth century in France (Billard, 1828; Rilliet and Barthez, 1843; Bouchut, 1845; Comby, 1892), Germany (Vogel, 1860; Gerhardt, 1877; Henoch, 1881; Baginsky, 1882), Russia (Filatov, 1894), and the United States of America (Smith, 1869). Italy followed only at the end of the nineteenth century with its single pediatric book of that time (Biagini, 1897), also because of its cultural dependence on foreign countries, which ended only after the Risorgimento and the process of unification.
The first schools of pediatrics and children's hospitals
The first pediatric school was founded in Italy, even if it lasted for only a short time. In April 1802, a chair of pediatrics was created in Florence on the initiative of the king of Etruria, Ludwig I of Bourbon, who entrusted the task to Professor Gaetano Palloni, who gave his lessons at the Ospizio degli Innocenti. Palloni's school lasted just 3 years, until 1805, when Queen Maria Luisa of Spain suppressed the chair of children's diseases. In 1807, however, she restored the chair of pediatrics, owing to the high number of deaths among children. Nonetheless, this time too the Florentine School of Pediatrics had no luck: owing to the upheavals caused by the Napoleonic campaigns, all the old institutions were suppressed or reorganized and, in the same year (1807) in which Tuscany became a French province, the school was definitively closed. While the Ospizio degli Innocenti in Florence was thus the first, although short-lived, seat of pediatric teaching, the first real pediatric hospital was founded in Paris in May 1802, in the middle of the Napoleonic era. The hospital, called the Hôpital des enfants malades, was set up inside an old convent that had been used in various ways over time until, during the Revolution, it became an orphanage (Maison de l'Enfant Jesus). It hosted children from 2 to 15 years of age. Alongside the hospital wards, an outpatient clinic was also set up to care for patients with less serious diseases and for those who did not need hospitalization.
In the same location, activities for teaching and spreading childcare notions to mothers of the popular classes were also carried out. Throughout the nineteenth century, the Hôpital des enfants malades grew into a center of pediatric studies and of the diffusion of pediatric culture, becoming the cradle of French and European pediatrics.
Pediatrics and society in France
The foundation of the French Pediatric School overlaps with that of the Hôpital des enfants malades, between the end of the eighteenth and the beginning of the nineteenth century. The prestige of this school is linked to great scientists such as Bichat, Corvisart, and Laennec, who were not pediatricians but contributed to the advancement of pediatrics. In addition, other eminent physicians were particularly interested in pediatric diseases. Charles Michel Billard (1800–1832), founder of pediatric pathological anatomy, studied many corpses of children and infants who had died in the Parisian orphanages. Fréderic Rilliet (1814–1861) and Antoine-Charles-Ernest Barthez (1811–1891), both physicians at the Sainte Eugénie Hospital in Paris, published in 1843 the Traité clinique et pratique des maladies des enfants, which became the reference text for the pediatricians of the nineteenth century. Eugene Bouchut (1818–1891) was the first to use laryngeal intubation in croup (1858). Armand Trousseau (1801–1867) carried out studies on convulsions, chorea, eruptive fevers, diphtheria, and typhus; his fame rests on the first tracheotomies he performed in Paris, for which he defined both the technique and the postoperative treatment, and his name is also linked to the sign of tetany. Marie-Jules Parrot (1829–1887) was interested in the cerebrovascular lesions of childhood and studied the nutritional disorders of early infancy, coining the term "atrepsia"; the pseudoparalysis of luetic infants bears his name. Pierre Costant Budin (1846–1917) and Adolphe Pinard (1844–1934) were obstetricians who championed, respectively, the boiling of milk and breastfeeding. Thèophile Roussel (1816–1903) was a physician and politician involved in social and occupational medicine. Jean Bernard Antoine Marfan (1858–1942) was the first professor of the Early Childhood Clinic in Paris; he dealt with many fields of children's pathology and left a large scientific production. In 1881 and 1897, respectively, the Monthly Review of Childhood Diseases and the Children's Medicine Archives were launched.
Pediatrics and society in Central Europe
German pediatrics started to grow and become notable during the eighteenth century. In 1753, Jakob Reinbold Spielmann (1722–1783) was the first to analyze the milk of women and of domestic animals. In 1787, Joseph J. Mastalier founded in Vienna the first Public Institute for Sick Children, which was an outpatient pediatric clinic rather than a real hospital. About 50 years later, the first Austrian pediatric hospitals (Sainte Anne in 1837 and Saint Joseph in 1842) were built in Vienna. German pediatrics proper, by contrast, was officially born in 1830, with the foundation of a small ward at the Charité Hospital in Berlin, which developed into a clinic in the following decades under the direction of Barez. The latter carried out a prestigious teaching activity and founded the first pediatric journal in the world, the Journal für Kinderkrankheiten. Other pediatric hospitals were built throughout the century in the area of Germanic culture, which had its driving force in the Viennese Pediatric School, proof of an increasing interest in childhood.
Carl Credé (1819–1892) was an obstetrician who proposed the prophylaxis of blenorrhagic conjunctivitis of the newborn (the main cause of neonatal blindness at that time) by instillation of 2% silver nitrate into the conjunctival sac. Eduard Heinrich Henoch (1820–1910), considered the founder of clinical pediatrics in Germany, was director of the Pediatric Clinic at the Charité Hospital in Berlin; his name is linked to purpura fulminans, which took from him the name of "Henoch purpura". Franz Soxhlet (1848–1926), chemist and physiologist, studied milk sterilization and was able to fractionate milk proteins into casein, albumin, globulin, and lactoproteins. Alois Epstein (1849–1918) was director of the Prague foundling hospital, which under him became a great pediatric school. Theodor Escherich (1857–1911) linked his name to bacteriological research on intestinal germs and on the changes of the intestinal flora in infants with nutritional disorders; he discovered the bacterium coli, later named Escherichia coli. Carl von Pirquet (1874–1929) introduced the concept of allergy. He also supported suffering children, becoming an organizer of social provisions for poor children, especially after the terrible famine that struck Austria after its defeat in World War I.
Pediatrics and society in the United Kingdom
The first interest in childhood diseases, although not yet framed within pediatric schools, arose in Great Britain as early as the seventeenth century. Daniel Whistler (1619–1684) and then Francis Glisson (1597–1677) began the first studies on rickets around 1650, while Thomas Sydenham (1624–1689) explored various topics of pediatric interest, such as exanthematous diseases, chorea (Sydenham's chorea), difficult dentition, and scurvy. English pediatrics proper, however, started with George Armstrong and was characterized by a markedly humanitarian and sensitive approach to care. In 1769, in London, he made the first generous experiment in pediatric care: he opened a pediatric clinic where he cured about 35,000 children in 12 years, bearing the costs and efforts alone. Andrew Wilson (1718–1792) briefly succeeded him in the direction of the London pediatric clinic, which closed in 1783, shortly after the death of its founder, due to a lack of benefactors and funds. Great Britain had to wait until the second half of the nineteenth century for a real children's hospital, when the Great Ormond Street Children's Hospital was founded in London in 1852. It may be considered the cradle of English pediatrics and was the first place where pediatrics was taught. Its first director was Charles West (1816–1898). In 1871, he published On some disorders of the nervous system in children, where he described the infantile myoclonic encephalopathy which took from him its name (West syndrome). He also made a significant contribution to the organization of pediatric hospitals, publishing a study entitled On hospital organization, with special reference to the organization of hospitals for children. Thomas Barlow (1845–1945) is also linked to the prestigious hospital for sick children in Great Ormond Street, where he completed his studies. These were dedicated to infantile scurvy, to which he was the first to give the features of an autonomous disease, providing clinical and anatomical-pathological evidence; infantile scurvy thus took from him the name of Barlow's disease. George F. Still (1868–1941) followed him in the direction of the Great Ormond Street Hospital.
He described chronic primary polyarthritis of childhood, which was later named Still’s disease .

Pediatrics and society in Italy

The first institutions for childhood appeared in Italy in the Middle Ages. The earliest were the Ospizi for foundlings, or gettatelli, the name given to abandoned or rejected newborns. Equipped with the so-called “wheels” (which received abandoned newborns while guaranteeing anonymity to those who left them), they were an expression of a charitable attitude and helped to counteract infanticide . A new type of institution dedicated to childhood, and especially to sick children, did not appear until the Ospizi Marini of the nineteenth century. These were promoted by the intelligent and passionate work of Giuseppe Giannelli and later of Giuseppe Barellai (1813–1884) . The latter founded the first marine colony in Viareggio in 1862, followed by many others in seaside and mountain areas. These institutes arose thanks to the commitment of spontaneous civic committees and to a growing awareness of the benefits that children with tuberculosis or rickets might derive from thalassotherapy. Admission required belonging to a poor family, among which, moreover, most of the affected children were found . In 1843, Count L. Franchi founded the Regina Margherita Children’s Hospital in Turin, the first Italian pediatric hospital. Soon thereafter, in 1845, the Ospedaletto di Santa Filomena was founded in Turin by the Marquise Falletti of Barolo, specifically intended for girls aged 3 to 12 years affected by tuberculosis and/or rickets. In 1869, the Bambino Gesù Hospital was built in Rome on the initiative of the Duchess Salviati, accepting children from 2 to 12 years of age (Table ) . In many cities, however, hospital care for children was also provided within large general hospitals, which dedicated special pavilions to them. A critical issue of the hospital care of that time was the exclusion of children under the age of 3, among whom morbidity and mortality were greatest, because such young patients were difficult to manage. Despite the hospital vocation of some orphanages and institutes for children with rickets, as the twentieth century approached, unified Italy still largely lacked facilities for the hospitalization and care of children, especially infants. Moreover, 80 years passed after the foundation of the Florentine Pediatric School before a chair of pediatrics was established in unified Italy, an achievement credited to Dante Cervesato (1850–1905). After gaining pediatric experience as a student of Wiederhofer in Vienna, he returned to Padua, where he set up a small Pediatric Clinic and in 1889 was appointed full professor of Pediatrics. From Padua, Cervesato moved to Bologna in 1900, where he created a thriving pediatric school and carried out studies on tetany, infantile tuberculosis, neonatal hemorrhagic diseases, appendicitis, intestinal tumors, liver cirrhosis, and poliomyelitis. Thus, owing also to a new cultural attitude towards childhood, pediatric care changed between the end of the nineteenth and the beginning of the twentieth century. In Italy, too, it became clear that children had the right to organized, structured places of care tailored to their specific needs.
In 1876, a Children’s Hospital was built in Trieste (renamed Burlo Garofolo in 1907), followed by many others, including those of Naples and Cremona (1881), Palermo (1882), Genoa and Livorno (1888), Florence (1891), Milan (1897), Bologna (1907) and Modena (1911) (Table ). Childhood finally received from society an attention it had never enjoyed before. However, almost all children’s hospitals were funded by private charities and only rarely arose on the initiative of public institutions. Often these hospitals began their activity in humble rented buildings, growing larger over time through extensions, the restoration of old buildings, or even the construction of new ones . It is noteworthy that some hospital admissions had no medical indication. They concerned healthy children “who enter healthy for various reasons … , among which the most frequent here is the concomitant admission of the sick mother to another part of the hospital, or who, once healed, remain abandoned for many painful or shameful reasons” . This statement by Ponticaccia (1908) is both a complaint about certain parental behaviors and clear proof of the social role the hospital sometimes had to play. The patients who came to hospitals belonged to the poor classes, made vulnerable by inadequate diets and/or unhealthy environments. The scientific articles that appeared at the end of the nineteenth century and the beginning of the twentieth provide valuable information on the reasons for hospitalization in those years and on its duration . The little patients, especially infants, were often hospitalized with severe “atrepsia”, characterized by an extreme deterioration of the general condition that was frequently irreversible and fatal . Francesco Fede (1832–1913) was the first to relate primary atrepsia to malnutrition, underlining that these children belonged to the poorest social classes and calling on the authorities to intervene to improve their conditions . He was the greatest exponent of early Italian pediatrics, even if not chronologically the first. He was a founding member and president of the Italian Society of Pediatrics, and in 1893 he founded the periodical La Pediatria . Subsequently, in 1897, Luigi Concetti (1855–1920) introduced Pediatrics as a free course at the University of Rome. He promoted the first Congress of the Italian Society of Pediatrics, of which he was a founding member and, from 1903, president. In 1904 he founded the Journal of Pediatric Clinic together with Giuseppe Mya (1857–1911) . Mya had been called to Florence in 1891 to hold the chair of the Pediatric Clinic. In 1901, he moved his Institute from its small rooms at the Maternity Hospital to the Anna Meyer Children’s Hospital, which the Marquis of Montagliani had founded in 1887 in memory of his wife (Table ). In 1916, Vitale Tedeschi (who succeeded Dante Cervesato in Padua) discussed the possibility of accommodating mothers within efficiently and safely organized pediatric wards . From the foundation of the first pediatric hospitals it was clear that hospitalization could cause suffering in children and parents because of their separation. The admission of the mother alongside her child was still a long way off, but a new attention to the physical and psychological needs of the child was beginning to grow.
In more recent decades, a widespread change in attitudes towards childhood has finally made clear the need to restore, within hospitals, the mother–child dyad and to promote a specific, holistic approach to the young patient . During the 1970s a process of de-hospitalization and humanization of pediatric hospitals began, partly through the creation of new outpatient systems (Day-Hospital, and later Day-Surgery). In the meantime, to respond to new and different epidemiological needs, the old tuberculosis sanatoriums were dismantled or converted to other health purposes. Then, around the 1980s, thanks to numerous regional laws adopted as aims of health plans, the doors of hospital wards began to open to the mothers of sick children. Initiatives supported by family associations and volunteers allowed children to continue their normal activities in hospital, such as play and school, with the help of psychological support and/or cultural mediators. Hospital environments were also humanized and tailored to the child. The habit of decorating the walls of infirmaries and hospital rooms spread progressively and, in neonatology units, the practice of rooming-in began . In the same years, primary care came to be provided by family pediatricians in all areas of the country.

Pediatrics and society in Palermo (Sicily)

The Pediatric Clinic was born in Palermo in 1903, when Rocco Jemma, a young and brilliant doctor working in Genoa, was called to the chair of Pediatrics at the University of Palermo. The promoter of this initiative was Ignazio Florio, a member of one of the richest and most influential European families of entrepreneurs and patrons of the time. Within a few years a new and efficient building was constructed and inaugurated in 1907 . In those years Rocco Jemma founded a true school of pediatricians, drawn from all over Sicily. When Jemma moved to Naples in 1913, the most brilliant of his pupils, Professor Giovanni Di Cristina, succeeded him. He continued the work of his master, further expanding the scientific activities of the Clinic and launching new social and care initiatives . To him is also owed the discovery of an effective and decisive treatment of visceral leishmaniasis with antimony salts, subsequently adopted universally, which made it possible to overcome an infectious disease that until then had usually been lethal . He worked for the construction, on land and with funds obtained from donations, of the Casa del Sole hospital (a tuberculosis sanatorium in a hilly area of the city) and the Aiuto Materno (for the hospitalization of children at high social risk). His premature and unexpected death in 1928 left a great void in the pediatrics of the whole island. In 1929 the city dedicated to him the Children’s Hospital, which he had wanted closely linked to the Pediatric Clinic and which thereafter bore his name. After Di Cristina, the direction of the Clinic and the Hospital passed to La Franca, then to Cannata, Maggiore, and Gerbasi. In 1946, the School of Specialization in Pediatrics was established, favoring the recruitment of scientifically able young doctors. Gerbasi was dean of the Faculty of Medicine and rector of the University, and brought luster to the pediatrics of Palermo at the national level as well. His school was particularly rich in prestigious pupils and personalities, above all Giuseppe Roberto Burgio (who later became director of the Pediatric Clinic in Perugia, and then in Pavia).
Hematology, infectious diseases, and nutrition were his areas of major scientific impact. He made decisive contributions on deficiency diseases, such as the definition of the perniciosiform anemia of infants (Gerbasi’s anemia), and on dystrophies . The great social caliber of his work was also evident during the Belice earthquake of 1968, a dramatic time that Gerbasi faced by moving doctors, nurses, and hospital equipment to the affected area.

Pediatrics and society today

The changes that took place over the centuries described so far were innumerable and extraordinary. Society, institutions, and the political and economic structures of countries underwent profound transformations. The political and health reforms implemented in Italy (and in the other European countries) over recent decades, together with increased economic well-being, have reduced infant mortality rates, which are currently among the lowest worldwide. Specifically, the infant mortality rate for children < 1 year of age (IMR) was 231‰ in 1865 and had already fallen to 185‰ before the end of the nineteenth century (1895) . The downward trend then became even more marked during the first two decades of the 1900s. It subsequently suffered two sudden halts and reversals, corresponding to the two war periods; the 1920 IMR (155‰) also included deaths due to the Spanish flu epidemic . By the interwar period (1930), the rate had roughly halved (119‰) compared with the earliest recorded values; it then dropped below 50‰ in the 1960s and below 20‰ in the 1980s, reaching 3‰ between 2015 and 2020 (in Europe it decreased from 38.2‰ in 1961 to 3.4‰ in 2019) (Fig. ). The reduction in IMR over time was accompanied by a shift in the causes of death. Their analysis better defines the improvements obtained, showing the progressive disappearance of infectious diseases (from about 65% in 1895 to 2% in 2015) and the emergence of other causes, today mainly congenital malformations and conditions of perinatal origin (69%) . Pediatrics contributed significantly to these formidable results through the development of a culture of children’s rights, the acquisition of specific technical knowledge and skills in the context of constant medical and technological progress, and the control of previously endemic diseases (malaria) and of past and current communicable diseases (syphilis, tuberculosis and, more recently, measles, pertussis and lastly COVID-19) . Pediatricians have had to adapt and reshape themselves in light of the sociocultural changes of society, as well as of the current biological and psychological characteristics of their patients, that is, of what infants, children and adolescents are today . Old diseases have disappeared, or their prognosis has improved significantly thanks to ever more effective therapies. New diseases appear, or re-emerge with higher incidence or prevalence, partly because of constant migratory flows from low-income countries to Western ones. Diseases that once carried a poor prognosis (oncohematological and genetic diseases) are no longer regarded as such, owing to the continuous updating of therapeutic approaches (e.g., transplantation, gene therapies). New intensive-care techniques allow extremely preterm newborns to survive, and novel treatments (e.g., therapeutic hypothermia) reduce adverse outcomes and morbidities. New tools for the identification of genetic diseases (e.g., next-generation sequencing) permit more precise diagnosis, prognosis and counselling for patients and their families.
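The infant-mortality figures cited above can be tabulated and compared directly. The following minimal Python sketch uses only the per-mille IMR values quoted in the text (with single representative years standing in for the decade ranges mentioned, e.g. 1960, 1980 and 2018); the variable names and the helper function are illustrative and not taken from the source.

```python
# Italian infant mortality rate (deaths per 1,000 live births) as quoted in the text.
# These are the approximate historical figures cited above, not an official series.
imr_italy = {
    1865: 231,
    1895: 185,
    1920: 155,   # includes deaths from the Spanish flu epidemic
    1930: 119,
    1960: 50,    # "dropped below 50 per mille in the 1960s"
    1980: 20,    # "below 20 per mille in the 1980s"
    2018: 3,     # "3 per mille between 2015 and 2020"
}

imr_europe = {1961: 38.2, 2019: 3.4}  # European reference points cited in the text


def relative_decline(series, start, end):
    """Return the percentage fall in the rate between two years of a series."""
    return 100.0 * (series[start] - series[end]) / series[start]


if __name__ == "__main__":
    for start, end in [(1865, 1930), (1865, 2018)]:
        print(f"Italy {start}->{end}: {relative_decline(imr_italy, start, end):.0f}% decline")
    print(f"Europe 1961->2019: {relative_decline(imr_europe, 1961, 2019):.0f}% decline")
```

Run as written, the sketch reports a decline of roughly 48% between 1865 and 1930, consistent with the "roughly halved" observation in the text, and of about 99% between 1865 and the most recent period cited.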
New antibiotics and more rational treatments, together with early and multidisciplinary management, improve the quality and length of life of children with complex and chronic diseases . Yet, despite the conquest of many diseases, pediatricians must increasingly face new critical health issues: new addictions, such as the abuse of technological and digital devices ; the fight against overweight and obesity; the care of migrant children; and new infections, as evidenced by the devastating novel coronavirus pandemic. The functional, environmental, hygienic and structural changes of pediatric hospitals must be adjusted to the epidemiological evolution of disease and to the new diagnostic and therapeutic approaches to sick children; they must be implemented to keep pace with the times and to guarantee careful, up-to-date clinical care. Furthermore, to respond to the many significant changes of today’s digital and hyper-connected society, which have occurred especially in recent years, the pediatrician must be trained and equipped with a broad cultural background, not made up exclusively of clinical and technical knowledge. The complexity of this role, together with new scientific knowledge (e.g., on the epigenetic mechanisms underlying disease), has increased pediatricians’ responsibilities and hence the need for holistic competences spanning bioethics, law, relational and communication skills, and several other fields including pedagogy, bioengineering, sociology, economics, art, sport, politics, technology, music, botany and poetry . Together they compose the cultural background that pediatricians need to care for all children effectively and competently. The goal of today’s pediatrician is to protect and improve the health of all children, guaranteeing their fundamental rights from conception. The data on social inequalities in our country, as in the rest of the world, are worrying, especially with reference to the pandemic period. Although neonatal and child mortality in Italy has decreased, notable disparities remain that disadvantage the insular and southern regions compared with the central-northern ones (linked to cultural, economic and social factors, in addition to organizational problems concerning the perinatal network and the high number of small birthing centers), and foreign citizens compared with Italians . Such inequities are amplified in the European context, and even more so globally: there are significant territorial disparities in access to health care, education, and adequate living conditions. Most of these children live in the southern regions of our country (and of the world), where the risk of social exclusion is high, with possible adverse long-term consequences. It is the pediatrician’s duty to work to guarantee every child the same right to health and education, regardless of family and region of origin . The gradual reduction in health-sector funding over the last few decades has placed our national health system (NHS) under profound strain, which became particularly evident during the pandemic: in such a dramatic time, community doctors and pediatricians were literally overwhelmed by an immense care burden. Today’s pediatrician must therefore find a new and adequate place within a restructured NHS, remodeled and oriented towards more effective care networks.
Furthermore, the collapse in the number of pediatricians, which will worsen further in the coming years, requires a new system able to guarantee pediatric specificity and the right of all subjects of developmental age to be cared for by a pediatrician, with continuity of care between the community and the hospital. Currently, owing to the lack of specialists in Pediatrics, children are often evaluated in the first instance by physicians for adults, with the inevitable risk of clinical inappropriateness. It therefore seems crucial to reformulate university and specialty training programs, supporting the most deficient areas on the basis of territorial needs . Finally, the pediatrician must champion the cultural and scientific theme of prevention. The promotion of a healthy lifestyle (primarily breastfeeding), starting before conception and continuing through the first 1000 days, is the most effective intervention to counteract the development of chronic non-communicable diseases (e.g., obesity, diabetes, cardiovascular diseases), which today are among the main causes of morbidity and mortality, including among children. To this end, all health-education activities can play a key role. Investing in schools, as well as in the health system and in policies to support families, will likely reduce inequality, educational poverty, social neglect, behavioral disorders, delinquency, and ultimately many of the health problems of the children of today and tomorrow .
Indeed, due to the upheavals caused by the Napoleonic gestures, all the old institutions were suppressed or renewed and, in the same year (1807) when Tuscany became a French province, the school was definitively closed . While the Ospizio degli Innocenti in Florence was the first, although brief, seat of pediatric teaching, the first real pediatric hospital was born in Paris on May 1802, in the middle of the Napoleonic era. The hospital, which was called Hôpital des enfants malades , was set up inside an old convent of nuns which was variously used in different times, until, during the Revolution, it became an orphanage ( Maison de l’Enfant Jesus ). It hosted children from 2 to 15 years old. Alongside the hospital wards, an outpatient clinic was also set up for care of patients with less serious diseases and of those who did not need hospitalization. In the same location, activities for teaching and spreading childcare notions to mothers belonging to popular classes were also carried out. The Hôpital des enfants malades soon grew as center of pediatric studies and for the spread of pediatric culture during the whole nineteenth century, becoming the cradle of French and European pediatrics . Pediatrics and society in France The French Pediatric School foundation overlaps with that of the Hôpital des enfats malades , between the end of 1700 and the beginning of 1800. The prestige of this school is related to great scientists as Bichat, Corvisart, Laennec, who were not pediatricians, but contributed to the advancement of pediatrics . Furthermore, there were other eminent physicians particularly interested in pediatric diseases. Charles Michel Billard (1800–1832), founder of the pediatric pathological anatomy, studied many corpses of children and babies died in the Parisian orphanages. Fréderic Rilliet (1814–1861) and Antoine-Charles-Ernest Barthez (1811–1891), both doctors at the Sainte Eugénie Hospital in Paris, published in 1843 the Traité clinique et pratique des maladies des enfants , which was the reference text for the pediatricians of the nineteenth century. Eugene Bouchut (1818–1891) was the first to use the laryngeal intubation in the croup (1858). Armand Trousseau (1801–1867) carried out studies on convulsions, chorea, eruptive fevers, diphtheria, and typhus. His fame is related to the first tracheotomies which performed in Paris, defining technique and postoperative treatment. His name is linked to the sign of tetany . Marie-Jules Parrot (1829–1887) was interested in the cerebrovascular lesions of childhood, and studied the nutritional disorders of early infancy, coining the term “atrepsia”. The pseudoparalysis of luetic infants bears his name . Pierre Costant Budin (1846–1917) and Adolphe Pinard (1844–1934) were obstetricians and sustained the relevance of boiling milk and breastfeeding, respectively. Thèophile Roussel (1816–1903), was a doctor and politician involved in social and occupational medicine . Jean Bernard Antoine Marfan (1858–1942) was the first professor of Early Childhood Clinic in Paris. He dealt with many fields of children’s pathology and had a great scientific production . In 1881 and 1897 were launched, respectively, the Monthly Review of Childhood Diseases and the Children’s Medicine Archives . Pediatrics and society in Central Europe German pediatrics started to grow and to be notable during the eighteenth century. In 1753 Jakob Reinbold Spielmann (1722–1783) was the first to analyze the milk of women and domestic animals. In 1787, Joseph J. 
Mastalier founded in Vienna the first Public Institute for Sick Children , which was, rather than a real hospital, an outpatient pediatric clinic. About 50 years later, the first Austrian pediatric hospitals ( Sainte Anne in 1837, and Saint Joseph in 1842) were built in Vienna. Conversely, German pediatrics was officially born in 1830, with the foundation of a small ward at the Charité Hospital in Berlin. It developed as a clinic in the following decades, under the direction of Barez. This latter had a prestigious teaching activity and founded the first pediatric journal in the world: the Journal für Kinderkrankenheit . Other pediatric hospitals were built throughout the century in the area belonging to Germanic culture, which had in the Viennese Pediatric School its driving force, proof of an increasing interest in childhood. Carl Credé (1819–1892) was obstetrician, and proposed the prophylaxis of blenorrhagic conjunctivitis of the newborn (main cause of neonatal blindness of that time), by the instillation in the conjunctival sac of 2% silver nitrate. Eduard Heinrich Henoch (1820–1910), considered the founder of clinical pediatrics in Germany, was director of the Pediatric Clinic at the Charité Hospital in Berlin. His name is linked to the purpura fulminans , which from him took the name of “Henoch purpura”. Franz Soxhlet (1848–1926), chemist and physiologist, studied milk sterilization and was able to fractionate its proteins into casein, albumin, globulin and lactoproteins. Alois Epstein (1849–1918) was director of the Prague Brefotrophy, which became with him a great pediatric school. Theodor Escherich (1857–1911) linked his name to the bacteriological research on intestinal germs, and on changes of the intestinal flora in infants with nutritional disorders. He discovered the bacterium coli , which was then called Escherichia coli . Carl von Pirquet (1874–1929) introduced the concept of allergy. He also supported the suffering childhood, becoming organizer of social provisions for poor children, especially after the terrible famine that struck Austria after the defeat of World War I . Pediatrics and society in the United Kingdom The first interests in childhood diseases, although not yet framed into pediatric schools, began in Great Britain as early as the seventeenth century. Daniel Whistler (1619–1684), and then Francis Glisson (1597–1677), started indeed around 1650 the first studies on rickets, while Thomas Sydenham (1647–1732) deepened various topics of pediatric interest, such as exanthematous diseases, chorea (Sydenham’s chorea), difficult dentition and scurvy . Actually, English pediatrics started with George Armstrong, and was characterized by a greatly humanitarian and sensitive care. In 1769 in London, he made the first generous experiment of pediatric care. Indeed, he opened then a pediatric clinic, where cured about 35,000 children in 12 years, sustaining alone costs and efforts. Andrew Wilson (1718–1792) was his successor, for a short time, in the direction of the London pediatric clinic, which closed in 1783 shortly after the death of its founder, due to lack of benefactors and funds . Before having a real children’s hospital in Great Britain, it was necessary to wait until the second half of the nineteenth century, when the Great Ormond Street Children’s Hospital was founded in London in 1852. It may be considered the cradle of English pediatrics, and was the first place where pediatrics was taught. Its first director was Charles West (1816–1898). 
In 1871, he published Above some disorders of the nervous system in children , where he described the infantile myoclonic encephalopathy, which took from him its name (West syndrome). He also gave a significant contribution in the field of the organization of pediatric hospitals, publishing a study entitled On hospital organization, with special reference to the organization of hospitals for children . Thomas Barlow (1845–1945) is also linked to the prestigious hospital for sick children in Great Ormond Street , where he completed his studies. They were dedicated to infantile scurvy, to which he first gave features of autonomous disease, providing clinical and anatomical-pathological evidence. Scurvy took, then, from him the name of Barlow’s disease. George F. Still (1868–1941) followed in the direction of the Great Ormond Street Hospital . He described chronic childhood primary polyarthritis, which was then named Still’s Disease . Pediatrics and society in Italy The first organizations for childhood appeared for the first time in Italy in the medieval age. The first institutions are the Ospizi for foundlings or gettatelli , which was the name attributed to abandoned or refused newborns. Including the so called “wheels” (which served to receive the abandoned newborns, guaranteeing anonymity to those who left them), they were expression of a charitable attitude, and were able to counteract infanticide . To find a new type of institutes dedicated to childhood, especially sick children, it is necessary to wait the Ospizi Marini of the nineteenth century. They were promoted by the intelligent and passionate work of Giuseppe Giannelli, and then of Giuseppe Barellai (1813–1884) . This latter founded the first marine colony in Viareggio in 1862, which was followed by many others in marine and mountainous areas. These institutes arose thanks to the commitment of spontaneous civic committees, and to the awareness of the benefits which children affected with tuberculosis or rickets might have from thalassotherapy. The requirement was that of belonging to poor families, among which, moreover, there were most of the affected subjects .In 1843, Count L. Franchi founded the Regina Margherita Children’s Hospital in Turin, which was the first Italian pediatric hospital. Soon thereafter, in 1845, the Ospedaletto di Santa Filomena was founded by the Marquise Falletti of Barolo in Turin, specifically intended for girls affected with tuberculosis and/or rickets, and aged 3 to 12 years. In 1869, the Bambino Gesù Hospital was built in Rome, on the initiative of the Duchess Salviati, where children from 2 to 12 years old were accepted (Table ) . However, in many cities, hospital care for children was also provided within large general hospitals, which dedicated special pavilions to them. A critical issue of the hospital care of that time was the exclusion of children under the age of 3, among which there was the greatest morbidity and mortality, due to the difficult management of such young patients. Although a hospital vocation of some orphanages and institutes for children with rickets, near to the twentieth century the united Italy was still widely poor of structures for hospitalization and cure of children, especially infants. Moreover, 80 years passed from the foundation of the Florentine Pediatric School, before seeing the birth, in a united Italy, of a chair of pediatrics. The credit went to Dante Cervesato (1850–1905). 
After gaining experiences in the pediatric field as student at the Wiederhofer of Vienna, he returned to Padua, and was able to set up a small Pediatric Clinic, where he received in 1889 the assignment of full professor of Pediatrics. From Padua, Cervesato moved to Bologna in 1900, where he created a thriving pediatric school. He there performed studies on tetany, infantile tuberculosis, neonatal hemorrhagic diseases, appendicitis, intestinal tumors, liver cirrhosis, and poliomyelitis. Therefore, also due to a new cultural attitude towards childhood, a change in the field of pediatric care occurred between the end of the 19th and the beginning of the twentieth century. Indeed, also in Italy it became clear that childhood had the right to organized and structured places of cure, based on their specific needs. In 1876, a Children’s Hospital was built in Trieste (named in 1907 Burlo Garofolo ), then followed by many others, among which Naples and Cremona (1881), Palermo (1882), Genoa and Livorno (1888), Florence (1891), Milan (1897), Bologna (1907) and Modena (1911) (Table ). Infancy finally received from society a new attention, which never enjoyed before. However, almost all children’s hospitals were funded by private charities, and scarcely by initiative of public institutions. Often these hospitals began their activity in humble rented buildings, to become larger over time with subsequent extensions, restorations of old buildings, or even with construction of new ones . It is noteworthy that some hospital admissions did not have medical indications. They were healthy children “which enter healthy for various reasons … , among which the more frequent is here the concomitant admission of the sick mother in another part of the hospital, or that healed, they stay abandoned for many, painful or shameful, reasons” . This declaration of Ponticaccia (1908) is a complaint for some parental behaviors, as well as a clear proof of the social role which sometimes the hospital had to play. The patients who came to hospitals belonged to the poor classes, which were vulnerable for lacking diets and/or unhealthy environments. The scientific articles which appeared at the end of the nineteenth century and the beginning of the 20th one, provided precious information on the reasons of hospitalizations of those years, and of their length . The little patients, especially infants, were often hospitalized for severe conditions of “atrepsia”, characterized by extreme decay of the general conditions, frequently irreversible and fatal . Francesco Fede (1832–1913) first related primitive atrepsia to malnutrition, underlining that these children belonged to the poorest social classes, and calling upon an intervention from authorities aimed at improving their conditions . He was the greatest exponent of early Italian pediatrics, even if chronologically not the first. He was a founding member and president of the Italian Society of Pediatrics, and in 1893 he founded the periodical La Pediatria . Subsequently in 1897, Luigi Concetti (1855–1920) introduced Pediatrics as a free course at the University of Rome. He promoted the first Congress of the Italian Society of Pediatrics, of which he was a founding member and then president in 1903. He founded in 1904 the Journal of Pediatric Clinic , together with Giuseppe Mya (1857–1911) . Mya was called in 1891 in Florence to hold the chair of Pediatric Clinic. 
In 1901, he transferred the small rooms of his Institute at the Maternity Hospital to the Anna Meyer Children’s Hospital , which the Marquis of Montagliani founded in 1887 in memory of his wife (Table ). In 1916, Vitale Tedeschi (who followed Dante Cervesato in Padua) discussed the possibility to hold the mothers within efficiently and safely organized pediatric wards . It was clear, from the beginning of the foundation of the first pediatric hospitals, that hospitalization might induce suffering in children and parents due to their separation. The time for the introduction of the mother near to her son seems however to be far, but a new attention to the physical and psychological needs of the child was starting to grow. In more recent years, a diffuse change of point of view towards childhood finally shows the necessity of reconstructing, within hospitals, the binomial mother-child, promoting a specific and global approach to the little patient . During the 1970s a process of de-hospitalization and humanization of pediatric hospitals started, also through the creation of new outpatient systems (Day-Hospital, and then Day-Surgery). In the meantime, and to respond to new and different epidemiological needs, the old sanatoriums for tuberculosis were dismantled or used for other health issues and/or diseases. Then, around the ‘80s, through many regional laws included as aims of health plans, the doors of the hospital wards started to open to the mothers of sick children. Initiatives sustained by associations of families and volunteers were carried out, allowing children to continue the normal activities within hospitals, like games and school, also through psychological support and/or that of cultural mediators. Also, the environments were humanized and tailored on the child. The habit of decorating the walls of infirmaries and hospitalization rooms progressively spread and, in neonatology units, the practice of rooming-in began . In the same years, primary care were carried out by family pediatricians in all the areas of the country. Pediatrics and society in Palermo (Sicily) The Pediatric Clinic was born in Palermo in 1903, when Rocco Jemma, young and brilliant doctor working in Genoa, was called to hold the role of professor of Pediatrics at the University of Palermo. Promoter of this initiative was Ignazio Florio, belonging to one of the richest and most influential European families of entrepreneurs and patrons of the time. In a few years a new and efficient structure was built, and inaugurated in 1907 . Rocco Jemma founded in those years a real school of pediatricians, who came from all Sicily. When Jemma moved to Naples in 1913, the most brilliant of his pupils, Professor Giovanni Di Cristina, succeeded him. He continued the work of his master, further expanding the scientific activities of the Clinic and starting new social and care initiatives . It is due to him, moreover, the discovery of an effective and decisive treatment for the cure of visceral leishmaniasis with antimony salts, then universally used, which allowed to overcome such infectious disease, commonly lethal until then . He was active for the construction, on lands and with funds obtained from donations, of the hospitals Casa del Sole (assigned as tuberculous sanatorium in a hilly area of the town) and Aiuto Materno (for the hospitalization of children with high social risk). His premature and unexpected death, in 1928, left a great emptiness in pediatrics of the whole island. 
The city dedicated to him the Children’s Hospital in 1929, which he wanted closely related to the Pediatric Clinic, and which thereafter took his name. After Di Cristina, the direction of the Clinic and the Hospital passed to La Franca, and then Cannata, Maggiore, and Gerbasi. In 1946, the School of Specialization of Pediatrics was established, favoring the recruitment of scientifically able young doctors. Gerbasi was principal of the Faculty of Medicine and rector of the University, and gave shine to the pediatrics of Palermo, also at a national level. His school was particularly rich of prestigious pupils and personalities, first Giuseppe Roberto Burgio (who later became director of the Pediatric Clinic in Perugia, and then in Pavia). Hematology, infectious diseases, and nutrition were his areas of major scientific impact. He gave decisive contributions on deficiency diseases, as the definition of the perniciosiform anemia of infants (Gerbasi’s anemia), and on dystrophies . The great social caliber of his activities was evident also during the earthquake of Belice in 1968, a dramatic time which Gerbasi faced moving on site doctors, nurses, and hospital equipments.
The first pediatric school was founded in Italy, even if it lasted for a short time. In fact, on April 1802, a chair of pediatrics was born in Florence on initiative of the king of Etruria, Ludwig I of Bourbon, who entrusted the task to Professor Gaetano Palloni, who gave lessons at the Ospizio degli Innocenti . The school of Palloni lasted just 3 years, until 1805, when the queen Maria Luisa of Spain suppressed the chair of children’s diseases. However, in 1807, she restored the chair of pediatrics, owing to the high number of deaths among children. Nonetheless, also this time the Florentine School of Pediatrics had no luck. Indeed, due to the upheavals caused by the Napoleonic gestures, all the old institutions were suppressed or renewed and, in the same year (1807) when Tuscany became a French province, the school was definitively closed . While the Ospizio degli Innocenti in Florence was the first, although brief, seat of pediatric teaching, the first real pediatric hospital was born in Paris on May 1802, in the middle of the Napoleonic era. The hospital, which was called Hôpital des enfants malades , was set up inside an old convent of nuns which was variously used in different times, until, during the Revolution, it became an orphanage ( Maison de l’Enfant Jesus ). It hosted children from 2 to 15 years old. Alongside the hospital wards, an outpatient clinic was also set up for care of patients with less serious diseases and of those who did not need hospitalization. In the same location, activities for teaching and spreading childcare notions to mothers belonging to popular classes were also carried out. The Hôpital des enfants malades soon grew as center of pediatric studies and for the spread of pediatric culture during the whole nineteenth century, becoming the cradle of French and European pediatrics .
The French Pediatric School foundation overlaps with that of the Hôpital des enfats malades , between the end of 1700 and the beginning of 1800. The prestige of this school is related to great scientists as Bichat, Corvisart, Laennec, who were not pediatricians, but contributed to the advancement of pediatrics . Furthermore, there were other eminent physicians particularly interested in pediatric diseases. Charles Michel Billard (1800–1832), founder of the pediatric pathological anatomy, studied many corpses of children and babies died in the Parisian orphanages. Fréderic Rilliet (1814–1861) and Antoine-Charles-Ernest Barthez (1811–1891), both doctors at the Sainte Eugénie Hospital in Paris, published in 1843 the Traité clinique et pratique des maladies des enfants , which was the reference text for the pediatricians of the nineteenth century. Eugene Bouchut (1818–1891) was the first to use the laryngeal intubation in the croup (1858). Armand Trousseau (1801–1867) carried out studies on convulsions, chorea, eruptive fevers, diphtheria, and typhus. His fame is related to the first tracheotomies which performed in Paris, defining technique and postoperative treatment. His name is linked to the sign of tetany . Marie-Jules Parrot (1829–1887) was interested in the cerebrovascular lesions of childhood, and studied the nutritional disorders of early infancy, coining the term “atrepsia”. The pseudoparalysis of luetic infants bears his name . Pierre Costant Budin (1846–1917) and Adolphe Pinard (1844–1934) were obstetricians and sustained the relevance of boiling milk and breastfeeding, respectively. Thèophile Roussel (1816–1903), was a doctor and politician involved in social and occupational medicine . Jean Bernard Antoine Marfan (1858–1942) was the first professor of Early Childhood Clinic in Paris. He dealt with many fields of children’s pathology and had a great scientific production . In 1881 and 1897 were launched, respectively, the Monthly Review of Childhood Diseases and the Children’s Medicine Archives .
German pediatrics started to grow and to be notable during the eighteenth century. In 1753 Jakob Reinbold Spielmann (1722–1783) was the first to analyze the milk of women and domestic animals. In 1787, Joseph J. Mastalier founded in Vienna the first Public Institute for Sick Children , which was, rather than a real hospital, an outpatient pediatric clinic. About 50 years later, the first Austrian pediatric hospitals ( Sainte Anne in 1837, and Saint Joseph in 1842) were built in Vienna. Conversely, German pediatrics was officially born in 1830, with the foundation of a small ward at the Charité Hospital in Berlin. It developed as a clinic in the following decades, under the direction of Barez. This latter had a prestigious teaching activity and founded the first pediatric journal in the world: the Journal für Kinderkrankenheit . Other pediatric hospitals were built throughout the century in the area belonging to Germanic culture, which had in the Viennese Pediatric School its driving force, proof of an increasing interest in childhood. Carl Credé (1819–1892) was obstetrician, and proposed the prophylaxis of blenorrhagic conjunctivitis of the newborn (main cause of neonatal blindness of that time), by the instillation in the conjunctival sac of 2% silver nitrate. Eduard Heinrich Henoch (1820–1910), considered the founder of clinical pediatrics in Germany, was director of the Pediatric Clinic at the Charité Hospital in Berlin. His name is linked to the purpura fulminans , which from him took the name of “Henoch purpura”. Franz Soxhlet (1848–1926), chemist and physiologist, studied milk sterilization and was able to fractionate its proteins into casein, albumin, globulin and lactoproteins. Alois Epstein (1849–1918) was director of the Prague Brefotrophy, which became with him a great pediatric school. Theodor Escherich (1857–1911) linked his name to the bacteriological research on intestinal germs, and on changes of the intestinal flora in infants with nutritional disorders. He discovered the bacterium coli , which was then called Escherichia coli . Carl von Pirquet (1874–1929) introduced the concept of allergy. He also supported the suffering childhood, becoming organizer of social provisions for poor children, especially after the terrible famine that struck Austria after the defeat of World War I .
The first interests in childhood diseases, although not yet framed into pediatric schools, began in Great Britain as early as the seventeenth century. Daniel Whistler (1619–1684), and then Francis Glisson (1597–1677), started indeed around 1650 the first studies on rickets, while Thomas Sydenham (1647–1732) deepened various topics of pediatric interest, such as exanthematous diseases, chorea (Sydenham’s chorea), difficult dentition and scurvy . Actually, English pediatrics started with George Armstrong, and was characterized by a greatly humanitarian and sensitive care. In 1769 in London, he made the first generous experiment of pediatric care. Indeed, he opened then a pediatric clinic, where cured about 35,000 children in 12 years, sustaining alone costs and efforts. Andrew Wilson (1718–1792) was his successor, for a short time, in the direction of the London pediatric clinic, which closed in 1783 shortly after the death of its founder, due to lack of benefactors and funds . Before having a real children’s hospital in Great Britain, it was necessary to wait until the second half of the nineteenth century, when the Great Ormond Street Children’s Hospital was founded in London in 1852. It may be considered the cradle of English pediatrics, and was the first place where pediatrics was taught. Its first director was Charles West (1816–1898). In 1871, he published Above some disorders of the nervous system in children , where he described the infantile myoclonic encephalopathy, which took from him its name (West syndrome). He also gave a significant contribution in the field of the organization of pediatric hospitals, publishing a study entitled On hospital organization, with special reference to the organization of hospitals for children . Thomas Barlow (1845–1945) is also linked to the prestigious hospital for sick children in Great Ormond Street , where he completed his studies. They were dedicated to infantile scurvy, to which he first gave features of autonomous disease, providing clinical and anatomical-pathological evidence. Scurvy took, then, from him the name of Barlow’s disease. George F. Still (1868–1941) followed in the direction of the Great Ormond Street Hospital . He described chronic childhood primary polyarthritis, which was then named Still’s Disease .
The first organizations for childhood appeared for the first time in Italy in the medieval age. The first institutions are the Ospizi for foundlings or gettatelli , which was the name attributed to abandoned or refused newborns. Including the so called “wheels” (which served to receive the abandoned newborns, guaranteeing anonymity to those who left them), they were expression of a charitable attitude, and were able to counteract infanticide . To find a new type of institutes dedicated to childhood, especially sick children, it is necessary to wait the Ospizi Marini of the nineteenth century. They were promoted by the intelligent and passionate work of Giuseppe Giannelli, and then of Giuseppe Barellai (1813–1884) . This latter founded the first marine colony in Viareggio in 1862, which was followed by many others in marine and mountainous areas. These institutes arose thanks to the commitment of spontaneous civic committees, and to the awareness of the benefits which children affected with tuberculosis or rickets might have from thalassotherapy. The requirement was that of belonging to poor families, among which, moreover, there were most of the affected subjects .In 1843, Count L. Franchi founded the Regina Margherita Children’s Hospital in Turin, which was the first Italian pediatric hospital. Soon thereafter, in 1845, the Ospedaletto di Santa Filomena was founded by the Marquise Falletti of Barolo in Turin, specifically intended for girls affected with tuberculosis and/or rickets, and aged 3 to 12 years. In 1869, the Bambino Gesù Hospital was built in Rome, on the initiative of the Duchess Salviati, where children from 2 to 12 years old were accepted (Table ) . However, in many cities, hospital care for children was also provided within large general hospitals, which dedicated special pavilions to them. A critical issue of the hospital care of that time was the exclusion of children under the age of 3, among which there was the greatest morbidity and mortality, due to the difficult management of such young patients. Although a hospital vocation of some orphanages and institutes for children with rickets, near to the twentieth century the united Italy was still widely poor of structures for hospitalization and cure of children, especially infants. Moreover, 80 years passed from the foundation of the Florentine Pediatric School, before seeing the birth, in a united Italy, of a chair of pediatrics. The credit went to Dante Cervesato (1850–1905). After gaining experiences in the pediatric field as student at the Wiederhofer of Vienna, he returned to Padua, and was able to set up a small Pediatric Clinic, where he received in 1889 the assignment of full professor of Pediatrics. From Padua, Cervesato moved to Bologna in 1900, where he created a thriving pediatric school. He there performed studies on tetany, infantile tuberculosis, neonatal hemorrhagic diseases, appendicitis, intestinal tumors, liver cirrhosis, and poliomyelitis. Therefore, also due to a new cultural attitude towards childhood, a change in the field of pediatric care occurred between the end of the 19th and the beginning of the twentieth century. Indeed, also in Italy it became clear that childhood had the right to organized and structured places of cure, based on their specific needs. 
In 1876, a Children’s Hospital was built in Trieste (named in 1907 Burlo Garofolo ), then followed by many others, among which Naples and Cremona (1881), Palermo (1882), Genoa and Livorno (1888), Florence (1891), Milan (1897), Bologna (1907) and Modena (1911) (Table ). Infancy finally received from society a new attention, which never enjoyed before. However, almost all children’s hospitals were funded by private charities, and scarcely by initiative of public institutions. Often these hospitals began their activity in humble rented buildings, to become larger over time with subsequent extensions, restorations of old buildings, or even with construction of new ones . It is noteworthy that some hospital admissions did not have medical indications. They were healthy children “which enter healthy for various reasons … , among which the more frequent is here the concomitant admission of the sick mother in another part of the hospital, or that healed, they stay abandoned for many, painful or shameful, reasons” . This declaration of Ponticaccia (1908) is a complaint for some parental behaviors, as well as a clear proof of the social role which sometimes the hospital had to play. The patients who came to hospitals belonged to the poor classes, which were vulnerable for lacking diets and/or unhealthy environments. The scientific articles which appeared at the end of the nineteenth century and the beginning of the 20th one, provided precious information on the reasons of hospitalizations of those years, and of their length . The little patients, especially infants, were often hospitalized for severe conditions of “atrepsia”, characterized by extreme decay of the general conditions, frequently irreversible and fatal . Francesco Fede (1832–1913) first related primitive atrepsia to malnutrition, underlining that these children belonged to the poorest social classes, and calling upon an intervention from authorities aimed at improving their conditions . He was the greatest exponent of early Italian pediatrics, even if chronologically not the first. He was a founding member and president of the Italian Society of Pediatrics, and in 1893 he founded the periodical La Pediatria . Subsequently in 1897, Luigi Concetti (1855–1920) introduced Pediatrics as a free course at the University of Rome. He promoted the first Congress of the Italian Society of Pediatrics, of which he was a founding member and then president in 1903. He founded in 1904 the Journal of Pediatric Clinic , together with Giuseppe Mya (1857–1911) . Mya was called in 1891 in Florence to hold the chair of Pediatric Clinic. In 1901, he transferred the small rooms of his Institute at the Maternity Hospital to the Anna Meyer Children’s Hospital , which the Marquis of Montagliani founded in 1887 in memory of his wife (Table ). In 1916, Vitale Tedeschi (who followed Dante Cervesato in Padua) discussed the possibility to hold the mothers within efficiently and safely organized pediatric wards . It was clear, from the beginning of the foundation of the first pediatric hospitals, that hospitalization might induce suffering in children and parents due to their separation. The time for the introduction of the mother near to her son seems however to be far, but a new attention to the physical and psychological needs of the child was starting to grow. 
In more recent years, a diffuse change of point of view towards childhood finally shows the necessity of reconstructing, within hospitals, the binomial mother-child, promoting a specific and global approach to the little patient . During the 1970s a process of de-hospitalization and humanization of pediatric hospitals started, also through the creation of new outpatient systems (Day-Hospital, and then Day-Surgery). In the meantime, and to respond to new and different epidemiological needs, the old sanatoriums for tuberculosis were dismantled or used for other health issues and/or diseases. Then, around the ‘80s, through many regional laws included as aims of health plans, the doors of the hospital wards started to open to the mothers of sick children. Initiatives sustained by associations of families and volunteers were carried out, allowing children to continue the normal activities within hospitals, like games and school, also through psychological support and/or that of cultural mediators. Also, the environments were humanized and tailored on the child. The habit of decorating the walls of infirmaries and hospitalization rooms progressively spread and, in neonatology units, the practice of rooming-in began . In the same years, primary care were carried out by family pediatricians in all the areas of the country.
The Pediatric Clinic was born in Palermo in 1903, when Rocco Jemma, a young and brilliant doctor working in Genoa, was called to the chair of Pediatrics at the University of Palermo. The promoter of this initiative was Ignazio Florio, a member of one of the richest and most influential European families of entrepreneurs and patrons of the time. Within a few years a new and efficient facility was built and inaugurated in 1907 . In those years Rocco Jemma founded a true school of pediatricians, who came from all over Sicily. When Jemma moved to Naples in 1913, the most brilliant of his pupils, Professor Giovanni Di Cristina, succeeded him. He continued the work of his master, further expanding the scientific activities of the Clinic and starting new social and care initiatives . He is also credited with the discovery of an effective and decisive treatment for visceral leishmaniasis with antimony salts, subsequently used universally, which made it possible to overcome an infectious disease that until then had commonly been lethal . He was also active in the construction, on land and with funds obtained from donations, of the hospitals Casa del Sole (a tuberculosis sanatorium in a hilly area of the town) and Aiuto Materno (for the hospitalization of children at high social risk). His premature and unexpected death in 1928 left a great void in the pediatrics of the whole island. In 1929 the city dedicated the Children's Hospital to him; he had wanted it closely linked to the Pediatric Clinic, and it thereafter took his name. After Di Cristina, the direction of the Clinic and the Hospital passed to La Franca, and then to Cannata, Maggiore, and Gerbasi. In 1946, the School of Specialization in Pediatrics was established, favoring the recruitment of scientifically able young doctors. Gerbasi was dean of the Faculty of Medicine and rector of the University, and brought prestige to the pediatrics of Palermo, also at a national level. His school was particularly rich in prestigious pupils and personalities, first among them Giuseppe Roberto Burgio (who later became director of the Pediatric Clinic in Perugia, and then in Pavia). Hematology, infectious diseases, and nutrition were his areas of greatest scientific impact. He made decisive contributions on deficiency diseases, such as the definition of the perniciosiform anemia of infants (Gerbasi's anemia), and on dystrophies . The great social caliber of his activities was also evident during the Belice earthquake of 1968, a dramatic time that Gerbasi faced by moving doctors, nurses, and hospital equipment to the affected sites.
The changes that took place over the centuries, described so far, were innumerable and extraordinary. Society, institutions, and the political and economic structures of countries underwent profound transformations. Indeed, the political and health reforms implemented in Italy (and in the other European countries) during the last decades, together with increased economic well-being, allowed the reduction of infant mortality rates, which are currently among the lowest worldwide. Specifically, the infant mortality rate for children < 1 year of age (IMR) was 231‰ in 1865, and had already fallen to 185‰ before the end of the nineteenth century (1895) . This downward trend then became even more marked until the first two decades of the 1900s. Afterward, it had two sudden stops and reversals, corresponding to the two war periods. Moreover, the 1920 IMR (155‰) also included the deaths due to the Spanish flu epidemic . Thereafter, in the interwar period (1930), the rate had halved (119‰) compared with the initial recorded values; it dropped below 50‰ in the 1960s and 20‰ in the 1980s, reaching 3‰ between 2015 and 2020 (in Europe it decreased from 38.2‰ in 1961 to 3.4‰ in 2019) (Fig. ). The reduction of the IMR over time was associated with a change in the causes of death. Their analysis better defines the improvements obtained, showing the progressive disappearance of infectious diseases (from about 65% in 1895 to 2% in 2015) and the emergence of others, which today mainly include congenital malformations and conditions of perinatal origin (69%) . Pediatrics contributed significantly to these formidable results through the development of a culture of children's rights, the acquisition of specific technical knowledge and skills in the context of constant medical and technological progress, and the control of previously endemic (malaria) and/or past and current communicable diseases (syphilis, tuberculosis and, more recently, measles, pertussis and lastly COVID-19) . The pediatrician has had to adapt and reshape himself in the light of the sociocultural changes of society, as well as of the current biological and psychological features of patients, of what infants, children and adolescents are today . Old diseases have disappeared, or their prognosis has significantly improved thanks to ever more effective therapies. New diseases appear, or re-emerge with higher incidence or prevalence, also because of the constant migratory flows from low-income countries to western ones. Diseases that once had a poor prognosis are no longer considered as such (oncohematological and genetic ones), owing to the continuous updating of therapeutic approaches (e.g., transplantation, gene therapies). New intensive care techniques allow extremely preterm newborns to survive, and novel treatments (e.g., hypothermia) reduce adverse outcomes and morbidities. New tools for the identification of genetic diseases (e.g., next-generation sequencing) permit more precise diagnosis, prognosis and counselling for patients and their families. New antibiotics and more rational treatments, in addition to early and multidisciplinary management, improve the quality and length of life of children with complex and chronic diseases .
Despite the overcoming of many diseases, pediatricians must nevertheless increasingly face new critical health issues (new addictions, such as the abuse of technological and digital devices ; the fight against overweight and obesity; the care of the migrant child; new infections, as evidenced also by the devastating novel coronavirus pandemic). The functional and environmental changes of pediatric hospitals, as well as the hygienic and structural ones, must be adapted to the epidemiological shifts in disease and to the new diagnostic and therapeutic approaches to sick children. They must be implemented to keep pace with the times and to guarantee careful and up-to-date clinical care. Furthermore, to respond to the many significant changes of the current digital and hyper-connected society, which have occurred especially in recent years, the pediatrician must be trained and equipped with a broad cultural background, not made up exclusively of clinical and technical knowledge. The complexity of this role, together with new scientific knowledge (e.g., insights into the epigenetic mechanisms underlying disease), has increased the pediatrician's responsibilities and hence the need for holistic competences, which should span bioethics, law, relational and communication issues, and several other fields including pedagogy, bioengineering, sociology, economics, art, sport, politics, technology, music, botany and poetry . Together these compose the cultural background pediatricians need in order to take care of all children effectively and competently. The goal of today's pediatrician is to protect and improve the health of all children, guaranteeing their fundamental rights from conception. The data on social inequalities in our country, as in the whole world, are worrying, especially with reference to the pandemic period. Although neonatal and child mortality in Italy has decreased, notable disparities remain that disadvantage insular and southern regions compared with central-northern ones (linked to cultural, economic and social factors, in addition to organizational problems concerning the perinatal network and the high number of small birthing centers), and foreign citizens compared with Italians . Such inequities are amplified if we look at the European context, and even more at the global one. Indeed, there is significant territorial disparity in access to health care, as well as to education and adequate living conditions. Most of these children live in the southern regions of our country (and of the world), where there is a high risk of social exclusion, with possible adverse long-term consequences. It is the pediatrician's duty to work to guarantee every child the same right to health and education, regardless of the family and region of origin . The gradual reduction in funding for the health sector over the last few decades has placed our national health system (NHS) under profound strain, which became particularly evident during the pandemic: in such a dramatic time, community doctors and pediatricians were literally overwhelmed by an immense care burden. Today's pediatrician must therefore find a new and adequate place within a new structure of the NHS, which must be remodeled and oriented towards more effective care networks.
Furthermore, the collapse in the number of pediatricians, which will worsen further in the coming years, requires the development of a new system able to guarantee pediatric specificity and the right of all subjects of developmental age to be assisted by a pediatrician, with continuity of care between community services and hospital. Currently, owing to the lack of specialists in pediatrics, children are often evaluated in the first instance by physicians for adults, with the inevitable risk of clinical inappropriateness. It therefore seems crucial to reformulate university and specialist training programs, reinforcing the most understaffed areas on the basis of territorial needs . Finally, the pediatrician must sustain the cultural and scientific theme of prevention. The promotion of a healthy lifestyle (primarily breastfeeding), starting before conception and during the first 1000 days, represents the most effective intervention to counteract the development of chronic non-communicable diseases (e.g., obesity, diabetes, cardiovascular diseases), which today are among the main causes of morbidity and mortality also among children. To this end, all health education activities can play a key role. Investing in schools, as well as in the health system and in policies to support families, will likely reduce inequality, educational poverty, social neglect, behavioral disorders, delinquency, and ultimately many of the health problems of the children of today and tomorrow .
Pediatrics arose from the need to protect children, a need that developed in different parts of the world and at different times, and that was not necessarily felt by doctors. Many pediatric hospitals were born thanks to the sensitivity and attention of people animated by authentic philanthropy and altruism. The peculiarity of pediatrics, as a global science immersed in social issues, has defined its uniqueness and specificity since its birth . Although more than 200 years have passed since its foundation, and despite the extraordinary changes in society and the advances obtained in the medical and technological fields (e.g., treatment and control of many diseases, reduction of the IMR), the spirit, values, and goals that the pediatrician pursues in everyday work remain unchanged. The aims of today's pediatrician are, in fact, to care for children with the best possible assistance, to protect their rights within families and in society, to fight inequalities and poverty, and to respect their identity in all its forms, personal, cultural, and religious. The pediatrician must indeed take care of children's health, understood as mental, physical, and social well-being, and must both treat illness and promote such well-being. This responsibility goes beyond the simple doctor-patient relationship. It involves many people, all those who take care of newborns, children, and adolescents. This involvement may also extend to teachers, coaches, and friends, with a domino effect that reaches the entire world around the individual child . The modulation of all these relationships makes clear the social role of pediatricians. Moreover, we know today that lifestyles and quality of life can build well-being through personal choices, and above all that such well-being is also the result of the environment in which we live. Therefore, the pediatrician bears a responsibility with regard to families' decisions and, at the same time, must be ready to act politically and socially to favor the best possible context for promoting the well-being of the child.
Prognostic Factors of Preterm Birth After Selective Laser Umbilical Cord Coagulation for Twin-twin Transfusion Syndrome at Hanoi Obstetrics and Gynecology Hospital | 1fd1139e-abe0-4f3b-83ec-46575ce480e3 | 11813210 | Surgical Procedures, Operative[mh] | BACKGROUND Twin-Twin Transfusion Syndrome (TTTS) is a serious complication affecting 10-15% of monochorionic diamniotic twin pregnancies, characterized by imbalanced blood flow between the fetuses, which can lead to high morbidity and mortality if untreated . Fetoscopic laser photocoagulation has become the standard treatment, offering improved survival rates by interrupting abnormal blood vessel connections. However, this procedure is associated with an increased risk of preterm birth, typically occurring between 29 and 33 weeks, which poses significant challenges for neonatal outcomes . The risk of preterm birth is influenced by multiple factors. Preoperative factors, such as a maternal history of preterm labor, short cervical length, and previous pregnancy complications, are known to affect pregnancy outcomes. Intraoperative factors, including the duration of the procedure, the volume of amniotic fluid removed, and any complications during surgery, can also contribute to early delivery . Postoperative variables, such as separation of the amniotic membranes, premature rupture of membranes (PROM), and changes in cervical length, are also closely associated with an increased risk of preterm birth . OBJECTIVE This study aims to evaluate the prognostic factors for preterm birth in TTTS cases treated with selective laser umbilical cord coagulation. Specifically, it focuses on the influence of gestational age at surgery and cervical length changes, providing insights that may guide clinical decision-making and improve patient outcomes . MATERIAL AND METHODS Study Design and Population This was a prospective, non-controlled intervention study conducted at the Fetal Intervention Center of Hanoi Obstetrics and Gynecology Hospital from September 2019 to November 2020. The study included pregnant women diagnosed with TTTS at Quintero stages III and IV, with gestational ages between 16 and 26 weeks (1, 2). Patients who underwent selective laser umbilical cord coagulation as a treatment for TTTS were included. We excluded pregnancies with more than two fetuses, mothers with severe conditions or contraindications to surgery, fetuses with severe abnormalities, stillbirth, membrane rupture, and ongoing threatened miscarriage or preterm labor. All patients gave written informed consent for the procedure and consented to their clinical data being used for research purposes. This study was approved by the Institutional Review Board (IRB) of Hanoi Medical University (IRB No NCS25/HMU-IRB; date of approval March 27th, 2019). Procedure Routine preoperative ultrasound assessments were performed, including fetal biometry, fetal morphology, amniotic fluid volume, and placental location, followed by color and pulsed Doppler examination. Anesthesia was performed with combined intravenous sedation and local anesthesia (using Lidocaine 1%). The surgeries were performed in a separate, dedicated fetal surgery operating room. We used the following fetoscopic instruments: Karl Storz Image IS 4U (Tuttlingen, Germany), Dornier Medilas D MultiBeam laser instruments and Dornier Medtech Laser light guides.
A 1.2 mm diameter scope was used for fetuses under 20 weeks and a 2 mm diameter scope was used for pregnancies over 20 weeks. The selective laser umbilical cord ablation technique was applied to reduce the fetus with the worse prognosis. Selective laser umbilical cord ablation was indicated in the following cases: TTTS stage III or IV; TTTS with sIUGR; or when fetoscopic laser surgery techniques could not be performed, such as when the two umbilical cords were too close together, with the consent of the mother and family. The amniotic fluid was subsequently drained through the cannula until the maximum vertical pocket of amniotic fluid (MVP) reached the normal range . The hospital provided perioperative management with prophylactic tocolysis and antibiotics. Prenatal follow-up with ultrasound examination took place at 24 hours, 48 hours, and seven days post-operation, and then every two weeks until delivery. Experienced obstetricians and neonatologists performed delivery and neonatal management. Delivery was decided according to obstetrical indications . Outcome Measures Gestational age, cervical length changes, and surgical outcomes were recorded and analyzed. The change in cervical length was calculated as the difference between the cervical length before surgery and that measured 48 hours and 7 days after surgery. Neonatal survival was defined as the infant surviving until hospital discharge. Statistical Analysis Data were collected and managed with REDCap software and analyzed with STATA 16.0 software. The results are expressed as the mean ± standard deviation (SD) (Min – Max) for numerical variables and as the frequency and proportion for categorical variables. Numerical variables were compared using the Mann-Whitney U test, and a p-value < 0.05 was considered statistically significant. Receiver Operating Characteristic (ROC) curve analysis was used to assess the value of a diagnostic method. RESULTS The average gestational age before surgery was 20.30 weeks. When classifying the TTTS stage before surgery, only stage III (85.71%) and stage IV (14.29%) were present, and more than 50% had accompanying sIUGR; this was also the main reason for indicating selective umbilical cord coagulation surgery in the study subjects. The average gestational age at birth was 34.70 ± 4.33 weeks, and pregnancy was maintained for an average of 12.97 weeks after surgery. The rate of premature birth under 34 weeks was 31.57%. We did not record any complications during surgery or maternal complications after surgery. Regarding pregnancy complications, we recorded only 2 stillbirths (6.06%) within 7 days after surgery. After surgery, we noted a decrease in cervical length at 48 hours and at 7 days, but this change did not reach statistical significance (Mann-Whitney test). Two factors predicted premature birth before 34 weeks: gestational age at surgery (AUC 0.7308; sensitivity 50.00%, specificity 92.31%, PPV 75.00%) and the reduction in cervical length 48 hours after surgery (AUC 0.8205; sensitivity 66.67%, specificity 100%, PPV 100%). Surgery at a gestational age of over 22 weeks increased the risk of premature birth under 34 weeks by 4.33 times, and a decrease in cervical length 48 hours after surgery of over 9.5% increased the risk of premature birth before 34 weeks by 8.67 times.
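As a minimal illustration of the ROC-based cutoff analysis described in the Methods and reported above, the following Python sketch computes an AUC, a Youden-optimal threshold, and the corresponding sensitivity and specificity. The group sizes, means, and variable names are synthetic placeholders, not the study data.

```python
# Hypothetical sketch of a ROC/cutoff analysis; the data are simulated, not the study measurements.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated predictor: % reduction in cervical length 48 h after surgery.
# 1 = preterm birth < 34 weeks, 0 = delivery >= 34 weeks.
preterm = np.array([1] * 6 + [0] * 13)
reduction = np.concatenate([
    rng.normal(12.0, 3.0, 6),   # hypothetical preterm group
    rng.normal(6.0, 3.0, 13),   # hypothetical term group
])

# Group comparison with the Mann-Whitney U test, as used for numerical variables.
u_stat, p_value = mannwhitneyu(reduction[preterm == 1], reduction[preterm == 0])

# ROC analysis of the predictor.
auc = roc_auc_score(preterm, reduction)
fpr, tpr, thresholds = roc_curve(preterm, reduction)

# Youden's J selects the cutoff maximizing sensitivity + specificity - 1.
j = tpr - fpr
best = int(np.argmax(j))
cutoff, sens, spec = thresholds[best], tpr[best], 1 - fpr[best]

print(f"Mann-Whitney p = {p_value:.3f}")
print(f"AUC = {auc:.3f}, cutoff = {cutoff:.1f}%, "
      f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
```

A relative-risk or odds-ratio estimate for a binary cutoff (for example, surgery after 22 weeks) could then be obtained from the resulting 2 × 2 table; this sketch only illustrates the workflow, not the reported estimates.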
DISCUSSION The present study aimed to evaluate the prognostic factors influencing the risk of preterm birth following selective laser umbilical cord coagulation in Twin-Twin Transfusion Syndrome (TTTS) cases. Our findings indicate that both gestational age at the time of surgery and postoperative cervical length reduction are significant predictors of preterm delivery, emphasizing the need for precise surgical timing and vigilant postoperative care. Our results show that surgical intervention performed after 22 weeks of gestation significantly increased the risk of preterm birth before 34 weeks, with a 4.33-fold elevated risk (p = 0.025). This finding aligns with existing literature, where delayed intervention has been associated with unfavorable pregnancy outcomes . Malshe et al. (2017) similarly demonstrated that TTTS cases undergoing late gestational intervention had a higher incidence of preterm delivery due to increased uterine irritability, membrane rupture, and inflammatory responses triggered by advanced pregnancy . The underlying mechanisms for this association may involve increased uterine volume, higher amniotic pressure, and more extensive placental vascular connections as pregnancy progresses. These factors can complicate surgical procedures, leading to a greater likelihood of uterine contractions and membrane separation. Consequently, early intervention, preferably before 22 weeks of gestation, may reduce these risks and improve overall outcomes for both the mother and fetuses . Our study further highlights the importance of cervical length changes as a critical prognostic indicator for preterm birth. A reduction in cervical length of ≥9.5% within 48 hours post-surgery was associated with an 8.67-fold increase in the risk of preterm delivery (p = 0.006). This finding supports the role of cervical shortening as an early predictor of labor onset, particularly in high-risk pregnancies . The immediate postoperative period is critical for evaluating cervical integrity, as inflammation and mechanical stress from surgical intervention can exacerbate cervical shortening. Several studies have emphasized the importance of cervical length monitoring as part of routine postoperative care. Ville et al. (2020) suggested that early detection of cervical shortening enables timely interventions, such as cervical cerclage, progesterone supplementation, or the use of pessaries, which can help prolong pregnancy and reduce preterm birth rates . Although the reduction in cervical length observed in our study did not reach statistical significance at 7 days post-surgery, the marked decrease at 48 hours underscores the importance of short-term monitoring. This highlights a potential "window of intervention" where clinical measures could be implemented to mitigate the risk of preterm labor. The neonatal outcomes reported in our study were encouraging, with a survival rate of 90.48%, which is comparable to findings in other studies evaluating laser therapy for TTTS . The mean gestational age at delivery was 34.7 weeks, and the average pregnancy prolongation post-surgery was 12.97 weeks, further demonstrating the efficacy of selective laser umbilical cord ablation in extending pregnancy duration and mitigating extreme prematurity. However, it is important to note that over 50% of neonates had low birth weight, with 5.26% below 1000 grams, reflecting the persistent challenge of optimizing fetal growth in TTTS pregnancies. Studies by Rahimi-Sharbaf et al. (2021) and Mari et al. (2007) similarly reported a high prevalence of low birth weight among TTTS cases, attributing this outcome to placental insufficiency and chronic fetal growth restriction in donor twins .
A particularly notable finding in our study is the absence of intraoperative maternal complications and the low rate of postoperative complications. Only 2 stillbirths (6.06%) were recorded within 7 days post-surgery, which compares favorably to previous reports . This finding underscores the safety of selective laser umbilical cord ablation when performed by experienced operators and highlights its efficacy in reducing TTTS-related mortality. Compared to laser coagulation of vascular anastomoses, which has been associated with higher complication rates, our approach demonstrates the benefits of selective ablation in minimizing risks . Further studies comparing different surgical techniques may provide additional insights into the relative safety and efficacy of these interventions. The findings from this study have several important clinical implications. First, early intervention, ideally before 22 weeks of gestation, should be prioritized to minimize preterm birth risk and improve neonatal outcomes. Second, routine postoperative monitoring of cervical length, particularly within the first 48 hours, should be integrated into clinical protocols. For cases exhibiting significant cervical shortening, additional interventions such as cervical cerclage or progesterone therapy should be considered . While our study provides valuable insights, certain limitations must be acknowledged. The relatively small sample size and single-center design may limit the generalizability of our findings. Additionally, the lack of long-term neurodevelopmental follow-up restricts our ability to evaluate the full impact of selective ablation on neonatal outcomes. Future research should focus on multicenter trials with larger sample sizes to validate these findings and explore strategies to further optimize pregnancy outcomes. CONCLUSION This study identifies gestational age at surgery and postoperative cervical length reduction as critical prognostic factors for preterm birth in TTTS cases treated with selective laser umbilical cord ablation. Early intervention and vigilant postoperative cervical monitoring are essential for improving pregnancy outcomes, reducing the risk of preterm delivery, and enhancing neonatal survival. |
Bringing global hematology research to the forefront | 22e42603-7dde-4afc-bcfa-1abe8c8d300a | 11279253 | Internal Medicine[mh] | |
Bioelectromagnetic Platform for Cell, Tissue, and In Vivo Stimulation | 682e983f-2627-49f0-a4ef-93658ca3ccdf | 8392012 | Physiology[mh] | The rapid growth of research interest in magnetogenetics in the past decade has resulted in a broad range of bioelectromagnetic stimulation applications , creating a demand for sophisticated stimulus delivery systems. Many biological systems can be magnetically stimulated to regulate gene expression or neural activity, and stimulation parameters can vary significantly depending on the mechanisms employed to elicit responses . In contrast to visible light, low-frequency and DC magnetic fields easily penetrate soft tissue and bone, potentially allowing for minimally invasive and wireless stimulation. The high cost of bioelectromagnetic stimulation devices and a lack of systematic analysis of electromagnetic stimulus fields serve as barriers to designing quantitative studies and replicating results in magnetogenetics experiments. The development of magnetically sensitive pathways, such as those using nanoparticles and proteins like the electromagnetic perceptive gene (EPG) , has made magnetic stimulus delivery increasingly important for wide-ranging applications. Furthermore, recent studies showing that humans may also have magnetoperception serve to increase the demand for easy-to-implement and versatile electromagnetic stimulation devices. One solution, which is well suited for stimulating multi-well plates, is the coil array of , which consists of an array of coils positioned to fit underneath each well of a 24-well plate. The coils are then used to deliver a pulsed, time-varying electromagnetic field stimulus with magnetic flux densities in the range of 1.0 to 1.2 mT. Another solution for magnetic stimulus delivery is the use of an induction heater ; however, such systems are limited in their ability to be integrated into a wide variety of experimental protocols. Therefore, it is beneficial to design and build electromagnets that can be more easily incorporated into a wide variety of applications. Custom stimulation coils demonstrate improved integration into microscopy applications , and we aim to build on this flexibility and emphasize detailed stimulation validation. Furthermore, repeatability, uniformity, a negative control condition, and ease of use are critical properties of interest in magnetic stimulus systems. Here, we present work conducted toward developing a magnetogenetics bioelectromagnet stimulation platform that is low cost, versatile, easy to use, and affords a high degree of control over stimulation parameters. The electromagnet designs presented in this paper are applicable to electrophysiology, microscopy, fluorescence and luminescence imaging, and stimulation of freely behaving animals. We developed four designs, each tailored to accommodate application-specific physical constraints while maintaining field uniformity in the target area. Double wrapping the coils, as described in Kirschvink et al. , allows experiments to include a negative control. Additionally, a low-cost stimulation controller along with an accompanying graphical interface provides a user-friendly way to switch between active and sham stimulation conditions and reproduce specific stimulus parameters.
2.1. Applications The primary uses of the electromagnet systems we designed are microscopy, luminescence and fluorescence imaging, in vivo electrophysiology, and freely behaving animal experiments. For each application, our goal was to design an electromagnet that can deliver the desired magnetic flux given various constraints, including power consumption, coil and sample temperature, and coil size. Evidence suggests that applying magnetic flux densities >50 mT has been successful at eliciting responses in magnetoreceptive targets. Thus, this paper presents a system that provides the ability to conduct a parametric study of potential stimulation parameters and investigate the response thresholds of these parameters. 2.1.1. Microscopy Microscopy applications tend to impose strict size constraints on electromagnets. As also seen in Pashut et al. , care must be taken to ensure that the electromagnet does not interfere with the objectives or condenser of a microscope. For fluorescence microscopy, calcium imaging, voltage imaging, and patch clamping, a single coil was designed specifically to fit around a circular 35 mm diameter glass bottomed cell culture dish. During imaging, the electromagnet is either placed within a microscope compatible auxiliary incubation chamber or mounted underneath the stage directly below the sample, depending on the microscope in use. The incubation chamber restricts the maximum width and height of the coil holder to 85 mm and 15 mm, respectively. An assembled coil placed around a 35 mm culture dish is illustrated schematically in a, where only half of the coil is shown for clarity. An advantage of this design, as will be shown in , is that the field in the target region at the center of the plate is relatively uniform, thereby providing the ability to reliably deliver consistent stimulus between experiments. 2.1.2. Luminescence and Fluorescence Imaging This application consists of measuring responses to magnetic stimuli in many cell preparations at the same time. A multi-well plate is used to test the effects of cell type, media preparation, control conditions, or genetic variants of a protein all within the same trial. This enables higher throughput for screening experiments, opening the door for mutagenesis studies aimed at improving stimulation responses. The application requires consistent stimulus delivery in each trial. To facilitate higher throughput screening, a three-coil electromagnet was designed based on the Merritt Coils outlined in and used for stimulation of multi-well plates within a PerkinElmer In Vivo Imaging System (IVIS). Using a multi-coil design aids in producing a uniform magnetic field within a volume along the central axis of the coil. b shows an illustration of half of the three coils with a 96-well plate placed at the central plane. 2.1.3. In Vivo Electrophysiology The potential neuromodulation effects of magnetic stimulation on rodents expressing the EPG protein or any other magnetoreceptive gene are considered in this application. An electromagnet design consisting of a coil wound around a ferromagnetic core is proposed. This coil was attached to an adjustable arm and positioned next to the head of a rat while recording neural signals. This design is particularly useful when the application allows for fewer restrictions on the physical placement of the coil.
A schematic of the experimental setup for this application is presented in c, where the electromagnet is positioned between electrodes placed in the brain of an anesthetized, head-fixed rat. Typically, there is much more space to place the equipment and adjust it for proper alignment with the target in electrophysiology than in applications such as microscopy, making this a versatile solution. 2.1.4. Freely Moving Animal In addition to the aforementioned methods of investigating magnetosensitive pathways, it is also of great benefit to be able to study the effects of stimulation on the behavior of freely moving animals. Such studies could be performed in an operant conditioning box and designed to monitor reward seeking behavior, anxiety, stress, etc. This application, however, presents a challenge for stimulus delivery. In the case of rodents, cages used for behavioral studies can vary from 200 to 500 mm in length and width, and can be as tall as 300 mm. While Merritt Coils can deliver a uniform stimulus to a given volume, the power required to deliver uniform fields over a large volume can exceed the capabilities of practical systems. Delivering a stimulus of up to ∼50 mT within a uniform field in a region the size of a rodent cage using Merritt Coils is therefore impractical. Alternatively, a stimulus can be delivered locally using a fixed stimulation device attached to the subject. The attachment may consist of a head mounted fixture or a wearable jacket and would allow the animal to freely move about. Stimulus can then be applied to the localized area when desired. For this application, an electromagnet built into a pot core provides a good solution. Pot cores consist of a central rod around which a coil is wrapped, with an additional metal shield surrounding the coil. They are typically made of ferrite and are highly permeable to magnetic fields, and therefore serve to increase magnetic flux density and focus the magnetic fields within the core region. A depiction of such a device is shown in d, where it is attached to a custom head mounting fixture. 2.2. Numerical Modeling Before fabricating and assembling the electromagnets and associated components, each design was numerically modeled and simulated using finite element analysis (FEA) to assess the performance of the design. Simulations were implemented using COMSOL Multiphysics, and a standard linear solver was used to solve the electromagnetic field equations governing the underlying physics. All simulations were performed with a constant excitation current of 15 Amperes to solve for the distribution of the magnetic flux density, B . The simulations are posed as magnetostatics problems, i.e., the electric field at the target location remained unaffected by the coil operation during stimulation. Visualization of the magnitude of B , | B |, served to both aid in optimization of the geometric parameters of the designs and provide a benchmark for experimental coil comparisons. A summary of the parameters of the coils used in simulation is provided in , including the size of both the wire and device as well as the number of turns used. Additionally, values for the same parameters of the experimental devices are also listed, which are discussed in . In the case where the physically realized number of turns was less than the initial estimate, the numerical models were updated to more accurately reflect the physical coils and facilitate more realistic comparison of coil simulations to experimental measurements.
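As a rough, order-of-magnitude cross-check of the FEA results reported in the following subsections, the on-axis flux density of an air-core coil can be approximated by superposing ideal circular current loops distributed over the winding cross-section. The short Python sketch below applies this filament approximation to the single-coil geometry (264 turns, 15 A, 45-85 mm inner/outer diameter, 11.5 mm height). It ignores wire packing and the exact target plane, and it is an independent sanity check rather than part of the published analysis.

```python
# Filament-loop estimate of the on-axis field of the air-core coil.
# This is a sanity check only; the paper's values come from FEA (COMSOL).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_bz(radius_m, z_m, current_a):
    """On-axis flux density of a single circular loop at axial distance z."""
    return MU0 * current_a * radius_m**2 / (2.0 * (radius_m**2 + z_m**2) ** 1.5)

def coil_bz_on_axis(n_turns, current_a, r_inner, r_outer, height, z_target=0.0,
                    n_radial=20, n_axial=5):
    """Distribute the turns uniformly over the winding cross-section and sum."""
    radii = np.linspace(r_inner, r_outer, n_radial)
    z_positions = np.linspace(-height / 2, height / 2, n_axial)
    turns_per_filament = n_turns / (n_radial * n_axial)
    bz = 0.0
    for r in radii:
        for z0 in z_positions:
            bz += turns_per_filament * loop_bz(r, z_target - z0, current_a)
    return bz

# Simulated air-core coil: 264 turns, 15 A, 45/85 mm inner/outer diameter, 11.5 mm tall.
b_center = coil_bz_on_axis(264, 15.0, 0.0225, 0.0425, 0.0115)
print(f"Estimated |B| at the coil center: {b_center * 1e3:.0f} mT")  # roughly 70-80 mT
```

The filament estimate lands in the same range as the simulated values, which is about the level of agreement one should expect from such a simplified model.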
2.2.1. Air Core Model A single coil geometry was modeled to fully utilize the space available within the imaging incubation chamber for a Keyence BZ-X800E microscope. A coil with 264 turns, a height of 11.5 mm, an outer diameter of 85.0 mm, and an inner diameter of 45.0 mm was considered. a shows a schematic of the simulated coil and b shows the simulation results of the | B | in the YZ plane. c shows the simulated line scans along the lines depicted in b, showing that stimulations greater than 50 mT are expected. 2.2.2. Three-Coil System Model A three-coil geometry was modeled to fit inside of an IVIS having an imaging chamber of dimensions 430 × 380 × 430 mm. Due to a 100 V supply voltage constraint and a 15 Ampere supply current constraint, the total device resistance was constrained to 6.66 Ω . The coil was modeled with 12 gauge magnet wire, selected due to its low resistivity. The maximum length of the wire was determined based on the maximum resistance and the wire resistivity. d shows a representation of the dimensions of the three-coil system. In addition, 150 mm was selected for the inner square side length since the multi-well plates are 130 mm wide. The total height of the system was chosen to be 123.17 mm, consistent with the coil spacing to side length ratio presented in for a three-coil Merritt Coil system, h/d = 0.821116 (1), where d is the length of each side of the coils and h is the height. The height and outer width of each coil were selected to be 50 mm and 227.47 mm, respectively, resulting in a simulated coil thickness of 38.74 mm and 276 turns per coil. A side view of the central XZ plane is shown in e, having a | B | of about 45 mT in the center. f shows the central XY plane, which clearly demonstrates the uniformity of the field in a central circular region of about 80 mm in diameter. In contrast to the coils shown in , which use an ampere-turn ratio of 0.512 for the center coil relative to the top and bottom coils, the system modeled utilizes coils of equal ampere-turn ratios, which adds more turns to the system. A central coil with an ampere-turn ratio of 20:39 can be substituted should a volume of uniformity be required instead of a plane.
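The wire length budget mentioned above follows from simple arithmetic, reproduced in the hedged sketch below using a standard handbook resistance for 12 AWG copper (about 5.21 mΩ/m at 20 °C). The mean turn length and the per-coil split are rough assumptions made for illustration, not values taken from the paper.

```python
# Rough wire budget for the three-coil system from the supply constraints.
# 0.00521 ohm/m is a standard handbook value for 12 AWG copper at 20 C (assumed).
R_MAX = 100.0 / 15.0               # ohms allowed by the 100 V, 15 A supply
R_PER_M_12AWG = 0.00521            # ohms per meter of 12 AWG copper

max_wire_length_m = R_MAX / R_PER_M_12AWG      # total length budget
mean_side_m = (0.150 + 0.230) / 2              # rough mean side of a square turn (assumed)
turn_length_m = 4 * mean_side_m                # perimeter of one turn
max_turns_total = max_wire_length_m / turn_length_m

print(f"Wire length budget: ~{max_wire_length_m:.0f} m")
print(f"Single-strand turn budget: ~{max_turns_total:.0f} turns "
      f"(~{max_turns_total / 3:.0f} per coil)")
```

The single-strand budget works out to roughly twice the 276 turns per coil used in the design; if the double-wrapped pair is driven in series, the wire length doubles, which would bring the realized design close to the 6.66 Ω limit. This is a plausibility check only, not the authors' sizing calculation.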
2.2.3. Ferromagnetic Core Model A ferromagnetic core of diameter 10.47 mm, length 150 mm, and relative magnetic permeability of 100,000 was modeled. The relative magnetic permeability value was modeled based on the property value reported in the material datasheet for the mu-metal rod described in . The tips at each end are tapered to a point, as seen in g. The coil, with an outer diameter of 31.97 mm, was wrapped along 30 mm of the length of the core. The coil was initially simulated with 372 turns; however, it was updated to have 308 turns to reflect the number of turns in the assembled coil. h shows that the field exceeds 600 mT at the very tip of the core. This flux then decays quickly along the coil axis, dropping to about 75 mT at a distance of 10 mm from the tip of the core, as seen in i. 2.2.4. Pot Core Model Lastly, the pot core configuration for freely moving animals was simulated. For a pot core electromagnet to be head mounted on a rat, it must be small and light enough to allow maneuverability. A pot core geometry with an outer diameter of 30 mm and a height of 9.45 mm was modeled along with the coil. A 28-turn coil was modeled to fit in the coil channel, which has a depth and width of 6.5 mm and 6.05 mm, respectively. The core was modeled as ferrite with a relative magnetic permeability of 10,000, based on the property value reported in the datasheet for the pot core described in . j shows the simulated pot core. Since the device will be mounted on the moving animal, which is the target of stimulation, the magnetic field of interest is along the coil axis on the unshielded side of the pot core. The field distribution predicted by the numerical model is displayed in k. While k shows that there are strong fringing fields in close proximity to the coil, these fields decay quickly to yield uniform fields a few mm away. In practice, the coil holder and mounting device will cause the source-to-target distance to be a few mm. l shows the magnetic field strength of the pot core at 10 mm from the device, indicating that a strength of ∼15.5 mT at the target is achievable. Even though the simulated target value falls below 50 mT, reaching such a stimulation strength in a compact device is thought to be sufficient for our experimental needs. 2.3. Magnet Assembly Implementation 2.3.1. Double Wrapping Coils All coils pictured in are double wrapped, as demonstrated in , to allow for a negative control. Wrapping a coil with two adjacent wires allows a user to reverse the direction of current in one wire relative to the other. This has the effect of canceling out the magnetic fields generated by the two opposing currents, thereby resulting in a net zero field. The stimulus can then be operated in either active or sham mode, which can help to provide a control for motion caused by the changing magnetic flux or temperature increase due to ohmic heating in the coils. In practice, it is not possible to fully cancel out the magnetic field in the sham mode because this would require perfectly aligned wires with negligible width. Regardless, empirical measurements of the | B | of the sham conditions discussed herein are typically at least an order of magnitude smaller than the flux density delivered in the active mode. Two types of magnet wire were used in the assembly of the coils: either the 20 gauge copper magnet wire MW0167 offered by TEMCo Industrial or the 12 gauge 12 HAPT-200 offered by MWS Wire Industries. 2.3.2. Air Core A coil holder was 3D printed from the high temperature plastic Digital ABS, heat treated, and then wrapped with 20 gauge magnet wire such that the channel was evenly filled, resulting in 280 total turns. a shows the assembled air core coil placed over the top of a 35 mm culture dish. The assembled coil dimensions align closely with the simulated values that can be seen in . 2.3.3. Three-Coil System Three square coils were constructed to the same specifications used in the simulation model using 12 gauge magnet wire. The system is shown in b with all three coils together. Several methods were used in coil construction, with the initial method based on 3D printed parts. Due to the weight and size of the coils, the 3D printed parts were suboptimal in terms of strength and rigidity. Subsequent coil holders were assembled from acrylic components with metal hardware. While the top coil visible in b is assembled with steel hardware, the central coil uses non-ferromagnetic brass hardware as described in . The dimensions of the assembled coils of the three-coil system, as seen in , match the simulated values closely. Some variation is present, as the size of the coils makes them difficult to assemble and measure.
Flexibility of the structural materials causes some bending in the coil holders due to the pressure applied by the tightly wound coil, which served to increase the overall height of the system by about 10 mm. In addition, improvements in the coil winding process allowed the final coil to be wound more tightly, thereby reducing the outer side length slightly. For these reasons, the use of an approximate outer side length of 230 mm is appropriate. 2.3.4. Ferromagnetic Core A 150 mm section of 0.412 ″ EFI 50 round bar mu-metal from Ed Fagan was machined such that each tip came to a point as shown in g. Two half bobbins were used when wrapping the coil around the ferromagnetic core so that the tightly wound wires would apply pressure on the half bobbins to maintain their position on the core. The total outer diameter of the coil is about 30.7 mm, with an inner coil diameter equal to 11.72 mm, accounting for the core diameter and 3D printed bobbin with Z-PETG. The metal core has a maximum relative magnetic permeability of 100,000 for DC operation, and 308 turns of 20 gauge magnet wire were used in the construction. d shows the fully assembled ferromagnetic core electromagnet. 2.3.5. Pot Core For the pot core coil design, the ferrite pot core 0W43019UG from Magnetics Inc. was selected due to both its size and magnetic properties. The ferrite material selected has the highest relative permeability of pot cores produced by Magnetics Inc., measuring at 10,000 ± 30%. At 30 mm wide and 9.5 mm thick, this pot core is compact enough to be head mounted and large enough to fit a multi-turn 20 gauge wire coil. To get the maximum number of windings into the pot core channel, the coil is tightly wound around a 3D printed bobbin printed with Zortrax Z-PETG material. Once wrapped, the bobbin is inserted into the pot core. A total of 28 turns of 20 gauge wire were fit into the channel. c shows the assembled pot core device with the plastic bobbin inserted. Attaching the pot core to the freely moving animal is also important for this application. To achieve secure attachment of the device while minimizing the distance between the coil and the stimulation target, a custom coil holder was designed to mount the pot core using the hole along the coil axis. This mount is itself attached to a permanent head mounted fixture that can be secured to the head of the animal. Alternatively, the device could also be attached to a wearable jacket to facilitate stimulation in other regions of an animal. 2.3.6. Stimulation Controller One of the goals of this work was to make the electromagnetic stimulation platform versatile and easy to use. Toward this goal, a stimulation controller was developed which allows the user to specify the stimulation protocol and switch between the active and sham conditions. Using automated stimulation protocols is highly advantageous as it increases the stimulation consistency between experiments. A Python application was developed to set the stimulation protocol and allow the user to control the stimulation delivered by a custom hardware device shown in e, enabling selection of the direction of current through the double wrapped coils from within the graphical interface. Additionally, stimulations can be triggered by an external signal, and an auxiliary stimulation signal can be connected to other devices.
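The controller's actual software interface is not documented in detail here, so the following Python sketch only illustrates the kind of protocol abstraction such an application might use: a protocol object defining pulse timing and an active/sham flag, driven through a placeholder hardware interface. All class and method names (CoilDriver, set_polarity, set_output) are hypothetical and do not correspond to the real device API.

```python
# Hypothetical sketch of a stimulation protocol runner for a double-wrapped coil.
# CoilDriver and its methods are placeholders, not the actual device API.
import time
from dataclasses import dataclass

class CoilDriver:
    """Placeholder for the custom stimulation hardware."""
    def set_polarity(self, sham: bool) -> None:
        # In sham mode the current in one winding is reversed so the
        # fields of the two windings cancel (double-wrapped coil).
        print("polarity:", "sham" if sham else "active")

    def set_output(self, on: bool) -> None:
        print("output:", "on" if on else "off")

@dataclass
class StimulationProtocol:
    pulse_s: float        # duration of each stimulation pulse
    interval_s: float     # pause between pulses
    repeats: int          # number of pulses
    sham: bool = False    # True = negative control (fields cancel)

def run_protocol(driver: CoilDriver, protocol: StimulationProtocol) -> None:
    driver.set_polarity(protocol.sham)
    for _ in range(protocol.repeats):
        driver.set_output(True)
        time.sleep(protocol.pulse_s)
        driver.set_output(False)
        time.sleep(protocol.interval_s)

if __name__ == "__main__":
    # Example: ten 5 s pulses with 30 s rest, first active, then a sham control run.
    driver = CoilDriver()
    run_protocol(driver, StimulationProtocol(pulse_s=5, interval_s=30, repeats=10))
    run_protocol(driver, StimulationProtocol(pulse_s=5, interval_s=30, repeats=10, sham=True))
```

Defining protocols as data in this way mirrors the stated goal of reproducing specific stimulus parameters across experiments and of switching between active and sham conditions from the graphical interface.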
The primary use of the electromagnet systems we designed are microscopy, luminescence, and fluorescence imaging, in vivo electrophysiology, and freely behaving animal experiments. For each application, our goal was to design an electromagnet that can deliver the desired magnetic flux given various constraints including power consumption, coil and sample temperature, and coil size. Evidence suggests that applying magnetic flux densities >50 mT was successful at eliciting responses in magnetoreceptive targets. Thus, this paper presents a system that provides the ability to conduct a parametric study of potential stimulation parameters and investigate the response thresholds of these parameters. 2.1.1. Microscopy Microscopy applications tend to impose strict size constraints on electromagnets. As also seen in Pashut et al. , care must be taken to ensure that the electromagnet does not interfere with the objectives or condenser of a microscope. For fluorescence microscopy, calcium imaging, voltage imaging, and patch clamping, a single coil was designed specifically to fit around a circular 35 mm diameter glass bottomed cell culture dish. During imaging, the electromagnet is either placed within a microscope compatible auxiliary incubation chamber or mounted underneath the stage directly below the sample, depending on the microscope in use. The incubation chamber restricts the maximum width and height of the coil holder to 85 mm and 15 mm, respectively. An assembled coil placed around a 35 mm culture dish is illustrated schematically in a, where only half of the coil is shown for clarity. An advantage of this design, as will be shown in , is that the field in the target region at the center of the plate is relatively uniform, thereby providing the ability to reliably deliver consistent stimulus between experiments. 2.1.2. Luminescence and Fluorescence Imaging This application consists of measuring responses to magnetic stimuli in many cell preparations at the same time. A multi-well plate is used to test the effects of cell type, media preparation, control conditions, or genetic variants of a protein all within the same trial. This enables higher throughput for screening experiments, opening the door for mutagenesis studies aimed at improving stimulation responses. The application requires consistent stimulus delivery in each trial. To facilitate higher throughput screening, a three-coil electromagnet was designed based on the Merritt Coils outlined in and used for stimulation of multi-well plates within a PerkinElmer In Vivo Imaging System (IVIS). Using a multi-coil design aids in producing a uniform magnetic field within a volume along the central axis of the coil. b shows an illustration of half of the three-coils with a 96-well plate placed at the central plane. 2.1.3. In Vivo Electrophysiology The potential neuromodulation effects of magnetic stimulation on rodents expressing the EPG protein or any other magnetoreceptive gene is considered in this application. An electromagnet design consisting of a coil wound around a ferromagnetic core is proposed. This coil was attached to an adjustable arm and positioned next to the head of a rat while recording neural signals. This design is particularly useful when the application allows for less restrictions on the physical placement of the coil. A schematic of the experimental setup for this application is presented in c, where the electromagnet is positioned between electrodes placed in the brain of an anesthetized, head-fixed rat. 
Typically, there is much more space to place the equipment and adjust it for proper alignment with the target in electrophysiology than in applications such as microscopy, making this a versatile solution. 2.1.4. Freely Moving Animal In addition to the aforementioned methods of investigating magnetosensitive pathways, it is also of great benefit to be able to study the effects of stimulation on the behavior of freely moving animals. Such studies could be performed in an operant conditioning box and designed to monitor reward seeking behavior, anxiety, stress, etc. This application, however, presents a challenge for stimulus delivery. In the case of rodents, cages used for behavioral studies can vary from 200–500 mm in length and width, and could be as tall as 300 mm. While Merritt Coils can deliver uniform stimulus to a given volume, for delivering uniform fields to a large volume, the required power can exceed the capabilities of practical systems. Delivering a stimulus of up to ∼50 mT within a uniform field in a region the size of a rodent cage using Merritt Coils is therefore impractical. Alternatively, a stimulus can be delivered locally using a fixed stimulation device attached to the subject. The attachment may consist of a head mounted fixture or a wearable jacket and would allow the animal to freely move about. Stimulus can then be applied to the localized area when desired. For this application, an electromagnet built into a pot core provides a good solution. Pot cores consist of a central rod around which a coil is wrapped with an additional metal shield which surrounds the coil. They are typically made of ferrite and are highly permeable to magnetic fields and therefore serve to increase magnetic flux density and focus the magnetic fields within the core region. A depiction of such a device is shown in d, where it is attached to a custom head mounting fixture.
Microscopy applications tend to impose strict size constraints on electromagnets. As also seen in Pashut et al. , care must be taken to ensure that the electromagnet does not interfere with the objectives or condenser of a microscope. For fluorescence microscopy, calcium imaging, voltage imaging, and patch clamping, a single coil was designed specifically to fit around a circular 35 mm diameter glass bottomed cell culture dish. During imaging, the electromagnet is either placed within a microscope compatible auxiliary incubation chamber or mounted underneath the stage directly below the sample, depending on the microscope in use. The incubation chamber restricts the maximum width and height of the coil holder to 85 mm and 15 mm, respectively. An assembled coil placed around a 35 mm culture dish is illustrated schematically in a, where only half of the coil is shown for clarity. An advantage of this design, as will be shown in , is that the field in the target region at the center of the plate is relatively uniform, thereby providing the ability to reliably deliver consistent stimulus between experiments.
This application consists of measuring responses to magnetic stimuli in many cell preparations at the same time. A multi-well plate is used to test the effects of cell type, media preparation, control conditions, or genetic variants of a protein all within the same trial. This enables higher throughput for screening experiments, opening the door for mutagenesis studies aimed at improving stimulation responses. The application requires consistent stimulus delivery in each trial. To facilitate higher throughput screening, a three-coil electromagnet was designed based on the Merritt Coils outlined in and used for stimulation of multi-well plates within a PerkinElmer In Vivo Imaging System (IVIS). Using a multi-coil design aids in producing a uniform magnetic field within a volume along the central axis of the coil. b shows an illustration of half of the three-coils with a 96-well plate placed at the central plane.
The potential neuromodulation effects of magnetic stimulation on rodents expressing the EPG protein or any other magnetoreceptive gene is considered in this application. An electromagnet design consisting of a coil wound around a ferromagnetic core is proposed. This coil was attached to an adjustable arm and positioned next to the head of a rat while recording neural signals. This design is particularly useful when the application allows for less restrictions on the physical placement of the coil. A schematic of the experimental setup for this application is presented in c, where the electromagnet is positioned between electrodes placed in the brain of an anesthetized, head-fixed rat. Typically, there is much more space to place the equipment and adjust it for proper alignment with the target in electrophysiology than in applications such as microscopy, making this a versatile solution.
In addition to the aforementioned methods of investigating magnetosensitive pathways, it is also of great benefit to be able to study the effects of stimulation on the behavior of freely moving animals. Such studies could be performed in an operant conditioning box and designed to monitor reward seeking behavior, anxiety, stress, etc. This application, however, presents a challenge for stimulus delivery. In the case of rodents, cages used for behavioral studies can vary from 200–500 mm in length and width, and could be as tall as 300 mm. While Merritt Coils can deliver uniform stimulus to a given volume, for delivering uniform fields to a large volume, the required power can exceed the capabilities of practical systems. Delivering a stimulus of up to ∼50 mT within a uniform field in a region the size of a rodent cage using Merritt Coils is therefore impractical. Alternatively, a stimulus can be delivered locally using a fixed stimulation device attached to the subject. The attachment may consist of a head mounted fixture or a wearable jacket and would allow the animal to freely move about. Stimulus can then be applied to the localized area when desired. For this application, an electromagnet built into a pot core provides a good solution. Pot cores consist of a central rod around which a coil is wrapped with an additional metal shield which surrounds the coil. They are typically made of ferrite and are highly permeable to magnetic fields and therefore serve to increase magnetic flux density and focus the magnetic fields within the core region. A depiction of such a device is shown in d, where it is attached to a custom head mounting fixture.
Before fabricating and assembling the electromagnets and associated components, each design was numerically modeled and simulated using finite element analysis (FEA) to assess the performance of the design. Simulations were implemented using COMSOL Multiphysics and a standard linear solver was used to solve the electromagnetic field equations governing the underlying physics. All simulations were performed with a constant excitation current of 15 Amperes to solve for the distribution of the magnetic flux density, B . The simulations are posed as magnetostatics problems, i.e., the electric field at the target location remained unaffected by the coil operation during stimulation. Visualization of the magnitude of B , | B |, served to both aid in optimization of the geometric parameters of the designs and provide a benchmark for experimental coil comparisons. A summary of the parameters of the coils used in simulation is provided in , including the size of both the wire and device as well as the number of turns used. Additionally, values for the same parameters of the experimental devices are also listed, which are discussed in . In the case where the physically realized number of turns were less than initial estimations, the numerical models were updated to more accurately reflect the physical coils and facilitate more realistic comparison of coil simulations to experimental measurements. 2.2.1. Air Core Model A single coil geometry was modeled to fully utilize the space available within the imaging incubation chamber for a Keyence BZ-X800E microscope. A coil with 264 turns, height of 11.5 mm, outer diameter of 85.0 mm, and inner diameter of 45.0 mm was considered. a shows a schematic of the simulated coil and b shows the simulation results of the | B | in the YZ plane. c shows the simulated line scans along the lines depicted in b, showing that stimulations greater than 50 mT are expected. 2.2.2. Three-Coil System Model A three-coil geometry was modeled to fit inside of an IVIS having an imaging chamber of dimensions 430 × 380 × 430 mm. Due to a 100 V supply voltage constraint and 15 Ampere supply current constraint, the total device resistance was constrained to 6.66 Ω . The coil was modeled with 12 gauge magnet wire, selected due to its low resistivity. The maximum length of the wire was determined based on the maximum resistance and wire resistivity. d shows a representation of the dimensions of the three-coil system. In addition, 150 mm was selected for the inner square side length since the multi-well plates are 130 mm wide. The total height of the system was chosen to be 123.17 mm, consistent with the ratio of side length to coil spacing presented in for a three-coil Merritt Coil system, (1) h d = 0.821116 , where d is the length of each side of the coils and h is the height. The height and outer width of each coil was selected to be 50 mm and 227.47 mm, respectively, resulting in a simulated coil thickness of 38.74 mm and 276 turns per coil. A side view of the central XZ plane is shown in e having a | B | of about 45 mT in the center. f shows the central XY plane, which clearly demonstrates the uniformity of the field in a central circular region of about 80 mm in diameter. In contrast to coils shown in which use an ampere turn ratio of 0.512 for the center coil relative to the top and bottom coils, the system modeled utilizes coils of equal ampere turn ratios which adds more turns to the system. 
2.2.3. Ferromagnetic Core Model

A ferromagnetic core of diameter 10.47 mm, length 150 mm, and relative magnetic permeability of 100,000 was modeled. The permeability was based on the property value reported in the material datasheet for the mu-metal rod described in . The tips at each end are tapered to a point, as seen in g. The coil, with an outer diameter of 31.97 mm, was wrapped along 30 mm of the length of the core. The coil was initially simulated with 372 turns; however, it was updated to 308 turns to reflect the number of turns in the assembled coil. h shows that the field exceeds 600 mT at the very tip of the core. This flux then decays quickly along the coil axis, dropping to about 75 mT at a distance of 10 mm from the tip of the core, as seen in i.

2.2.4. Pot Core Model

Lastly, the pot core configuration for freely moving animals was simulated. For a pot core electromagnet to be head mounted on a rat, it must be small and light enough to allow maneuverability. A pot core geometry with an outer diameter of 30 mm and a height of 9.45 mm was modeled along with the coil. A 28-turn coil was modeled to fit in the coil channel, which has a depth and width of 6.5 mm and 6.05 mm, respectively. The core was simulated as ferrite with a relative magnetic permeability of 10,000, based on the property value reported in the datasheet for the pot core described in . j shows the simulated pot core. Since the device will be mounted on the moving animal, which is the target of stimulation, the magnetic field of interest is along the coil axis on the unshielded side of the pot core. The field distribution predicted by the numerical model is displayed in k. While k shows that there are strong fringing fields in close proximity to the coil, these fields decay quickly to yield uniform fields a few mm away. In practice, the coil holder and mounting device will set the source-to-target distance to a few mm. l shows the magnetic field strength of the pot core at 10 mm from the device, indicating that a strength of ∼15.5 mT at the target is achievable. Even though the simulated target value falls below 50 mT, reaching such a stimulation strength in a compact device is thought to be sufficient for our experimental needs.
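As a rough intuition for why the source-to-target distance matters so much for the compact designs, the on-axis field of a simple air-core current loop can be evaluated analytically. The sketch below uses approximate pot-core-like dimensions (the 12 mm effective winding radius is an assumption) and ignores the ferrite core and shield entirely, so the absolute values will differ from the FEA results; it is only meant to illustrate the steep falloff of field strength with distance.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def on_axis_field(z_m, radius_m, turns, current_a):
    """|B| on the axis of an ideal air-core circular coil, a distance z from its plane."""
    return (MU0 * turns * current_a * radius_m**2
            / (2.0 * (radius_m**2 + z_m**2) ** 1.5))

# Illustrative numbers loosely based on the pot core coil: 28 turns, 15 A,
# ~12 mm effective winding radius (assumed for illustration only).
radius, turns, current = 0.012, 28, 15.0
for z_mm in (0, 2, 5, 10, 20):
    b_mt = on_axis_field(z_mm * 1e-3, radius, turns, current) * 1e3
    print(f"z = {z_mm:2d} mm -> |B| ~ {b_mt:5.1f} mT")
```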
2.3.1. Double Wrapping Coils

All coils pictured in are double wrapped, as demonstrated in , to allow for a negative control. Wrapping a coil with two adjacent wires allows a user to reverse the direction of current in one wire relative to the other. This has the effect of canceling out the magnetic fields generated by the two opposing currents, thereby resulting in a net zero field. The stimulus can then be operated in either active or sham mode, which can help to provide a control for motion caused by the changing magnetic flux or for temperature increases due to ohmic heating in the coils. In practice, it is not possible to fully cancel out the magnetic field in the sham mode, because this would require perfectly aligned wires with negligible width. Regardless, empirical measurements of the |B| of the sham conditions discussed herein are typically at least an order of magnitude smaller than the flux density delivered in the active mode. Two types of magnet wire were used in the assembly of the coils: either the 20 gauge copper magnet wire MW0167 offered by TEMCo Industrial or the 12 gauge 12 HAPT-200 offered by MWS Wire Industries.

2.3.2. Air Core

A coil holder was 3D printed from the high-temperature plastic Digital ABS, heat treated, and then wrapped with 20 gauge magnet wire such that the channel was evenly filled, resulting in 280 total turns. a shows the assembled air core coil placed over the top of a 35 mm culture dish. The assembled coil dimensions align closely with the simulated values, as can be seen in .

2.3.3. Three-Coil System

Three square coils were constructed to the same specifications used in the simulation model using 12 gauge magnet wire. The system is shown in b with all three coils together. Several methods were used in coil construction, with the initial method based on 3D printed parts. Due to the weight and size of the coils, the 3D printed parts were suboptimal in terms of strength and rigidity. Subsequent coil holders were assembled from acrylic components with metal hardware. While the top coil visible in b is assembled with steel hardware, the central coil uses non-ferromagnetic brass hardware, as described in . The dimensions of the assembled coils of the three-coil system, as seen in , match closely to the simulated values. Some variation is present, as the size of the coils makes them difficult to assemble and measure. Flexibility of the structural materials causes some bending in the coil holders due to the pressure applied by the tightly wound coil, which increased the overall height of the system by about 10 mm. In addition, improvements in the coil winding process allowed the final coil to be wound more tightly, thereby reducing the outer side length slightly. For these reasons, the use of an approximate outer side length of 230 mm is appropriate.

2.3.4. Ferromagnetic Core

A 150 mm section of 0.412″ EFI 50 round bar mu-metal from Ed Fagan was machined such that each tip came to a point, as shown in g. Two half bobbins were used when wrapping the coil around the ferromagnetic core so that the tightly wound wires would apply pressure on the half bobbins to maintain their position on the core. The total outer diameter of the coil is about 30.7 mm, with an inner coil diameter of 11.72 mm, accounting for the core diameter and the 3D-printed Z-PETG bobbin. The metal core has a maximum relative magnetic permeability of 100,000 for DC operation, and 308 turns of 20 gauge magnet wire were used in the construction. d shows the fully assembled ferromagnetic core electromagnet.
2.3.5. Pot Core

For the pot core coil design, the ferrite pot core 0W43019UG from Magnetics Inc. was selected due to both its size and magnetic properties. The ferrite material selected has the highest relative permeability of pot cores produced by Magnetics Inc., measuring at 10,000 ± 30%. At 30 mm wide and 9.5 mm thick, this pot core is compact enough to be head mounted and large enough to fit a multi-turn 20 gauge wire coil. To get the maximum number of windings into the pot core channel, the coil is tightly wound around a 3D printed bobbin printed with Zortrax Z-PETG material. Once wrapped, the bobbin is inserted into the pot core. A total of 28 turns of 20 gauge wire were fit into the channel. c shows the assembled pot core device with the plastic bobbin inserted. Attaching the pot core to the freely moving animal is also important for this application. To achieve secure attachment of the device while minimizing the distance between the coil and the stimulation target, a custom coil holder was designed to mount the pot core using the hole along the coil axis. This mount is itself attached to a permanent head mounted fixture that can be secured to the head of the animal. Alternatively, the device could also be attached to a wearable jacket to facilitate stimulation in other regions of an animal.

2.3.6. Stimulation Controller

One of the goals of this work was to make the electromagnetic stimulation platform versatile and easy to use. Toward this goal, a stimulation controller was developed which allows the user to specify the stimulation protocol and switch between the active and sham conditions. Using automated stimulation protocols is highly advantageous as it increases the stimulation consistency between experiments. A Python application was developed to set the stimulation protocol and allow the user to control the stimulation delivered by a custom hardware device shown in e, enabling selection of the direction of current through the double wrapped coils from within the graphical interface. Additionally, stimulations can be triggered by an external signal, and an auxiliary stimulation signal can be connected to other devices.
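The controller application itself is not listed here, so the following is only a schematic sketch of how such a protocol runner could be organized in Python. The hardware interface (the set_mode and set_current callables) is a hypothetical placeholder for whatever driver talks to the custom device, not the actual API of the application described above.

```python
import time
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class StimStep:
    """One step of a stimulation protocol."""
    mode: str          # "active", "sham", or "off"
    current_a: float   # commanded coil current, A
    duration_s: float  # how long to hold this step

class StimulationController:
    """Runs a stimulation protocol against a hardware back end.

    The two callables stand in for a hypothetical hardware driver:
      set_mode(mode)    -- selects active/sham/off wiring of the double-wrapped coil
      set_current(amps) -- sets the coil current
    """

    def __init__(self, set_mode: Callable[[str], None],
                 set_current: Callable[[float], None]):
        self._set_mode = set_mode
        self._set_current = set_current

    def run(self, protocol: Sequence[StimStep]) -> None:
        try:
            for step in protocol:
                self._set_mode(step.mode)
                self._set_current(step.current_a if step.mode != "off" else 0.0)
                time.sleep(step.duration_s)
        finally:
            # Always leave the coil unpowered, even if a step raises.
            self._set_current(0.0)
            self._set_mode("off")

# Example: three cycles of active stimulation at 15 A with rest periods
# (durations shortened for the demo), printed instead of sent to hardware.
if __name__ == "__main__":
    protocol = [StimStep("active", 15.0, 1.0), StimStep("off", 0.0, 2.0)] * 3
    ctrl = StimulationController(lambda m: print("mode ->", m),
                                 lambda a: print("current ->", a, "A"))
    ctrl.run(protocol)
```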
It is crucial to validate simulated results with experimental measurements to fully characterize the core and coil parameters in an electromagnet stimulation system. For a quantitative comparison between simulation and experiment, we utilized a TLE493D-A2B6 three-axis Gaussmeter along with an Aerotech AGS1000 programmable XYZ scanner to measure the distribution of B produced by the different configurations of stimulation devices in the regions of interest shown in . All experimental scans were performed with step sizes of 0.5 mm, and all experimental measurements were performed with a 1 Ampere excitation current. At each position, five sensor measurements were averaged to generate the resulting |B|. To compare the experimental measurements with the corresponding simulation results, the magnetic flux density images were first aligned based on the maximum of their cross-correlation. A qualitative comparison of the experimental and simulated field data is shown in for the air core, three-coil, ferromagnetic core, and pot core systems. The images represent the spatial distribution of |B|. Results for the air core coil, shown in a, indicate that a maximum |B| of 5.20 mT is recorded just above the surface of the coil. For the three-coil system, the first region of interest consists of the central XZ plane passing through the coils. A region of 60 mm by 120 mm centered on the coil was scanned in the XZ plane, as shown in b. In addition, measurements along the central XY plane were performed over a 40 mm by 120 mm region approximately centered on the coil axis, as shown in c. In d, we see the measured distribution of |B| for the ferromagnetic core coil, reaching a maximum of 24.55 mT. It is not surprising that, of the four geometries, this design produces the highest |B| at the target location, due to the effect of the ferromagnetic core. The pot core measurements are presented in e, where |B| reaches a maximum of 3.29 mT. In addition to measuring |B| during the active condition, similar measurements were taken in the sham configuration as well as with no stimulation current. The air core sham results are seen in f, while the results for the air core no-current control are shown in g. Low-amplitude fringing fields are observed near the coil at the bottom of the image in the sham condition. Otherwise, the sham condition performed similarly to the no-current case, and similar results were observed for the other geometries. A comparison of the no-current, sham, experimental, and simulated fields for each coil design is shown in and summarized quantitatively in f. The values listed in f for the air, ferromagnetic, and pot core coils are measured at a distance of 10 mm from the coil along the coil axis, while the measurements for the three-coil system are taken at the center of the XZ and XY planes. With regard to the uniformity of the stimulus delivered by the three-coil system, b shows that the flux density along the three-coil system's XZ axis drops on average by only 2.99% at the extrema of the line scan compared to the center. In the XY plane, |B| increases on average by 5.98% at ±40 mm from the center, whereas at the extrema of the line scan it increases on average by 14.33% compared to the center.
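The alignment step described above can be reproduced with a straightforward 2D cross-correlation; a minimal sketch is shown below. It assumes both field maps are sampled on the same grid spacing and uses a wrap-around shift for simplicity, which is adequate when the offset is small relative to the scanned region; this simplification is ours and does not necessarily reflect how the original analysis handled edges.

```python
import numpy as np
from scipy.signal import correlate2d

def align_by_cross_correlation(measured: np.ndarray, simulated: np.ndarray):
    """Shift `measured` so it best overlaps `simulated` (integer-pixel shift).

    Both inputs are 2D |B| maps with identical grid spacing.
    """
    # Subtract the means so the correlation peak reflects field structure,
    # not the overall offset level.
    m = measured - measured.mean()
    s = simulated - simulated.mean()

    xcorr = correlate2d(m, s, mode="full")
    peak_row, peak_col = np.unravel_index(np.argmax(xcorr), xcorr.shape)

    # Lag of `measured` relative to `simulated`.
    shift_row = int(peak_row - (s.shape[0] - 1))
    shift_col = int(peak_col - (s.shape[1] - 1))

    aligned = np.roll(measured, (-shift_row, -shift_col), axis=(0, 1))
    return aligned, (shift_row, shift_col)

# Quick self-test: a Gaussian "field" bump shifted by (3, -2) pixels is recovered.
if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    sim = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
    noise = 0.01 * np.random.default_rng(0).normal(size=sim.shape)
    meas = np.roll(sim, (3, -2), axis=(0, 1)) + noise
    _, shift = align_by_cross_correlation(meas, sim)
    print("estimated shift:", shift)   # expected: (3, -2)
```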
It is worth noting that the ferromagnetic core does retain a low level of magnetization. However, the rapid decay in |B| means the magnetization has little effect at a distance of 10 mm from the tip. The pot core coil did not reach the simulated target stimulation strength. It is possible that the wide ±30% tolerance on the magnetic permeability of the pot core contributed to the reduced strength. Regardless, achieving a stimulation strength of 9.45 mT at the target distance is thought to be a successful implementation of such a compact stimulation device. Future work will aim to increase the strength of the device by reducing the target distance to less than 10 mm and by using a smaller-diameter magnet wire to increase the number of turns. For the latter approach, careful consideration must be given to the trade-off between the added field strength and the decreased thermal performance that comes with smaller wire. Despite the use of DC stimulation in our study, induction of a transient electric field is inevitable during the powering on and powering off of the stimulation. To characterize the induced electric field at the target locations during these periods, the electrical characteristics of the coils were measured and the induced electric field was estimated. shows the resistance and inductance used to determine the rise and fall times according to the relationship (2) t_r = t_f = (L/R) ln 9, where t_r is the rise time, t_f is the fall time, L is the inductance, and R is the resistance. Equation (2) can be derived from the equations governing the step response of an RL circuit. The rise or fall time indicates the time required for the stimulation to ramp from 10% to 90% of full strength, or vice versa. An estimation of the induced transient electric field at the target locations is also shown in , calculated using Faraday's Law of induction over the transition from 10% to 90% stimulation strength.
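A numerical illustration of Equation (2) and of a Faraday-type estimate of the induced field is given below. The inductance, resistance, field swing, and effective loop radius at the target are placeholder values chosen only to show the arithmetic; the coils' measured electrical parameters are tabulated in the original work and are not reproduced here.

```python
import math

def rise_time(inductance_h: float, resistance_ohm: float) -> float:
    """10%-90% rise (or fall) time of an RL step response, Equation (2)."""
    return (inductance_h / resistance_ohm) * math.log(9.0)

def induced_e_field(delta_b_t: float, ramp_time_s: float, loop_radius_m: float) -> float:
    """Average induced E at radius r for a spatially uniform dB/dt (Faraday's law):
    E = (r / 2) * dB/dt, with dB/dt approximated as delta_B / ramp_time."""
    return 0.5 * loop_radius_m * (delta_b_t / ramp_time_s)

if __name__ == "__main__":
    # Placeholder electrical parameters (illustrative, not measured values).
    L, R = 1.0e-3, 1.5          # 1 mH, 1.5 ohm
    t_r = rise_time(L, R)
    print(f"rise/fall time: {t_r * 1e3:.2f} ms")

    # Field swings from 10% to 90% of an assumed 45 mT plateau during the ramp;
    # evaluate E on an assumed 5 mm radius loop at the target.
    delta_b = 0.8 * 45e-3       # T
    e_field = induced_e_field(delta_b, t_r, 5e-3)
    print(f"estimated induced |E|: {e_field * 1e3:.2f} mV/m")
```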
Thermal imaging was also performed with a FLIR One thermal infrared camera. Safe operation of the stimulation coils requires identification of maximum operating times for each coil geometry so that temperatures stay below 75 °C. Such analysis is important for designing experiments that keep the coils within the defined temperature limits. Currents ranging from 1 to 15 Amperes, at 1 Ampere intervals, were applied to each coil while sampling the coil temperature once per second until the coil temperature reached 75 °C. The results plotted in can be used to determine both the maximum excitation current and the stimulation time based on the desired stimulus strength. For each of the four geometries, composite plots of maximum flux density magnitude and maximum operating time are shown for stimulation currents ranging from 1 to 15 Amperes. The blue y-axis on the left shows the flux density magnitude in mT, and the orange y-axis on the right shows the maximum operating time for the given stimulation current. Stimulation times are cut off after one hour for the three-coil system and after three minutes for the remaining geometries. An exponential best-fit line for the maximum stimulation times is also shown in each graph of . Temperatures were also measured at sample target locations for each coil configuration with a 15 Ampere stimulation current. Target locations for the temperature measurements were the same as those used for the measurements in f. Additionally, temperature measurements were taken at the corner wells of a 96-well plate placed in the three-coil system. For the three-coil system, temperature was observed after application of a five-minute stimulation. The air core, ferromagnetic core, and pot core coils were stimulated for the duration of their respective maximum operating times indicated in . With the air core coil, the sample temperature rose by 0.5 °C, whereas the ferromagnetic core sample temperature increased by 1.1 °C and the pot core sample temperature increased by 0.7 °C over the course of the stimulations. The temperature of the corner-well samples in the three-coil system increased by 1.3 °C after five minutes, while the temperature at the center remained essentially unchanged, decreasing by 0.1 °C.
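The exponential best-fit curves for maximum operating time versus current can be reproduced with an ordinary least-squares fit; the sketch below uses made-up (current, time-to-limit) pairs purely to demonstrate the procedure, not data from the thermal measurements above.

```python
import numpy as np
from scipy.optimize import curve_fit

def max_time_model(current_a, a, b):
    """Empirical exponential model: maximum safe operating time vs. current."""
    return a * np.exp(-b * current_a)

if __name__ == "__main__":
    # Hypothetical measurements: current (A) and time to reach 75 C (s).
    current = np.array([3, 5, 7, 9, 11, 13, 15], dtype=float)
    time_to_limit = np.array([170, 120, 80, 55, 38, 26, 18], dtype=float)

    (a, b), _ = curve_fit(max_time_model, current, time_to_limit, p0=(300.0, 0.2))
    print(f"fit: t_max(I) ~ {a:.1f} s * exp(-{b:.3f} * I)")
    print(f"predicted limit at 15 A: {max_time_model(15.0, a, b):.1f} s")
```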
This study presented a magnetogenetics stimulation platform that supports four electromagnet stimulation coil designs and a controller for selecting stimulation conditions. The coil geometries were chosen so that at least one of the designs satisfies the needs of magnetogenetics experiments, including microscopy, in vivo electrophysiology, freely moving behavioral experiments, and some fluorescence and luminescence imaging setups. Regarding the use of ferromagnetic materials in stimulation coils, the increased stimulation strength relative to an otherwise similar coil without a ferromagnetic core must be weighed against the need to account for the residual magnetization of the material. Negative effects can be mitigated by either demagnetizing the core between stimulations or placing the coil at a distance such that the residual field of the core does not interfere with the experiment design. While sham conditions are required to ensure proper experimental controls, it is important to understand their limitations as they apply to a given experimental protocol and stimulation conditions. The sham conditions all demonstrated at least an order of magnitude reduction in |B|; however, some residual magnetic flux is unavoidable. To properly incorporate sham conditions into an experiment, it would be best to know the minimum stimulation threshold necessary to produce a meaningful target response. With this knowledge, stimulations can be performed such that active conditions provide suprathreshold stimulation while sham conditions provide only subthreshold stimulation. Accounting for the effects of temperature change is important when studying pathways with thermal sensitivity. Our designs showed minimal temperature increases (0.5–1.3 °C) at sample locations under maximum field strength conditions in all cases studied. The variety of magnet designs presented allows an electromagnet to be chosen based on the size constraints of the application. The analysis of the |B| distributions is important for selecting an electromagnet system that achieves the proper stimulus strength at the target location to successfully elicit a response. Additionally, our analysis of the sham stimulus strength is important for designing experiments with a negative control, which can help eliminate the influence of confounding variables on observed effects. The studies presented here also provide a useful tool for selecting experimental design parameters for magnetogenetics experiments. For example, knowing that the three-coil system has a field distribution that varies by less than 6% over a range of 40 mm from the central axis of the coils means that sample placement should be restricted to this range in order to maintain a high degree of stimulation uniformity. In addition to limiting the effects of temperature on experimental observations, the thermal analysis also allowed for determination of safe operating limits for the coils. can be used to find the operating limits, in terms of time and current, for a desired |B| with each coil geometry. Lastly, the use of a custom stimulation controller allows for easily configurable stimulation patterns, in either the sham or experimental modes, which improves repeatability.
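As a rough, self-contained illustration of why the double-wrapped sham mode reduces the field by about an order of magnitude rather than canceling it perfectly, one can compare a single straight wire with an idealized pair of closely spaced antiparallel wires. The wire separation and target distance below are assumed values, and the real winding geometry is far more complex; the point is only that the residual field scales with the ratio of wire separation to target distance.

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A

def b_single_wire(current_a, distance_m):
    """Field of one long straight wire at a perpendicular distance."""
    return MU0 * current_a / (2 * math.pi * distance_m)

def b_antiparallel_pair(current_a, separation_m, distance_m):
    """Far-field of two closely spaced antiparallel wires (distance >> separation)."""
    return MU0 * current_a * separation_m / (2 * math.pi * distance_m ** 2)

if __name__ == "__main__":
    I = 15.0        # A
    sep = 1.0e-3    # ~1 mm center-to-center spacing of the paired wires (assumed)
    r = 10.0e-3     # 10 mm from the winding to the target (assumed)

    active_like = b_single_wire(I, r)
    sham_like = b_antiparallel_pair(I, sep, r)
    print(f"single wire  : {active_like * 1e3:.2f} mT")
    print(f"opposed pair : {sham_like * 1e3:.2f} mT")
    print(f"reduction    : ~{active_like / sham_like:.0f}x (= distance/separation)")
```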
Automated cell-type classification in intact tissues by single-cell molecular profiling | 26451c86-d241-47b4-ac24-9a2071a49952 | 5802843 | Pathology[mh] | In parallel with the development of single-cell RNA sequencing (scRNA-seq), there have been rapid advances in single-molecule in situ hybridization (smISH) techniques that localize RNAs of interest directly in fixed cells . These smISH techniques involve hybridization of fluorescently-labeled oligonucleotide probes, typically 24–96 per gene, to mark individual RNA molecules with a discrete, diffraction-limited punctum that can be quantitatively analyzed by fluorescence microscopy. smISH has been used in cultured cells to study the subcellular distribution of RNAs (reviewed in ), the consequences of stochastic noise on gene expression , and the impact of cell shape and environment on expression programs . An increasingly important application for smISH is the simultaneous localization of customized panels of transcripts in tissue, which is used to validate putative cell subtypes identified by scRNA-seq studies . Performing smISH in intact tissue can also reveal the spatial relationship between the cells expressing secreted signaling factors and the cells expressing the corresponding receptors, information that current scRNA-seq approaches cannot resolve because they require tissue dissociation with irretrievable loss of spatial context. Finally, when applied on a genome-wide scale in tissues, smISH has the potential to entirely bypass scRNA-seq as an upfront discovery tool. The development of multiplexed smISH for use in tissue has been challenging due to autofluorescent background and light scattering . One strategy for addressing this problem is to amplify probe signals by the hybridization chain reaction (HCR, reviewed in ; see also for branched-DNA amplification), which provides up to five orthogonal detection channels. Higher levels of multiplexing can be achieved by repeated cycles of RNA in situ hybridization followed by a re-amplification step , but because a single round of probe hybridization in tissue sections takes hours, multiplexing with HCR is laborious. Unamplified smISH techniques have the practical advantage that hundreds of endogenous RNA species can be barcoded in a single reaction, and then read out with rapid label-image-erase cycles , but these do not provide adequate signal in tissues. Ideally, a technique for high-throughput profiling would combine all of the RNA probe hybridization and signal amplification steps into a single reaction. Previously, Nilsson and colleagues presented an elegant enzymatic solution to this problem . They used barcoded padlock probes to label cDNA molecules in cells and tissues, and rolling-circle amplification (RCA) to transform the circularized probes into long tandem repeats. The approach worked in tissues and handled an unbounded number of orthogonal amplification channels. The only limitations were that the RNA-detection efficiency was capped at about 15% (each transcript could only be probed at a single site because the 3' end of the cDNA served as the replication primer), and that the approach required an in situ reverse transcription step with specialized and costly locked nucleic-acid primers. Here, we report an in situ hybridization technique with performance characteristics that enable rapid and scalable single-cell expression profiling in tissue. 
Our approach is a simplified variant of the padlock/RCA technique which replaces padlock probes with RNA-templated proximity ligation at Holliday junctions ; hence, we term it proximity ligation in situ hybridization (PLISH). As demonstrated below, PLISH generates data of exceptionally high signal-to-noise. Multiplexed hybridization and signal amplification of all target RNA species is carried out in a single parallel reaction, and the RNAs are then localized with rapid label-image-erase cycles. PLISH exhibits high detection efficiency because it probes multiple sites in each target RNA, and high specificity because of the proximity ligation mechanism. PLISH utilizes only commodity reagents, so it can be scaled up inexpensively to cover many genes. It works well on conventional formalin-fixed tissues that have been cryo- or paraffin-embedded, and can be performed concurrently with immunostaining, making it extremely versatile. Using the murine lung as a characterized model tissue, we show that multiplexed PLISH can rediscover and spatially map the distinct cell types of a tissue in an automated and unsupervised fashion. An unexpected discovery from this experiment is that murine Club cells separate into two populations that differ molecularly and segregate anatomically. PLISH constitutes a novel, single cell spatial-profiling technology that combines high performance, versatility and low cost. Because of its technical simplicity, it will be accessible to a broad scientific community.
Proximity ligation in situ hybridization (PLISH)

Proximity ligation at Holliday junctions offers a simple mechanism for the amplified detection of RNA . First, a transcript is targeted with a pair of oligonucleotide 'H' probes designed to hybridize at adjacent positions along its sequence . The left H probe includes a single-stranded 5' overhang, while the right probe includes a 3' overhang. Importantly, target RNAs can be tiled with H probe pairs at multiple sites, which is critical for efficient detection of low-abundance transcripts . The overhangs are then hybridized to 'bridge' and linear 'circle' oligonucleotides with embedded barcode sequences to form a Holliday junction structure, after which ligation at the nick sites creates a closed circle. Finally, the 3' end of the right H probe primes rolling-circle replication, which generates a long single-stranded amplicon of tandem repeats. Addition of fluorescently labeled 'imager' oligonucleotides complementary to the barcodes generates an extremely bright punctum at the site of each labeled transcript. Because each barcode sequence is unique, the puncta derived from different target RNAs can be labeled with different colors . To implement PLISH, we adapted protocols for antibody-based proximity ligation . The technique utilizes conventional oligonucleotides, two commercially available enzymes, and procedures familiar to molecular biologists. The ligase and polymerase enzymes are less than half the size of an immunoglobulin G, and they diffuse at least as rapidly as the 60mer DNA hairpins used for HCR amplification . Our initial studies produced bright puncta that were absent if any of the oligonucleotide or enzyme reagents was withheld. The signal from the individual RCA amplicons exceeded cellular and tissue fluorescence background by more than 30-fold, rendering autofluorescence inconsequential ( and ). Histograms of puncta intensities were well fit by a negative binomial distribution, as expected for a DNA replication process that terminates stochastically and irreversibly . The coefficients of variation for the puncta intensity distributions were typically between one and two.
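The negative binomial description of puncta brightness can be checked with a quick method-of-moments fit. The snippet below uses simulated intensities expressed as integer numbers of tandem repeats, since the raw image data are not reproduced here; real measurements would first need to be normalized by the brightness of a single repeat, which is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated puncta intensities in units of tandem repeats (stand-in data).
true_r, true_p = 0.6, 0.02
intensities = rng.negative_binomial(true_r, true_p, size=2000).astype(float)

# Method-of-moments fit: for a negative binomial, var = mu + mu**2 / r.
mu, var = intensities.mean(), intensities.var()
r_hat = mu**2 / (var - mu)
p_hat = r_hat / (r_hat + mu)
cv = intensities.std() / mu

print(f"estimated r = {r_hat:.2f}, p = {p_hat:.3f}")
print(f"coefficient of variation = {cv:.2f}")  # CVs of ~1-2 are reported for the data
```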
Highly specific and sensitive detection of RNA transcripts

The requirement for coincident hybridization of two probes at adjacent sites in an RNA transcript should make PLISH highly specific. To evaluate this, we performed several experiments. First, we used PLISH to detect the transcription factor SRY-box 4 (SOX4) in cultured HCT116 cells. A pool of ten H probesets exhibited much higher RNA detection efficiency than a single H probeset, as expected . However, when the RNA-recognition sequence of either the left or right H probe in each set was scrambled, there were no detectable puncta. Thus, both H probes had to be correctly targeted to generate a signal. Second, we tested the sequence-specificity of the PLISH signal in tissue by pre-incubating samples with antisense 'blocking' oligonucleotides complementary to the target RNA at the H probe hybridization sites. For these experiments, we stained mouse lung sections for secretoglobin 1a1 (Scgb1a1), a marker of airway Club cells. Antisense oligonucleotides drastically attenuated the number of PLISH puncta, whereas scrambled blocking oligonucleotides of the same length had no apparent effect . Third, we analyzed murine lung sections for the co-localization of the mRNA transcript and protein product of surfactant protein C (Sftpc), which is expressed in alveolar epithelial type II (AT2) cells. Of the cells that were positive for PLISH signal, 98.5% were also positive for antibody staining (n = 184, ). This level of specificity is excellent relative to HCR-amplified smISH, where off-target binding of hybridization probes can account for a quarter of the observed puncta . To quantify the sensitivity and accuracy of RNA detection, we benchmarked PLISH measurements against a reference-standard dataset of single-cell, quantitative reverse transcription polymerase chain reaction (qPCR) and RNA-seq measurements on HCT116 cells . For genes with fragments-per-kilobase-per-million-reads (FPKM) values greater than one, the single-cell qPCR technique detected mRNA in >90% of the cells . However, the fraction of transcript-positive cells dropped quickly between FPKM values of 1 and 0.1. A fit of the qPCR data to a Poisson sampling model suggested that an FPKM value of one corresponded to 2.5 copies per cell (see also ). The PLISH technique detected RNA transcripts with a sensitivity comparable to single-cell qPCR. For example, Caspase-9 (CASP9) has an FPKM value of 2, and it was observed in 100% of the cells by PLISH. We detected an average of 8 puncta per cell, which is consistent with the prediction of 5 copies per cell from the fit to the qPCR data ( , inset). For a set of ten genes covering the full spectrum of expression levels in HCT116 cells, the number of PLISH puncta per cell correlated with bulk FPKM values . To quantify RNA-detection efficiency in tissue, we marked a set of axin 2 (Axin2) transcripts in mouse lung sections using an HCR-amplified smISH procedure and then determined the fraction of the marked transcripts that could be identified by PLISH. We chose the Axin2 gene because of its low expression level in the lung. HCR detected a sparse population of cells with one to two puncta each (the HCR detection efficiency was low because we used a single HCR probe rather than 24). PLISH puncta generated with a pool of four H probe pairs co-localized with 32% of the HCR puncta . Thus, the four PLISH probesets detected Axin2 transcripts with a composite efficiency of 32% and an average per-site efficiency of 9%. This probe efficiency matches or exceeds that of other smISH techniques. The PLISH detection efficiency can be tuned on a per-gene basis by altering the number of H probe pairs. Decreasing the number of probesets pro-rates the number of puncta from highly expressed genes, while increasing the number of probesets can facilitate sensitive detection of very low-abundance transcripts.
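The numbers quoted above can be reproduced with a few lines of arithmetic, under the simple reading of the Poisson sampling model in which a cell scores positive when at least one transcript copy is detected. The code below encodes that assumption explicitly; it is an interpretation of the described fit, not the original analysis script.

```python
import math

def fraction_positive(mean_copies_per_cell: float) -> float:
    """Poisson model: probability a cell contains (and is scored for) >= 1 copy."""
    return 1.0 - math.exp(-mean_copies_per_cell)

COPIES_PER_FPKM = 2.5   # from the fit to the single-cell qPCR data described above

for fpkm in (0.1, 0.4, 1.0, 2.0):
    lam = COPIES_PER_FPKM * fpkm
    print(f"FPKM {fpkm:>4}: ~{lam:.2f} copies/cell, "
          f"{100 * fraction_positive(lam):.0f}% of cells expected positive")

# Per-site detection efficiency implied by the Axin2 benchmark:
# four independent H probe pairs detected 32% of the HCR-marked transcripts.
composite, n_sites = 0.32, 4
per_site = 1.0 - (1.0 - composite) ** (1.0 / n_sites)
print(f"per-site efficiency: {100 * per_site:.1f}%")   # ~9%, as stated above
```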
Visualization of molecular and histological features in tissue

We next characterized the performance of PLISH for low-plex RNA localization in tissues. This experimental format uses a disposable hybridization chamber that is sealed to a coverslip or slide surrounding a tissue section . PLISH detection of up to 5 RNA species is accomplished by stepwise application of reagents through the inlet and outlet ports of the chamber. The puncta from each RNA species are then labeled in a unique color by hybridization to 'imager' oligonucleotides with spectrally-distinct fluorophores. After imaging, the fluorescence micrographs are interpreted by direct visual inspection.

We aimed to test whether PLISH provides single-molecule and single-cell resolution in tissues, whether it robustly detects low-abundance RNA species, whether the spatial distribution of RNA is consistent with prior knowledge, whether PLISH is compatible with simultaneous immunostaining, and whether it is compatible with formalin-fixed, paraffin-embedded (FFPE) samples. First, we analyzed murine lung sections for RNA expression of the ciliated-cell marker Forkhead box J1 (Foxj1), and the Club-cell marker Scgb1a1. Foxj1 is a low-abundance transcript with an FPKM value of 10 in ciliated cells, as measured by scRNA-seq . We observed single cells with multiple discrete Foxj1 puncta in the terminal bronchiolar epithelium, surrounded by numerous strongly Scgb1a1-positive cells . These data establish PLISH's single-molecule and single-cell resolution in tissues, and its ability to detect low-abundance transcripts. Second, we analyzed human lung FFPE sections for RNA expression of SCGB1A1, and for protein expression of the basal cell marker, Keratin 5 (KRT5). To do this, we appended two antibody incubation steps to the standard PLISH protocol. Strongly SCGB1A1-positive cells were localized to the lumen of the airways, overlying KRT5-positive cells , matching the known anatomical distribution of Club and basal cells, respectively. These data establish PLISH's compatibility with simultaneous immunostaining, and with FFPE samples. Third, we analyzed murine lung sections for RNA expression of three genes: the AT2 cell marker Sftpc, the macrophage-enriched marker Lysozyme 2 (Lyz2), and Scgb1a1. Overlays of the three channels provided a striking visual depiction of the different cell types. Macrophages were bright in the Lyz2 channel, but absent in the other channels . AT2 cells were bright in the Sftpc channel, moderately bright in the Lyz2 channel, and absent in the Scgb1a1 channel ( , white cells in the overlay). Club cells were very bright in the Scgb1a1 channel, but otherwise absent . Finally, putative bronchioalveolar stem cells (BASCs) were bright in the Sftpc channel with a weak punctate signal in the Scgb1a1 channel . Thus, raw PLISH data can be interpreted without any computational processing, made possible by PLISH's exceptional signal-to-noise in tissues. We also evaluated how PLISH performs in primary samples of diseased human tissue, to assess whether it will be useful for molecular analysis of the many human diseases that cannot be accurately modeled in animals. One example is idiopathic pulmonary fibrosis (IPF), a fatal lung disease of unknown pathogenesis . The diagnosis of IPF is based on the presence of specific histological features, including clusters of spindle-shaped fibroblasts, stereotyped 'honeycomb' cysts, and epithelial cell hyperplasia. In this regard, single-cell profiling approaches that operate on dissociated tissue are intrinsically limited because they cannot correlate molecular data with cytologic and spatial features. As a preliminary test, we used PLISH to analyze RNA expression of the AT2 cell marker SFTPC in resected lung tissue from control and IPF patients. In contrast to the uniformly cuboidal SFTPC-expressing AT2 cells distributed throughout alveoli of non-IPF lungs , we observed clusters of SFTPC Hi cells of heterogeneous size and varying degrees of flattening lining the airspace lumen of IPF lungs. Surprisingly, many cells that did not appear to be epithelial (i.e., they were not lining an airway lumen) expressed SFTPC at low levels.
Based on this pilot experiment, PLISH should be a suitable tool for building atlases of RNA expression in human disease. The PLISH data can be overlaid with monoclonal antibody staining patterns that are the mainstay of pathologic diagnosis and classification.

Multiplexed and iterative PLISH in tissues

Highly multiplexed measurement of different RNA species requires iterated data collection cycles, since conventional fluorescence microscopy only provides up to five channels . The data collection cycles include fluorescent labeling of a subset of the 'barcodes' (i.e., unique nucleotide sequences complementary to fluorescently labeled 'imager' oligonucleotides) in a sample, imaging of the labeled transcripts, and erasure of the fluorescent signal. Ideally, the cycles should be fast, and the erasure should not cause any mechanical or chemical damage to the sample. Consistent with prior work, we found that PLISH puncta could be imaged in the presence of a 3 nM background of freely-diffusing imager oligonucleotides ( and ). This allowed us to streamline data collection by eliminating a wash step, and also presented a simple erasure strategy. By using short imager oligonucleotides that equilibrate rapidly on and off of RCA amplicons , we could erase fluorescence from a previous cycle by a simple buffer exchange . We also established an erasure method based on uracil-containing imager oligonucleotides, which were removed with a 15 min enzymatic digestion . Thus, we could image PLISH puncta in five different color channels with spectrally-distinct fluorophores, and we were able to complete cycles in as little as 20 min, which approaches the cycle time of an Illumina MiSeq instrument ( https://support.illumina.com ). To demonstrate and validate the multiplexing capacity of PLISH, we co-localized the mRNA of eight selected genes in ~2900 single cells from an adult mouse lung . Our panel included four commonly used lung cell type markers that have been previously characterized, and four ubiquitously expressed genes. The targeted transcripts were Sftpc (AT2 cells), advanced glycosylation end product-specific receptor (Ager, AT1 cells), Scgb1a1 (Club cells), Lyz2 (macrophage and AT2 cell subset), ferritin light polypeptide 1 (Ftl1), beta actin (Actb), inactive X specific transcripts (Xist), and glyceraldehyde-3-phosphate dehydrogenase (Gapdh). The eight RNA species were barcoded in a single PLISH reaction, and the data were collected with a pair of label-image-erase cycles using the enzymatic erasure approach described above . A nuclear counterstain (DAPI) and a transmitted light micrograph were also obtained. To quantify the expression of all eight genes on a per-cell basis, we created a PLISH-specific pipeline in CellProfiler, an open-source software package . The pipeline first identified nuclei in the DAPI channel, which were used as anchor points for expansion to full-cell assignments. Fortuitously, the bulk of the detected mRNAs in AT1 cells, which have an extremely flat and broad morphology, were clustered around the nuclei. We summed the PLISH signal for each gene in the nuclear and peri-nuclear regions of each cell, and saved the results as single-cell expression profiles indexed on anatomical location. We also created a utility to pseudocolor cells in a transmitted light micrograph according to their inferred cell type (see below), so that we could visualize the relationship between cellular gene expression and anatomical localization.
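The per-cell quantification was done in CellProfiler; as a rough open-source analogue, the same nuclei-then-expand logic can be expressed with scikit-image and SciPy as sketched below. The thresholding and expansion parameters are illustrative assumptions, not the settings of the actual pipeline.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label
from skimage.segmentation import expand_labels

def per_cell_profiles(dapi: np.ndarray, channels: dict, expand_px: int = 10):
    """Sum PLISH signal per gene in the nuclear + peri-nuclear region of each cell.

    dapi     : 2D nuclear counterstain image
    channels : mapping of gene name -> 2D PLISH image (same shape as dapi)
    Returns (profiles, centroids): per-cell totals for each gene, and cell positions.
    """
    smoothed = gaussian(dapi, sigma=2)
    nuclei = label(smoothed > threshold_otsu(smoothed))
    cells = expand_labels(nuclei, distance=expand_px)   # peri-nuclear expansion
    cell_ids = np.arange(1, cells.max() + 1)

    profiles = {gene: ndi.sum(img, labels=cells, index=cell_ids)
                for gene, img in channels.items()}
    centroids = ndi.center_of_mass(np.ones_like(dapi, dtype=float),
                                   labels=cells, index=cell_ids)
    return profiles, centroids

# Tiny synthetic example: two blurred "nuclei" and one fake PLISH channel.
if __name__ == "__main__":
    dapi = np.zeros((64, 64))
    dapi[15, 15] = dapi[45, 45] = 1.0
    dapi = gaussian(dapi, sigma=3)
    fake_gene = np.random.default_rng(0).poisson(0.2, size=dapi.shape).astype(float)
    profiles, centroids = per_cell_profiles(dapi, {"Sftpc": fake_gene})
    print(profiles["Sftpc"], centroids)
```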
Automated cell classification and insights into lung biology

An important scientific challenge is to identify and map all of the molecularly distinct cell types that make up complex tissues, and in situ single-cell profiling should be a powerful tool for working towards this goal. As a proof-of-concept, we asked whether known lung cell types could be rediscovered by an automated and unsupervised analysis of our multiplexed PLISH data set. We used two standard data analysis tools, K-means clustering and t-distributed stochastic neighbor embedding (t-SNE, ), to classify and visualize the entire population of cells. The automated analysis identified ten cell classes, four of which were labeled 'other' because they were defined primarily by 'signature' profiles of ubiquitously-expressed genes. The remaining six classes were associated with a known lung cell type based on marker-gene expression. The Sftpc and Scgb1a1 positive cell classes were labeled as AT2 and Club, respectively, while the Lyz2 positive class was labeled as macrophage (one of the two AT2 cell classes was also Lyz2 positive, as previously reported in ). The cell class with the highest Ager expression was labeled as AT1, but Ager mRNA was also detected in a subset of AT2 and Club cells, and in one of the four 'other' cell classes, indicating that it is not particularly specific for AT1 cells. We validated the PLISH results by indirect immunohistochemistry and by comparison with previously published scRNA-seq data , which confirmed the low specificity of Ager for AT1 cells. We also analyzed the RNA expression of Akap5, another transcript that is highly enriched in AT1 cells , and found that its localization correlated closely with Ager's . For a higher-resolution analysis of cellular gene expression, we examined the expression pattern of individual genes in re-colored t-SNE plots . We found a small cloud of cells between the Club and AT2 clusters that expressed both Sftpc and Scgb1a1 . On the basis of this dual expression, we assigned them as the BASC type . We also noted that Lyz2 expression partitioned the AT2 cells into two classes designated Lyz2+ and Lyz2-, while Actb segregated Club cells into two classes designated Actb Hi and Actb Lo. Gapdh was the most uniformly expressed transcript, consistent with its role as a 'housekeeping' gene . Ftl1 expression was highest in alveolar macrophages, as expected, where it is believed to play a role in processing iron from ingested red blood cells . Unexpectedly, Ftl1 was also highly expressed in Club cells. Actb expression was highest in macrophages, presumably because of its functional role in motility, and in AT1 cells, which must maintain a flat morphology and expansive cytoskeleton . To validate the PLISH results, we pseudocolored the cells in transmitted-light images according to their class . Importantly, no spatial information was included in the k-means clustering. Several observations confirmed the accuracy of the automated classification. First, the Club cell class mapped perfectly onto the bronchial epithelium, while cells from the AT1 and AT2 classes were distributed throughout the alveolar compartment. The rare BASCs also localized precisely to the bronchioalveolar junctions, where they have been shown to reside by immunostaining ( and ). The macrophage class was primarily found inside the alveolar lumen, and many exhibited a characteristic rounded cell shape.
The Other d class of cells was enriched in pulmonary arteries, and therefore might represent endothelial or perivascular cells. We further observed a striking spatial segregation of the two Club cell classes. Actb Hi Club cells clustered together at the bronchial terminus, while Actb Lo Club cells populated more proximal domains . While the significance of this pattern is not immediately obvious, it emphasizes how PLISH can readily integrate molecular and spatial features of single cells to generate insights that would be missed with either piece of information alone.
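The unsupervised classification described above can be approximated with standard scikit-learn components. The log transform and z-scaling in the sketch below are our own preprocessing assumptions; the choice of ten clusters follows the number of classes reported in the text.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def classify_cells(expression: np.ndarray, n_classes: int = 10, seed: int = 0):
    """Cluster per-cell expression profiles and embed them for visualization.

    expression : (n_cells, n_genes) matrix of summed PLISH signal per cell.
    Returns (labels, embedding): a cluster label per cell and 2D t-SNE coordinates.
    """
    X = StandardScaler().fit_transform(np.log1p(expression))
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(X)
    embedding = TSNE(n_components=2, random_state=seed).fit_transform(X)
    return labels, embedding

# Example with random data standing in for ~2900 cells x 8 genes.
if __name__ == "__main__":
    fake = np.random.default_rng(0).lognormal(mean=2.0, sigma=1.0, size=(2900, 8))
    labels, xy = classify_cells(fake)
    print(labels[:10], xy.shape)
```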
Proximity ligation at Holliday junctions offers a simple mechanism for the amplified detection of RNA . First, a transcript is targeted with a pair of oligonucleotide 'H' probes designed to hybridize at adjacent positions along its sequence . The left H probe includes a single-stranded 5' overhang while the right probe includes a 3' overhang. Importantly, target RNAs can be tiled with H probe pairs at multiple sites, which is critical for efficient detection of low abundance transcripts . The overhangs are then hybridized to 'bridge' and linear 'circle' oligonucleotides with embedded barcode sequences to form a Holliday junction structure, after which ligation at the nick sites creates a closed circle. Finally, the 3' end of the right H probe primes rolling-circle replication, which generates a long single-stranded amplicon of tandem repeats. Addition of fluorescently-labeled 'imager' oligonucleotides complementary to the barcodes generates an extremely bright punctum at the site of each labeled transcript. Because each barcode sequence is unique, the puncta derived from different target RNAs can be labeled with different colors . To implement PLISH, we adapted protocols for antibody-based proximity ligation . The technique utilizes conventional oligonucleotides, two commercially available enzymes, and procedures familiar to molecular biologists. The ligase and polymerase enzymes are less than half the size of an immunoglobulin G, and they diffuse at least as rapidly as the 60mer DNA hairpins used for HCR amplification . Our initial studies produced bright puncta that were absent if any of the oligonucleotide or enzyme reagents was withheld. The signal from the individual RCA amplicons exceeded cellular and tissue fluorescence background by more than 30-fold, rendering autofluorescence inconsequential ( and ). Histograms of puncta intensities fit to a negative binomial distribution, as expected for a DNA replication process that terminates stochastically and irreversibly . The coefficients of variation for the puncta intensity distributions were typically between one and two.
The requirement for coincident hybridization of two probes at adjacent sites in an RNA transcript should make PLISH highly specific. To evaluate this, we performed several experiments. First, we used PLISH to detect the transcription factor SRY-box 4 (SOX4) in cultured HCT116 cells. A pool of ten H probesets exhibited much higher RNA detection efficiency than a single H probeset, as expected . However, when the RNA-recognition sequence of either the left or right H probe in each set was scrambled, there were no detectable puncta. Thus, both H probes had to be correctly targeted to generate a signal. Second, we tested the sequence-specificity of the PLISH signal in tissue by pre-incubating samples with antisense 'blocking' oligonucleotides complementary to the target RNA at the H probe hybridization sites. For these experiments, we stained mouse lung sections for secretoglobin 1a1 (Scgb1a1), a marker of airway Club cells. Antisense oligonucleotides drastically attenuated the number of PLISH puncta, whereas scrambled blocking oligonucleotides of the same length had no apparent effect . Third, we analyzed murine lung sections for the co-localization of the mRNA transcript and protein product of surfactant protein C (Sftpc), which is expressed in alveolar epithelial type II (AT2) cells. Of the cells that were positive for PLISH signal, 98.5% were also positive for antibody staining (n = 184, ). This level of specificity is excellent relative to HCR-amplified smISH, where off-target binding of hybridization probes can account for a quarter of the observed puncta . To quantify the sensitivity and accuracy of RNA detection, we benchmarked PLISH measurements against a reference-standard dataset of single-cell, quantitative reverse transcription polymerase chain reaction (qPCR) and RNA-seq measurements on HCT116 cells . For genes with fragment-per-kilobase-per-million-read (FPKM) values greater than one, the single-cell qPCR technique detected mRNA in >90% of the cells . However, the fraction of transcript-positive cells dropped quickly between FPKM values of 1 and 0.1. A fit of the qPCR data to a Poisson sampling model suggested that an FPKM value of one corresponded to 2.5 copies per cell (see also ). The PLISH technique detected RNA transcripts with a sensitivity comparable to single-cell qPCR. For example, Caspase-9 (CASP9) has an FPKM value of 2, and it was observed in 100% of the cells by PLISH. We detected an average of 8 puncta per cell, which is consistent with the prediction of 5 copies per cell from the fit to the qPCR data ( , inset). For a set of ten genes covering the full spectrum of expression levels in HCT116 cells, the number of PLISH puncta per cell correlated with bulk FPKM values . To quantify RNA-detection efficiency in tissue, we marked a set of axin 2 (Axin2) transcripts in mouse lung sections using an HCR-amplified smISH procedure and then determined the fraction of the marked transcripts that could be identified by PLISH. We chose the Axin2 gene because of its low expression level in the lung. HCR detected a sparse population of cells with one to two puncta each (the HCR detection efficiency was low because we used a single HCR probe rather than 24). PLISH puncta generated with a pool of four H probe pairs co-localized with 32% of the HCR puncta . Thus, the four PLISH probesets detected Axin2 transcripts with a composite efficiency of 32% and an average per-site efficiency of 9%. This probe efficiency matches or exceeds that of other smISH techniques. 
The PLISH detection efficiency can be tuned on a per gene basis by altering the number of H probe pairs. Decreasing the number of probesets pro-rates the number of puncta from highly-expressed genes, while increasing the number of probesets can facilitate sensitive detection of very low-abundance transcripts.
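Assuming the probe sites act independently, the composite detection probability for a transcript tiled with n sites of per-site efficiency p is 1 − (1 − p)^n. The short sketch below back-calculates the per-site efficiency from the Axin2 benchmark above and shows how adding probe pairs raises the detection of rare transcripts; the numbers are illustrative, not measured values beyond those quoted in the text.

```python
# Composite detection probability for n independent probe sites of per-site efficiency p.
def composite_efficiency(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 1.0 - (1.0 - 0.32) ** (1.0 / 4.0)                 # per-site efficiency implied by 32% with 4 sites (~0.09)
for n in (1, 4, 10):
    print(n, round(composite_efficiency(p, n), 2))    # 1: ~0.09, 4: 0.32, 10: ~0.62
```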
We next characterized the performance of PLISH for low-plex RNA localization in tissues. This experimental format uses a disposable hybridization chamber that is sealed to a coverslip or slide surrounding a tissue section . PLISH detection of up to 5 RNA species is accomplished by stepwise application of reagents through the inlet and outlet ports of the chamber. The puncta from each RNA species are then labeled in a unique color by hybridization to 'imager' oligonucleotides with spectrally-distinct fluorophores. After imaging, the fluorescence micrographs are interpreted by direct visual inspection. We aimed to test whether PLISH provides single-molecule and single-cell resolution in tissues, whether it robustly detects low-abundance RNA species, whether the spatial distribution of RNA is consistent with prior knowledge, whether PLISH is compatible with simultaneous immunostaining, and whether it is compatible with formalin-fixed, paraffin-embedded (FFPE) samples. First, we analyzed murine lung sections for RNA expression of the ciliated-cell marker Forkhead box J1 (Foxj1), and the Club-cell marker Scgb1a1. Foxj1 is a low-abundance transcript with an FPKM value of 10 in ciliated cells, as measured by scRNA-seq . We observed single cells with multiple discrete Foxj1 puncta in the terminal bronchiolar epithelium, surrounded by numerous strongly Scgb1a1 positive cells . These data establish PLISH's single-molecule and single-cell resolution in tissues, and its ability to detect low-abundance transcripts. Second, we analyzed human lung FFPE sections for RNA expression of SCGB1A1 , and for protein expression of the basal cell marker, Keratin 5 (KRT5). To do this, we appended two antibody incubation steps to the standard PLISH protocol. Strongly SCGB1A1 positive cells were localized to the lumen of the airways, overlying KRT5 positive cells , matching the known anatomical distribution of Club and basal cells, respectively. These data establish PLISH's compatibility with simultaneous immunostaining, and with FFPE samples. Third, we analyzed murine lung sections for RNA expression of three genes: the AT2 cell marker Sftpc, the macrophage-enriched marker Lysozyme 2 (Lyz2), and Scgb1a1. Overlays of the three channels provided a striking visual depiction of the different cell types. Macrophages were bright in the Lyz2 channel, but absent in the other channels . AT2 cells were bright in the Sftpc channel, moderately bright in the Lyz2 channel and absent in the Scgb1a1 channel ( , white cells in the overlay). Club cells were very bright in the Scgb1a1 channel, but otherwise absent . Finally, putative bronchioalveolar stem cells (BASCs) were bright in the Sftpc channel with a weak punctate signal in the Scgb1a1 channel . Thus, raw PLISH data can be interpreted without any computational processing, made possible by PLISH's exceptional signal-to-noise in tissues. We also evaluated how PLISH performs in primary samples of diseased human tissue, to assess whether it will be useful for molecular analysis of the many human diseases that cannot be accurately modeled in animals. One example is idiopathic pulmonary fibrosis (IPF), a fatal lung disease of unknown pathogenesis . The diagnosis of IPF is based on the presence of specific histological features, including clusters of spindle-shaped fibroblasts, stereotyped 'honeycomb' cysts, and epithelial cell hyperplasia. 
In this regard, single-cell profiling approaches that operate on dissociated tissue are intrinsically limited because they cannot correlate molecular data with cytologic and spatial features. As a preliminary test, we used PLISH to analyze RNA expression of the AT2 cell marker SFTPC in resected lung tissue from control and IPF patients. In contrast to the uniformly cuboidal SFTPC -expressing AT2 cells distributed throughout alveoli of non-IPF lungs , we observed clusters of SFTPC Hi cells of heterogeneous size and varying degrees of flattening lining the airspace lumen of IPF lungs. Surprisingly, many cells that did not appear to be epithelial (i.e., they were not lining an airway lumen) expressed SFTPC at low levels. Based on this pilot experiment, PLISH should be a suitable tool for building atlases of RNA expression in human disease. The PLISH data can be overlaid with monoclonal antibody staining patterns that are the mainstay of pathologic diagnosis and classification.
Highly multiplexed measurement of different RNA species requires iterated data collection cycles, since conventional fluorescence microscopy only provides up to five channels . The data collection cycles include fluorescent labeling of a subset of the 'barcodes' (i.e., unique nucleotide sequences complementary to fluorescently labeled 'imager' oligonucleotides) in a sample, imaging of the labeled transcripts, and erasure of the fluorescent signal. Ideally, the cycles should be fast, and the erasure should not cause any mechanical or chemical damage to the sample. Consistent with prior work, we found that PLISH puncta could be imaged in the presence of a 3 nM background of freely-diffusing imager oligonucleotides ( and ). This allowed us to streamline data collection by eliminating a wash step, and also presented a simple erasure strategy. By using short imager oligonucleotides that equilibrate rapidly on and off of RCA amplicons , we could erase fluorescence from a previous cycle by a simple buffer exchange . We also established an erasure method based on uracil-containing imager oligonucleotides, which were removed with a 15 min enzymatic digestion . Thus, we could image PLISH puncta in five different color channels with spectrally-distinct fluorophores, and we were able to complete cycles in as little as 20 min, which approaches the cycle time of an Illumina MiSeq instrument ( https://support.illumina.com ). To demonstrate and validate the multiplexing capacity of PLISH, we co-localized the mRNA of eight selected genes in ~2900 single cells from an adult mouse lung . Our panel included four commonly used lung cell type markers that have been previously characterized, and four ubiquitously expressed genes. The targeted transcripts were Sftpc (AT2 cells), advanced glycosylation end product-specific receptor ( Ager , AT1 cells), Scgb1a1 (Club cells), Lyz2 (macrophage and AT2 cell subset), ferritin light polypeptide 1 ( Ftl1 ), beta actin ( Actb ), inactive X specific transcripts ( Xist ), and glyceraldehyde-3-phosphate dehydrogenase ( Gapdh ). The eight RNA species were barcoded in a single PLISH reaction, and the data were collected with a pair of label-image-erase cycles using the enzymatic erasure approach described above . A nuclear counterstain (DAPI) and transmitted light micrograph were also obtained. To quantify the expression of all eight genes on a per-cell basis, we created a PLISH-specific pipeline in CellProfiler, an open-source software package . The pipeline first identified nuclei in the DAPI channel, which were used as anchor points for expansion to full-cell assignments. Fortuitously, the bulk of the detected mRNAs in AT1 cells, which have an extremely flat and broad morphology, were clustered around the nuclei. We summed the PLISH signal for each gene in the nuclear and peri-nuclear regions of each cell, and saved the results as single-cell expression profiles indexed on anatomical location. We also created a utility to pseudocolor cells in a transmitted light micrograph according to their inferred cell type (see below), so that we could visualize the relationship between cellular gene expression and anatomical localization.
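For readers who prefer a script-based workflow, the per-cell quantification logic can be approximated outside CellProfiler. The following is a minimal Python sketch, assuming single-plane numpy arrays for the DAPI image and the PLISH channels; Otsu thresholding stands in for the maxima-and-propagation nucleus detection used in the published pipeline, and the pixel expansion approximates the ~1 micron peri-nuclear sampling ring.

```python
# Minimal per-cell quantification sketch (the analysis in this work used CellProfiler).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label
from skimage.segmentation import expand_labels

def per_cell_profiles(dapi, channels, expand_px=3):
    """Return a (cells x genes) matrix of summed PLISH signal plus nucleus centroids."""
    smooth = gaussian(dapi, sigma=2)
    nuclei = label(smooth > threshold_otsu(smooth))      # index each nucleus
    cells = expand_labels(nuclei, distance=expand_px)    # peri-nuclear sampling area
    ids = np.arange(1, nuclei.max() + 1)
    profiles = np.stack(
        [ndi.sum(ch, labels=cells, index=ids) for ch in channels], axis=1
    )
    centroids = np.array(ndi.center_of_mass(np.ones_like(dapi), labels=nuclei, index=ids))
    return profiles, centroids                           # expression profiles + anatomical locations
```

In practice the published pipeline also recorded nuclear shape metrics and worked from maximum-projection images, but the essential output, an expression profile indexed on location, is the same.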
An important scientific challenge is to identify and map all of the molecularly distinct cell types that make up complex tissues, and in situ single-cell profiling should be a powerful tool for working towards this goal. As a proof-of-concept for this, we asked whether known lung cell types could be rediscovered by an automated and unsupervised analysis of our multiplexed PLISH data set. We used two standard data analysis tools, K-means clustering and t-distributed stochastic neighbor embedding (t-SNE, ), to classify and visualize the entire population of cells. The automated analysis identified ten cell classes, four of which were labeled 'other' because they were defined primarily by 'signature' profiles of ubiquitously-expressed genes. The remaining six classes were associated with a known lung cell type based on marker-gene expression. The Sftpc and Scgb1a1 positive cell classes were labeled as AT2 and Club, respectively, while the Lyz2 positive class was labeled as macrophage (one of the two AT2 cell classes was also Lyz2 positive, as previously reported in ). The cell class with the highest Ager expression was labeled as AT1, but Ager mRNA was also detected in a subset of AT2 and Club cells, and in one of the four 'other' cell classes, indicating it is not particularly specific for AT1 cells. We validated the PLISH results by indirect immunohistochemistry and by comparison with previously published scRNA-seq data , which confirmed the low specificity of Ager for AT1 cells. We also analyzed the RNA expression of Akap5 , another transcript that is highly-enriched in AT1 cells , and found that its localization correlated closely with Ager 's . For a higher-resolution analysis of cellular gene expression, we examined the expression pattern of individual genes in re-colored t-SNE plots . We found a small cloud of cells between the Club and AT2 clusters that expressed both Sftpc and Scgb1a1 . On the basis of this dual expression, we assigned them as the BASC type . We also noted that Lyz2 expression partitioned the AT2 cells into two classes designated Lyz2 + and Lyz2 - , while Actb segregated Club cells into two classes designated Actb Hi and Actb Lo . Gapdh was the most uniformly expressed transcript, consistent with its role as a 'housekeeping' gene . Ftl1 expression was highest in alveolar macrophages, as expected, where it is believed to play a role in processing iron from ingested red blood cells . Unexpectedly, Ftl1 was also highly expressed in Club cells. Actb expression was highest in macrophages, presumably because of its functional role in motility, and in AT1 cells, which must maintain a flat morphology and expansive cytoskeleton . To validate the PLISH results, we pseudocolored the cells in transmitted-light images according to their class . Importantly, no spatial information was included in the k-means clustering. Several observations confirmed the accuracy of the automated classification. First, the Club cell class mapped perfectly onto the bronchial epithelium, while cells from the AT1 and AT2 classes were distributed throughout the alveolar compartment. The rare BASCs also localized precisely to the bronchioalveolar junctions, where they have been shown to reside by immunostaining ( and ). The macrophage class was primarily found inside the alveolar lumen, and many exhibited a characteristic rounded cell shape. The Other d class of cells was enriched in pulmonary arteries, and therefore might represent endothelial or perivascular cells.
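The classification step itself is standard. The published analysis used custom R scripts for k-means clustering and t-SNE; an equivalent Python sketch using scikit-learn is shown below, assuming 'profiles' is the normalized, log-transformed (cells x genes) matrix described in the Materials and methods.

```python
# Unsupervised classification and 2D embedding of single-cell expression profiles.
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def classify_cells(profiles, n_classes=10, seed=0):
    classes = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(profiles)
    embedding = TSNE(n_components=2, perplexity=30, random_state=seed).fit_transform(profiles)
    return classes, embedding   # class labels for pseudocoloring, coordinates for t-SNE plots
```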
We further observed a striking spatial segregation of the two Club cell classes. Actb Hi Club cells clustered together at the bronchial terminus, while Actb Lo Club cells populated more proximal domains . While the significance of this pattern is not immediately obvious, it emphasizes how PLISH can readily integrate molecular and spatial features of single cells to generate insights that would be missed with either piece of information alone.
PLISH represents a practical technology for multiplexed expression profiling in tissues. It combines high performance in four key areas: specificity, detection efficiency, signal-to-noise and speed. The specificity derives from coincidence detection, which requires two probes to hybridize next to one another for signal generation. Efficient detection of low-abundance transcripts is accomplished by targeting multiple sites along the RNA sequence. Enzymatic amplification produces extremely bright puncta, and allows many different RNA transcripts to be marked with unique barcodes in one step. The different RNA transcripts can then be iteratively detected to rapidly generate high dimensional data. While low-plex PLISH on a handful of different genes can be valuable, the PLISH technology is also scalable, without requiring specialized microscopes (or other equipment), software, or computational expertise. The oligonucleotides and enzymes are inexpensive and commercially available from multiple vendors. The H probes are the cost-limiting reagent, but can be synthesized in pools . Assuming five pairs of H probes for each target RNA species, and 20 cents for a 40mer oligonucleotide, the cost of PLISH reagents amounts to $3 per gene. It should therefore be practical to simultaneously interrogate entire molecular systems, such as signaling pathways or super-families of adhesion receptors. The high specificity and signal-to-noise of PLISH will be advantageous for deep profiling, where non-specific background increases with increasingly complex mixtures of hybridization probes . Our initial studies demonstrate PLISH's capacity for rapid, automated and unbiased cell-type classification, and illustrate how it can complement single-cell RNA sequencing (sc-RNAseq). Sc-RNAseq offers greater gene depth than in situ hybridization approaches, but it is less sensitive, fails to capture spatial information, and induces artefactual changes in gene expression during tissue dissociation . PLISH provides the missing cytological and spatial information, and it is applied to intact tissues. Going forward, sequencing can be used to nominate putative cell types and molecular states based on the coordinate expression of 'signature genes', and multiplexed PLISH can be used to distinguish true biological variation from technical noise and experimentally-induced perturbations. Importantly, multiplexed PLISH provides the tissue context of distinct cell populations, which is essential for understanding the higher-order organization of intact systems like solid tumors and developing organs. In diseases like IPF where morphology and gene expression are severely deranged , histological, cytological and spatial features may even be essential for making biological sense of sequencing data. Currently, efforts are underway to more deeply characterize cellular states by integrating diverse types of molecular information. We have already demonstrated the combined application of PLISH with conventional immunostaining. Going one step further, oligonucleotide-antibody conjugates make it possible to mix and match protein and RNA targets in a multiplexed format . The generation of comprehensive, multidimensional molecular maps of intact tissues, in both healthy and diseased states, will have a fundamental impact on basic science and medicine.
Materials

Unless otherwise specified, all reagents were from Thermo-Fisher and Sigma-Aldrich. Oligonucleotides were purchased from Integrated DNA Technologies. T4 polynucleotide kinase, T4 ligase, USER enzyme and their respective buffers were purchased from New England Biolabs. Nxgen phi29 polymerase and its buffer were purchased from Lucigen. Abbreviations: BSA, bovine serum albumin; DAPI, 4,6-diamidino-2-phenylindole; DEPC, diethylpyrocarbonate; EDTA, ethylenediaminetetraacetic acid; min, minutes; PBS, phosphate buffered saline; PFA, paraformaldehyde; RCA, rolling circle amplification; RT, room temperature. All oligonucleotide sequences are listed in .

Sample preparation

HCT116 cells (ATCC; CCL-247) were authenticated by HLA typing and confirmed negative for Mycoplasma contamination using PCR. Cells were grown on poly-lysine coated #1.5 coverslips (Fisherbrand 12–544 G) using standard cell culture protocols until they reached the desired confluency. The cells were rinsed in 1X PBS and fixed in 3.7% formaldehyde with 0.1% DEPC at RT for 20 min. The fixed cells were treated with 10 mM citrate buffer (pH 6.0) at 70°C for 30 min, dehydrated in an ethanol series, then enclosed by application of a seal chamber (Grace Biolabs 621505) to the coverslip. Lungs were collected from adult B6 mice (Jackson Labs) and fixed by immersion in 4% PFA as previously described . Non-IPF human lung tissue was obtained from a surgical resection, and IPF tissue from an explant. All mouse and human research were approved by the Institutional Animal Care and Use Committee and Internal Review Board, respectively, at Stanford University. The tissues were fixed by immersion in 10% neutral buffered formalin in PBS at 4°C overnight under gentle rocking, cryoprotected in 30% sucrose at 4°C overnight, submerged in OCT (Tissue Tek) in an embedding mold, frozen on dry ice, and stored at −80°C. 20 μm sections were cut on a cryostat (LeicaCM 3050S) and collected on either poly-lysine coated #1.5 coverslips or glass slides (Fisherbrand Superfrost), air dried for 10 min, and post-fixed with 4% PFA at RT for 20 min. The human lung tissue in was formalin-fixed and paraffin-embedded (FFPE) according to standard protocols, and 20 μm sections were cut on a microtome and collected on glass slides. The FFPE sections were deparaffinized by immersion in Histoclear (National Diagnostics, HS-200) for 3 × 5 min, then dehydrated in an ethanol series and post-fixed with 4% PFA at RT for 20 min. Tissue sections were treated with 10 mM citrate buffer (pH 6.0) containing 0.05% lithium dodecyl sulfate at 70°C for 30 min, or in some experiments, with 0.1 mg/ml Pepsin in 0.1M HCl for 8 min at 37°C and dehydrated in an ethanol series. Following treatment, sections were air dried for 10 min and enclosed by application of a seal chamber.

PLISH probe design and preparation

Target RNAs were probed at ~40 nucleotide detection sites, with 1 to 10 sites per RNA species depending on expression level. NCBI BLAST searches were used to eliminate detection sites that shared 10 or more contiguous nucleotides with a non-target RNA. The detection sites were also selected to minimize self-complementarity as indicated by the IDT oligo analyzer. Each detection site was targeted with a pair of H probes designated HL (left H probe) and HR (right H probe). The HL and HR probes included ~20 nucleotide binding sequences that were complementary respectively to the 5' and 3' halves of the detection site.
The binding sequences were chosen so that the 5' end of the HL binding sequence and the 3' end of the HR binding sequence would abut at a 5’-AG-3’ or a 5’-TA-3’ dinucleotide in the target RNA. The lengths of the binding sequences were adjusted so that the melting temperature of the corresponding DNA duplex would fall between 45–65°C as computed by IDT Oligo analyzer using default settings of 0.25 μM oligo concentration and 50 mM salt concentration. To generate H probes, suitable HL and HR binding sequences were catenated at their respective 5' and 3' ends with overhang sequences taken from one of eight modular design templates . The left and right overhang sequences in each design template were complementary to a specific bridge (B) and circle (C) oligonucleotide, which directed a desired fluorescent readout. The design templates reported here utilized a common 31 base oligonucleotide for the bridge. Following previous work , the circle oligonucleotides were ~60 bases long with 11 base regions of complementarity to cognate H probes on either end. The circle sequences were chosen to minimize self-complementarity. Each imager oligonucleotide was complementary to a barcode embedded in one of the C oligonucleotides, allowing unique detection of the corresponding RCA amplicon. The H-probe oligonucleotides were ordered on a 25 nanomole scale with standard desalting. The B and C oligonucleotides were ordered on a 100 nanomole scale with HPLC purification, and phosphorylated with T4 polynucleotide kinase according to the manufacturer recommendations. Imager oligonucleotides were purchased either as HPLC-purified fluorophore conjugates (A488, Texas Red, Cy3, Cy5), or as amine-modified oligonucleotides that were subsequently coupled to Pacific Blue-NHS ester according to the manufacturer recommendations.

PLISH barcoding procedure

Six buffers were used for PLISH barcoding: H-probe buffer (1M sodium trichloroacetate, 50 mM Tris pH 7.4, 5 mM EDTA, 0.2 mg/mL Heparin), bridge-circle buffer (2% BSA, 0.2 mg/mL heparin, 0.05% Tween-20, 1X T4 ligase buffer in RNAse-free water), PBST (PBS + 0.1% Tween-20), ligation buffer (10 CEU/μl T4 DNA ligase, 2% BSA, 1X T4 ligase buffer, 1% RNaseOUT and 0.05% Tween-20 in RNAse-free water), labeling buffer (2x SSC/20% formamide in RNAse-free water), and RCA buffer (1 U/μl Nxgen phi29 polymerase, 1X Nxgen phi29 polymerase buffer, 2%BSA, 5% glycerol, 10 mM dNTPs, 1% RNaseOUT in RNAse-free water). An H cocktail was prepared by mixing H probes in H-probe buffer at a final concentration of 100 nM each. If an RNA was targeted with more than five probe sets, the concentrations of the H probes for that RNA were pro-rated so that their sum did not exceed 1000 nM. A BC cocktail was also prepared by mixing B and C oligonucleotides in bridge-circle buffer at a final concentration of 6 μM each. Single-step barcoding was performed in sealed chambers. The workflow consisted of three steps: (i) The sample was incubated in the H cocktail at 37°C for 2 hr. The sample was then washed 4 × 5 min with H-probe buffer at RT, and incubated in the BC cocktail at 37°C for 1 hr. (ii) Following a 5 min wash with PBST at RT, the sample was incubated in ligation buffer at 37°C for 1 hr. (iii) The sample was washed 2 × 5 min with labeling buffer at RT, and washed with 1X Nxgen phi29 polymerase buffer at RT for 5 min. The sample was then incubated in RCA buffer at 37°C for 2 hr (typical for cultured cells) to overnight (typical for tissue). Finally, the sample was washed 2 × 5 min with labeling buffer.
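Returning to the probe design rules above (a junction dinucleotide, ~20 nucleotide arms, and a melting-temperature window), these criteria lend themselves to simple scripting. The helper below is illustrative only: the Wallace-rule Tm is a crude stand-in for the IDT OligoAnalyzer calculation, the exact placement of the split within the AG/TA junction follows the published design templates rather than this sketch, and the BLAST and self-complementarity filters are omitted.

```python
# Illustrative detection-site scanner for a DNA-sense transcript string (A/C/G/T only).
def wallace_tm(seq: str) -> float:
    # Crude 2+4 rule; the actual design used the IDT OligoAnalyzer nearest-neighbor Tm.
    return 4 * (seq.count("G") + seq.count("C")) + 2 * (seq.count("A") + seq.count("T"))

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def candidate_sites(transcript: str, arm: int = 20, tm_window=(45, 65)):
    """Yield (junction_position, HL_binding, HR_binding) for sites satisfying the rules."""
    for i in range(arm, len(transcript) - arm - 1):
        if transcript[i:i + 2] not in ("AG", "TA"):
            continue
        left_half = transcript[i - arm + 1:i + 1]      # 5' half of the detection site
        right_half = transcript[i + 1:i + arm + 1]     # 3' half of the detection site
        hl, hr = revcomp(left_half), revcomp(right_half)
        if all(tm_window[0] <= wallace_tm(s) <= tm_window[1] for s in (hl, hr)):
            yield i, hl, hr
```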
Imaging

Barcoded PLISH samples were fluorescently labeled by two different procedures, designated 'washout' and 'fast'. In the washout procedure, the sample was incubated with imager oligonucleotides in imager buffer (labeling buffer with 0.2 mg/mL heparin) at a final concentration of 100 nM each for 30 min, and then washed 2 × 5 min with PBST at RT. In the fast procedure, the sample was incubated for 5 min with imager oligonucleotides in imager buffer at a final concentration of 3 nM each, and then imaged immediately. Samples that did not require label-image-erase cycles were stained with DAPI (stock 1 mg/ml; final concentration - 1:1000 in PBS) for 5 min and mounted in H-1000 Vectashield mounting medium (Vector). Data were collected by confocal microscopy (Leica Sp8 and Zeiss LSM 800) using a 40X oil immersion or a 25X water immersion objective lens. 20 μm z-stacks were scanned, and maximum projection images were saved for analysis. For 5-color experiments, DAPI was added after the Pacific Blue channel had been imaged, and the Texas Red and Cy3 channels were linearly unmixed using Zeiss software. Transmitted light images were acquired on a Leica Sp8 confocal microscope using the 488 nm Argon laser and the appropriate PMT-TL detector. Images from serial rounds of data collection were aligned using the nuclear stain from each round as a fiducial marker. Unless otherwise stated, imaging data of cells and mouse lung tissue are representative of three independent experiments with ≥4 fields of view each. Imaging data of human lung tissue are representative of two independent experiments with ≥4 fields of view each.

PLISH and HCR co-localization

HCR was performed following a published protocol with probes that targeted two sites covering nucleotides 621–670 and 1159–1208 in the mouse Axin2 transcript, and AlexaFluor 488-/AlexaFluor 647-labeled amplifier oligonucleotides. The samples were then processed for PLISH with H probes targeting four sites covering nucleotides 347–386, 1878–1917, 2412–2451 and 2956–2995 in the Axin2 transcript, and imaged using a Cy3-labeled imager oligonucleotide.

PLISH with concurrent immunohistochemistry

PLISH barcoding was performed as described above. Subsequently, the sample was washed 3 × 5 min with PBST at RT, and incubated in blocking solution (50 μl/ml [5%] normal goat serum, 1 μl/ml [0.1%] Triton X-100, 5 mM EDTA and 0.03 g/ml [3%] BSA in PBS) at RT for 1 hr. The sample was then incubated with primary antibody (Rabbit anti-pro-Sftpc, Millipore, 1:500 or Rabbit anti-Cytokeratin 5, Abcam Ab193895, 1:400) in blocking solution at 37°C for 2 hr under gentle rocking, washed 4 × 5 min with PBST at RT, and incubated with secondary antibody (Goat anti-Rabbit-Cy5, Jackson Lab, 1:250) and DAPI (1:1000) in blocking solution at RT for 1 hr. The sample was washed 3 × 5 min in PBST at RT and mounted in H-1000 Vectashield.

Antisense blocking oligonucleotide

Mouse lung tissue cryosections were collected on slides, post-fixed and processed as described above. The samples were incubated with a 60-base oligonucleotide complementary to nucleotides 219–278 in the Scgb1a1 mRNA, or with a scrambled 60-base oligonucleotide, at 100 nM final concentration in H-probe buffer at 37°C for 2 hr. The samples were then washed 2 × 5 min with H-probe buffer at RT, and processed for PLISH using H probes that targeted nucleotides 229–268 in the Scgb1a1 transcript.
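As noted in the Imaging subsection above, serial rounds of data collection were aligned on the nuclear stain. A minimal sketch of such fiducial-based alignment is given below, assuming 2D numpy arrays; it uses a rigid phase-correlation shift as one possible implementation, and the specific registration algorithm applied in the original analysis is not prescribed by this sketch.

```python
# Fiducial-based alignment of a later imaging round to the first, using the DAPI channel.
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

def align_round(dapi_ref, dapi_moving, channel_moving):
    """Estimate a rigid shift between rounds from DAPI and apply it to one PLISH channel."""
    shift, _, _ = phase_cross_correlation(dapi_ref, dapi_moving, upsample_factor=10)
    return ndi.shift(channel_moving, shift)   # repeat for each channel of the later round
```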
Signal erasure for iterative cycles of PLISH

To perform enzymatic erasure, 15–20 base imager oligonucleotides were ordered with the dT nucleotides replaced by dU nucleotides. Following imaging, the signal was erased by incubating the sample with 0.1 U/μL USER enzyme in 1X USER enzyme buffer at 37°C for 20 min, followed by washing 2 × 3 min with PBST at RT. To perform rapid erasure, short 10–11 base oligonucleotides were ordered. Following imaging, the signal was erased by incubating the sample with PBST at 37°C for 15 min.

Correlative immunostaining

Lungs collected from B6 and the Lyz2 +/EGFP mouse strains were fixed and immunostained as whole mounts as previously described . Primary antibodies were chicken anti-GFP (Abcam ab13970), rat anti-Ecad/Cdh1 (Invitrogen ECCD-2), goat anti-Scgb1a1 (gift from Barry Stripp), rabbit anti-pro-Sftpc (Chemicon AB3786), and rat anti-Ager (R and D MAB1179). Fluorophore-conjugated secondary antibodies raised in Goat (Invitrogen) or Donkey (Jackson Labs) were used at 1:250 and DAPI at 1:1000.

Data analysis

FIJI was used to pseudocolor unprocessed micrographs for display as three-color overlays. A custom CellProfiler pipeline was created to measure RNA signal intensities at the single-cell level. Briefly, the centers of cell nuclei were first identified as maxima in a filtered DAPI image, and associated with a numerical index. Nuclear boundaries were assigned by a propagation algorithm, and then expanded by ~1 micron to define sampling areas. The following data were then recorded: (i) average pixel intensities for each data channel over each sampling area; (ii) the coordinates of the sampling areas; (iii) shape metrics for the corresponding nuclei; and (iv) an image with the boundary pixels of each nucleus set equal to the associated index value. For each RNA species, the PLISH data were first normalized onto a 0:10 scale by dividing through by the largest value observed in any cell over all of the fields of view, and then multiplying by ten. The data were then log-transformed onto a −1:1 scale by the operation: transformed_data = log(0.1 + normalized_data). Custom Matlab scripts were used to perform hierarchical clustering of the log-transformed single-cell expression profiles, to generate heatmaps, and to create images with the boundary pixels of each nucleus colored according to a cluster assignment . Custom R scripts were used for k-means clustering and to make t-SNE projection plots .
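The normalization and log transform described under Data analysis translate directly into a few lines of numpy. The sketch below assumes a (cells x genes) matrix of per-cell intensities and a base-10 logarithm, which reproduces the stated −1:1 range.

```python
# Per-gene scaling to a 0:10 range followed by the log transform described above.
import numpy as np

def transform(raw):
    normalized = 10.0 * raw / raw.max(axis=0, keepdims=True)   # 0:10 scale per RNA species
    return np.log10(0.1 + normalized)                          # ~-1:1 scale (base-10 log assumed)
```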
Cardiovascular imaging in cardio-oncology | 2a463ecd-7a7b-4179-8a0b-931ab5fb4e42 | 11588866 | Internal Medicine[mh] | Cancer and cardiovascular disease are the leading causes of death in most developed countries. Cancer and cardiovascular disease are closely related from both scientific and clinical perspectives. The likelihood of developing most types of cancer and cardiovascular disease increases with age, and thus, older individuals with cancer are more likely to have high risk of cardiovascular disease. Recent dramatic advances in cancer treatment, including targeted molecular therapy, immunotherapy, and radiotherapy, have dramatically increased the number of cancer survivors. Cancer treatments may often have its characteristic side effects, particularly in cardiovascular system. Thus, management of cardiovascular complications in cancer survivors is increasingly focused . This oncology–cardiology combined area is called either onco-cardiology or cardio-oncology . We used cardio-oncology in this review since the latter term seems to be more internationally applied recently. Cardio-oncology has gained attention by both cardiologists and oncologists. More recently, radiologists have played important roles for suitable image selection and interpretations for assessing cardiovascular complications. In addition, radiology specialists may provide appropriate radiotherapy planning with reduced radiation dose to cardiovascular system. Mutual work among these specialists on pathophysiology analysis and treatment strategy is valuable . These specialists should collaborate in the diagnosis and treatment of cancer patients with cardiovascular disease in various clinical settings. This review summarizes the definitions of the most common cardiovascular complications and the latest findings of cardiotoxicity for cancer therapy, including targeted chemotherapy and radiotherapy. In addition, appropriate non-invasive image analysis for assessing important insights in the early detection and monitoring cardiotoxicity is described. The most common cardiotoxicity after cancer therapy is cardiac dysfunction resulting in heart failure. It is defined by the position papers using decrease in left ventricular ejection fraction (LVEF) and or global longitudinal strain following cancer therapy from European Society of Cardiology (ESC) , American Society of Echocardiography (ASE) , American Society of Clinical Oncology (ASCO) , and European Society of Medical Oncology (ESMO) . Recent guidelines released from ESC in cardio-oncology have provided all the healthcare professionals the instruments to take care of oncologic patients before, during and after cancer therapy as far as the cardiovascular system is concerned . These include clear indications regarding how to select and appropriate use of different imaging modalities in various clinical circumstances . Before initiating cancer treatment, baseline imaging is crucial for several reasons. Establishing a baseline helps in identifying any pre-existing cardiovascular conditions that might influence treatment decisions. Baseline imaging can also help stratify patients according to their risk for developing treatment-related cardiotoxicity. It provides essential information for planning cancer treatment, ensuring that any necessary modifications can be made to minimize cardiac risks. Echocardiography is widely used for baseline assessment of cardiac function and structure. Cardiac MRI provides detailed information on cardiac anatomy and function. 
These modalities are useful for comprehensive baseline assessment . Anthracyclines (e.g., doxorubicin, daunorubicin) are widely used in the treatment of leukemia and various solid tumors and are regarded as the prototype of cardiotoxic cancer therapy . It is well known that anthracyclines are associated with a significant risk of cardiotoxic side effects. Left ventricular dysfunction and consequent heart failure are the most important forms, considering their profound impact on morbidity and mortality after cancer therapy . Antimetabolites are considered to disrupt the formation of DNA and RNA. Notable antimetabolites include 5-fluorouracil (5-FU), capecitabine, cytarabine, gemcitabine, methotrexate, and hydroxyurea. These drugs are frequently used in the treatment of leukemia and cancers of the ovary, breast, gastrointestinal tract, and other solid tumors. The most common issues reported in the cardiac field are myocardial ischemia, angina, chest discomfort, and changes in the electrocardiogram (ECG), such as ST-segment and T-wave alterations . The harmful cardiac effects of 5-FU and capecitabine are believed to be caused by several factors, including damage to endothelial cells leading to thrombosis and increased metabolic activity resulting in energy shortage and ischemia. Oxidative stress leading to cell damage, coronary artery spasm, and reduced oxygen delivery by red blood cells also lead to myocardial ischemia . During the past 2 decades, targeted therapeutics have been introduced and increasingly applied in cancer therapy. The first evidence of significant cardiotoxicity was found for trastuzumab, an inhibitor of human epidermal growth factor receptor 2 (HER2) that is commonly used in the treatment of HER2-positive breast cancer . Significant progress in the development of new drugs has led to the increasing use of targeted therapeutics for cancer therapy with great improvements in morbidity and mortality. Such increased application of targeted therapeutics has unmasked various forms of side effects, including severe cardiovascular toxicities . Immune checkpoint inhibitor therapy induces an anti-tumor immune reaction by blocking immune-inhibitory signaling via the programmed death 1 (PD1) pathway . The survival rate of cancer patients after this therapy has been greatly improved, particularly for melanoma and non-small cell lung cancer. On the other hand, this therapy is associated with the risk of autoimmune-triggered immune-related adverse events (irAEs), including a significant risk of cardiotoxicity. The most recognized form is myocarditis, which may cause cardiogenic shock and severe arrhythmia . Cardiac irAEs are often treated with immunosuppressive therapy. The second most common cardiovascular complication of immune checkpoint inhibitor therapy is pericardial disease, such as pericardial effusion and pericarditis. A number of other targeted therapeutics have been applied for specific cancers. These newer therapies have brought great improvements in cancer prognosis, but have also been linked to an increased risk of severe cardiovascular toxicities . Their application continues to widen with new progress in the field of targeted therapeutics and the growing number of long-term survivors after oncology therapy. At the same time, consideration of cardiotoxic side effects has become essential for choosing the best and most suitable treatment strategy in cardio-oncology. Radiotherapy is an increasingly applied cancer-therapeutic method.
When radiotherapy is applied to malignant tumors such as left-sided breast cancer and esophageal cancer, cardiotoxicity can be caused by the high dose of radiation delivered to the myocardium. The likelihood and intensity of cardiovascular complications rise with an increase in the radiation dose, the size of the area exposed, and radiation exposure at a younger age . The risk also grows with a longer duration of radiotherapy, with the use of additional chemotherapy, and when other risk factors such as hypertension, smoking, obesity, and diabetes are present . Careful follow-up care of cancer patients is particularly important after radiotherapy combined with chemotherapy. Radiotherapy is well known to cause significant cardiovascular complications, such as pericarditis, as well as long-term complications, such as restrictive or constrictive pericarditis. Approximately 35% of cancer patients undergo radiotherapy within 1 year after diagnosis . Radiation therapy to treat tumors near the heart increases the risk of developing radiation-induced valvular heart disease. Moreover, patients who have previously undergone mediastinal radiation therapy may face a higher risk of complications and death following valve surgery . Radiotherapy-related myocarditis and vasculitis are known acute complications, but their incidence has decreased owing to dose fractionation . Suitable diagnosis is valuable for identifying and monitoring such early complications, using blood tests and, when required, molecular imaging. Long-term complications of radiotherapy involving the heart are cardiac fibrosis and coronary atherosclerosis. Such complications may arise with a latency of several decades after exposure to radiation . These long-term complications include coronary artery disease, valvular disease, and diastolic dysfunction. Radiation during childhood and concomitant exposure to anthracyclines are associated with a significantly increased risk of cardiac complications . Therefore, careful follow-up of young cancer patients is particularly important after radiotherapy. Appropriate diagnosis and assessment of cardiovascular toxicities play an important role in cardio-oncology. There are a number of methods commonly used for detecting cardiovascular disease and assessing its severity after cancer treatment. Measurement of serum biomarkers, such as troponin, is valuable for detecting early signs of cardiotoxicity during chemotherapy . Biomarker analysis is quite simple and accurate for the diagnosis and severity assessment of cardiotoxicities. When biomarker studies suggest possible cardiac abnormalities associated with ECG abnormalities, a non-invasive imaging study should be performed as the next step for precise analysis. Long-term follow-up imaging is essential to identify delayed cardiotoxic effects and to monitor cardiac function over time. Imaging biomarkers may provide a way to diagnose toxicity before the development of irreversible cardiovascular damage, particularly in the early stage of cancer therapy. The choice of imaging modality in cardio-oncology is influenced by several factors, including the patient’s clinical condition, the specific information needed, the availability of imaging technology, and the overall treatment plan. Different cancer treatments have varying cardiotoxic profiles. For instance, anthracyclines are known to cause left ventricular dysfunction, while radiation therapy can lead to myocardial fibrosis and coronary artery disease.
The imaging modality should be selected based on the expected cardiotoxicity profile of the treatment. Imaging may also guide care in a way that allows cancer patients to continue their treatment safely without major cardiac events. There is considerable evidence showing the importance of monitoring left ventricular function during cancer therapy. However, the characteristics and clinical evidence of right ventricular impairment are rather poorly described . In addition, vascular inflammation should be considered one of the major risks after cancer therapy. Furthermore, thromboembolic complications leading to an increased risk of pulmonary embolism are often observed, particularly after treatment with tyrosine kinase inhibitors and serine–threonine kinase inhibitors. Thus, imaging analysis should play an important role in assessing cardiovascular function after cancer therapy. Characteristics of imaging techniques for identifying cardiotoxicity are summarized in Table . Echocardiography has been used as the gold standard in the cardio-oncology field because it is easily performed. Echocardiography is excellent for assessing cardiac function, including left ventricular ejection fraction (LVEF), diastolic function, and myocardial strain. LVEF is often used as the main parameter for detecting changes in left and right ventricular function. In particular, advanced assessment of right ventricular function by 3D-echocardiography is valuable for cardiotoxicity analysis . New imaging techniques such as longitudinal strain on echocardiography, cardiac MRI, and nuclear imaging have recently received attention in this field. Myocardial strain is expected to detect early changes before cardiac function deteriorates and shows better interobserver agreement than LVEF. A recent meta-analysis indicated good prognostic performance of global longitudinal strain for subsequent LV dysfunction from anthracycline therapy . Thus, strain analysis is of most benefit for patients with low-normal LVEF, suggesting its use as a new and sensitive marker for closer surveillance and possible cardio-protection during and after cancer treatment . While echocardiography is easy and most commonly performed for monitoring biventricular function, it is often difficult to assess function in those with a poor acoustic window. In addition, various functional parameters seem to be rather operator dependent. Cardiac CT is a standard test for the diagnosis of atherosclerotic cardiovascular disease. Its high resolution and reduced radiation exposure permit wide application in patients with suspected cardiovascular disease. In cardio-oncology, cardiac CT provides precise risk assessment in various cancer patients. Cardiac CT is valuable for assessing structural abnormalities of the myocardium, coronary arteries, and aorta . It provides detailed images of the heart's structures, including the heart chambers, valves, and major vessels. This is important for evaluating patients who might have valvular heart disease, especially if related to previous cancer treatments such as radiation therapy. Non-contrast CT scanning is employed to assess the amount of calcium in the coronary arteries, serving as a dependable indicator of cardiovascular risk. Coronary CT is widely performed in those with suspected coronary artery disease. It is also utilized to rule out coronary artery stenosis in patients who have undergone cardiotoxic cancer treatments and have subsequently shown a decrease in LVEF .
In addition, myocardial fibrosis and arteritis, which are often seen after radiotherapy, are well identified . Cardiac CT shows an extremely high negative predictive value for high-risk patients. However, the diagnostic accuracy of cardiac CT for cardiovascular disease is limited in those with a high heart rate, arrhythmias, or severe calcification . Cardiac MRI has been used as the reference standard for the measurement of cardiac chamber volumes, myocardial mass, and contractile function. It may identify structural alterations such as biventricular dysfunction and functional changes in the myocardium, including signs of edema and inflammation, possibly before left ventricular dysfunction develops . From a functional standpoint, minor initial tissue changes after chemotherapy can lead to localized impairments in wall motion as an early indicator of cardiotoxicity, identifiable through strain-based cardiac MRI . Cardiac MRI is useful for identifying perfusion defects, with diagnostic value similar to that of CT perfusion and radionuclide perfusion imaging. The main advantage of cardiac MRI is myocardial tissue characterization using multiparametric imaging, such as late gadolinium enhancement to identify gross scar, T1 mapping to define interstitial fibrosis and extracellular volume (ECV) increase, and T2 mapping to assess myocardial edema. Myocardial edema and increased ECV are the earliest surrogate markers of chemotherapy- or radiation-induced injury . Several studies have shown that patients who received cardiotoxic chemotherapy exhibit an increased ECV compared to a similar group of individuals who did not undergo such treatment . ECV is considered an indirect biomarker of tissue fibrosis and interstitial space expansion . An increase in pre-contrast T1 is associated with myocardial edema, inflammation, and fibrosis . On the other hand, an increase in T2 relaxation time is associated with acute myocardial edema, as a marker of water-sensitive processes . Generally, cardiotoxic drug exposure has been associated with an increase in both native T1 and T2 relaxation times . A number of studies have suggested a valuable role of increased T2 relaxation time as an early acute sign of cardiotoxicity . These changes are often observed with various cardiotoxic drug therapies as well as radiotherapy. MRI permits semiquantitative tissue characterization. In addition, there is no radiation burden to the patient. Therefore, both a baseline study before treatment and follow-up studies after treatment are valuable . For screening and monitoring of cardiac function in patients with cancer treated with cardiotoxic chemoradiotherapy, cardiac MRI is considered a second-line modality after echocardiography, particularly for those with a difficult sonographic window . The role of cardiac MRI in cardio-oncology continues to grow. Nuclear medicine imaging has also been commonly applied for quantitative assessment of cardiovascular function after cancer therapy . LVEF is accurately assessed with high reproducibility using ECG-gated blood pool scans as well as ECG-gated perfusion imaging. Myocardial perfusion imaging using single-photon emission CT (SPECT) and positron emission tomography (PET) has been applied in many patients with suspected coronary artery disease. SPECT is often used for assessing myocardial ischemia.
A SPECT perfusion study offers a number of advantages over echocardiography or MRI, including applicability to virtually every patient (for example, those with obesity, implanted metal devices, or kidney failure) with low intra- and interobserver variability. PET perfusion studies, using 15O-water, 82Rb, 13N-ammonia, or the new agent 18F-flurpiridaz, are valuable for quantitative analysis of myocardial blood flow and myocardial flow reserve. Such quantitative assessment of myocardial blood flow and flow reserve by PET holds promise for precise assessment of disease mechanism and severity after cancer therapy. In particular, ischemia with non-obstructive coronary arteries (INOCA) has recently received attention in risk analysis after cancer therapy. Nuclear perfusion studies, particularly with PET, play an important role in the accurate diagnosis of INOCA and in risk analysis for various cardiovascular diseases. Molecular imaging using MRI and nuclear techniques is used to assess tissue function before and after cancer therapy from a molecular perspective. The value of tissue characterization by MRI has been described above. MRI is particularly suited to serial monitoring of cardiovascular dysfunction because, unlike nuclear imaging, it involves no radiation. Cardiac MRI provides detailed information on cardiac anatomy and can detect myocardial fibrosis, edema, and infarction. Nuclear imaging, on the other hand, plays an important role in assessing cardiotoxicity using various radionuclide tracers. PET with 18F-fluorodeoxyglucose (FDG) has been used as a marker of glucose utilization. FDG uptake may reflect oxidative stress and alterations in cardiac metabolism. In particular, an increase in FDG accumulation is associated with an active inflammatory process, allowing the activity of inflammatory disease to be evaluated under fasting conditions. FDG-PET is therefore valuable for detecting active cardiovascular inflammation, particularly in the early stage of cancer therapy, and for monitoring toxic effects. Cardiovascular toxicity may carry a risk of arterial thrombosis, including myocardial infarction. FDG serves as a sensitive indicator of the metabolic shift in the myocardium that occurs in the early stages of coronary artery disease. Distinct patterns of FDG uptake, particularly in the right ventricle, have been associated with anthracycline cardiotoxicity. FDG uptake is not specific for cancer-related inflammation and also occurs in ischemic but viable myocardium; therefore, careful prolonged fasting with dietary control is required for a suitable FDG-PET study. FDG-PET has been used to identify ischemic myocardium, active myocarditis, and vasculitis after chemotherapy, immunotherapy, and radiation therapy. FDG-PET plays a crucial role not only in oncology but also in cardio-oncology because of its ability to visualize metabolic activity in the whole body. FDG-PET is commonly used in oncology to show the localization and extent of cancer throughout the body. This imaging is particularly important for planning treatment strategy and assessing treatment effect with various anticancer therapies. Therefore, FDG-PET after cancer therapy is an elegant approach for simultaneous assessment of tumor response and possible cardiovascular dysfunction (Fig. ). The risk of vascular disease posed by the cancer itself is increased by cancer therapies. Vascular toxicities are the second most common cause of death in patients with cancer undergoing outpatient therapy.
A number of studies have described chemotherapy-related and radiotherapy-related vascular side effects. FDG-PET can demonstrate not only myocardial inflammation but also active vasculitis. Recent studies have shown that FDG-PET/CT can identify unstable atherosclerosis and active vasculitis as focal FDG uptake. In Japan, FDG-PET for patients with arteritis has recently been approved for insurance coverage. Although clinical reports remain limited at present, FDG-PET should play an important role in identifying and managing active vasculitis after cancer therapy. One single-photon molecular imaging biomarker is 123I-meta-iodobenzylguanidine (MIBG), a radiolabeled norepinephrine analogue. Cardiac neuronal function is compromised in various cardiac diseases, such as heart failure, ischemia, arrhythmia, and some types of cardiomyopathy. Functional and structural injury to myocardial adrenergic neurons may also accompany the pathophysiology of cancer therapy-related cardiotoxicity. Imaging dysregulated presynaptic norepinephrine homeostasis is thus a novel approach for prognostication in heart failure. Myocardial MIBG imaging can demonstrate this dysregulation, which is used as a prognostic marker in heart failure. Cardiac innervation imaging is attractive because it may provide an early marker of cardiotoxicity before a decrease in LVEF occurs. A recent report suggests that increasing MIBG washout is a marker of myocardial compensation for cardiotoxic injury from anthracycline exposure. Another molecular imaging biomarker is 123I-beta-methyl-iodophenyl pentadecanoic acid (BMIPP), a marker of fatty acid uptake in the myocardium that is commonly used in Japan. A focal decrease in myocardial BMIPP uptake is commonly observed after radiation therapy, suggesting myocardial damage or fibrosis. Combined imaging analysis using FDG-PET and BMIPP SPECT has indicated focal damage with metabolic alteration in the myocardium after radiotherapy for esophageal cancer. A few other PET molecular imaging biomarkers have been applied for early assessment of cardiotoxicity. Somatostatin receptor PET using 68Ga-DOTATOC/DOTATATE is a newly established method for staging or restaging of patients with neuroendocrine tumors. In addition, this tracer can visualize myocardial macrophage infiltration. This permits early detection of myocardial inflammation in patients with pericarditis, myocarditis, or subacute myocardial infarction and may serve as a potential predictor of cardiac remodeling. Recent reports describe new applications of somatostatin receptor PET in immunotherapy-induced myocarditis. One of the major advantages of this tracer is that no dietary control is required, which is mandatory in FDG studies. Note that the synthesis of 68Ga tracers requires a Ge/Ga generator but not an in-house cyclotron, which can be an advantage in some regions. More recently, 18F- or 68Ga-labeled fibroblast activation protein inhibitor (FAPI) tracers have attracted attention because they enable simultaneous assessment of tumor activity and cardiac disease (e.g., myocardial infarction and atherosclerosis). Cardio-oncology represents an important new area that should be covered by multidisciplinary specialist teams, including medical and radiation oncologists, cardiologists, diagnostic radiologists, technologists, nurses, and pharmacists.
Interdisciplinary cooperation among these specialists is essential for accurate and timely diagnosis and for suitable management of each cancer patient. Cardiologists should be knowledgeable about the strengths and limitations of the various imaging modalities, including echocardiography, cardiac MRI, and nuclear imaging, and should select the appropriate modality based on the clinical scenario and the specific information needed. Oncologists should understand the various cardiotoxic effects of new cancer treatments. In addition, radiologists play an important role in selecting suitable imaging modalities and in interpreting not only the tumor itself but also the condition of the heart when reviewing tumor images, in order to provide important messages from these images to both cardiologists and oncologists. Radiologists should be aware of the critical role that imaging plays in the diagnosis, monitoring, and management of cardiotoxicity in cancer patients. Understanding the appropriate use of each imaging modality is essential for providing comprehensive care. Radiologists should stay informed about the latest advances in imaging technologies and techniques, such as cardiac MRI and nuclear imaging, to ensure accurate assessment and timely detection of cardiotoxic effects. Cardiovascular imaging has emerged as a key tool for this purpose, allowing non-invasive evaluation of cardiovascular alterations complementary to biomarkers and clinical assessment. Suitable imaging selection and interpretation permit not only early diagnosis of cardiovascular injury but also accurate assessment of treatment effects. Thus, imaging will have an impact on therapeutic management and improve outcomes after cancer therapy. Future studies are warranted to assess the promising potential of these non-invasive cardiovascular imaging techniques in cardio-oncology.
Shi-Zhen-An-Shen Decoction, a Herbal Medicine That Reverses Cuprizone-Induced Demyelination and Behavioral Deficits in Mice Independent of the Neuregulin-1 Pathway | 32f86d4d-700f-47a0-9c6b-d523f12e0438 | 7932787 | Pharmacology[mh] | Schizophrenia is a severe, debilitating neuropsychiatric disorder affecting about 1% of the population worldwide . The lifetime prevalence of schizophrenia is approximately 0.6% in China . Schizophrenia is associated with long-term disabilities, considerable economic burden, and challenging social responsibility. Clinical symptoms of schizophrenia are classified as positive (e.g., hallucinations and delusions), negative (e.g., emotional blunting and social withdrawal), and cognitive deficits (problems with attention, processing speed, and working memory) . Antipsychotic therapy is the primary clinical treatment for schizophrenia. Although classical and atypical antipsychotics can be able to reduce delusions and hallucinations, they have little effect on negative symptoms and cognitive impairment exhibited by schizophrenia patients . Therefore, alternative treatments for schizophrenia are needed. We have prescribed Shi-Zhen-An-Shen decoction (SZASD), an empirical Chinese herb prescription for individuals at extreme risk for psychosis at Beijing Anding Hospital, and reported significantly relieved psychiatric symptoms and improved cognitive function in schizophrenia patients . Also, numerous reports have described divergent therapeutic effects for the active components of SZASD, including cornel iridoid glycoside(CIG) and tetrahydroxystilbene glucoside(TSG) , on neurological defects and cognitive impairment. Although increasing evidence supports the neuroprotective effects of SZASD on neurological disorders, its mechanisms of action remain unclear. Although the interaction of genetics and environmental factors is thought to be involved in the development of schizophrenia, the specific pathophysiological processes involved in the disease development and progression are largely unknown. Many studies have focused on the role of changes in gray matter in the pathogenesis of schizophrenia. Other studies, including magnetic resonance imaging (MRI) and genetic analysis , suggest that abnormalities in white matter (WM) and myelin sheaths are involved in the etiopathology of schizophrenia. The myelin sheaths in the central nervous system (CNS) are primarily composed of oligodendrocytes and function to preserve axonal integrity for rapid and efficient conduction of the electrical impulses along axons . The loss of myelin may result in axonal degeneration and neuronal dysfunction, which leads to cognitive deficits . Cuprizone (CPZ) is a neurotoxic agent that acts as a copper chelator , causing damage to the oligodendrocytes . Studies that fed mice with 0.2% ( w / w ) CPZ for several weeks induced chronic demyelination. Recent imaging studies showed that there was significant demyelination in the corpus callosum of mice fed with 0.2% CPZ for six weeks . Mice also exhibited abnormal behaviors, including impaired sensory gating and impaired memory when exposed to CPZ for three or four weeks. Quetiapine (QTP) (an atypical antipsychotic) significantly improved the schizophrenia-like behaviors and reduced myelin loss in CPZ-fed mice . Previous studies revealed that QTP at the dose of 10 mg/kg could attenuate some of the changes observed in CPZ-fed rats . The mechanisms of myelin development, degeneration, and regeneration are complex. 
Considerable evidence shows that neuregulin-1 (NRG-1) plays an important role in regulating the myelination process . Furthermore, clinical and animal studies indicate a critical role of NRG-1 in the development of schizophrenia. Thus, we speculated that NRG-1 might be a target mediating the therapeutic effect of SZASD in schizophrenia. This study examined the effects of SZASD on schizophrenia-like behaviors in mice in which demyelination was induced by exposure to CPZ. We hypothesized that SZASD would exert its antipsychotic effects by protecting the myelin sheath through the NRG-1 signaling pathway.
2.1. Animals Sixty six-week-old male C57BL/6 mice weighing 20 ± 2 g were used. Mice were obtained from the laboratory animal center at Capital Medical University. They were housed in an SPF environment with a 12 h light/dark cycle in a temperature- and humidity-controlled facility (22 ± 1°C and relative humidity of 55% to 60%), with free access to food and water. All animal experiments were approved by the Institutional Animal Care and Use Committee of Capital Medical University (AEEI-2018-047). 2.2. Drugs CPZ (Sigma-Aldrich, St. Louis, MO, USA) was added to the rodent chow at a final concentration of 0.2% (w/w). As a positive control, additional mice were treated with quetiapine (AstraZeneca, Wilmington, DE, USA, 10 mg·kg−1·d−1, QTP), a widely used antipsychotic . SZASD is a traditional Chinese herb formula composed of Chrysanthemum (菊花), Rehmannia glutinosa (干地黄), Polygoni Multiflori Radix (何首乌), and other components . All herbs were purchased from Beijing Tong Ren Tang (Group, Co., Ltd., Beijing, China). The mixed herbs were boiled at 100°C in an appropriate volume of water for 1 h, and the extraction procedure was repeated twice. The aqueous extracts were filtered, combined, and concentrated using rotary evaporation under vacuum in a 60°C water bath. The concentrated extract was lyophilized to create a SZASD powder (yield: 32.0%) and stored under desiccation at room temperature. 2.3. Drug Preparation and Administration The dose of SZASD administered to the mice was 9.1-fold the typical human dose, based on the body surface area conversion formula. For an average adult human weighing 70 kg, the typical dose is 1.9 g herbs·kg−1·d−1. The low (L), medium (M), and high (H) doses for mice were therefore calculated as 8.65, 17.29, and 25.94 g·kg−1·d−1, respectively. The powder was dissolved in sterile saline (NS) before administration via gavage. The final volume used for gavage feeding was 0.06 mL/10 g body weight. The control and QTP-treated groups were gavaged daily with an equal volume of sterile saline. 2.4. Experimental Design Sixty mice were randomly divided into six groups (n = 10 per group). Mice in group 1, the control group (control), received regular rodent chow and tap water. Group 2 mice (CPZ+NS) were fed rodent chow containing 0.2% CPZ (w/w) from weeks zero to six to induce chronic demyelination and received NS intragastrically via gavage during the last two weeks of treatment. Group 3 mice (CPZ+L(SZASD)) received a low dose of SZASD (8.65 g·kg−1·d−1), group 4 mice (CPZ+M(SZASD)) received a medium dose of SZASD (17.29 g·kg−1·d−1), group 5 mice (CPZ+H(SZASD)) received a high dose of SZASD (25.94 g·kg−1·d−1), and group 6 mice (CPZ+QTP) received QTP (10 mg·kg−1·d−1); all of these groups were fed rodent chow containing 0.2% CPZ (w/w) during weeks zero through six . SZASD and QTP were administered by oral gavage once daily for two weeks . The mice were weighed weekly. 2.5. Behavioral Tests The behavioral tests were conducted 24 hours after the completion of the two-week drug exposure. 2.5.1. Nest-Building Activity As previously described, each mouse was individually housed in a new cage overnight with access to food and water ad libitum. One unit of pressed cotton batting (5 cm × 5 cm) was placed in a corner of the cage . Normal mice perform typical nest-building activities in the “home” cage.
A five-point scoring system was used to evaluate the nest-building activity of the mice the following morning . The criteria were as follows: 1 = the pressed cotton batting was scattered throughout the cage and not bitten; 2 = pieces of pressed cotton batting were gathered to one side of the cage but remained relatively loose, did not form a nest, and showed no obvious tearing or folding; 3 = pieces of pressed cotton batting were folded together to form a nest, but the nest was relatively flat and did not exhibit any visible tearing; 4 = pieces of the pressed cotton batting were folded together to form a ball-shaped nest that covered the mouse, and the batting was bitten into small pieces; and 5 = pieces of the pressed cotton batting were folded together to form a ball-shaped nest that covered the mouse, and the nest walls were higher than the height of the mouse's body. The nest-building scores were obtained blindly by the same observer . 2.5.2. Open-Field Test The open-field test was used to measure the locomotor activity, exploratory behavior, and anxiety-related behavior of the mice. Tests were performed in a quiet room with low light levels. The day before the experiment, the mice were moved to this room to allow them to adapt to the environment. The open-field test was performed in a square box (40 × 40 × 25 cm) painted gray. A video camera was placed 1.5 meters above the box. The camera was connected to a computer equipped with the Supermaze System (Shanghai Xinruan Information Technology, Co. Ltd., Shanghai, China) to evaluate locomotor activity. Each mouse was placed in the center of the box and tested individually in the open field for 30 min, and the behaviors were recorded. After each test, the equipment was cleaned with 70% alcohol. The total distance traveled and the time spent at the perimeter were analyzed. 2.5.3. Prepulse Inhibition (PPI) PPI was performed according to published methods with slight modifications . The mice were exposed to a series of startle pulses with or without a short acoustic prepulse. The auditory startle reflex and sensory gating were evaluated using commercial startle chambers (MED-ASR-PRO1, MED Associates Inc., USA). White noise (60 dB) was provided as the background noise, and the experiment was divided into three stages: an adjustment period, block I, and block II. After an adjustment period of five minutes, block I consisted of 20 trials at 20 s intervals. Each trial included a single 20 ms 110 dB startle pulse with no acoustic prepulse. Block II included 60 trials at 20 s intervals with an acoustic prepulse and covered six types of stimuli: a 2 ms prepulse at 75 dB or 85 dB, followed by either no startle stimulus or a 20 ms 110 dB startle stimulus with a 30 ms or 100 ms delay. Each trial type was randomly repeated ten times. The PPI score was calculated using the block II data and the following formula: PPI = (1 − prepulse plus startle amplitude/startle only amplitude) × 100. PPI sessions were performed between 08:00 and 12:00 in the morning. 2.6. Biochemical Analyses 2.6.1. Tissue Preparation After the behavioral tests were completed, five mice from each group were used for immunohistochemical staining, and the other five mice were used for Western blot analysis. The mice used for immunohistochemical staining were deeply anesthetized using pentobarbital sodium (250 mg/kg, intraperitoneal injection (i.p.)) and then transcardially perfused with saline, followed by 4% paraformaldehyde in PBS (0.1 M, pH 7.4) .
The brains were removed and immersed in the same fixative overnight at 4°C, followed by cryoprotection in 20% sucrose for 24 h and 30% sucrose for 48 h, both at 4°C. The brains were cut into serial coronal sections (30 μm) using a sliding microtome, and the sections were collected into 16-well plates for cryopreservation. The mice used for Western blots were euthanized by cervical dislocation. The hippocampus and cerebral cortex were removed from the brains and quickly stored at −80°C for Western blot analysis. 2.6.2. Immunohistochemical Staining Free-floating sections were placed in PBS, washed three times at room temperature (RT), and then permeabilized with 0.3% Triton for 30 min at RT. The sections were then washed three times in PBS and incubated in PBS with 10% hydrogen peroxide for 30 min at RT. After washing three times with PBS, the sections were incubated with goat serum for 30 min at RT to block nonspecific antigen binding. Subsequently, the sections were incubated overnight at 4°C in a solution containing a rabbit polyclonal primary antibody against MBP (myelin basic protein) (1:200 dilution, Sigma-Aldrich, St. Louis, MO, USA). After rinsing in PBS, the sections were incubated with an HRP-conjugated secondary antibody (Zhongshan Golden Bridge Biology Company, Beijing, China) for 20 min at RT. The antigen-antibody complexes were visualized using 0.025% diaminobenzidine (DAB; Zhongshan Golden Bridge Biology Company, Beijing, China) as the chromogen. All sections were washed in PBS and mounted on aminopropyltriethoxysilane-coated slides. The mounted sections were dehydrated through a graded series of ethanol, cleared in xylene, and coverslipped with a water-soluble mounting medium. Images were obtained using a Nikon BX-51 light microscope and analyzed using Image-Pro Plus 6 (Media Cybernetics, Inc., Bethesda, MD, USA). 2.6.3. Western Blot Analysis Proteins were extracted with Tris-EDTA lysis buffer (1 mM EDTA, 20 mM Tris, pH 7.5, 1% Triton X-100, and 10% glycerol) containing a protease inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA). After the protein concentration was determined, the proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto Immobilon-FL NC membranes (Millipore UK Ltd., Watford, UK). Membranes were blocked with Tris-buffered saline containing 0.1% Tween-20 and 5% nonfat milk (PBST/5% milk) and then probed with specific antibodies at 4°C overnight. Membranes were then washed with PBST and incubated for 1 or 2 h in PBST/5% milk containing horseradish peroxidase- (HRP-) conjugated secondary antibodies (1:2000, Zhongshan Golden Bridge Biology Company, Beijing, China); the target bands were then visualized using enhanced chemiluminescence (ECL System, Bio-Rad, Hercules, CA, USA). Images were captured with the V3 Workflow (Bio-Rad, Hercules, CA, USA), and the bands were quantified using Image Lab Software (Media Cybernetics, Inc., Bethesda, MD, USA). The following primary antibodies were used in this study: rabbit monoclonal anti-MBP (1:1000, Abcam, Cambridge, UK), rabbit polyclonal anti-NRG1 (1:1000, Abcam, Cambridge, UK), and mouse monoclonal anti-β-actin antibody (1:1000; Santa Cruz, Dallas, Texas, USA). 2.7. Statistical Analysis SPSS software version 20 was used for data analysis, and GraphPad software (Prism version 6) was used for charting. Data are expressed as the mean ± SEM (standard error of the mean).
Differences in PPI and nest-building ability among groups were analyzed with the Wilcoxon rank-sum test. Repeated measures ANOVA with Bonferroni adjustment was used to compare the body weight of mice among groups. The other data were analyzed using one-way analysis of variance (ANOVA), with Dunnett's post hoc test for multiple comparisons. A p value of less than 0.05 was considered significant.
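To make this analysis plan concrete, the short Python sketch below illustrates how the PPI percentage defined in Section 2.5.3 and the group comparisons described here could be computed. This is not the authors' analysis code: the synthetic amplitude and distance values, the choice of groups, and the use of SciPy routines (scipy.stats.ranksums, f_oneway, and dunnett, the last of which requires SciPy ≥ 1.11) are illustrative assumptions only.

```python
# Minimal sketch of the behavioural statistics described above (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ppi_percent(startle_only, prepulse_plus_startle):
    """PPI = (1 - prepulse+startle amplitude / startle-only amplitude) * 100."""
    return (1.0 - prepulse_plus_startle / startle_only) * 100.0

# Hypothetical mean startle amplitudes (arbitrary units) for two groups of 10 mice.
control_ppi = ppi_percent(rng.uniform(800, 1200, 10), rng.uniform(300, 500, 10))
cpz_ppi     = ppi_percent(rng.uniform(800, 1200, 10), rng.uniform(500, 800, 10))

# Non-parametric comparison of PPI between two groups (Wilcoxon rank-sum test).
stat, p = stats.ranksums(control_ppi, cpz_ppi)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, p={p:.4f}")

# Parametric data (e.g., total distance in the open field): one-way ANOVA
# followed by Dunnett's post hoc test against a chosen reference group.
cpz_ns  = rng.normal(120, 15, 10)   # hypothetical distances (m)
cpz_m   = rng.normal(100, 15, 10)
cpz_qtp = rng.normal( 95, 15, 10)

f_stat, p_anova = stats.f_oneway(cpz_ns, cpz_m, cpz_qtp)
dunnett = stats.dunnett(cpz_m, cpz_qtp, control=cpz_ns)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}; Dunnett p-values={dunnett.pvalue}")
```

In practice, the same comparisons would be run on the measured per-animal values, with the reference group for Dunnett's test chosen to match the study design.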
3.1. Body Weight Changes All mice exhibited shiny, smooth fur as well as normal activity levels and feeding behavior. No mice died during the experiment. Repeated measures ANOVA for body weight showed significant main effects of time (F_time = 675.87, p_time < 0.001) and group (F_group = 31.19, p_group < 0.001) and a significant time × group interaction (F_interaction = 28.43, p_interaction < 0.001), indicating that body weight trajectories from baseline to the end of week 6 differed among groups . The control group showed sustained body weight gain during the experimental period. The CPZ-treated mice exhibited lower body weight gain than the control mice, starting in the second week of the experiment. As shown in , both medium-dose SZASD and QTP markedly increased the body weight of CPZ-treated mice from the 4th to the 6th week. 3.2. Schizophrenia-Like Behaviors The CPZ-treated mice presented schizophrenia-like behaviors, including abnormal nesting activity, locomotor activity, and PPI. A significant decrease in nest-building scores was observed in the CPZ+NS group compared to the naive controls ( p < 0.01, ). CPZ-fed mice treated with the three different doses of SZASD did not show significant changes in nest-building scores compared with the CPZ+NS group (all p > 0.05), although QTP significantly increased the nest-building ability of CPZ-fed mice ( p < 0.05). The open-field test evaluated the locomotor activity of the mice. The CPZ+NS group traveled a relatively greater total distance than the control group ( p > 0.05) and a significantly greater total distance than the CPZ+M-SZASD and CPZ+H-SZASD groups ( p < 0.05, each) . No significant difference was observed in the time spent in the center of the open-field chamber (a marker of anxiety) among the six groups, although the CPZ+NS group showed a trend towards increased time compared to the other groups. PPI evaluates sensory gating, which is impaired in schizophrenia. The CPZ+NS group showed significantly lower PPI than the other groups at both the prepulse intensity of 75 dB with a 30 ms delay ( p < 0.05, for all groups except the CPZ+H-SZASD group) and 75 dB with a 100 ms delay ( p < 0.01, all groups) . 3.3. Myelin Sheath MBP is the main component of myelin and is essential for maintaining the structural stability and function of the myelin sheath. Immunohistochemical analysis revealed that the expression of MBP was significantly downregulated in the corpus callosum ( , p < 0.01), hippocampus ( , p < 0.01), and cerebral cortex ( , p < 0.05) after CPZ exposure, and this was reversed in mice treated with the three doses of SZASD and QTP, except in the hippocampus, where the L-SZASD and H-SZASD treatments did not reverse the CPZ-induced MBP downregulation ( p > 0.05). Western blot analysis also showed significantly decreased MBP expression in the hippocampus after CPZ exposure ( p < 0.01) (Figures and ), which was reversed after M-SZASD, H-SZASD, and QTP treatment, but not after L-SZASD treatment ( p < 0.05). 3.4. NRG1 Expression NRG-1 signaling is crucial for normal oligodendrocyte development and CNS myelination. Western blot analysis revealed that exposure to CPZ significantly decreased NRG-1 expression in the mouse hippocampus (Figures and , p < 0.01), and this decrease was not significantly reversed by SZASD or QTP treatment (all p > 0.05).
We evaluated the remyelinating properties of SZASD in CPZ-treated mice. SZASD improved schizophrenia-like behaviors in CPZ-treated mice, including sensory gating and locomotor activity. SZASD also prevented demyelination and reversed MBP loss in the cerebral cortex, corpus callosum, and hippocampus, but did not reverse the loss of hippocampal NRG-1 protein in CPZ-treated mice. These results suggest that SZASD may ameliorate schizophrenia-like behavior and demyelination in CPZ-treated mice independent of the NRG-1 pathway. Several kinds of animal models are commonly used in schizophrenia research, but their underlying pathological mechanisms differ. Common drugs used in pharmacological models are amphetamine or methamphetamine (indirect dopamine agonists), phencyclidine (PCP), and dizocilpine maleate (MK-801), both NMDA receptor antagonists, which induce schizophrenia-like symptoms in animals . However, these animal models reproduce only part of the schizophrenia phenotype. For example, amphetamine induces only psychotic-like changes and does not mimic the negative or cognitive symptoms seen in schizophrenia . MK-801 induces two main psychomotor behaviors, positive-like symptoms (rapid movement ) and negative-like symptoms (stereotyped behavior ), as well as long-term impairment in associative memory and spatial working memory . Neuroimaging , genetic , and postmortem studies have shown white matter damage in schizophrenia patients and animal models. Previous studies have shown that CPZ effectively induces demyelination and that the demyelination process can lead to the development of schizophrenia-like behaviors . CPZ-fed mice exhibited loss of body weight , increased locomotor activity , impaired sensory gating , and loss of MBP in the brain, in line with the present results. The hyperactivity of CPZ-fed mice in the open-field test might reflect abnormal activity in the mesolimbic and nigrostriatal dopamine systems . The decreased nest-building activity in demyelinated mice might reflect self-neglect and social withdrawal , whereas the decline in PPI reflects impaired sensory gating, which is predominantly regulated by the cortico-striato-pallido-pontine (CSPP) system . Importantly, antipsychotics such as quetiapine and olanzapine significantly improved the abnormal behavior and myelin sheath loss in CPZ-fed animals. Thus, exposure to CPZ offers a promising model to study the effects of WM impairment in schizophrenia. In this study, the CPZ-fed mice exhibited significant loss of myelin in the corpus callosum, hippocampus, and cerebral cortex, similar to a previous study . MBP is the primary protein component of myelin, is critical for maintaining the structural integrity of the myelin sheath, and serves as a biomarker of axonal myelination. The myelin sheath is essential for providing metabolic support to the axon and maintaining the proper conduction velocity of action potentials necessary for physiological function . The corpus callosum is the largest connectivity structure in the brain, transfers information between the two cerebral hemispheres , and is involved in processing cognitive information . The hippocampus is responsible for learning and memory. Imaging studies have revealed impaired WM in these brain regions of patients with schizophrenia . Reduced cellular density and morphological abnormalities of oligodendrocytes have been reported in the brains of schizophrenia patients, primarily in the corpus callosum , anterior cingulate cortex, prefrontal cortex , and hippocampus .
Therefore, demyelination in the cerebral cortex, corpus callosum, and hippocampus in our study might be related to the schizophrenia-like behaviors observed in CPZ-treated mice. To explore the antipsychotic effect of SZASD, we previously completed a two-arm clinical study using the Global Assessment of Functioning Scale, the Positive and Negative Syndrome Scale, and the Structured Interview for Prodromal Syndromes in 54 individuals at ultrahigh risk for psychosis . That study revealed that twelve weeks of aripiprazole and SZASD cotherapy (vs. aripiprazole (5-10 mg/day) with a SZASD placebo) significantly improved the patients' psychotic symptoms and cognitive deficits, as reflected by their performance in the verbal learning, visual memory, continuous performance, and Stroop (color/word) tests ( p < 0.05), whereas aripiprazole monotherapy only increased the verbal learning and Stroop word test scores ( p < 0.05). The present study also indicates an antipsychotic-like effect of SZASD in CPZ-fed mice. The medium dose of SZASD significantly improved behavioral abnormalities and MBP expression, similar to the effect of QTP. The low dose of SZASD only partially improved CPZ-induced neurobiological and behavioral deficits and was less effective than the medium dose. The high dose of SZASD only moderately improved the behavioral deficits, suggesting a sedative or hypnotic effect of SZASD at the highest dose. Nevertheless, all SZASD doses used in this study appeared to be within safe and effective ranges (Supplementary material Figure ). Previous studies support the possibility of neuroprotective effects produced by the main components of SZASD in neurological diseases. For example, intragastric administration of CIG, the primary component extracted from Rehmannia glutinosa, ameliorated neurological defects and cognitive impairment in rats after traumatic brain injury or fimbria-fornix transection . Moreover, TSG, the main active component extracted from Polygoni Multiflori Radix, showed neuroprotective effects and improved learning and memory in both normal and neurotoxin-injured animals . Our present results confirmed that intragastric treatment with SZASD for 14 consecutive days significantly improved schizophrenia-like behavioral deficits in CPZ-treated mice, including sensory gating and locomotor activity. Although the exact mechanism of the neuroprotective effect of SZASD is unknown, it might involve the inhibition of neuronal apoptosis and the promotion of neuroregeneration mediated by neurotrophic factors . Our study is the first to demonstrate a protective effect of SZASD on the myelin sheath in the cerebral cortex, corpus callosum, and hippocampus and its association with improved psychological behaviors and cognitive deficits in CPZ-induced demyelinated mice. Although the NRG-1 protein has been implicated in neurodevelopment, myelination, and schizophrenia , a definite role for NRG-1 in schizophrenia remains questionable. Both overexpression and knockout of NRG-1 signaling have induced schizophrenia-like behaviors in animals. Furthermore, both elevated and reduced NRG-1 levels have been reported in patients with schizophrenia. In this study, CPZ induced a significant decrease in NRG-1 protein in the hippocampus that was not reversed by SZASD ( p > 0.05). This result suggests that NRG-1 was not involved in the SZASD-mediated remyelination and behavioral improvement in CPZ-treated mice. Several limitations of this study should be noted.
Firstly, due to the complexity of the symptoms and etiology of schizophrenia, it is difficult to replicate the full spectrum of symptoms and etiology in animal models. CPZ-fed mice only simulate symptoms related to myelin loss. Secondly, the pathophysiological process in this particular mouse model of schizophrenia is unclear and may not be specific to schizophrenia. Myelin impairment has been observed in other disorders , including bipolar disorder (BD), major depressive disorder (MDD), and multiple sclerosis (MS). Thirdly, the demyelination induced by CPZ may be transient. Withdrawal of CPZ can spontaneously initiate myelin repair . Therefore, clinical studies and more appropriate animal models are needed to fully understand the potential mechanism of action of SZASD in schizophrenia.
In summary, this study demonstrated that SZASD improved schizophrenia-like behaviors and demyelination impairment in mice exposed to CPZ. However, further studies are needed to determine the biological mechanisms that underlie the therapeutic effect of SZASD.
Calibrating cardiac electrophysiology models using latent Gaussian processes on atrial manifolds | ec7d8f15-f294-4023-96e7-6148bab69253 | 9532401 | Physiology[mh] | Mechanical contraction of the heart is initiated and synchronised by a travelling wave of electrical excitation and recovery that arises spontaneously in the natural pacemaker. The heart is made up of four chambers: the ventricles pump blood to the body and lungs, while the atria act as reservoirs and primers for the ventricles. A cardiac arrhythmia is a disturbance of regular heart rhythm resulting in a rapid, slow, or irregular rhythm. Atrial fibrillation (AF) is a common and increasingly prevalent cardiac arrhythmia . AF can be sustained by re-entry, where electrical activation continually propagates into recovering tissue, creating a self-sustaining rotating wave . Radio-frequency catheter ablation can be used to disrupt re-entrant circuits that act to sustain AF, but is not always effective . Two properties of cardiac tissue are important for the development of sustained re-entry, and these properties vary across atrial tissue. Conduction velocity (CV) describes the speed at which an activation wave spreads. The effective refractory period (ERP) is the minimum time interval between two successive stimuli that allows two activation waves to propagate and is related to action potential duration (APD), which is the interval between local activation (depolarization) and recovery (repolarization). Both CV and ERP decrease at shorter pacing intervals, and this dynamic behaviour and its spatial heterogeneity is important for determining the stability of re-entry , as well as the complex paths followed by electrical activation during AF . Natural variability in the speed of the excitation wave and the dynamics of excitation and recovery exist both between individuals and within the heart of a single individual , . Cardiac tissue exhibits spatial heterogeneity with differences in ion channel conductances, gap junction distributions, and fibrotic remodelling across the heart . These spatial heterogeneities in structural and functional properties lead to heterogeneity in ERP. The resulting dispersion in repolarisation properties is a mechanism for focal arrhythmia initiation , and atrial fibrillation initiation through increasing vulnerability to re-entry , . Electrophysiology (EP) models describe how electrical activation diffuses through cardiac tissue. Local activation and recovery are represented by a set of differential equations describing a reaction-diffusion system that models tissue-scale propagation of activation and cellular activation and recovery , . Models of cardiac electrical activation have become valuable research tools, but are also beginning to be used in the clinical setting to guide interventions in patients , . These applications require personalised models of both anatomy and electrophysiology to be constructed. Personalised anatomical models can be assembled from medical images, and statistical shape models enable the assessment of varying shape on electrical behaviour . Calibration of EP models is difficult because of the limited measurements that can be made routinely in the clinical setting. EP model parameters determine model behaviour and for a personalised model should be calibrated to reconstruct the heterogeneity in CV and ERP, as well as their dynamic behaviour, in the heart of a specific patient. 
Measurements of local activation time (LAT), the time of arrival of the activation wavefront relative to the timing of a pacing stimulus, enable reconstruction of heterogeneous CV for pacing at a fixed rate. Calibration to the dynamics of activation and recovery is more challenging. Both the quantity and type of data that can be recorded from patients are constrained by the clinical procedure, so it is difficult to determine the spatial heterogeneity of repolarisation. An S1S2 pacing protocol can be used to measure restitution curves: the heart is paced for several beats at an initial pacing cycle length S1, followed by a stimulus at a shorter coupling interval S2. This protocol is repeated for different values of the S2 interval, and the shortest S2 that can elicit an activation indicates an upper bound for ERP at the stimulus site. While models can be calibrated to reconstruct CV(S2) restitution and ERP from LAT measurements with an S1S2 protocol, recent work raises doubts over whether model parameters can be identified uniquely from these types of measurement alone. Biophysically detailed models of electrical activation have large numbers of parameters, and many of these may be unidentifiable from restitution curve data. There is a need for robust approaches that can interrogate cardiac tissue properties more thoroughly while at the same time minimising additional interventions.

In this paper, we present a novel method for probabilistic calibration of an electrophysiology simulator from spatially sparse measurements, using a probabilistic model of electrophysiology parameters on a manifold representing the left atrium of the heart. We focus on estimating parameter fields that reconstruct heterogeneity in ERP. We chose to use a phenomenological EP model that captures the main features of cardiac activation and recovery. We determine two types of ERP measurements for calibrating the EP parameters that determine excitability. EP parameters are modelled as latent Gaussian processes (GPs) on a manifold, and linked to observations via surrogate functions and a likelihood function designed for ERP measurements. We use Markov Chain Monte Carlo (MCMC) to obtain the posterior distribution of EP parameter fields across the atrium. We validate our method quantitatively by generating ground truths and calibrating to sparse data. The principles behind our method generalise to other measurement types, such as CV and APD restitution data, making our approach a step forward in the creation of digital twins capable of reproducing the complex dynamics of electrophysiology.

Workflow

The computational model, or ‘simulator’, that we seek to calibrate is composed of (i) a finite element mesh representing an atrial manifold x ∈ Ω; (ii) an electrophysiology model that maps EP parameters θ_l(x), l = 1, 2, …, defined on the computational mesh, to observable quantities; and (iii) a numerical solver for running EP simulations. Details on obtaining and processing a mesh for suitability in electrophysiology simulations, including the example mesh used here, are given in the Methods. The EP model is the modified Mitchell-Schaeffer (mMS) model, the parameters of which are effectively time constants representing different phases of the action potential.
We parameterize the mMS model with the following 5 parameters: CV_max(x), τ_in(x), τ_out(x), τ_open(x), and APD_max(x). See the Methods for details of the simulation model, the parameterisation and allowable parameter ranges, and the numerical implementation. We use the software openCARP to solve the mono-domain model for our simulations. The main task in this work is to calibrate the simulator by inferring the parameter fields θ_l(x) from ERP measurements. Our modeling workflow is illustrated in the accompanying figure and summarized here. Our code is available in a Zenodo repository.

Surrogate functions

The simulator can be used to map parameter fields to ERP fields, ERP(x) = sim(θ_l(x), l = 1, 2, …). Given ERP observations at multiple locations on the atrial mesh, as well as an appropriate likelihood model for these observations, the simulator could be used in an MCMC setting to calibrate the parameter fields by obtaining samples from the posterior distribution of the EP fields. However, this would be extremely inefficient, since ERP depends only on local (rather than remote) tissue properties. We instead use a surrogate function (also called an ‘emulator’), in which we learn the mapping from parameters to ERP. This surrogate function allows us to predict ERP at location x as ERP(x) = f(θ_l(x), l = 1, 2, …), bypassing the need to run the simulator directly for inference.

Gaussian process priors

The mesh has approximately 10^5 vertices for which parameters need to be defined, but ERP measurements are restricted to a subset of these vertices, with a number of observations on the order of 10^0–10^1. The electrophysiology parameter fields must therefore be assumed to have low-rank structure, induced by spatial correlation, in order to make inferences about EP parameter values at locations other than the ERP observation locations. This is achieved here by modeling the EP parameters using latent Gaussian process (GP) priors, θ_l(x) ~ GP. We use Gaussian Process Manifold Interpolation (GPMI), a method we proposed for defining Gaussian process (GP) distributions on manifolds. The approach uses solutions {λ_k, φ_k(x)} of the Laplacian (Laplace-Beltrami) eigenproblem on the mesh.

Bayesian calibration

We perform probabilistic calibration with MCMC to obtain the posterior distribution of the latent variables in the GPs. We use a likelihood function developed specifically for ERP measurements, which accounts for the fact that an S1S2 pacing protocol effectively measures the S2 interval in which ERP lies, rather than measuring ERP directly.

Sensitivity analysis and surrogate functions

Panel (a) of the sensitivity analysis figure shows sensitivity indices for two types of ERP: an ERP measurement for S1S2 pacing with S1 = 600 ms, denoted here as ERP_S2, and an ERP measurement for S1S2S3 pacing with S1 = 600 ms and S2 = 300 ms, denoted here as ERP_S3. The S1S2S3 protocol, consisting of N S1 beats, 1 S2 beat, and 1 S3 beat, is introduced in this paper.
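To make the pacing protocols concrete, the following minimal Python sketch builds the stimulus times for S1S2 and S1S2S3 trains; the function name and the choice of 8 S1 beats are illustrative assumptions, not part of the published protocol definition.

```python
def s1s2s3_stimulus_times(n_s1=8, s1=600.0, s2=300.0, s3=None):
    """Return stimulus times (ms) for an S1S2 or S1S2S3 pacing train.

    n_s1 : number of S1 beats at cycle length s1
    s2   : coupling interval of the premature S2 beat
    s3   : optional coupling interval of a further premature S3 beat
    """
    times = [i * s1 for i in range(n_s1)]          # S1 drive train
    t_s2 = times[-1] + s2                          # premature S2 beat
    times.append(t_s2)
    if s3 is not None:
        times.append(t_s2 + s3)                    # extra premature S3 beat
    return times

# Example: scan S2 downwards in 10 ms steps; the shortest S2 that still
# captures (produces a propagated beat) brackets ERP_S2 from above.
for s2 in range(400, 150, -10):
    stims = s1s2s3_stimulus_times(s1=600.0, s2=float(s2))
    # ... run a simulation with these stimulus times and test for capture ...
```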
We have determined that these measurements can be used to calibrate the EP parameters well enough to reproduce not only these ERP measurements, but also the time required for the action potential to reach various levels of repolarization recovery (e.g. APD_20 and APD_90, the times required for 20% and 90% recovery). A key finding is that the S1S2S3 protocol can be used (alongside the standard S1S2 protocol) to disentangle the contributions of separate parts of the action potential to the value of ERP, without needing to measure the action potential directly. The sensitivity indices in panel (a) show that these ERP measurements are mainly determined by τ_out and APD_max, which approximately correspond to the durations of the repolarization and plateau phases of the action potential, respectively. Calibration of the other parameters, which determine some aspects of the shape of the restitution curves but do not strongly impact ERP, requires both CV and APD restitution curve data from an S1S2 protocol. For this reason, we use ERP to calibrate θ1 ≡ τ_out and θ2 ≡ APD_max.

Panel (b) shows contour plots of the surrogate functions for ERP. A discontinuity occurs in the ERP_S3 surface for parameter combinations resulting in ERP_S2 > 285 ms, so data for ERP_S2 > 280 ms were discarded before fitting this function. Note that the majority of clinical ERP_S2 measurements fall in the range 170–270 ms, so even 280 ms could be considered an upper limit.

Synthetic experiments

To test our methodology, we ran synthetic experiments as detailed in the Methods. We used a left atrial mesh generated from a scan of an individual performed at St Thomas’ Hospital (see Methods for details). We created ground truth parameter fields for τ_out and APD_max in order to verify our calibration approach. We used 10 measurement locations, placed at random using a maximin design that excluded sites close to the mesh boundaries. The resolution of the S1S2 and S1S2S3 protocols was set to 10 ms. We used 24 eigenfunctions to represent each of the two parameter fields in Eq. (5), which we found to be sufficient to capture spatial variation while allowing good posterior sampling. For MCMC we ran 8 chains of 5000 iterations each, discarded the first 50% of the samples as ‘burn-in’, and randomly thinned the remaining samples by a factor of 100 to give 200 posterior samples.

One results figure shows the true parameter fields together with the posterior mean and standard deviation of the calibrated parameter fields. Another shows the true ERP fields, the posterior mean and standard deviation of the ERP fields (calculated from ERP samples, which are in turn calculated from the parameter field posterior samples), and the Independent Standard Errors (ISE) of ERP (the absolute difference between truth and posterior mean, divided by the posterior standard deviation). Measurement locations are shown as spheres in these figures, colored by the corresponding values at each location. A further figure shows the APD simulation results from the atrial simulator using the ground truth parameter fields and the posterior mean of the calibrated parameter fields.
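As a small illustration of the summary statistics used here, the following Python sketch computes the posterior mean, posterior standard deviation, and Independent Standard Error (ISE) fields from an array of posterior ERP samples; the array shapes and variable names are assumptions for illustration.

```python
import numpy as np

def summarize_posterior(erp_samples, erp_true):
    """Summary statistics of posterior ERP samples.

    erp_samples : (n_samples, n_vertices) posterior ERP samples at each mesh vertex
    erp_true    : (n_vertices,) ground-truth ERP used in the synthetic experiment
    """
    post_mean = erp_samples.mean(axis=0)
    post_std = erp_samples.std(axis=0, ddof=1)
    # ISE: absolute error in units of posterior standard deviation; values
    # below ~3 indicate the posterior comfortably covers the ground truth.
    ise = np.abs(erp_true - post_mean) / post_std
    return post_mean, post_std, ise
```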
The predictions of the EP parameter fields τ_out and APD_max and of the ERP fields ERP_S2 and ERP_S3 capture the ground truth extremely well. Predictions on the pulmonary veins, which are effectively regions of extrapolation, deviate from the ground truth more than other regions on the main body of the atrium. These deviations are on the order of the S2 and S3 resolution, and the posterior variance is higher in these regions. Uncertainty increases with distance from the measurement locations. The ISE scores show that the distribution of ERP predicted by the model covers the ground truth well, as nearly all values are less than 3. The ISE for ERP_S2 on the left atrial appendage is above 3, which may be caused by a combination of high ground-truth values for τ_out (which are not effectively probed by the measurements) and insufficient basis functions to capture high spatial variation in this region of the mesh. APD from the full atrial simulator using the posterior mean of the parameters (the maximum a posteriori estimate could have been used instead) matches the ground truth values very closely, demonstrating that the action potential has been calibrated well using only ERP measurements.

We also performed quantitative validation across a broad range of designs, covering different configurations of the S1S2(S3) pacing protocol (number of ERP observations, resolution of the S2 and S3 intervals) and different degrees of heterogeneity in ERP, controlled by different correlation lengthscales for the generated APD_max and τ_out ground-truth fields. A unit of kernel lengthscale is approximately 3.2 mm for this mesh (see Methods for details). Prediction values of ERP are based on the maximum a posteriori estimate of the parameters, and here we use 32 eigenfunctions per EP parameter field in order to better model fields with more rapid spatial variation. Root Mean Squared Error (RMSE) is reduced with increasing lengthscale (less ERP heterogeneity), decreasing S2 and S3 resolution (more precise measurements), and increasing number of observations. We note that our likelihood function introduces a small amount of bias, discussed below, which for an S2 and S3 resolution of 10 ms causes RMSE to increase slightly from 20 to 40 observations. Overall, the quantitative validation suggests that little is gained above 20 observation locations.
In this paper, we have developed a workflow for calibrating an electrophysiology simulator from sparse measurements of excitability. This was done by representing the spatially varying parameter fields as Gaussian processes on a manifold, and linking these parameters to excitability observations through non-linear surrogate functions (emulators). Using a likelihood function for ERP observations, we performed probabilistic calibration to obtain the posterior distribution of the EP parameter fields. Both visual and quantitative comparison demonstrates that this workflow can successfully calibrate a simulator to ERP to a high level of accuracy.

The nature of ERP observations, in which only the interval containing ERP is observed (and the possible brackets around this interval are fixed by the S1S2(S3) protocol), means that the ability to learn more by adding observations is strongly limited above a certain point. The quantitative validation shows that this limit is reached faster for smaller S2 and S3 resolution. Our likelihood function does introduce a very small amount of bias, since the true likelihood should be constant within the pacing interval, whereas our approximation decreases on approaching the interval edges. A simple solution would be to pad the ERP observation brackets, which would remove the bias but reduce the precision.

Without the assumption that measurements at particular locations give information about quantities at nearby locations, i.e. spatial correlation, inference about tissue properties beyond the measurement sites would not be possible, and atrial tissue would need to be sampled everywhere. Such regularization might make it difficult to capture discontinuous changes in tissue properties, although it would in any case be difficult to measure such abrupt changes in tissue behaviour using sparse measurements. It may be possible to utilize other personal data (e.g. scans) or prior information (e.g. a database of clinical measurements) to assist with calibration. The latent Gaussian process model serves two purposes.
Firstly, a run of the electrophysiology model requires specification of parameters at all points on the mesh, and the Gaussian process enables this specification via interpolation between measurement locations. Secondly, we assume that parameter values at neighbouring locations on the mesh are likely to be similar, which means that we need to do joint inference for the parameters at the measurement locations, rather than inferring parameters at each measurement location independently. In developing our method, we first attempted such an independent inference approach, in which parameters are calibrated at each measurement location independently and then interpolated over the manifold using GPMI, but we were not able to obtain satisfactory results. Our current workflow easily allows more complex spatial modeling using multiple latent GPs per EP parameter field, each with independent covariance kernels and hyperparameters that can be freely given suitable priors. It also provides the benefit of being able to constrain the posterior distribution by directly manipulating the posterior samples based on a priori knowledge, such as requiring that parameter values (or the tissue properties that depend on these parameters) fall within a certain physiological range.

Our proposed workflow for calibration is suitable for other types of data. We have previously shown that Gaussian processes can be used as surrogate functions for CV, APD, and ERP restitution curves. Observations from these restitution curves at different locations over the atrium could be included in calibration simply by adding further contributions to the likelihood function and using ‘Restitution Curve Emulators’ to map from EP parameters to the corresponding restitution curves. Our approach here solves the problem of representing the EP parameter fields on a manifold so as to make probabilistic calibration to sparse measurements a tractable problem. This allows uncertainty to be propagated from measurements through to an ensemble of calibrated models.

Electrophysiology model

The modified Mitchell-Schaeffer (mMS) cell model for mono-domain tissue simulations with isotropic diffusion is expressed in the following equations:

$$\frac{\partial V_m}{\partial t} = D \nabla^2 V_m + \frac{h V_m (V_m - V_{gate})(1 - V_m)}{\tau_{in}} - \frac{(1 - h) V_m}{\tau_{out}} + J_{stim} \qquad (1)$$

$$\frac{\partial h}{\partial t} = \begin{cases} (1 - h)/\tau_{open}, & \text{if } V_m \le V_{gate} \\ -h/\tau_{close}, & \text{otherwise} \end{cases} \qquad (2)$$

where V_m is a normalised membrane voltage, h is a gating variable that controls recovery, and J_stim is an externally applied stimulus. The 4 cell model parameters τ = (τ_in, τ_close, τ_out, τ_open) are time constants that approximately characterize stages of the action potential sequence, and D is the conductivity. We fixed the excitation threshold V_gate to 0.1.
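The cell-model part of Eqs. (1)–(2) is simple enough to sketch directly. The following Python function is a minimal single-cell (zero-dimensional, no diffusion) implementation of the mMS right-hand side, intended only to illustrate the roles of the time constants; the integration loop, stimulus, and parameter values are illustrative assumptions and this is not the openCARP implementation used in this work.

```python
import numpy as np

def mms_rhs(vm, h, tau_in, tau_out, tau_open, tau_close,
            v_gate=0.1, j_stim=0.0):
    """Right-hand side of the modified Mitchell-Schaeffer cell model
    (single cell, so the diffusion term D * laplacian(Vm) is omitted)."""
    dvm = (h * vm * (vm - v_gate) * (1.0 - vm) / tau_in
           - (1.0 - h) * vm / tau_out
           + j_stim)
    dh = np.where(vm <= v_gate, (1.0 - h) / tau_open, -h / tau_close)
    return dvm, dh

# Forward-Euler integration of one paced beat (illustrative values only).
dt, t_end = 0.02, 600.0                       # ms
vm, h = 0.0, 1.0
trace = []
for step in range(int(t_end / dt)):
    t = step * dt
    stim = 0.3 if t < 2.0 else 0.0            # brief suprathreshold stimulus
    dvm, dh = mms_rhs(vm, h, tau_in=0.3, tau_out=6.0,
                      tau_open=120.0, tau_close=150.0, j_stim=stim)
    vm, h = vm + dt * dvm, float(h + dt * dh)
    trace.append(vm)
```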
As in previous work, we reparameterized the model as follows:

$$CV_{max} = 0.5\,(1 - 2 V_{gate})\sqrt{2 D / \tau_{in}} \qquad (3)$$

$$APD_{max} = \tau_{close} \log\left(1 + \tau_{out} (1 - V_{gate})^2 / (4 \tau_{in})\right) \qquad (4)$$

In this new parameter space, weighted combinations of valid parameters are also valid parameters, which means that spatial interpolation of valid parameters will produce valid parameters. We refer to these transformed parameters simply as ‘parameters’. The valid ranges of the parameters are set as: CV_max 0.1–1.5 m/s, τ_in 0.01–0.30 ms, τ_out 1–30 ms, τ_open 65–215 ms, and APD_max 120–270 ms.

Atrial mesh

To generate the mesh for the simulator, the left atrial blood pool was segmented from a contrast-enhanced magnetic resonance angiogram scan performed at St Thomas’ Hospital. This segmentation was meshed using a marching cubes algorithm in CEMRGApp, and the resulting surface was remeshed to a regular edge length of 0.3 mm using mmgtools software, corresponding to around 110,000 vertices, which is sufficient for simulation with the mMS model. The mesh is also included with our code.

Sensitivity analysis

To determine ERP_S2, the ERP value under an S1S2 protocol with S1 = 600 ms, and ERP_S3, the ERP value under an S1S2S3 protocol with S1 = 600 ms and S2 = 300 ms, we utilized a surrogate simulation: a strip of tissue with homogeneous parameters, paced from one end with the corresponding protocol, with activation measured in the strip centre. The strip simulation is set up to match the atrial simulation as closely as possible (space and time discretization, cell model time-step subdivision, numerical integration, etc.). We obtained simulation results with an optimized Latin hypercube design of 500 parameter combinations in the parameter ranges given above. Variance-based sensitivity analysis was performed by fitting a Generalized Additive Model (GAM) to a model output, e.g. ERP_S2, as a function of a single model input, e.g. APD_max. The expectation of the GAM is then a line through the point-cloud of input-output pairs. The variance of this line (evaluated at the inputs) divided by the variance of the point-cloud gives an approximate sensitivity index of that input on that output. This method can be repeated for all inputs and all outputs. We implement GAMs using the LinearGAM function with 10 splines from the Python module pygam. The sensitivity index of output y for input x can then be calculated as S_x = Var[g(x)] / Var[y], where g is the GAM fitted to y as a function of x alone.
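A minimal sketch of this GAM-based sensitivity calculation is given below, assuming a design matrix X of sampled parameter combinations and a vector y of the corresponding ERP values; it uses pygam's LinearGAM with 10 splines as described above, but the surrounding variable names are illustrative.

```python
import numpy as np
from pygam import LinearGAM, s

def first_order_sensitivity(X, y, n_splines=10):
    """Approximate first-order (main-effect) sensitivity index of each
    input column of X on the output y, via one-dimensional GAM fits."""
    total_var = np.var(y)
    indices = []
    for j in range(X.shape[1]):
        gam = LinearGAM(s(0, n_splines=n_splines)).fit(X[:, [j]], y)
        main_effect = gam.predict(X[:, [j]])   # E[y | x_j] evaluated at the design points
        indices.append(np.var(main_effect) / total_var)
    return np.array(indices)

# Example usage (shapes only): X has one row per Latin hypercube sample and
# one column per EP parameter; y holds the simulated ERP_S2 values.
# sens = first_order_sensitivity(X, y)
```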
Surrogate functions

The map from EP parameters (inputs) to high-dimensional tissue responses (outputs), such as restitution curves, has been modelled previously using Gaussian processes. Here, cubic polynomials in both θ1 = τ_out and θ2 = APD_max were fit to corresponding values of ERP_S2 and ERP_S3, generated from an optimized Latin hypercube design of 100 values of CV_max, τ_out, and APD_max, keeping τ_in = 0.05 ms and τ_open = 120 ms in order to produce ERP and APD values in the range observed in human atrial tissue. CV_max was varied for robustness, but has negligible effects on ERP, as confirmed by negligible fitting residuals. There is a discontinuity in ERP_S3 for parameter values producing ERP_S2 ≈ 285 ms, so data for ERP_S2 > 280 ms were discarded before fitting these functions. We refer to these polynomial fits as ‘surrogate functions’ for ERP_S2 and ERP_S3, denoted f_1(θ1, θ2) and f_2(θ1, θ2) respectively, as they allow ERP to be determined without running simulations.

Gaussian process priors

We model the EP parameter fields θ_l(x) as spatially correlated random fields defined on the atrial manifold, i.e. x ∈ Ω. We use Gaussian Process Manifold Interpolation (GPMI), a method we proposed for defining Gaussian process distributions on manifolds. The approach uses solutions {λ_k, φ_k(x)} of the Laplacian (Laplace-Beltrami) eigenproblem on the mesh. Using GPMI allows us to represent fields on the atrium using a coordinate system with these eigenfunctions as a basis, enabling us to calibrate parameter fields on any given atrial manifold. The prior for each parameter field θ_l(x) can then be represented using the following probabilistic model, which uses the K smallest-eigenvalue solutions of the Laplacian eigenproblem:

$$\theta_l(x) = m_l + \alpha_l \sum_{k=1}^{K} (\eta_l)_k \sqrt{S(\sqrt{\lambda_k}, \rho_l)}\; \phi_k(x) \qquad (5)$$

$$(\eta_l)_k \sim \mathcal{N}(0, 1) \qquad (6)$$

with hyperparameters mean m_l, amplitude α_l, and lengthscale ρ_l, for l = 1, 2 and k = 1, …, K. The lengthscales determine the distance over which values are correlated, with a larger lengthscale corresponding to a smoother parameter field. The units of the lengthscale hyperparameters are determined by the spatial units of the mesh on which the eigenproblem is solved (see below for details). The hyperparameter vector η_l ∈ R^K must be given a Gaussian prior in order for this model to approximate a Gaussian process. The function S(·, ρ_l) is the spectral density corresponding to the choice of covariance kernel, with the square root of the eigenvalue, √λ_k, being the ‘frequency’ argument to this function.
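As a concrete illustration of Eq. (5), the sketch below draws prior samples of a parameter field from precomputed Laplacian eigenpairs. The eigenpairs and the specific spectral density (a squared-exponential form in one common one-dimensional convention) are stand-ins for the quantities used in the paper, so the normalisation should be treated as an assumption for illustration.

```python
import numpy as np

def sqexp_spectral_density(omega, rho):
    # Spectral density of a squared-exponential kernel (one common 1-D convention);
    # the exact normalisation constant is an assumption for illustration.
    return np.sqrt(2.0 * np.pi) * rho * np.exp(-0.5 * (rho * omega) ** 2)

def sample_field(eigvals, eigvecs, m=0.0, alpha=1.0, rho=20.0, rng=None):
    """Draw one prior sample of
    theta(x) = m + alpha * sum_k eta_k * sqrt(S(sqrt(lambda_k), rho)) * phi_k(x).

    eigvals : (K,) Laplacian eigenvalues
    eigvecs : (n_vertices, K) eigenfunctions evaluated at the mesh vertices
    """
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.standard_normal(eigvals.shape[0])                  # eta_k ~ N(0, 1)
    weights = np.sqrt(sqexp_spectral_density(np.sqrt(eigvals), rho))
    return m + alpha * eigvecs @ (eta * weights)
```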
In this work, we use the spectral density of the radial basis function (exponentiated quadratic) kernel, but other stationary kernels could be used. It is possible and tractable to perform inference directly on the simulation mesh by solving the Laplacian eigenproblem on this mesh, but for convenience a lower-resolution mesh of 5000 vertices was used, with vertices that are a subset of the higher-resolution mesh vertices. The lower-resolution mesh is produced using a simulated annealing algorithm to optimally choose a subset of 5000 nodes before meshing these new nodes to form a new surface. The routines for this are found in the quLATi package, and method details are given in our previous work. This lower-resolution mesh allows eigenfunctions to be calculated with fewer computational resources, requires less data storage for the eigenfunctions, and is convenient for plotting. Values can easily be transferred to the simulation mesh via interpolation (we use the ‘interpolate’ function in the Python software PyVista). However, this ‘two-mesh’ approach is entirely optional.

The units of the lengthscale parameter ρ in Eq. (5) can be empirically related to geodesic distance by drawing many GP samples for a given kernel function on the mesh (using a lengthscale that allows correlations to approach zero for some pairs of vertices, since the mesh is finite), calculating the correlation between these samples at many pairs of vertices, and fitting (via least squares) the kernel function to correlation as a function of geodesic distance between the pairs of vertices. For the mesh in this work, 1 unit of kernel lengthscale corresponds to approximately 3.2 mm.

Bayesian calibration

Given ERP measurements at different locations x_i over the atrium, it is possible to calibrate the parameter fields θ_1(x) ≡ τ_out(x) and θ_2(x) ≡ APD_max(x) by obtaining the posterior distribution of the hyperparameters in Eq. (5). For convenience, we collect the hyperparameters into the vector ψ := (m_1, m_2, α_1, α_2, ρ_1, ρ_2, η_1, η_2). Defining the ERP measurements as y, we can write the Bayesian inference problem as:

$$p(\psi \mid y) \propto p(y \mid \psi)\, p(\psi) \qquad (7)$$

$$p(y \mid \psi) := \prod_i p(y_1(x_i) \mid \psi) \prod_i p(y_2(x_i) \mid \psi) \qquad (8)$$

where y_1(x_i) and y_2(x_i) represent observations of ERP_S2 and ERP_S3 respectively. We assume that both types of ERP are measured at each location, but this is not a requirement, as terms can simply be replaced with 1 if the corresponding measurement is not performed. Clinically, S1S2 protocols are performed by decreasing S2 in steps of ΔS2 until successful activation does not occur on the S2 beat. Therefore, observations of ERP are only observations of an interval in which ERP lies. The observation that each ERP value at a measurement location x_i lies between two S2 values t_s and t_{s+1} can be expressed in the following way (see the corresponding figure for a graphical representation):
$$y_1(x_i) := \mathrm{ERP_{S2}} \in [t_3,\; t_4] \text{ at } x_i \qquad (9)$$

$$y_2(x_i) := \mathrm{ERP_{S3}} \in [t_1,\; t_2] \text{ at } x_i \qquad (10)$$

Observations can be linked to the hyperparameters ψ via the GP fields defined by Eq. (5), which determine the EP parameters at positions on the atrial mesh, and by the surrogate functions, which map these EP parameter values to ERP values:

$$p(y_m(x_i) \mid \psi) = p(y_m(x_i) \mid f_m(x_i)) \qquad (11)$$

$$f_m(x_i) := f_m(\theta_1(x_i, \psi_1),\; \theta_2(x_i, \psi_2)) \qquad (12)$$

where ψ_1 and ψ_2 represent the partitions of ψ for each EP parameter field.

An S1S2 pacing protocol to determine ERP effectively measures the S2 interval in which ERP lies. Defining the lower bound of this interval by I and the interval width by ΔS2, the true likelihood is given by a truncated uniform, or ‘top-hat’, distribution. In other words, p(ERP ∈ [I, I + ΔS2] | ψ) is equal to 1 if ψ produces an ERP in the specified interval, and 0 otherwise. The surrogate functions f_m(θ_1, θ_2) can be used to predict ERP from the EP parameters, which are determined by the GP fields θ_l(x) depending on the hyperparameters ψ. However, it is more convenient to work with an approximation to this top-hat distribution, which we previously derived for use with ERP measurements. The top-hat likelihood can be approximated by dividing the interval into N sub-intervals, with a normal distribution N(c_i, s) centered on each sub-interval, where c_i = I + (i − 1/2) ΔS2 / N, and with standard deviation equal to the sub-interval width, s = ΔS2 / N. We choose N = ΔS2, such that s = 1. For an observation y_1(x*) := ERP_S2 ∈ [I, I + ΔS2] (and similarly for ERP_S3), the likelihood can be approximated as:

$$p(y_1 \in [I, I + \Delta S2] \mid \psi) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\sqrt{2 \pi s^2}} \exp\left(-\frac{(f_1(\theta(x^*)) - c_i)^2}{2 s^2}\right) \qquad (13)$$

The shape of this likelihood function is a top-hat with smoothed sides and no discontinuities, such that the likelihood is approximately constant within the interval but rapidly falls to zero near the interval edges. This approximate top-hat distribution has infinite support, allowing gradient-based MCMC to be performed. For the log-likelihood, the readily available logsumexp function is used to prevent numerical underflow (this function is available in STAN, which is used for MCMC in this work). Note that this approximate top-hat distribution integrates to 1, rather than having an approximately constant value of 1 within the interval, but constant factors do not matter for MCMC so we retain this form for simplicity.
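The approximate top-hat log-likelihood of Eq. (13) is straightforward to implement with a log-sum-exp; the following Python sketch mirrors the construction above as a NumPy/SciPy stand-in for the STAN code actually used, with illustrative variable names.

```python
import numpy as np
from scipy.special import logsumexp

def approx_tophat_loglik(erp_pred, interval_low, delta_s2):
    """Log-likelihood of an ERP prediction given the observed bracket
    [interval_low, interval_low + delta_s2], using Eq. (13) with N = delta_s2
    sub-intervals and s = 1 ms."""
    n = int(delta_s2)
    s = delta_s2 / n                                    # equals 1 when N = delta_s2
    centres = interval_low + (np.arange(1, n + 1) - 0.5) * delta_s2 / n
    log_terms = (-0.5 * ((erp_pred - centres) / s) ** 2
                 - 0.5 * np.log(2.0 * np.pi * s ** 2))
    return logsumexp(log_terms) - np.log(n)             # log of the mixture average

# Example: predicted ERP of 242 ms against an observed bracket [240, 250) ms.
ll = approx_tophat_loglik(erp_pred=242.0, interval_low=240.0, delta_s2=10.0)
```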
Also note that, with a small adjustment, this likelihood can be used with a Gaussian process surrogate function that predicts a mean and variance. We use STAN (via PyStan) to perform Hamiltonian MCMC, which yields samples from the posterior distribution of the parameters ψ (see Results for details). We can use these samples to calculate samples of the EP fields over the entire atrium using Eq. (5), from which ERP samples can be calculated using the surrogate functions. We modify Eq. (5) by replacing α_l with α_l / |Φ_1| (where Φ_1(x) is constant), which assists with defining priors for α_l. We used the following priors on the hyperparameters: ρ_l ~ InvGamma(1.01, 20), m_l ~ Uniform(−∞, +∞), and α_l ~ InvGamma(1, 5). We found that these priors consistently allow both EP parameter fields to be recovered. Eq. (6) gives the prior for the remaining hyperparameters.

Parameter samples

To generate ‘ground truth’ parameter fields, we draw samples from a Gaussian process defined by Eq. (5) with a Matern 5/2 spectral density function, using 256 eigenfunctions. We set the parameters m = 0 and α = 1, and ρ is set to the values explained in the Results and below. The generated samples for τ_out and APD_max are then scaled and offset into the full allowable parameter ranges. The same operation is performed for CV_max, which is needed for the atrial simulations. Certain combinations of τ_out and APD_max correspond to regions of parameter space that produce unrealistic ERP. This is handled, for both ‘ground truth’ samples and posterior samples, by identifying mesh nodes where the parameters produce values ERP_S2 > 280 ms. The parameter values at these nodes are then replaced by a weighted average of the parameter values at other nodes with acceptable ERP values, weighted by 1/d_BH^4, where d_BH is the biharmonic distance. Biharmonic distance, calculated from the Laplacian eigenvalues and eigenfunctions, is significantly cheaper to calculate than geodesic distance, and avoids the topological issues that arise when Euclidean distance is used to interpolate values on a manifold. This procedure allows parameter samples to be constrained efficiently and effectively, and is far simpler than attempting to encode such constraints into the MCMC.

Synthetic experiment

We created ground truth parameter fields for τ_out and APD_max in order to verify our calibration approach (see above for details). ERP values were calculated using the surrogate functions. For the ERP measurements, we generated a design of measurement locations using an optimized ‘maximin’ hypercube design, excluding mesh sites within 0.6 cm of the mesh boundaries as potential sites for these measurements, since clinical measurements are unlikely to sample these regions. The resolutions of the S1S2 and S1S2S3 protocols were set to the values specified in the Results. A lengthscale of 20 was used for the example shown in the corresponding figures.
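The node-replacement step described under ‘Parameter samples’ above can be sketched as follows; the biharmonic distance matrix is assumed to be precomputed from the Laplacian eigenpairs, and the threshold and exponent follow the text above, while the function and variable names are illustrative.

```python
import numpy as np

def replace_invalid_nodes(theta, erp_s2, d_bh, erp_limit=280.0, power=4):
    """Replace parameter values at nodes with unrealistic ERP_S2 by an
    inverse-biharmonic-distance weighted average of values at valid nodes.

    theta  : (n_vertices, n_params) parameter values per mesh vertex
    erp_s2 : (n_vertices,) ERP_S2 predicted by the surrogate at each vertex
    d_bh   : (n_vertices, n_vertices) biharmonic distance matrix
    """
    theta = theta.copy()
    bad = erp_s2 > erp_limit
    good = ~bad
    if not bad.any():
        return theta
    # Weights 1 / d_BH^4 between each invalid node and every valid node.
    w = 1.0 / (d_bh[np.ix_(bad, good)] ** power)
    theta[bad] = (w @ theta[good]) / w.sum(axis=1, keepdims=True)
    return theta
```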
APD values were obtained using the atrial simulator (simulations of the mono-domain equation with the mMS model using the software openCARP). Tissue was paced for 8 beats from near the coronary sinus, and depolarization and repolarization were measured on the final beat. A spatially varying CV_max field was generated for use in this simulation, and τ_in and τ_open were fixed as described above. Simulations were run either for the ground truth parameter fields or for the predicted parameter fields resulting from calibration. Note that the parameters described in this manuscript were transformed back into the original parameters of the mMS model for running simulations in openCARP. The diffusion time-step was 0.1 ms and the ionic current time-step was 0.02 ms. The mMS action potential is normalized to have minimum 0 and maximum 1. Activation (depolarization) was measured when V_m reached 0.7 on the upstroke, and recovery (repolarization) was measured when V_m fell to 0.8 (APD_20), 0.7 (APD_30), 0.5 (APD_50), and 0.1 (APD_90). APD values are the time between activation and recovery. Simulation results for APD_20 and APD_90 are given in the Results.
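To illustrate the activation/repolarization thresholding described above, the sketch below computes APD values from a sampled membrane-potential trace; the trace array, sampling step, and the assumption of a single clean paced beat are simplifications for illustration.

```python
import numpy as np

def apd_from_trace(vm, dt, up_threshold=0.7, recovery_levels=(0.8, 0.7, 0.5, 0.1)):
    """Compute APD values from a normalised mMS membrane potential trace
    containing a single paced beat.

    vm : (n_steps,) membrane potential samples (normalised to [0, 1])
    dt : sampling interval in ms
    Returns a dict mapping recovery level -> APD in ms.
    """
    above = vm >= up_threshold
    t_act = np.argmax(above) * dt                 # first upstroke crossing of 0.7
    peak = np.argmax(vm)
    apds = {}
    for level in recovery_levels:
        below = vm[peak:] <= level                # first time the AP falls to this level
        t_rec = (peak + np.argmax(below)) * dt
        apds[level] = t_rec - t_act
    return apds

# For this model, recovery to 0.8 corresponds to APD_20 and recovery to 0.1 to APD_90.
```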
This segmentation was meshed using a marching cubes algorithm in CEMRGApp , and the resulting surface was remeshed to a regular edge length of 0.3mm using mmgtools software , corresponding to around 110,000 vertices, which is sufficient for simulation with the MMS model. This mesh can be found here , and is also included with our code . To determine [12pt]{minimal} $$ {ERP}_ {S2}$$ ERP S2 , the ERP value under an S1S2 protocol for S1 600 ms, and [12pt]{minimal} $$ {ERP}_ {S3}$$ ERP S3 , the ERP value under an S1S2S3 protocol for S1 600 ms and S2 300 ms, we utilized a surrogate simulation: a strip of tissue with homogeneous parameters, paced from one end with the corresponding protocol, with activation measured in the strip centre . The strip simulation is set up to match the atrial simulation as closely as possible (space and time discretization, cell model time-step subdivision, numerical integration, etc). We obtain simulation results with an optimized Latin hyper-cube design of 500 parameter combinations in the parameter range explained above. Variance-based sensitivity analysis was performed by fitting a General Additive Model (GAM) to model outputs, e.g. [12pt]{minimal} $$ {ERP}_ {S2}$$ ERP S2 , as a function of a single model input, e.g. [12pt]{minimal} $$APD_{max}$$ A P D max . The expectation of the GAM is then a line through a point-cloud of input-output pairs. The variance of this line (evaluated at the inputs) divided by the variance of the point-cloud gives an approximate sensitivity index of the input on that output , . This method can be repeated for all inputs and all outputs. We implement GAMs using the LinearGAM function with 10 splines from the Python module PYGAM . The sensitivity index of output y for input x can then be calculated as The map from EP parameters (inputs) to high dimensional tissue responses (outputs), such as restitution curves, has been modelled previously using Gaussian processes . Here, cubic polynomials in both [12pt]{minimal} $$ _1 = _{out}$$ θ 1 = τ out and [12pt]{minimal} $$ _2 = APD_{max}$$ θ 2 = A P D max were fit to corresponding values of [12pt]{minimal} $$ {ERP}_ {S2}$$ ERP S2 and [12pt]{minimal} $$ {ERP}_ {S3}$$ ERP S3 , generated from an optimized Latin hyper-cube design of 100 values of [12pt]{minimal} $$CV_{max}$$ C V max , [12pt]{minimal} $$ _{out}$$ τ out and [12pt]{minimal} $$APD_{max}$$ A P D max , keeping [12pt]{minimal} $$ _{in} = 0.05$$ τ in = 0.05 ms and [12pt]{minimal} $$ _{open} = 120$$ τ open = 120 ms in order to produce ERP and APD values in a range observed in human atrial tissue . [12pt]{minimal} $$CV_{max}$$ C V max was varied for robustness, but has negligible effects on ERP, as confirmed by negligible fitting residuals. There is a discontinuity in [12pt]{minimal} $$ {ERP}_ {S3}$$ ERP S3 for parameter values producing [12pt]{minimal} $$ {ERP}_ {S2} 285$$ ERP S2 ≈ 285 ms, so data for [12pt]{minimal} $$ {ERP}_ {S2}> 280$$ ERP S2 > 280 ms were discarded before fitting these functions. We refer to these polynomial fits as ‘surrogate functions’ for [12pt]{minimal} $$ {ERP}_ {S2}$$ ERP S2 and [12pt]{minimal} $$ {ERP}_ {S3}$$ ERP S3 , denoted as [12pt]{minimal} $$f_{1}( _1, _2)$$ f 1 ( θ 1 , θ 2 ) and [12pt]{minimal} $$f_{2}( _1, _2)$$ f 2 ( θ 1 , θ 2 ) respectively, as they allow for determining ERP without running simulations. We model the EP parameters fields, [12pt]{minimal} $$ _l( {x})$$ θ l ( x ) , as spatially correlated random fields defined on the atrial manifold, i.e. [12pt]{minimal} $$ {x} $$ x ∈ Ω . 
We use Gaussian Process Manifold Interpolation (GPMI), a method we proposed for defining Gaussian process distributions on manifolds . The approach uses solutions [12pt]{minimal} $$\{ _k, _k({ {x}})\}$$ λ k , ϕ k ( x ) of the Laplacian (Laplace-Beltrami) eigenproblem on the mesh . Using GPMI allows us to represent fields on the atrium using a coordinate system that uses these eigenfunctions as a basis, enabling us to calibrate parameter fields on any given atrial manifold. The prior for each parameter field [12pt]{minimal} $$ _l({})$$ θ l ( x ) can then be represented using the following probabilistic model, which uses the K smallest eigenvalue solutions to the Laplacian eigenproblem: 5 [12pt]{minimal} $$ _l({}) = m_l + _l _{k=1}^K ()_k , _l) } _k({}) $$ θ l ( x ) = m l + α l ∑ k = 1 K ( η l ) k S λ k , ρ l ϕ k ( x ) 6 [12pt]{minimal} $$()_k { {N}}(0,1) $$ ( η l ) k ∼ N ( 0 , 1 ) with hyperparameters mean [12pt]{minimal} $$m_l$$ m l , amplitude [12pt]{minimal} $$ _l$$ α l , and lengthscale [12pt]{minimal} $$ _l$$ ρ l , for [12pt]{minimal} $$l=1,2$$ l = 1 , 2 and [12pt]{minimal} $$k=1, , K$$ k = 1 , … , K . The lengthscales determine the distance over which values are correlated, with larger lengthscale corresponding to smoother parameter fields. The units of the lengthscale hyperparameters are determined by the spatial units of the mesh on which the eigenproblem is solved. See below for details. The hyperparameter vector [12pt]{minimal} $$ { {R}}^K$$ η l ∈ R K must be given a Gaussian prior in order for this model to approximate a Gaussian process. The function [12pt]{minimal} $$S( , _l)$$ S λ k , ρ l is the spectral density corresponding to the choice of covariance kernel, with the square root of the eigenvalue [12pt]{minimal} $$$$ λ k being the ‘frequency’ argument to this function. In this work, we use the spectral density for the radial basis function (exponentiated quadratic) kernel, but other stationary kernels could be used. It is possible and tractable to perform inference directly on the simulation mesh by solving the Laplacian eigenproblem on this mesh. But for convenience, a lower resolution mesh of 5000 vertices was used, with vertices that are a subset of the higher resolution mesh vertices. The lower resolution mesh is produced using a simulated annealing algorithm to optimally choose a subset of 5000 nodes before meshing these new nodes to form a new surface. The routines for this are found in the quLATi package , and method details are given in our previous work . This lower resolution mesh allows for calculation of eigenfunctions with fewer computational resources, less data storage for eigenfunctions, and is convenient for plotting. Values can be easily transferred to the simulation mesh via interpolation (we use the ‘interpolate’ function in the Python software Pyvista ). However, this ‘two-mesh’ approach is entirely optional. The units of lengthscale parameter [12pt]{minimal} $$$$ ρ in Eq. can be empirically related to geodesic distance by drawing many GP samples for a given kernel function on the mesh (using a lengthscale that allows for correlations to approach zero for some pairs of vertices since the mesh is finite), calculating the correlation between these samples at many pairs of vertices, and fitting (via least squares) the kernel function to correlation as a function of geodesic distance between the pairs of vertices. For the mesh in this work, 1 unit of kernel lengthscale corresponds to approximately 3.2 mm. 
Given ERP measurements at different locations $\mathbf{x}_i$ over the atrium, it is possible to calibrate the parameter fields $\theta_1(\mathbf{x}) \equiv \tau_{out}(\mathbf{x})$ and $\theta_2(\mathbf{x}) \equiv APD_{max}(\mathbf{x})$ by obtaining the posterior distribution of the hyperparameters in Eq. (5). For convenience, we collect the hyperparameters into the vector $\psi := (m_1, m_2, \alpha_1, \alpha_2, \rho_1, \rho_2, \eta_1, \eta_2)$. Defining the ERP measurements as $\mathbf{y}$, then we can write the Bayesian inference problem as:

$$p(\psi \mid \mathbf{y}) \propto p(\mathbf{y} \mid \psi)\, p(\psi) \tag{7}$$

$$p(\mathbf{y} \mid \psi) := \prod_i p\big(y_1(\mathbf{x}_i) \mid \psi\big) \prod_i p\big(y_2(\mathbf{x}_i) \mid \psi\big) \tag{8}$$

where $y_1(\mathbf{x}_i)$ and $y_2(\mathbf{x}_i)$ represent observations of $\text{ERP}_{S2}$ and $\text{ERP}_{S3}$ respectively. We assume that both types of ERP are measured at each location, but this is not a requirement as terms can just be replaced with 1 if the corresponding measurement is not performed. Clinically, S1S2 protocols are performed by decreasing S2 by $\Delta S2$ until successful activation does not occur on the S2 beat. Therefore, observations of ERP are only observations of an interval in which ERP lies. The observation that each ERP value at a measurement location $\mathbf{x}_i$ lies between two S2 values $t_s$ and $t_{s+1}$ can be expressed in the following way (see Fig. for a graphical representation):

$$y_1(\mathbf{x}_i) := \text{ERP}_{S2} \in [t_3,\; t_4] \;\text{at}\; \mathbf{x}_i \tag{9}$$

$$y_2(\mathbf{x}_i) := \text{ERP}_{S3} \in [t_1,\; t_2] \;\text{at}\; \mathbf{x}_i \tag{10}$$

Observations can be linked to the hyperparameters $\psi$ via the GP fields defined by Eq. (5), which determine the EP parameters at positions on the atrial mesh, and by the surrogate functions, which map these EP parameter values to ERP values:

$$p\big(y_m(\mathbf{x}_i) \mid \psi\big) = p\big(y_m(\mathbf{x}_i) \mid f_m(\mathbf{x}_i)\big) \tag{11}$$

$$f_m(\mathbf{x}_i) := f_m\big(\theta_1(\mathbf{x}_i, \psi_1),\; \theta_2(\mathbf{x}_i, \psi_2)\big) \tag{12}$$

where $\psi_1$ and $\psi_2$ represent partitions of $\psi$ for each EP parameter field. An S1S2 pacing protocol to determine ERP effectively measures the S2 interval in which ERP lies. Defining the lower bound of this interval by $I$ and the interval width by $\Delta S2$, the true likelihood is given by a truncated uniform, or 'top-hat', distribution. In other words, $p(\text{ERP} \in [I, I + \Delta S2] \mid \psi)$ is equal to 1 if $\psi$ produces ERP in the specified interval, and 0 otherwise. The surrogate functions $f_m(\theta_1, \theta_2)$ can be used to predict ERP from the EP parameters, which are determined by the GP fields $\theta_l(\mathbf{x})$ depending on the hyperparameters $\psi$.
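The surrogate functions referred to above are cubic polynomial fits, which can be constructed with ordinary least squares. The sketch below shows one way to do this in NumPy; the parameter ranges and the synthetic training responses are placeholders standing in for the strip-simulation results described earlier.

```python
import numpy as np

def cubic_features(t1, t2):
    """All monomials of (theta_1, theta_2) up to total degree 3."""
    return np.stack([t1**i * t2**j
                     for i in range(4) for j in range(4 - i)], axis=-1)

def fit_surrogate(t1, t2, erp):
    """Least-squares fit of a cubic polynomial surrogate f(theta_1, theta_2) -> ERP."""
    coeffs, *_ = np.linalg.lstsq(cubic_features(t1, t2), erp, rcond=None)
    return lambda a, b: cubic_features(a, b) @ coeffs

# Placeholder training data standing in for the Latin hyper-cube strip simulations.
rng = np.random.default_rng(2)
tau_out = rng.uniform(3.0, 9.0, 100)          # hypothetical tau_out values (ms)
apd_max = rng.uniform(120.0, 270.0, 100)      # hypothetical APD_max values (ms)
erp_s2 = 0.9 * apd_max + 2.0 * tau_out        # hypothetical ERP_S2 responses (ms)

f1 = fit_surrogate(tau_out, apd_max, erp_s2)
print(f1(np.array([5.0]), np.array([200.0])))  # predict ERP_S2 without running a simulation
```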
However, it is more convenient to work with an approximation to this top-hat distribution, which we previously derived for use with ERP measurements. This top-hat likelihood can be approximated by dividing the interval into $N$ sub-intervals, with a normal distribution $\mathcal{N}(c_i, s)$ centered on each sub-interval, $c_i = I + (i - 1/2)\,\Delta S2 / N$, with standard deviation equal to the sub-interval width, $s = \Delta S2 / N$. We choose $N = \Delta S2$, such that $s = 1$. For an observation $y_1(\mathbf{x}^*) := \text{ERP}_{S2} \in [I,\; I + \Delta S2]$ (and similarly for $\text{ERP}_{S3}$) the likelihood can be approximated as:

$$p\big(y_1 \in [I, I + \Delta S2] \mid \psi\big) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\sqrt{2\pi s^2}} \exp\!\left(-\frac{\big(f_1(\theta(\mathbf{x}^*)) - c_i\big)^2}{2 s^2}\right) \tag{13}$$

The shape of this likelihood function is a top-hat with smoothed sides and no discontinuities, such that the likelihood is approximately constant in the interval but rapidly falls to zero near the interval edges. This approximate top-hat distribution has infinite support, allowing gradient-based MCMC to be performed. For the log-likelihood, the readily available logsumexp function is used to prevent numerical underflow (this function is available in STAN, which is used for MCMC in this work). Note that this approximate top-hat distribution integrates to 1, rather than having approximately constant value 1 (in the interval), but constant factors do not matter for MCMC so we retain this form for simplicity. Also note that with a small adjustment this likelihood can be used with a Gaussian process surrogate function that predicts mean and variance. We use STAN (via PyStan) to perform Hamiltonian MCMC, which yields samples from the posterior distribution of the parameters $\psi$. See Results for details. We can use these samples to calculate samples of the EP fields over the entire atrium using Eq. (5), from which ERP samples can be calculated using the surrogate functions. We modify Eq. (5) by replacing $\alpha_l$ with $\alpha_l / |\Phi_1|$ (where $\Phi_1(\mathbf{x})$ is constant), which assists with defining priors for $\alpha_l$. We used the following priors on the hyperparameters: $\rho_l \sim \text{InvGamma}(1.01, 20)$, $m_l \sim \text{Uniform}(-\infty, +\infty)$, and $\alpha_l \sim \text{InvGamma}(1, 5)$. We found that these priors consistently allow for recoverability of both EP parameter fields. Eq. (6) gives the prior for the remaining hyperparameters. To generate 'ground truth' parameter fields, we draw samples from a Gaussian process defined by Eq. (5) with a Matérn 5/2 spectral density function using 256 eigenfunctions. We set parameters $m = 0$ and $\alpha = 1$, and $\rho$ is set to values explained in Results and below. The generated samples for $\tau_{out}$ and $APD_{max}$ are then scaled and offset into the full allowable parameter ranges. The same operation is performed for $CV_{max}$, which is needed for atrial simulations.
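A direct Python transcription of the approximate top-hat log-likelihood in Eq. (13), using logsumexp for numerical stability as described above, might look as follows. The interval values in the example are hypothetical, and in this work the likelihood is evaluated inside STAN rather than in Python.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def log_tophat_likelihood(erp_pred, interval_low, delta_s2):
    """Approximate log p(ERP in [I, I + delta_S2]) for a surrogate prediction erp_pred,
    as a mixture of N = delta_S2 normals centred on the sub-intervals (Eq. 13)."""
    n = int(delta_s2)
    centres = interval_low + (np.arange(1, n + 1) - 0.5) * delta_s2 / n
    s = delta_s2 / n  # sub-interval width; equals 1 for the choice N = delta_S2
    return logsumexp(norm.logpdf(erp_pred, loc=centres, scale=s)) - np.log(n)

# Hypothetical example: predicted ERP_S2 of 242 ms against an observed interval [240, 250] ms.
print(log_tophat_likelihood(242.0, 240.0, 10.0))
```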
Certain combinations of $\tau_{out}$ and $APD_{max}$ correspond to regions of parameter space that produce unrealistic ERP. This is handled for both 'ground truth' samples and posterior samples by identifying mesh nodes where the parameters produce values $\text{ERP}_{S2} > 280$ ms. The parameter values at these nodes are then replaced by a weighted average of parameter values at other nodes with acceptable ERP values, weighted by $1/d_{BH}^4$, where $d_{BH}$ is biharmonic distance. Biharmonic distance, calculated from the Laplacian eigenvalues and eigenfunctions, is significantly cheaper to calculate than geodesic distance, and avoids topological issues from using Euclidean distance to interpolate values on a manifold. This procedure allows us to constrain parameter samples efficiently and effectively, and is far simpler than attempting to encode such constraints into MCMC. We created ground truth parameter fields for $\tau_{out}$ and $APD_{max}$ in order to verify our calibration approach (see above for details). ERP values were calculated using the surrogate functions. For ERP measurements, we generate a design of measurement locations using an optimized 'maximin' hypercube design, excluding mesh sites within 0.6 cm of mesh boundaries as potential sites for these measurements since clinical measurements are unlikely to sample these regions. The resolution of the S1S2 and S1S2S3 protocols is set to values specified in Results. A lengthscale of 20 was used for the example shown in Figs. , and . APD values were obtained using the atrial simulator (simulations of the mono-domain equation with the mMS model using the software openCARP). Tissue was paced for 8 beats from near the coronary sinus, and depolarization and repolarization were measured on the final beat. A spatially varying $CV_{max}$ field was generated for use in this simulation, and $\tau_{in}$ and $\tau_{open}$ were fixed as described above. Simulations were run either for ground truth parameter fields or for predicted parameter fields resulting from calibration. Note that the parameters described in this manuscript were transformed back into the original parameters for the mMS model for running simulations in openCARP. The diffusion time-step was 0.1 ms and the ionic current time-step was 0.02 ms. The mMS action potential is normalized to have minimum 0 and maximum 1. Activation (depolarization) was measured when $V_m$ reached 0.7 on upstroke, and recovery (repolarization) was measured when $V_m$ fell to 0.8 ($\text{APD}_{20}$), 0.7 ($\text{APD}_{30}$), 0.5 ($\text{APD}_{50}$), and 0.1 ($\text{APD}_{90}$). APD values are the time between activation and recovery. Simulation results for $\text{APD}_{20}$ and $\text{APD}_{90}$ are given in Results.
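The replacement of nodes with unrealistic ERP by an inverse-biharmonic-distance-weighted average can be written compactly. In the sketch below the precomputed distance matrix is assumed to be available, and how biharmonic distance is obtained from the Laplacian eigenpairs is not shown.

```python
import numpy as np

def impute_bad_nodes(theta, erp_s2, d_bh, threshold=280.0):
    """Replace parameter values at nodes with ERP_S2 > threshold by a weighted average
    of values at acceptable nodes, weighted by 1 / d_BH**4.

    theta:  (V,) parameter values at the mesh vertices
    erp_s2: (V,) ERP_S2 values predicted at the mesh vertices
    d_bh:   (V, V) biharmonic distances (assumed strictly positive between distinct nodes)
    """
    bad = erp_s2 > threshold
    good = ~bad
    w = 1.0 / d_bh[np.ix_(bad, good)] ** 4        # (n_bad, n_good) weights
    theta = theta.copy()
    theta[bad] = (w @ theta[good]) / w.sum(axis=1)
    return theta
```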
The Healthcare Integrated Research Database (HIRD)

Introduction

Real‐world data (RWD) are increasingly used to generate real‐world evidence (RWE) to support the development and regulation of healthcare products, perform safety and effectiveness studies, and conduct health economics and outcomes research, among other uses. In response to the expanding utility of RWE, there is increasing availability of RWD sources, including national healthcare registries, disease and exposure registries, electronic health record (EHR) systems, and healthcare claims databases. Individuals in the United States (US) can obtain health insurance from their employer, individual exchanges, or government programs (i.e., Medicare and Medicaid). For employer‐sponsored insurance, there is typically a designated open enrollment period each year, during which employees can sign up for or make changes to their health insurance plans, as well as special enrollment periods for individuals who experience qualifying life events (e.g., marriage, birth/adoption of a child). Employees receive information about the available health plans, including details on premiums, coverage options, provider networks, and out‐of‐pocket costs. Employees evaluate the plans and select one that best meets their needs and budget. Individuals who do not have access to employer‐sponsored coverage can buy insurance through individual exchanges. These exchanges have an annual open enrollment period as well as special enrollment periods for individuals who experience a qualifying life event. The exchanges provide a range of health plans to choose from, allowing individuals to compare details on premiums, coverage options, provider networks, and out‐of‐pocket costs. Medicare is generally available to individuals aged 65 and older or to those under 65 with certain disabilities or conditions. Enrollees choose between traditional Medicare, offered from the federal government (Centers for Medicare & Medicaid Services), or Medicare Advantage plans offered through private insurance companies. Those who purchase traditional Medicare can also purchase supplemental coverage from private insurance companies to help cover out‐of‐pocket costs not covered by traditional Medicare. In addition, individuals can choose prescription drug coverage (Medicare Part D) from private insurance companies approved by Medicare. Eligibility for Medicaid is primarily based on income, household size, and state‐specific criteria. If eligible, individuals may choose from various managed care plans offered by private insurers contracted with the state Medicaid program. When a covered individual seeks medical care, healthcare providers (e.g., doctors, hospitals) deliver services and generate a claim containing service details and costs which is submitted to the insurer using standardized forms. Insurers review claims for accuracy, coverage, and medical necessity and determine the amount payable based on contractual agreements. The insurer then sends payments to providers for approved services. Insurers maintain comprehensive records of all submitted claims, whether paid or denied. Healthcare claims databases are an attractive option to obtain RWD, as they can include the healthcare information needed to support many research activities (such as enrollment dates in the health plan, demographic characteristics, diagnoses, treatments, and costs) in large populations.
This information, obtained from all eligible members, providers, and facilities, identifies time periods of health plan membership and provides a nearly comprehensive picture of a patient's interactions with the healthcare system during this time of active enrollment. Though research using claims data alone can provide significant value, it does have limitations. Claims data are not captured for services which are not paid by the health plan (i.e., individual pays cash for services), certain clinical details are not contained in claims data, and the accuracy of diagnosis and procedure coding can be variable. There are a growing number of individual healthcare claims databases and multi‐source claims data aggregator databases available to researchers, and each database possesses attributes that impact its suitability for a specific research purpose. Foremost, a database must include the target population in sufficient numbers and the data elements necessary, with sufficient accuracy and completeness, to conduct valid and reliable research. There may be other requirements for a specific research project, such as the ability to trace the claims data to their origins to ensure satisfactory data quality, the ability to link external sources of information (e.g., medical records, National Death Index (NDI)) to supplement or validate claims data, or the ability to link mothers with their infants to conduct pregnancy‐related research. The Healthcare Integrated Research Database (HIRD, formerly the HealthCore Integrated Research Database) is a data environment curated and maintained by Carelon Research that includes “closed” (reviewed and approved by the payer) healthcare claims of individuals from across the US and has been augmented with additional data to support a variety of health‐related research. The HIRD contains data from 2006 and is updated monthly. This article describes the types, elements, timeliness, and quality of the data in the HIRD for health researchers considering RWD for US populations. Data Types in the HIRD The foundation of the HIRD is the enrollment records and associated healthcare claims for individuals enrolled in commercial, Medicare, or Medicaid health plans offered or managed by Elevance Health (Table ). The HIRD includes data from health plans in 33 states in the US, with individual members located throughout all 50 states. Data from individuals in these plans are generally available for research; however, some individuals or employer groups may choose not to make their data available for research. There are no restrictions on the use of data from Medicare Advantage plans (complete coverage through private health plans), supplemental Medicare coverage through the health plans, and Medicare Part D (pharmacy coverage). The HIRD does not contain traditional Medicare data. Medicaid plans require state‐by‐state approval for use in research. 2.1 Enrollment Records Enrollment records are created for individuals for all time segments during which they are enrolled in the health plan. Knowing when individuals are eligible allows researchers to distinguish the absence of data from the absence of a healthcare encounter . These records include unique identifiers for individuals, type of health plan, period of enrollment in the health plan, and demographic and geographic information (Table ). Individuals typically have multiple enrollment records, or lines of enrollment data called segments, for different periods of time. 
For administrative reasons, there is at least one segment in a calendar year for each time period enrolled, but a single segment may span multiple years. Members may have multiple segments that can overlap or be adjacent to one another owing to changes in the member's life, including multiple addresses in the same enrollment period, change of name, or change of employer who has coverage with the same health plan. Individuals who change plans or plan types within the same overarching health plan (i.e., have multiple insurance identifiers) are linked with a master identifier so they are recognized as the same individual. This includes individuals who transition to non‐traditional Medicare coverage from individual or employee‐sponsored insurance within the same health plan. Continuous enrollment episodes are created by integrating or “rolling up” records with overlapping dates into continuous enrollment segments. Overlapping and adjacent enrollment segments are joined together with a 1‐day gap allowed. Segments separated by more than 2 days are not rolled up together, resulting in multiple enrollment records with gaps in enrollment for individual patients. An individual can be followed across the healthcare system using unique characteristics that enable deterministic and probabilistic linkage between the individual's claims data and other data sources, including those permanently integrated into the HIRD (described below), as well as additional sources that are typically linked for specific research studies, such as medical records, the NDI, and vaccine/disease registries. Appropriate privacy and security protections are implemented to ensure regulatory compliance. The ability to link mothers and their infants within the HIRD supports pregnancy‐related research. Previous research using internally developed methodology has demonstrated that approximately 75% of infants are added to their mothers' insurance plans, allowing their data to be linked to their mothers' data in the HIRD . 2.2 Medical Claims Medical claims submitted by healthcare professionals and facilities for payment of services rendered include service dates, diagnoses, procedures, provider information, service locations, and service costs (Table ). Diagnoses are recorded using International Classification of Diseases 9th and 10th Clinical Modification codes (ICD‐9‐CM and ICD‐10‐CM), with up to 12 diagnosis codes per claim. If multiple services occur on the same date with the same provider, those services are typically combined into a single claim. Procedures, including treatment administrations such as infusions, are recorded using ICD‐9‐CM procedure codes, ICD‐10 Procedure Coding System (ICD‐10‐PCS) codes, Healthcare Common Procedure Coding System (HCPCS) codes, and Current Procedural Terminology (CPT) codes. Provider information includes National Provider Identifier (NPI) and tax identification numbers, practice name, and practice address. Service locations are classified into one of four settings: inpatient, stand‐alone emergency department, outpatient (including telehealth), and skilled nursing facility. Inpatient episodes (time period between admission and discharge) are constructed from multiple claims using a proprietary algorithm informed by service locations and service dates. Outpatient services are classified using the Restructured Berenson‐Eggers Type of Service Classification System . Service costs in medical claims include the amounts paid by the health plan, the individual, and other health plans. 
Cost data are available for all medical claims, with a small proportion of the cost data being imputed. 2.3 Pharmacy Claims Pharmacy claims for prescription treatments are submitted by outpatient pharmacies, including mail order and specialty pharmacies. Pharmacy claims include National Drug Codes (NDCs) for the treatment dispensed, quantity dispensed, days' supply, medication costs, dispensing date, prescriber information, and information about the dispensing pharmacy (Table ). NDCs provide information about the manufacturer, medication dispensed, strength, package size, route of administration, and dosage form. NDCs are mapped to Generic Product Identifier (GPI) codes, providing further information on the medication's therapeutic category as well as medication class and sub‐class . Pharmacy cost data include amounts paid by the health plan, the individual, and other health plans. For the subset of individuals who receive pharmacy benefits separately from their medical benefits, costs for pharmacy claims are imputed . 2.4 Provenance of Healthcare Claims Healthcare claims that enter the HIRD are generated when healthcare services are provided to individuals by healthcare providers. Providers submit claims for reimbursement to the health plan, which are either accepted or rejected (and potentially resubmitted). Healthcare claims are linked to enrollment records and then processed before being stored in the data warehouse. Enrollment records and associated healthcare claims are then integrated with external data sources (Figure ). Healthcare claims data in the HIRD are updated monthly. Only paid claims are included, with over 97% of pharmacy claims paid within 30 days, more than 90% of outpatient medical claims within 60 days, and over 90% of inpatient claims within 90 days (Figure ). As a complete healthcare claims history for a defined time period is important for many research studies, a 3‐month lag from the most recent data load is typically imposed on healthcare claims data available for research at the study level. Data in the HIRD have been available since January 2006, unless otherwise specified. The entirety of the HIRD is updated monthly, and quality control metrics are reviewed at each update to assess the data accuracy and completeness of both old data and new incremental data (i.e., data since the previous update). Monthly updating allows the old data to be overwritten with the most current data. By reviewing the entire database each month, trends in data quality can be identified and investigated. For example, data from a large Health Maintenance Organization (HMO) plan were originally deemed incomplete due to capitation arrangements and were excluded from the HIRD due to incomplete capture of healthcare services. Through collaborative projects with the health plan in recent years, the capitated plans began submitting claims with increasing consistency, akin to non‐capitated plans. This led to a reassessment of the previous decision to exclude HMO plans. Upon verifying the enhanced completeness of claims from this large HMO, the addition of these claims to the HIRD increased the researchable population by about 5%. Carelon Research's ability to trace data in the HIRD to their origins enables investigations of key data elements that can improve study quality. 
For example, in a safety study of a new migraine treatment, it was observed that the first dose recorded in the HIRD claims for many patients did not align with dosing recommendations (i.e., the first dose recorded in the HIRD claims was not the expected loading dose). The study team surveyed providers to inquire about real‐world dosing practices and learned that most providers used free medication samples for loading doses. With this knowledge, the study team was able to conduct more valid analyses and offer more meaningful interpretation of the data in the HIRD.

Other Data Types in the HIRD

3.1 Electronic Health Records

The HIRD contains structured and unstructured EHR data for a subset of individuals. EHR data are obtained from provider network systems, large health systems and clinics, and state‐level health information exchanges (Table ). While most integrated EHR data come from outpatient primary care providers, some include records from specialists and inpatient providers. Integrated EHR data are available beginning January 2010. Additionally, the HIRD can be used as a sampling pool to identify individuals with characteristics of interest for which EHR data outside of the HIRD can be requested from providers and inpatient facilities. Structured EHR data include anthropometrics, vital signs, behavioral risk factors, medical history, and medications prescribed. In addition to the coding systems used in medical and pharmacy claims, structured data in the EHR may also contain Systematized Nomenclature of Medicine (SNOMED) codes (diagnoses, procedures), clinical drugs normalized (RxNorm) codes (treatments), and vaccinations (CVX codes). Unstructured EHR data include provider office visit notes. Through natural language processing, unstructured EHR data can be queried to identify clinical information and create structured fields, such as ejection fraction values for individuals with a heart failure diagnosis and heart failure classification.

3.2 Laboratory Results

Laboratory test results for outpatient laboratory services are integrated within the HIRD. Laboratory test results generated by nationwide laboratory providers or included in EHRs are defined using Logical Observation Identifiers Names and Codes (LOINC), which provide information regarding the specimen source and methods of measurement (Table ).
Laboratory results may be reported in inconsistent formats across labs; therefore, prior to inclusion in the HIRD, they are processed and standardized to ensure logically consistent data. The HIRD includes more than 65 of the most common laboratory tests readily available for research; however, this can be expanded to include other labs of interest. The proportion of individuals with laboratory results varies by test and therapeutic area. 3.3 Vital Status Vital status for individuals in the HIRD is obtained from enrollment records (reason for disenrollment), inpatient claims (discharge status), the Death Master File from the Social Security Administration, utilization management data, Center for Medicare and Medicaid Services records, and online obituary information processed by third‐party vendors. Data from these sources are combined to create a composite mortality variable for research use, indicating day of death (Table ). Cause of death is not directly available in the HIRD but can be approximated algorithmically in many cases . In a validation study among patients with advanced cancer between 2010 and 2018 ( n = 40 679), the composite mortality variable had good agreement with the NDI (sensitivity 89%, specificity 89%, positive predictive value 93%, negative predictive value 92%) . 3.4 Oncology The HIRD includes oncology data from the Carelon Cancer Care Quality Program (CCQP). Clinical oncology data for individuals undergoing cancer treatment in outpatient settings are recorded when a healthcare provider requests preauthorization for cancer treatment. Data entered by providers into the CCQP online portal include cancer type, cancer stage, biomarkers, pathology/histology, line of treatment, planned treatment regimen, height and weight, a metric of functional status (Eastern Cooperative Oncology Group Performance Status Scale), and other clinical details (Table ). In a validation study that compared the contents of the CCQP to medical records for breast, lung, and colorectal cancer patients, good agreement was observed for cancer type, cancer stage, histology (lung cancer only), and select cancer biomarkers . CCQP data are available beginning July 2014. 3.5 Social and Health Equity Data Individual‐level race and ethnicity data are obtained from enrollment files, EHR data, member self‐assessments, and proprietary imputation algorithms . These data are combined to create a composite race and ethnicity variable for research use (Table ). The composite race and ethnicity variable is based on Office of Management and Budget standards (White non‐Hispanic, Native Hawaiian/Other Pacific Islander non‐Hispanic, Black/African American non‐Hispanic, Asian non‐Hispanic, American Indian/Alaska Native non‐Hispanic, Hispanic/Latino, and other race non‐Hispanic) . A validation study found high agreement (Kappa = 0.82) between the composite variable and self‐reported race/ethnicity (among commercially insured Asian, Black/African American, Hispanic/Latino, and White individuals) . A variety of Social Drivers of Health (SDoH) data have been integrated into the HIRD . Area‐level information about urbanicity is derived from the National Center for Health Statistics' urban–rural classification scheme. The HIRD includes area‐level data from the American Community Survey, including over 50 variables at the census block group level associated with healthcare resource utilization, such as educational attainment, income, living conditions, family composition, transportation, and employment (Table ) . 
The HIRD also includes area‐level data from the Food Access Research Atlas, with over 140 variables at the census tract level related to food access and availability, urbanicity and rurality, and income. Social and health equity data in the HIRD are available for up to 95% of individuals.

3.6 Vaccinations

Vaccination data from the Immunization Information System (IIS) are included in the HIRD to supplement vaccination data from healthcare claims and EHR data (Table ). Currently, the IIS data are obtained from 16 jurisdictions in 15 states and represent approximately 60% of members in the HIRD, although the data available may differ by state.

Characteristics of the HIRD Population

The HIRD population used for research ("researchable population") consists of individuals/employer groups with commercial and/or managed Medicare health insurance plans who have agreed to be included for research purposes. The specific population available for use in any given research study varies depending on population requirements for that study.
Studies using claims data only can utilize the entire researchable population. As of July 2024, the researchable population included over 91 million individuals with medical benefits: over 72 million of these individuals also have pharmacy benefits with at least 1 day of membership in the HIRD (Table ). Of these, approximately 24 million individuals with medical benefits and approximately 17 million individuals with medical and pharmacy benefits were actively enrolled (i.e., had active health plan membership as of July 2024). The median age for the researchable population is 36 years (interquartile range, IQR: 22, 54), with approximately 10% ≥ 65 years old. The researchable population is approximately half male and half female. Individuals with available race and ethnicity data (46% for the entire HIRD starting 2006; ≥ 80% for the most recent 5 years) are approximately 63% White non‐Hispanic, 15% Hispanic/Latino, 9% Black/African American non‐Hispanic, 7% Asian non‐Hispanic, and 6% other races or ethnicities. Individuals most commonly live in the South census region (32%), followed by the West (27%), Midwest (23%), and Northeast (16%). Approximately 92% of individuals have commercial health insurance, whereas approximately 8% have managed Medicare (including Medicare Advantage, Medicare Supplement, and Medicare Part D) (Table ). Preferred provider organizations are the most common plan type (64%), followed by HMOs (19%) and consumer‐directed health plans (17%). The median duration of continuous enrollment for the entire population is approximately 2.0 years (IQR: 0.8, 4.4). Overall, the population with medical benefits is similar to the population with medical and pharmacy benefits, and the actively enrolled population is similar to the enrolled‐at‐any‐time population (Table ). Notably, actively enrolled patients are more likely to have race and ethnicity data (~80% for actively enrolled with medical benefits), and actively enrolled patients are typically enrolled for longer durations (median (IQR) of 3.8 (1.7, 8.3) years for actively enrolled with medical benefits). Strengths and Limitations The HIRD includes an abundance of RWD for a large population dispersed across the US. These data have supported many RWE studies completed exclusively within the HIRD and have made the HIRD a valuable contributor to multi‐database efforts , including the FDA's Sentinel Initiative , Biologics Effectiveness and Safety (BEST) System , Innovation in Medical Evidence and Development Surveillance (IMEDS) program , Biologics and Biosimilars Collective Intelligence Consortium (BBCIC) , and National Evaluation System for Health Technology (NEST) . Carelon Research's unique relationship with Elevance Health enables auditing and investigation of the source of healthcare claims data in the HIRD, providing assurance of data quality and yielding insights that inform study design and interpretation. The data in the HIRD begin in 2006 and are updated monthly, allowing the HIRD to support both historical and ongoing inquiries. The 2020 HIRD population (commercial and managed Medicare) is similar to the 2020 US Census population in terms of sex (male/female; overlap index = 99.2%), age (5‐year age categories; overlap index = 92.0%), and geographic region of residence (Northeast/Midwest/South/West; overlap index = 94.8%). 
For race/ethnicity (Hispanic or Latino/non‐Hispanic White/non‐Hispanic Black or African American/non‐Hispanic Asian/Other), the overlap index is 86.8%, with the 2020 HIRD population having 13% more non‐Hispanic White members and 11% fewer Hispanic or Latino and non‐Hispanic Black or African American members than the 2020 US Census population . The size of the database, the available data elements, and longitudinal nature of the data allow for a wide variety of pharmacoepidemiologic and health economic and outcomes research studies. The ability to re‐identify individuals (with appropriate approvals) supports many research initiatives, including validation studies , patient and provider surveys , linkage to medical records , linkage to product and disease registries , linkage to the NDI , as well as recruitment into pragmatic clinical trials . The ability to link family individuals supports pregnancy‐related research . The HIRD also has limitations. Claims data in the HIRD are only captured during periods of enrollment in the health plan, which limits analyses to the time an individual is enrolled in a health plan. Of note, a prior study evaluating enrollment time showed individuals in the HIRD with chronic diseases are enrolled for longer periods of time, allowing for longer follow‐up of this clinically important population . Claims data in the HIRD may incompletely or inaccurately reflect a patient's experience. For example, a pharmacy claim for a prescription medication does not guarantee consumption of the medication, or a medical claim with a diagnosis may be inaccurate due to misdiagnosis or coding errors. Certain data useful for research may not be readily available in the HIRD, such as sample medications from providers, individuals who pay for medications or other services without submitting to the health plan for payment, medications received in inpatient settings, markers of disease severity, or lifestyle risk factors. Limitations of both inaccurate and/or missing data may be addressed through a variety of methods, including the addition of data via linkage to external sources , the collection of data directly from patient or provider surveys, and analytically through quantitative bias analysis and other types of sensitivity analyses. Data Access Data within the HIRD are directly available for research through licensing and collaborations with Carelon Research. Conclusion The HIRD, established in 2006, has progressively expanded by incorporating additional data sources, allowing researchers to address more complex questions. This database allows for direct traceability back to the source data, which enhances research integrity and reliability. The HIRD's utility is evidenced by over 1900 peer‐reviewed publications and presentations. The ability to link the HIRD to other datasets amplifies research capabilities and underscores the value of RWD. The HIRD exemplifies how comprehensive data curation can be leveraged for diverse research applications and significant scientific contributions. 7.1 Plain Language Summary This article describes the HIRD, a RWD source that can be used to support health‐related research. The HIRD includes information on individuals with health insurance provided by Elevance Health. These individuals are located throughout the US. The HIRD is built upon the foundation of health insurance enrollment and claims data, which are linked with EHRs, laboratory test results, mortality data, clinical cancer data, social and health equity data, and vaccination data. 
The data in the HIRD date back to January 2006, and are updated monthly. As of July 2024, the HIRD included over 91 million individuals with medical insurance, including approximately 24 million individuals who were actively enrolled. A strength of the HIRD is that its data can be traced back to their origins. Also, data in the HIRD can be linked to external data sources, and family members within health plans are linked to each other. Because of these and other attributes, the HIRD has been used to support many health‐related research activities over the past 2 decades. The HIRD is directly available for research through licensing and collaborations with Carelon Research.

Carelon Research's access, use, and disclosure of protected health information (PHI) complies with the HIPAA Privacy Rule (45 CFR Part 160 and Subparts A and E of Part 164). Carelon Research does not access, use, or disclose PHI other than as permitted by HIPAA. De‐identified or limited datasets for research are created when feasible; however, when that is not feasible, Carelon Research may seek to obtain a specific waiver of the HIPAA authorization requirements from an Institutional Review Board (IRB). Carelon Research also takes into consideration other federal and state laws and regulations that might limit the use of certain types of data beyond HIPAA limitations, including laws related to substance use disorders and other sensitive medical information. All authors were employees of Carelon Research at the time the manuscript was prepared. All authors are shareholders of Elevance Health. Brett Doherty is currently an employee of Daiichi Sankyo Inc., Basking Ridge, NJ.
Validity concerns with the Revised Study Process Questionnaire (R-SPQ-2F) in undergraduate anatomy & physiology students

Student learning continues to be a topic of interest for educators across many contexts and educational levels. Within this body of literature, student approaches to learning (SAL) research has examined both the affective and contextual aspects of learning to elucidate student cognitive responses to the task of learning. The SAL concepts of deep and surface approaches to learning have been consistently utilized in educational research over the past 40 years and have more recently been used to understand how the biological subdisciplines of anatomy and physiology are learned, specifically in medical education. Biggs and colleagues first developed the Study Process Questionnaire and Learning Process Questionnaire instruments to describe student approaches to learning in various contexts. The Revised Study Process Questionnaire (R-SPQ-2F), the most recently developed instrument that categorizes student approaches as either surface or deep, has been used in educational research studies and within physiology education.

Development of SAL theory

Present research on student learning has been built upon findings from the 1970s and 1980s related to student learning approaches and whether these approaches are fixed or context-dependent. The SAL body of literature was established by four main research groups. Because these groups were addressing the same questions during the same period of time, findings from one group influenced the views and responses of the others. The Swedish group, led by Marton, introduced the terms deep and surface approaches to learning and provided evidence that these approaches were flexible and context-dependent. The findings from the work of the Lancaster and Richmond groups supported and extended the findings of Marton. The Australian group was led by John Biggs and mainly used quantitative methods to understand student approaches to learning. Biggs developed various iterations of the 3P (Presage–Process–Product) learning model, which recognized the inter-relationships of student characteristics, teaching context, student learning processes, and learning outcomes. He also developed multiple iterations of the Study Process Questionnaire (SPQ and R-SPQ-2F) to distinguish between deep and surface learning approaches of students. This instrument categorizes student learning approaches based on their motive for learning and the strategies they utilize. Agreeing with Marton, Biggs held that learning and its approach were context-dependent and flexible. Beattie and colleagues summarize the findings from these groups in this manner:

Thus this literature, viewed as a whole, demonstrates that a student's approach to learning is only partly a function of his or her general characteristics, since it can be modified by specific learning situations. Such situational influences include the students' perception of the relevance of the learning task, the attitudes and enthusiasm of the lecturer and the expected forms of assessment. The extent to which a student's predilection for a particular approach can be modified is determined by their meta-learning capability. (p. 10)

Following in the European and Australian traditions, SAL can be viewed as a process that combines affective traits of the student with the specific learning context.
This interaction leads to a specific cognitive response to the task. Overall, the idea of deep and surface approaches to learning was widely adopted in the study of learning in higher education and beyond . As research programs moved forward, they began to focus on how to promote a deep approach to learning, as well as how to assess deep learning approaches in students . Development of surface and deep approaches to learning While the terms deep and surface to describe student learning approaches have been widely used in education research over the past 40 years, the definitions of these terms have been refined but mostly retained from their original introduction. A deep approach to learning has been previously defined as “an approach that connects new information to previous relevant knowledge” and is aligned with a focus to gain understanding of meaning and an intention to comprehend . Biggs also connected this approach to the process of internalizing , which is defined as an interest in personal growth and an intrinsic motivation to learn. A surface approach to learning has been previously defined as “an approach that focuses on bare essentials and reproduces through rote learning or memorization” . Other characteristics may also include memorization to succeed on a test, retention of literal aspects with no critical analysis or personal contribution, or simply storage of information . Biggs also connected this approach to the process of utilizing , which is viewing study as a task to accomplish and overcome in order to pursue a career. Multiple quantitative measures have been developed that use the terms of deep and surface to describe student approaches to learning, including the Approaches to Studying Inventory (ASI; ), Student Cognitions about Learning (SCALI; ) and the Inventory of Learning Styles (ILS; ). In a similar effort, Biggs and colleagues developed and then revised the aforementioned Study Process Questionnaire . However, little information was provided about the specific choices for item retention or subscale definitions on the revised instrument. Recent work within psychology has more clearly defined the presence of these types of questionable measurement practices (QMPs) as common within the literature despite the fact that they threaten the validity and conclusions of research . As deep and surface learning approaches were studied in additional cultures and contexts, new questions arose. The simple categorization of a deep or surface approach and the associated motives and strategies failed to capture the approaches taken by all students. A “new” approach to learning that combined understanding and memorization was described and coined as an achieving learning approach by Kember . In addition, this work further expanded the 3P Model and focused on how a student’s preferred learning approach interacted with the teaching environment to produce learning activities. Biggs’ work in developing his quantitative instruments identified two distinct groupings that interacted with the surface and deep approaches: a student’s motive and their strategy . Motive is defined as the student’s intention toward the work, which may include a fear of failure, intrinsic interest, or achievement. Strategy is defined as the particular actions taken by a student and their outcomes, which may include repetition or rote learning. This can also include work to maximize meaning and develop understanding, or an effective use of space and time. 
These characteristics form the basis for the items on the R-SPQ-2F. R-SPQ-2F survey instrument The R-SPQ-2F instrument provides information about the preferred learning approaches of students . It consists of 20 items that are reported to fall on one of two approach scales or factors (Surface and Deep) and one of two characteristic groups or subscales (Motive and Strategy) . For instance, item 1 ( I find that at times studying gives me a feeling of deep personal satisfaction ) is grouped on the Deep factor and Motive subscale. Overall, five items fall on each of the four factor and subscale combinations. The 20 items are scored using a 5-point Likert-type scale (A— this item is never or only rarely true of me to E— this item is always or almost always true of me ) and then converted to numerical data (A = 1 to E = 5). Main factors (Surface, Deep) and subscales (Deep Motive, Deep Strategy, Surface Motive, Surface Strategy) are calculated by summing the responses to the corresponding questions. The full survey and complete scoring instructions are available in previous publications . Previous psychometric analysis completed with undergraduate students in the late 1990s found the instrument to have acceptable scale reliability (Cronbach’s α = 0.73 for deep and 0.64 for surface) and a reasonable fit to the two-factor structure for the general undergraduate population at that time . Justicia and colleagues examined the underlying structure of the R-SPQ-2F using exploratory and confirmatory factor analysis with survey responses from undergraduate students in Spain. Their results support a two-factor structure as reliable, with the 20 items clustering as noted in the original survey administration instructions. However, their results challenge the ability of the instrument to differentiate subscales. In addition, the validity of the named factors and subscales was not addressed. Entwistle & Entwistle found that a qualitative analysis of student interviews and written responses paralleled a surface and deep approach to learning. However, qualitative data to support the alignment of the two factors measured by the R-SPQ-2F with the SAL constructs of surface and deep learning approaches have not been reported. These gaps in the literature fall within the bounds of QMPs as defined by Flake & Fried . Reliability and validity in survey research For survey results to be useful, they must be both reliable and valid. Reliability refers to the extent to which results are consistent across different administrations of the instrument and can be evaluated using measures of internal consistency across items, stability over time, and equivalence to other instruments measuring the same constructs . There are many components of instrument reliability that might be measured by researchers, but the choice of appropriate measurements depends on the construct under consideration and the specific context. For example, checking for stability over time using a standard test-retest protocol would not be appropriate for an instrument such as the R-SPQ-2F, where individual responses are expected and intended to vary based on context. In this paper, we consider the same aspects of reliability that have been demonstrated by other researchers using the R-SPQ-2F in other settings. Validity is concerned with how well the instrument measures what it is intended to measure. 
Although reliability is a necessary condition for validity, it is not sufficient to establish the validity of results, which is done through checking content validity, criterion validity, and construct validity . Content validity refers to the extent to which the instrument covers the entire domain of interest. Criterion validity is similar to equivalence, in that it is concerned with the extent to which the items measure the same domain as other instruments with the same intent. Lastly, construct validity refers to the extent to which one can draw inferences about the domain based on responses to the instrument. Validity of results from one population does not imply the validity of results from the same instrument used with a different population . Research question The R-SPQ-2F was developed and validated with undergraduates from a variety of majors in Hong Kong in the late 1990s. While additional validity and reliability work has been reported in undergraduate populations in various countries and in graduate and professional school contexts, no work has been published about undergraduate STEM students in the United States. This study addresses the research question: “Does the R-SPQ-2F yield valid results for classifying the learning approaches of STEM undergraduates enrolled in Anatomy & Physiology courses at an R1 institution in the southeastern United States?”
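As a concrete illustration of the scoring procedure summarized above, the following minimal R sketch computes the factor, subscale, and deep–surface differential scores. The item-to-subscale mapping shown is the commonly published scoring key for the R-SPQ-2F and should be checked against the original scoring instructions; the data frame responses, its columns q1–q20, and the function names are illustrative assumptions only.

# Minimal R sketch of R-SPQ-2F scoring. Assumes a data frame `responses` with
# columns q1..q20 holding letter responses A-E (illustrative names only; the
# item-to-subscale mapping below follows the commonly published scoring key).
to_numeric <- function(x) match(x, LETTERS[1:5])          # A-E -> 1-5
score_rspq <- function(responses) {
  numeric_items <- as.data.frame(lapply(responses, to_numeric))
  sum_items <- function(idx) rowSums(numeric_items[, paste0("q", idx)])
  scores <- data.frame(
    DM = sum_items(c(1, 5, 9, 13, 17)),   # Deep Motive
    DS = sum_items(c(2, 6, 10, 14, 18)),  # Deep Strategy
    SM = sum_items(c(3, 7, 11, 15, 19)),  # Surface Motive
    SS = sum_items(c(4, 8, 12, 16, 20))   # Surface Strategy
  )
  scores$Deep         <- scores$DM + scores$DS            # each scale ranges 10-50
  scores$Surface      <- scores$SM + scores$SS
  scores$Differential <- scores$Deep - scores$Surface     # later used to select extreme scorers
  scores
}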
This study was conducted as one step in a comparative case study that investigated the cognitive processes and pathways of undergraduate Anatomy & Physiology students. The research was reviewed and approved as exempt by the Institutional Review Board at Clemson University (2018-310). The quality frameworks of Q3 Quality in Qualitative Research and Legitimation were used to guide the design of this protocol. The Q3 framework provides six areas of validation to consider in all stages of qualitative research, while the Legitimation criteria were used to strengthen the conduct and reporting of mixed methods research . Quantitative sample & data collection During a particular fall semester, a total of 824 students were enrolled in three sections of two Anatomy & Physiology courses at a large institution in the southeastern United States classified as “highest research” (R1) by the Carnegie classifications . During the second week of classes, course instructors were emailed text for both an in-class announcement and an email to students. These course instructors were not part of the research team. This invitation included a link to the “Anatomy and Physiology Questions” Survey in Qualtrics , comprised of the 20 items that form the R-SPQ-2F followed by prompts for major, current section enrollment, and intent to enroll in the subsequent course in the next semester. The non-R-SPQ-2F items were used as part of the selection process for the full study. Instructors were not provided any information about which students completed the survey or were invited to participate in the full study. Two hundred thirty-one (231) students completed the survey for an overall 27.9% response rate. This low response rate is not unexpected, as survey completion was not a course requirement and few additional incentives were offered. A pool of potential participants for the full study was created of all respondents who provided informed consent, completed the R-SPQ-2F items, planned to take the second course of the sequence, and self-identified as a STEM or health science major based on two-digit Classification of Instructional Program (CIP) codes . Majors within the bounds of the study were Engineering (code 14), Engineering Technologies and Engineering Related Fields (code 15), Biological/Biomedical Sciences (code 26), Physical Sciences (code 40), or Health Professionals and Related Professions (code 51), although code 15 did not appear in the sample. The remaining pool consisted of 117 students (51.6% of those completing the survey, 14.2% of the course population). Based on previous literature indicating a lack of inclusion of students with a surface approach preference in education research , the intent was to recruit participants who showed a strong preference for either a surface or a deep approach. Because it is possible to receive a high score for both surface and deep learning approaches on the R-SPQ-2F, the difference in Deep and Surface approach scale scores was used as the selection criterion from within the winnowed sample. provides a histogram of deep–surface differential scores ranging from −33 (extreme surface differential) to +29 (extreme deep differential) for the winnowed pool. Qualitative sample & data collection The winnowed pool was then divided by course and rank-ordered based on differential scores. Within each class, the participants with the four most extreme differential scores at each end of the scale were invited to the full study for a target sample size of 16 students. 
If no response was received within two days, a reminder email was sent. After an additional three-day window, the student was removed from the list and the next rank-ordered candidate from that course was invited. The final participant pool for the full study included 11 students, five with a deep approach preference and six with a surface approach preference based on their R-SPQ-2F differential scores. These participants, together with their differential score and self-selected pseudonym, are shown with their relative location in the histogram of differential scores in . Interviews with the 11 participants were scheduled within three weeks of initial completion of the R-SPQ-2F and completed between September 18 and October 3 of the semester in which the study took place. The timing of the interview was important because SAL is considered a flexible characteristic that is impacted by course activities and other items described in the 3P Model . The interview protocol consisted of open-ended questions in a semi-structured protocol to allow participants the freedom to expand or elaborate on their responses. The protocol is provided in . Prompts were designed to probe for information about teaching context, student characteristics and preferences, and learning process and approach, aligned with the theoretical framework for the full study. It was not a specific intent during this interview to probe for validity of the R-SPQ-2F with this population, so there is not direct alignment between the interview and the survey. Interviews ranged in length from 22 minutes to 33 minutes, with a mean time of 27 minutes. Process reliability, which provides conditions to make the research process as independent from random influences as possible, was addressed by maintaining the same core prompts for each interview . All interviews were conducted in person and in a neutral location to allow for privacy and quality recording. To support communicative validity and process reliability, interviews were recorded with a digital recorder and transcribed verbatim using Descript software in preparation for analysis. Theoretical validation focuses on the fit between the phenomenon under investigation and the theory produced . In light of this, the interview prompts were designed to expose the reality of the unique learning processes and pathways taken by members within each bounded case. The semi-structured nature of the interview allowed for clarification of student use of words or description of ideas. The Legitimation framework from Onwuegbuzie and colleagues was utilized to ensure quality during the mixing of the data, particularly in the area of weakness minimization. Analysis As interview transcripts were verified, concerns arose in the research team about the ability of the R-SPQ-2F to differentiate surface and deep approaches within this population of undergraduate A&P students. Triangulation of individual item responses with their interview excerpts revealed a lack of agreement between the quantitative and qualitative data. This finding led to detailed analysis comparing quantitative (R-SPQ-2F responses, factor/subscale scores, and differential scores) and qualitative data (interview responses) to answer our research question. Analysis proceeded in four main steps: Qualitative and quantitative item comparisons: A priori codes for surface, deep, surface to deep, and each of the 20 R-SPQ-2F items were used to identify passages that provided qualitative information relevant to each of the 20 R-SPQ-2F items. 
A priori coding proceeded in iterative stages, with one team member identifying all excerpts that were considered to meet the criteria for a specific a priori code and the second team member blind-coding a subset of the data for the same a priori code. These iterative cycles continued until the team reached agreement on the boundaries of each code and on the coding of specific passages within the data. Quantitative and qualitative scale comparisons: After a priori coding was complete, the data were grouped by participant and R-SPQ-2F item. Each member of the research team independently determined whether the available data, considered holistically, indicated agreement or disagreement with the R-SPQ-2F item. Because the R-SPQ-2F is scored on a 5-point, Likert-type scale, a response of 1 or 2 on the survey was considered a “disagreement” with a positively worded item, while a response of 4 or 5 was considered an “agreement.” When the quantitative and qualitative data both indicated agreement or both indicated disagreement, we coded this as alignment . When one indicated agreement and the other indicated disagreement, we coded this as misalignment . When the qualitative data indicated agreement or disagreement and the survey response was a 3 (“neutral”), we coded this as mild misalignment . Item Review: Each item of the R-SPQ-2F was reviewed by the research team to determine the expected scale (Deep or Surface) and subscale (Motive or Strategy) measured, as well as additional areas of concern for each question. Confirmatory Factor Analysis: In addition to the data collected for recruitment to the main study, additional responses to the R-SPQ-2F were collected during the first two weeks of a second fall semester. In total, 381 complete responses and 66 partial responses were collected. An item-level confirmatory factor analysis (CFA) was performed using the complete responses to assess the fit of the previously reported deep and surface approach factor structure to the data from the population of interest and assess the reliability of the instrument. Each stage of analysis contributed new insights into our concerns with using the R-SPQ-2F with this population of students. Research quality considerations During analysis, qualitative responses were compared to the responses on the quantitative survey. The process of comparing student interview responses to responses to each survey item provided an opportunity for inside-outside legitimation, which is concerned with the extent to which the participant’s view is accurately presented and utilized for purposes of explanation and description . The steps for process reliability helped to ensure accurate presentation of participant words. In addition, the research team took care to take participant words at face value when determining alignment between the qualitative and quantitative data. Weaknesses minimization occurred as the qualitative data allowed for a greater breadth of response from participants than the quantitative survey alone .
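To make the alignment rule in step 2 concrete, the sketch below (in R, with illustrative names) encodes the classification described above; the qualitative judgment is the research team's holistic reading of the coded excerpts for a given participant and item, and is assumed here to be supplied as "agree" or "disagree".

# Sketch of the alignment classification from analysis step 2 (illustrative only).
# `qual`  : research-team judgment from the interview excerpts ("agree" or
#           "disagree" with a positively worded item)
# `quant` : the participant's 1-5 survey response to the same item
classify_alignment <- function(qual, quant) {
  quant_judgment <- ifelse(quant >= 4, "agree",
                           ifelse(quant <= 2, "disagree", "neutral"))
  ifelse(quant_judgment == "neutral", "mild misalignment",
         ifelse(quant_judgment == qual, "alignment", "misalignment"))
}
classify_alignment("agree", 2)  # interview indicates agreement, survey response of 2 -> "misalignment"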
Findings from the four steps of analysis are presented sequentially. Qualitative and quantitative item comparisons provides information about the number of participants who provided a coded excerpt for each R-SPQ-2F item and the total number of excerpts coded for that item. As previously mentioned, the process of comparing qualitative and quantitative data was undertaken in a systematic fashion. An in-depth description of the analysis process for a representative prompt is presented in the following section. Example of analysis For item 13 ( I work hard at my studies because I find the material interesting ), nine participants provided information about this survey item with 30 total coded excerpts. This is not surprising, as the intention of the interview was to better understand each student’s approach to learning in their Anatomy & Physiology course and this prompt asks for similar information. This item is compound and gives two different statements: 13a) I work hard at my studies and 13b) I find the material interesting . The coded excerpts were identified by two coding passes completed for this item to capture qualitative information about effort level given by participants in the course (corresponding with statement 13a) and the participant’s interest level in the material of the course (corresponding with statement 13b). For compound items such as this one, diagrams like the one shown in were constructed to represent what agreement or disagreement in qualitative terms should translate to on the R-SPQ-2F. Consensus was reached that, if the relevant qualitative excerpts indicated that the participant did believe that they worked hard at their studies and that the participant did find the material interesting, a response to item 13 on the R-SPQ-2F with a “4” or “5” would be expected, while any other combination would lead to an expected “1” or “2” in response to item 13. All coded excerpts for each participant were grouped together and then read as a unit by the research team. The qualitative excerpt(s) were then used to predict an R-SPQ-2F response for each participant. For example, Kate provided the following quotes coded to 13a: For Anatomy, I definitely put a lot more effort into it… And I kind of will compare the two and so I’ll look at my big pictures and look at the outline and start looking at those smaller aspects—like maybe the molecules or the compounds and things that are like making up the different materials and all—just try to put things together. (emphasis added) The research team agreed that in these quotes, Kate is expressing that she is working hard in her Anatomy course. For the second portion of this item, Kate provided the following excerpts coded to 13b: I’m really interested in Anatomy and know it’s going to apply to my career…[I want to] understand everything about the human body. I think it’s really interesting and I want to be a physical therapist. So, it’s important to know how everything works together and how different people’s injuries could affect their anatomy and how that could be treated, so… (emphasis added) The research team agreed that these quotes showed that Kate has a strong interest in the course material of her Anatomy course. Because of these quotes, it was predicted that Kate would respond to item 13 on the R-SPQ-2F with a 4 or 5 to signify her agreement with this item. Kate’s actual response to item 13 on the R-SPQ-2F instrument was “2”. 
Therefore, the research team classified Kate’s qualitative and quantitative responses on item 13 to be misaligned . All 20 items for the R-SPQ-2F were analyzed in the manner described above. Item responses on the R-SPQ-2F that differed by a single unit (e.g. research team prediction = 2, participant response = 3) were considered to be mildly misaligned. presents the full results of alignment and misalignment for qualitative and quantitative responses. In summary, items 3, 6, 9, 10, 13, 17, 18, and 20 were found to present mild concern over misalignment, with stronger concerns regarding items 4, 11, 12, and 19. Items 1, 2, 5, 7, 8, and 15 appear well-aligned. No evidence was available for items 14 and 16. These determinations are based on the number of aligned responses compared to misaligned responses. Items with an equal or greater number of misaligned and mildly misaligned responses present strong concerns. Items with majority alignment but some misalignment are regarded as those with mild concern. Quantitative & qualitative scale comparisons Although 16 of the 20 items had majority alignment, concern remained about the validity of the R-SPQ-2F with this population. In the next stage of analysis, the overall scales of Surface approach and Deep approach were examined. Participant interviews were coded for surface and deep approach themes and these codes were then compared to the Surface and Deep scale scores. The interview transcripts were read again and one of three codes was assigned to relevant passages as described for the item analysis: surface , surface leading to deep , and deep . Details about the number of excerpts and code definitions are provided in . While it would be inappropriate to use counts of excerpts to classify the approach of a specific participant or to determine the validity of the R-SPQ-2F, patterns evident in some participant responses may be helpful. As indicated in , several participants provided quotes for each of the three codes. Ultimately, most of these groups of quotes have few qualitative differences. For example, Angie’s approach was classified as Surface by the R-SPQ-2F with a differential score of −12. She provided the following quotes which were coded as surface : I think one of the reasons it works out for me this way is because I know that the final exam isn’t cumulative. And so that makes me think about the fact that, whenever we end an exam, when we start something new it’s going to be the same process. Like I don’t have to continue studying what…I mean I should, but when it’s like new material and I need to just like create more brain space with all these new things… However, she also provided the following quote which was coded as deep : I really hope I learn, and like…I guess—is the word sustain? No —with—withhold the information? Right? I don’t want to forget it next semester because…I’m on a pre-med track. And so I think this is the…One of the most more interesting classes I’m going to take—that like, really interests me. Some things that I like, I’m going to see in my future career someday. And so these are concepts that I want to remember and like continue to grow and stuff. In contrast, K Diddy’s approach was classified as Deep by the R-SPQ-2F with a differential of +8. 
She provided the following quotes which were coded as surface : I feel like right now I’m not like remembering it because it’s like “okay, I gotta remember this for the test” and then it’s like “okay on to the next thing.” She also provided the following quote which was coded as deep : Like I would prefer to understand it before I start to study the information…So I really just wanted to understand…Basically how the how the body works…And like not a basic understanding because this is not a basic class, but like just enough to help me in my future career. There is little to no qualitative difference in the description provided by these students in their preferred approach to learning despite a 20-point difference in their deep—surface differential scores. In addition, six participants indicated the need for these approaches to be combined for success in the undergraduate Anatomy & Physiology classroom. The theme of surface to deep is demonstrated by the following interview excerpt from Shay, in which she connects the need for memorization in this course context to the understanding of relationships between various parts and systems: Yeah, for memorizing like you have to know certain terms to be able to build on things. Like if you don’t know what like “epithelial” means like—if you don’t know that or like the two types of it…Then you’re not able to apply it…So I guess that’s uh—like the basis of it…And I want to know those terms you’re able to know like you’re able to like learn them and figure out how they connect together like so…“Oh like these two different things are related.” So, you know the definition of them and then you know that they were like related then and kind of how they tie together. Triangulation of qualitative and quantitative results is a standard approach to assessing construct validity of an instrument. The results of this stage of analysis give rise to considerable concern about construct validity. In light of this, the R-SPQ-2F likely does not measure what it is intended to measure in this population, even if it is successful in doing so with other populations. Overall, this information provides additional evidence that the R-SPQ-2F did not discriminate between the surface and deep learning approaches of students taking an undergraduate Anatomy & Physiology course at the time of this study. Item review The research team reviewed each R-SPQ-2F item to determine our agreement with the factor and subscale to which it was assigned . This review is a standard technique in evaluating face validity, one aspect of content validity for an instrument. Additional areas of concern with those items in the context of interest were also noted. A summary of this analysis is presented in . Overall, the areas of concern identified with the R-SPQ-2F items can be summarized into four groups: (1) word interpretation issues, (2) course context/alignment, (3) compound items, and (4) validity of factor/subscale description. Word interpretation was an area of concern identified in eight items (1, 2, 3, 4, 8, 9, 10, 11). For several of these, the use of the words “studying,” “memorizing,” and “understanding” in the prompt was the cause of concern. As noted in the interview protocol in , students were asked their definition of these terms and provided varying responses. These findings are fully discussed in Johnson and Gallagher (in press) or Johnson .
Additional terms that may vary in their interpretation due to the nature of the audience include “enough work” (item 2), “pass the course” (item 3), and “learn some things by rote” (item 8). As an example, the term “pass the course” may be defined very differently by students depending on their future goals and aspirations. Consider the following quote from Shay discussing her reasons for taking the course: I’m thinking of going to Pharmacy school. And so, this is a prerequisite, like for a lot of Pharmacy schools. Mainly—most of them require both, but some of them just want physiology. But like I mean so I’m gonna be taking both anyway, but it’s also on the PCAT too. So like that type of thing, like I need to be prepared for it for that. For students planning to attend medical school or nursing programs, an A or B in the course may be required when the class is considered a prerequisite. Therefore, questions remain about how participants interpret this phrase, and interpretations likely vary due to these factors. The phrase “learn some things by rote” is not a common description in the context of this course or population, and this term was never utilized by participants during their interviews. However, it should be noted that the nature of the course content in Anatomy & Physiology requires memorization or rote learning of many terms or anatomical parts for course success. Four items (4, 5, 6, 9) present concerns related to the specific course context by not being tied specifically to the course in question. For example, item 4 ( I only study seriously what’s given out in class or in the course outlines ) is classified as measuring Surface Strategy, but this interpretation would be dependent on the specific expectations for the course in which the survey is completed. For the participants in this study, there is evidence from both the interviews and the course syllabi that deep learning or understanding is required for success in the course and on individual assessments. Shay provided this description: He gives us the lecture objectives. And he says like if you can fill these out without notes, like and you understand it, like you’re able to thoroughly like, write about it, then you’ll do well on the tests, I guess. Therefore, a static assignment of this factor and subscale may not be appropriate and may skew R-SPQ-2F results. Items 5, 6, and 9 are not clearly tied to the course, which seems to violate Biggs’ own assertion that student results from the R-SPQ-2F are course- and context-dependent. Compound items are present for items 2, 6, 7, 12, and 13. In all cases, the items present two statements that are linked, and these statements describe both a strategy and a motive. For example, item 2 can be separated as follows: 2a) I have to do enough work so that I can form my own conclusions (strategy) and 2b) I have to do enough work before I am satisfied (motive). This pattern is repeated for the other items that are noted and is discussed more fully above in the analysis example of item 13. The most common area of concern with the R-SPQ-2F items was related to the validity of the factor and subscale descriptions, which was noted in 13 of the 20 items (2, 4, 5, 6, 7, 8, 11, 12, 13, 15, 16, 17, 20). Some of these issues were connected to one or more of the other themes we have previously discussed. When looking at factor or subscale assignment issues, consider the following examples.
Item 11 ( I find I can get by in most assessments by memorizing key sections rather than trying to understand them ) is classified as measuring Surface Motive. However, the terms and actions used in this prompt align with a student’s strategy toward the course and its material. In addition, items 15 and 16 do not ask for a strategy or a motive, but probe for student or instructor expectation about a course. Item 20 is classified as Surface Strategy. However, whether this is a deep or surface strategy depends on the type of questions utilized by a student; application-based questions, for example, would correspond to a deep approach. Confirmatory factor analysis For instruments intended to have multiple factors, such as the R-SPQ-2F, factor analysis provides an accurate measure of instrument reliability by determining the best grouping of items to maximize internal consistency . Because the above analyses presented concerns about the survey validity, and because reliability is a necessary condition for validity, a confirmatory factor analysis (CFA) was performed in R Version 3.5.2 to assess the reliability of the two-factor instrument and its fit to the data gathered from the Anatomy & Physiology students in the study. The MVN and lavaan packages were used for multivariate analysis and confirmatory factor analysis, respectively. This CFA was performed following the procedures outlined in Bandalos , and the results reported follow the recommendations of Jackson and colleagues . The objective of the CFA was to determine if the instrument performed at least as reliably for this population of students as it did in previous analyses that form the basis for the justification of its usage in education research . Data preparation A total of 447 responses were obtained. Data collection for Year 1 is described in the Methods section. In Year 2, A&P instructors at six institutions from around the United States sent emails to their classes inviting them to complete the “Anatomy and Physiology Questions” Survey in Qualtrics. The response rate for Year 2 was 28.4% (223 responses from 784 enrolled students). Responses in which students only provided answers for a subset of the items ( n = 66) were removed from the dataset via listwise deletion, as the estimation methods in the software packages used for the CFA can only be performed using complete data. This left 381 complete responses to the R-SPQ-2F instrument from Years 1 and 2. For three-factor solutions with three to four variables per factor, Bandalos recommends a sample size of at least 300 if factor loadings are approximately 0.7, and a sample size of 500 or more for lower loadings. The estimates previously obtained through CFA by Justicia and colleagues indicate factor loadings ranging from 0.34 to 0.70, with the majority estimated in the 0.50–0.65 range. Bandalos further states that more accurate loading estimates are obtained when the number of variables per factor is increased. Because we have ten variables per factor and only two factors, our sample size of 381 was judged to be sufficient for a reasonable model estimation. Traditional CFA estimation methods were developed under the assumptions of continuous data, univariate and multivariate normality, and the absence of outliers, so these assumptions were tested prior to analysis.
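The distributional checks reported below can be reproduced in outline with the MVN package the authors cite. The sketch that follows is illustrative only: the data frame rspq (the 381 complete responses, columns q1–q20) is an assumed name, and argument and output names may differ slightly across MVN versions.

# Sketch of the assumption checks (illustrative; `rspq` is assumed to hold the
# complete item responses in columns q1..q20).
library(MVN)
checks <- mvn(
  data           = rspq,
  mvnTest        = "mardia",  # Mardia's multivariate skewness/kurtosis test
  univariateTest = "SW"       # Shapiro-Wilk test for each item
)
checks$multivariateNormality   # Mardia's statistics and p-values
checks$univariateNormality     # per-item Shapiro-Wilk results
checks$Descriptives            # per-item skew and kurtosis, screened against the |2.0| cutoff
# Multivariate outliers can be flagged in the same call via
# multivariateOutlierMethod = "quan" (Mahalanobis-distance based).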
Justicia and colleagues critique the fact that prior CFA work on the R-SPQ-2F, including that of Biggs, did not account for the ordinal data generated by the Likert-type instrument items ; however, more recent research suggests that as long as there are at least five ordered categories on the item response scales of an instrument, data can be treated as continuous for purposes of model estimation with minimal bias in parameter estimates . Thus, because each item has five levels of response (1–5), the R-SPQ-2F data were treated as continuous. To assess the data for univariate normality, we considered both the Shapiro-Wilk test and the |2.0| cutoff for item skew and kurtosis recommended by Bandalos. Similarly, for multivariate normality, both Mardia’s test and the |3.0| cutoff for Mardia’s kurtosis coefficient were considered . Although the Shapiro-Wilk test yields strong evidence of non-normality for all twenty items ( p < 0.001), univariate skew and kurtosis values were all less than |2.0|, indicating that the deviations from univariate normality were not severe. However, Mardia’s test indicates deviation from multivariate normality ( p < 0.0001), and the kurtosis coefficient of 6.87 is well above the |3.0| threshold. As a result, the estimation methods in this CFA were chosen to allow for non-normal data. Using the MVN package in R , univariate outliers were identified for items 1, 2, 7, 11, 12, 13, 15, 16, and 18, and over forty multivariate outliers were identified for the entire dataset. The CFA model was fit to the data both with and without outliers to determine whether they had a noteworthy effect on the model fit, and while removing outliers resulted in slightly different values for parameter estimates and fit indices, evaluation of the fit indices overall did not change the assessment of the model as a whole. Ultimately, the decision was made to retain the outliers in the dataset, as they represented a reasonable range of student responses from the population of interest. The CFA results presented in this paper reflect those for the full dataset with no outliers removed. The covariance matrix of item responses was utilized as the input matrix for the CFA. The corresponding Pearson correlation matrix and standard deviations are provided in for those wishing to replicate our analysis. Note that, with the exception of the correlation between items 10 and 14, all correlations between items on the same scale (Deep or Surface) are at least 0.1. Correlations between items on different scales are all less than 0.1 and, in many cases, negative. In general, we see a relationship between the items we would expect to be correlated based on the nature of the instrument. Model specification The model tested in this CFA is that reported by Justicia and colleagues , which consists of the two factors hypothesized to represent Deep and Surface approaches to learning, but not the Motive and Strategy subscales. In this model, ten of the twenty Likert-type items are hypothesized to load onto the Deep approach factor, while the remaining ten correspond to the Surface approach factor. A description of which items are associated with which factors is included in , and the graphical representation of the model is shown in . Alternative models were not tested; while conducting further analyses to explore the existence of better-fitting models would likely be beneficial, doing so was outside the scope of this study.
The purpose of this CFA was solely to assess the fit and reliability of the existing instrument structure for the population of interest as compared with the results provided by Biggs and Justicia and colleagues . Model identification This model meets the requirements for identification as described by Bollen , as it has the following characteristics: (1) ten items load onto each of the two factors, which is greater than the minimum requirement of three; (2) each item loads onto only one factor (either Deep or Surface); and (3) we assume the measurement error variances to be uncorrelated. Further, we set the factor metric by fixing the mean and variance of the factor “scores” to zero and one, respectively, which allows us to interpret the completely standardized factor loading estimates as the number of standard deviations that an item score would change for a one standard deviation change in the factor. These specifications result in an overidentified model with 169 degrees of freedom. Estimation of model parameters Two natural choices for model estimation arise, given the characteristics of the data. The first is weighted least squares (WLS), which Justicia and colleagues use in the study that led to the two-factor structure of the instrument used predominantly in education research. WLS estimation is advantageous because it makes minimal assumptions about the distribution of the observed variables, and thus the violation of multivariate normality for the R-SPQ-2F data does not pose an issue . In fact, Justicia and colleagues critique Biggs’ and other researchers’ appearance to ignore the non-normality of the data in prior factor analyses conducted for the R-SPQ-2F . However, in order to be most informative, WLS requires large sample sizes upwards of 2,000 sample points; research shows that, if the sample size is too small, WLS estimation can result in biased parameter estimates, inaccurate standard errors, and a poor fit to the data . It is of note that Justicia and colleagues do not consider this limitation of WLS estimation in their study ( n = 522). An alternative approach, recommended by Bandalos for when data are non-normal and large sample sizes are not available , is to use the more traditional maximum likelihood (ML) estimation and apply Satorra-Bentler (S-B) adjustments, which correct for the tendency of non-normality to inflate the chi-square goodness-of-fit statistic and underestimate parameter standard errors . In our CFA, we assessed the model fit by considering results from both the WLS and ML estimation methods. The maximum likelihood approach is preferred, based on the recommendations of Bandalos. However, the WLS approach was conducted alongside it to compare with the results obtained from Justicia and colleagues . Completely standardized parameter estimates for factor loadings and standard errors using both approaches are displayed in , with the preferred ML estimates in large text and the comparative WLS estimates in parentheses and in smaller text beside them. We see that factor loadings are higher and standard errors are lower when using WLS estimation; however, this should be considered cautiously in light of the small sample size. Still, the ML estimates indicate loadings of 0.35 or higher for all R-SPQ-2F items, similar to those reported by Justicia and colleagues . Also of interest are the R 2 values for each item. For the completely standardized estimates, these values can be computed by squaring the estimated loading of each item and are shown in . 
Each of these values can be interpreted as the proportion of variance in the item response that can be accounted for by the factor. The ML approach estimates that the first factor, hypothesized to be the Deep approach, accounts for 17.1% to 45.3% of the variance in item responses, and the second factor, or hypothesized Surface approach factor, accounts for 12.8% to 41.3% of the variance in item responses. Weighted least squares R 2 estimates are also included for comparison. Model testing As is fairly common in CFA research despite controversy over its usefulness, chi-square goodness-of-fit tests were conducted for each estimation method. In this test, the null hypothesis is that the model is a good fit to the data, so we hope to see p -values greater than a significance level of 0.05 when assessing the fit of a hypothesized model. However, given the dependency of the chi-square test on sample size and its tendency to reject the null hypothesis even when a model fits well (i.e., an inflated probability of a Type I error), Bandalos advocates for assessing a model using multiple fit indices to account for the chi-square test’s shortcomings. Similarly, Jackson and colleagues strongly recommend the inclusion of several fit indices and for the cutoff values for each fit index to be specified a priori . For this analysis, cutoffs based on prior research were chosen for the comparative fit index (CFI), Tuck Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Hu and Bentler suggest that CFI and TLI values of 0.95 or higher indicate good fit of a model, while values between 0.90 and 0.95 indicate acceptable fit . For RMSEA and SRMR, values ≤0.05 are indicative of good model fit, while values ≤0.08 indicate moderate but acceptable fit . Fit indices were generated for the R-SPQ-2F model under each type of estimation and are displayed in . Internal consistency of the items as they relate to the Deep and Surface approach factors was assessed by computing McDonald’s omega for each factor using the semTools package in R . Coefficients omega are reported in for each estimation method.
provides information about the number of participants who provided a coded excerpt for each R-SPQ-2F item and the total number of excerpts coded for that item. As previously mentioned, the process of comparing qualitative and quantitative data was undertaken in a systematic fashion. An in-depth description of the analysis process for a representative prompt is presented in the following section. Example of analysis For item 13 ( I work hard at my studies because I find the material interesting ), nine participants provided information about this survey item with 30 total coded excerpts. This is not surprising, as the intention of the interview was to better understand each student’s approach to learning in their Anatomy & Physiology course and this prompt asks for similar information. This item is compound and gives two different statements: 13a) I work hard at my studies and 13b) I find the material interesting . The coded excerpts were identified by two coding passes completed for this item to capture qualitative information about effort level given by participants in the course (corresponding with statement 13a) and the participant’s interest level in the material of the course (corresponding with statement 13b). For compound items such as this one, diagrams like the one shown in were constructed to represent what agreement or disagreement in qualitative terms should translate to on the R-SPQ-2F. Consensus was reached that, if the relevant qualitative excerpts indicated that the participant did believe that they worked hard at their studies and that the participant did find the material interesting, a response to item 13 on the R-SPQ-2F with a “4” or “5” would be expected, while any other combination would lead to an expected “1” or “2” in response to item 13. All coded excerpts for each participant were grouped together and then read as a unit by the research team. The qualitative excerpt(s) were then used to predict an R-SPQ-2F response for each participant. For example, Kate provided the following quotes coded to 13a: For Anatomy, I definitely put a lot more effort into it… And I kind of will compare the two and so I’ll look at my big pictures and look at the outline and start looking at those smaller aspects—like maybe the molecules or the compounds and things that are like making up the different materials and all—just try to put things together. (emphasis added) The research team agreed that in these quotes, Kate is expressing that she is working hard in her Anatomy course. For the second portion of this item, Kate provided the following excerpts coded to 13b: I’m really interested in Anatomy and know it’s going to apply to my career…[I want to] understand everything about the human body. I think it’s really interesting and I want to be a physical therapist. So, it’s important to know how everything works together and how different people’s injuries could affect their anatomy and how that could be treated, so… (emphasis added) The research team agreed that these quotes showed that Kate has a strong interest in the course material of her Anatomy course. Because of these quotes, it was predicted that Kate would respond to item 13 on the R-SPQ-2F with a 4 or 5 to signify her agreement with this item. Kate’s actual response to item 13 on the R-SPQ-2F instrument was “2”. Therefore, the research team classified Kate’s qualitative and quantitative responses on item 13 to be misaligned . All 20 items for the R-SPQ-2F were analyzed in the manner described above. 
Item responses on the R-SPQ-2F that differed by a single unit (e.g. research team prediction = 2, participant response = 3) were considered to be mildly misaligned. presents the full results of alignment and misalignment for qualitative and quantitative responses. In summary, items 3, 6, 9, 10, 13, 17, 18, and 20 were found to present mild concern over misalignment, with stronger concerns regarding items 4, 11, 12, and 19. Items 1, 2, 5, 7, 8, and 15 appear well-aligned. No evidence was available for items 14 and 16. These determinations are based on the number of aligned responses compared to misaligned responses. Items with an equal or greater number of misaligned and mildly misaligned responses present strong concerns. Items with majority alignment but some misalignment are regarded as those with mild concern.
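The prediction and alignment rules just described can be written down compactly. The R sketch below is our own formalization of that procedure, not code from the original analysis; the function names and the compound-item prediction rule for item 13 are illustrative, assuming the decision rule stated above (agreement with both sub-statements leads to an expected response of 4–5, any other combination to 1–2; a one-unit gap counts as mildly misaligned).

# Prediction rule for compound item 13: endorsement of both sub-statements
# (13a "works hard" and 13b "finds the material interesting") predicts a
# response of 4-5; any other combination predicts 1-2.
predict_item13 <- function(works_hard, finds_interesting) {
  if (works_hard && finds_interesting) c(4, 5) else c(1, 2)
}

# Alignment between the predicted range and the actual 1-5 response:
# exact agreement is "aligned", a one-unit difference from the nearest
# predicted value is "mildly misaligned", anything larger is "misaligned".
classify_alignment <- function(predicted, actual) {
  gap <- min(abs(actual - predicted))
  if (gap == 0) "aligned" else if (gap == 1) "mildly misaligned" else "misaligned"
}

# Kate's item 13: both sub-statements endorsed, but her actual response was 2.
classify_alignment(predict_item13(TRUE, TRUE), actual = 2)   # "misaligned"
classify_alignment(predict_item13(TRUE, TRUE), actual = 3)   # "mildly misaligned"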
Although 16 of the 20 items had majority alignment, concern remained about the validity of the R-SPQ-2F with this population. In the next stage of analysis, the overall scales of Surface approach and Deep approach were examined. Participant interviews were coded for surface and deep approach themes and these codes were then compared to the Surface and Deep scale scores. The interview transcripts were read again and one of three codes was assigned to relevant passages as described for the item analysis: surface , surface leading to deep , and deep . Details about the number of excerpts and code definitions are provided in . While it would be inappropriate to use counts of excerpts to classify the approach of a specific participant or to determine the validity of the R-SPQ-2F, patterns evident in some participant responses may be helpful. As indicated in , several participants provided quotes for each of the three codes. Ultimately, most of these groups of quotes have few qualitative differences. For example, Angie’s approach was classified as Surface by the R-SPQ-2F with a differential score of −12. She provided the following quotes which were coded as surface : I think one of the reasons it works out for me this way is because I know that the final exam isn’t cumulative. And so that makes me think about the fact that, whenever we end an exam, when we start something new it’s going to be the same process. Like I don’t have to continue studying what…I mean I should, but when it’s like new material and I need to just like create more brain space with all these new things… However, she also provided the following quote which was coded as deep : I really hope I learn, and like…I guess—is the word sustain? No —with—withhold the information? Right? I don’t want to forget it next semester because…I’m on a pre-med track. And so I think this is the…One of the most more interesting classes I’m going to take—that like, really interests me. Some things that I like, I’m going to see in my future career someday. And so these are concepts that I want to remember and like continue to grow and stuff. In contrast, K Diddy’s approach was classified as Deep by the R-SPQ-2F with a differential of +8. She provided the following quotes which were coded as surface : I feel like right now I’m not like remembering it because it’s like “okay, I gotta remember this for the test” and then it’s like “okay on to the next thing.” She also provided the following quote which was coded as deep : Like I would prefer to understand it before I start to study the information…So I really just wanted to understand…Basically how the how the body works…And like not a basic understanding because this is not a basic class, but like just enough to help me in my future career. There is little to no qualitative difference in the description provided by these students in their preferred approach to learning despite a 20-point difference in their deep—surface differential scores. In addition, six participants indicated the need for these approaches to be combined for success in the undergraduate Anatomy & Physiology classroom. The theme of surface to deep is demonstrated by the following interview excerpt from Shay, in which she connects the need for memorization in this course context to the understanding of relationships between various parts and systems: Yeah, for memorizing like you have to know certain terms to be able to build on things. 
Like if you don’t know what like “epithelial” means like—if you don’t know that or like the two types of it…Then you’re not able to apply it…So I guess that’s uh—like the basis of it…And I want to know those terms you’re able to know like you’re able to like learn them and figure out how they connect together like so…“Oh like these two different things are related.” So, you know the definition of them and then you know that they were like related then and kind of how they tie together. Triangulation of qualitative and quantitative results is a standard approach to assessing construct validity of an instrument. The results of this stage of analysis give rise to considerable concern about construct validity. In light of this, the R-SPQ-2F likely does not measure what it is intended to measure in this population, even if it is successful in doing so with other populations. Overall, this information provides additional evidence that the R-SPQ-2F did not discriminate between the surface and deep learning approaches of students taking an undergraduate Anatomy & Physiology course at the time of this study.
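For readers unfamiliar with how the deep–surface differential scores quoted above (e.g., −12 for Angie, +8 for K Diddy) are obtained, the short sketch below computes scale scores in the conventional way: summing the ten Deep-scale and ten Surface-scale items and taking the difference. The item-to-scale assignment shown is the standard published R-SPQ-2F key and is an assumption here; the paper's own table defines the assignment actually used, and the example respondent is simulated.

# Assumed standard R-SPQ-2F key: ten items per scale, responses on a 1-5 scale.
deep_items    <- c(1, 2, 5, 6, 9, 10, 13, 14, 17, 18)
surface_items <- setdiff(1:20, deep_items)

rspq_scores <- function(responses) {   # responses: numeric vector of 20 item answers
  deep    <- sum(responses[deep_items])
  surface <- sum(responses[surface_items])
  c(deep = deep, surface = surface, differential = deep - surface)
}

set.seed(2)
example_respondent <- sample(1:5, 20, replace = TRUE)   # placeholder answers
rspq_scores(example_respondent)
# A negative differential is read as a Surface classification and a positive
# one as Deep, which is how the -12 and +8 values above were interpreted.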
The research team reviewed each R-SPQ-2F item to determine our agreement with the factor and subscale to which it was assigned. This review is a standard technique in evaluating face validity, one aspect of content validity for an instrument. Additional areas of concern with those items in the context of interest were also noted. A summary of this analysis is presented in . Overall, the areas of concern identified with the R-SPQ-2F items can be summarized into four groups: (1) word interpretation issues; (2) course context/alignment; (3) compound items; and (4) validity of the factor/subscale descriptions. Word interpretation was an area of concern identified in eight items (1, 2, 3, 4, 8, 9, 10, 11). For several of these, the use of the words “studying,” “memorizing,” and “understanding” in the prompt was the cause of concern. As noted in the interview protocol in , students were asked for their definitions of these terms and provided varying responses. These findings are fully discussed in Johnson and Gallagher (in press) or Johnson. Additional terms that may vary in their interpretation due to the nature of the audience include “enough work” (item 2), “pass the course” (item 3), and “learn some things by rote” (item 8). As an example, the term “pass the course” may be defined very differently by students depending on their future goals and aspirations. Consider the following quote from Shay discussing her reasons for taking the course: I’m thinking of going to Pharmacy school. And so, this is a prerequisite, like for a lot of Pharmacy schools. Mainly—most of them require both, but some of them just want physiology. But like I mean so I’m gonna be taking both anyway, but it’s also on the PCAT too. So like that type of thing, like I need to be prepared for it for that. For students planning to attend medical school or nursing programs, an A or B in the course may be required when the class is considered a prerequisite. Therefore, questions remain about how participants interpret this phrase, and interpretations likely vary with these factors. The phrase “learn some things by rote” is not a common description in the context of this course or population, and this term was never utilized by participants during their interviews. However, it should be noted that the nature of the course content in Anatomy & Physiology requires memorization or rote learning of many terms or anatomical parts for course success. Four items (4, 5, 6, 9) present concerns related to course context because they are not tied specifically to the course in question. For example, item 4 (I only study seriously what’s given out in class or in the course outlines) is classified as measuring Surface Strategy, but this interpretation would be dependent on the specific expectations for the course in which the survey is completed. For the participants in this study, there is evidence from both the interviews and the course syllabi that deep learning or understanding is required for success in the course and on individual assessments. Shay provided this description: He gives us the lecture objectives. And he says like if you can fill these out without notes, like and you understand it, like you’re able to thoroughly like, write about it, then you’ll do well on the tests, I guess. Therefore, a static assignment of this factor and subscale may not be appropriate and may skew R-SPQ-2F results. Items 5, 6, and 9 are not clearly tied to the course, which seems to violate Biggs’ own assertion that student results from the R-SPQ-2F are course- and context-dependent.
Compound items are present for items 2, 6, 7, 12, and 13. In all cases, the items present two statements that are linked, and these statements describe both a strategy and a motive. For example, item 2 can be separated as follows: 2a) I have to do enough work so that I can form my own conclusions (strategy) and 2b) I have to do enough work before I am satisfied (motive). This pattern is repeated for the other items that are noted and is discussed more fully above in the analysis example of item 13. The most common area of concern with the R-SPQ-2F items was related to the validity of the factor and subscale descriptions, which was noted in 12 of the 20 items (2, 4, 5, 6, 7, 8, 11, 12, 13, 15, 16, 17, 20). Some of these issues were connected to one or more of the other themes we have previously discussed. When looking at factor or subscale assignment issues, consider the following examples. Item 11 (I find I can get by in most assessments by memorizing key sections rather than trying to understand them) is classified as measuring Surface Motive. However, the terms and actions used in this prompt align with a student’s strategy toward the course and its material. In addition, items 15 and 16 do not ask for a strategy or a motive, but probe for student or instructor expectations about a course. Item 20 is classified as Surface Strategy. However, whether item 20 reflects a deep or a surface strategy depends on the type of questions a student uses; application-based questions, for instance, would correspond to a deep approach.
Confirmatory factor analysis For instruments intended to have multiple factors, such as the R-SPQ-2F, factor analysis provides an accurate measure of instrument reliability by determining the best grouping of items to maximize internal consistency. Because the above analyses presented concerns about the survey validity, and because reliability is a necessary condition for validity, a confirmatory factor analysis (CFA) was performed in R Version 3.5.2 to assess the reliability of the two-factor instrument and its fit to the data gathered from the Anatomy & Physiology students in the study. The MVN and lavaan packages were used for multivariate analysis and confirmatory factor analysis, respectively. This CFA was performed following the procedures outlined in Bandalos, and the results reported follow the recommendations of Jackson and colleagues. The objective of the CFA was to determine if the instrument performed at least as reliably for this population of students as it did in previous analyses that form the basis for the justification of its usage in education research. Data preparation A total of 447 responses were obtained. Data collection for Year 1 is described in the Methods section. In Year 2, A&P instructors at six institutions from around the United States sent emails to their classes inviting them to complete the “Anatomy and Physiology Questions” Survey in Qualtrics. The response rate for Year 2 was 28.4% (223 responses from 784 enrolled students). Responses in which students only provided answers for a subset of the items (n = 66) were removed from the dataset via listwise deletion, as the estimation methods in the software packages used for the CFA can only be performed using complete data. This left 381 complete responses to the R-SPQ-2F instrument from Years 1 and 2. For three-factor solutions with three to four variables per factor, Bandalos recommends a sample size of at least 300 if factor loadings are approximately 0.7, and a sample size of 500 or more for lower loadings. The estimates previously obtained through CFA by Justicia and colleagues indicate factor loadings ranging from 0.34 to 0.70, with the majority estimated in the 0.50–0.65 range. Bandalos further states that more accurate loading estimates are obtained when the number of variables per factor is increased. Because we have ten variables per factor and only two factors, our sample size of 381 was judged to be sufficient for a reasonable model estimation. Traditional CFA estimation methods were developed under the assumptions of continuous data, univariate and multivariate normality, and the absence of outliers, so these assumptions were tested prior to analysis. Justicia and colleagues critique the fact that prior CFA work on the R-SPQ-2F, including that of Biggs, did not account for the ordinal data generated by the Likert-type instrument items; however, more recent research suggests that as long as there are at least five ordered categories on the item response scales of an instrument, data can be treated as continuous for purposes of model estimation with minimal bias in parameter estimates. Thus, because each item has five levels of response (1–5), the R-SPQ-2F data were treated as continuous. To assess the data for univariate normality, we considered both the Shapiro-Wilk test and the |2.0| cutoff for item skew and kurtosis recommended by Bandalos. Similarly, for multivariate normality, both Mardia’s test and the |3.0| cutoff for Mardia’s kurtosis coefficient were considered.
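As a rough illustration of how the univariate and multivariate normality screens described above can be run, the sketch below uses the MVN package named in the text. The simulated response matrix is only a placeholder for the real 381 × 20 item data, and the output element names reflect recent MVN releases; treat this as a sketch of the workflow, not the authors' code.

library(MVN)

# Placeholder item responses standing in for the real data frame of 381
# complete cases on the twenty 1-5 Likert items.
set.seed(1)
resp <- as.data.frame(matrix(sample(1:5, 381 * 20, replace = TRUE), ncol = 20))
names(resp) <- paste0("item", 1:20)
resp <- na.omit(resp)   # listwise deletion of incomplete responses

norm_check <- mvn(data = resp,
                  mvnTest = "mardia",     # Mardia's multivariate skew/kurtosis test
                  univariateTest = "SW")  # Shapiro-Wilk test per item

norm_check$univariateNormality    # per-item Shapiro-Wilk results
norm_check$Descriptives           # per-item skew and kurtosis (compare with |2.0|)
norm_check$multivariateNormality  # Mardia's coefficients (kurtosis vs. the |3.0| cutoff)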
Although the Shapiro-Wilk test yields strong evidence of non-normality for all twenty items (p < 0.001), univariate skew and kurtosis values for all items were less than |2.0|, indicating that the deviations from univariate normality were not severe. However, Mardia’s test indicates deviation from multivariate normality (p < 0.0001), and the kurtosis coefficient of 6.87 is well above the |3.0| threshold. As a result, the estimation methods in this CFA were chosen to allow for non-normal data. Using the MVN package in R, univariate outliers were identified for items 1, 2, 7, 11, 12, 13, 15, 16, and 18, and over forty multivariate outliers were identified for the entire dataset. The CFA model was fit to the data both with and without outliers to determine whether they had a noteworthy effect on the model fit, and while removing outliers resulted in slightly different values for parameter estimates and fit indices, evaluation of the fit indices overall did not change the assessment of the model as a whole. Ultimately, the decision was made to retain the outliers in the dataset, as they represented a reasonable range of student responses from the population of interest. The CFA results presented in this paper reflect those for the full dataset with no outliers removed. The covariance matrix of item responses was utilized as the input matrix for the CFA. The corresponding Pearson correlation matrix and standard deviations are provided in for those wishing to replicate our analysis. Note that, with the exception of the correlation between items 10 and 14, all correlations between items on the same scale (Deep or Surface) are at least 0.1. Correlations between items on different scales are all less than 0.1 and, in many cases, negative. In general, we see a relationship between the items we would expect to be correlated based on the nature of the instrument. Model specification The model tested in this CFA is that reported by Justicia and colleagues, which consists of the two factors hypothesized to represent Deep and Surface approaches to learning, but not the Motive and Strategy subscales. In this model, ten of the twenty Likert-type items are hypothesized to load onto the Deep approach factor, while the remaining ten correspond to the Surface approach factor. A description of which items are associated with which factors is included in , and the graphical representation of the model is shown in . Alternative models were not tested; while conducting further analyses to explore the existence of better-fitting models would likely be beneficial, doing so was outside the scope of this study. The purpose of this CFA was solely to assess the fit and reliability of the existing instrument structure for the population of interest as compared with the results provided by Biggs and Justicia and colleagues. Model identification This model meets the requirements for identification as described by Bollen, as it has the following characteristics: (1) ten items load onto each of the two factors, which is greater than the minimum requirement of three; (2) each item loads onto only one factor (either Deep or Surface); and (3) we assume the measurement error variances to be uncorrelated. Further, we set the factor metric by fixing the mean and variance of the factor “scores” to zero and one, respectively, which allows us to interpret the completely standardized factor loading estimates as the number of standard deviations that an item score would change for a one standard deviation change in the factor.
These specifications result in an overidentified model with 169 degrees of freedom. Estimation of model parameters Two natural choices for model estimation arise, given the characteristics of the data. The first is weighted least squares (WLS), which Justicia and colleagues use in the study that led to the two-factor structure of the instrument used predominantly in education research. WLS estimation is advantageous because it makes minimal assumptions about the distribution of the observed variables, and thus the violation of multivariate normality for the R-SPQ-2F data does not pose an issue . In fact, Justicia and colleagues critique Biggs’ and other researchers’ appearance to ignore the non-normality of the data in prior factor analyses conducted for the R-SPQ-2F . However, in order to be most informative, WLS requires large sample sizes upwards of 2,000 sample points; research shows that, if the sample size is too small, WLS estimation can result in biased parameter estimates, inaccurate standard errors, and a poor fit to the data . It is of note that Justicia and colleagues do not consider this limitation of WLS estimation in their study ( n = 522). An alternative approach, recommended by Bandalos for when data are non-normal and large sample sizes are not available , is to use the more traditional maximum likelihood (ML) estimation and apply Satorra-Bentler (S-B) adjustments, which correct for the tendency of non-normality to inflate the chi-square goodness-of-fit statistic and underestimate parameter standard errors . In our CFA, we assessed the model fit by considering results from both the WLS and ML estimation methods. The maximum likelihood approach is preferred, based on the recommendations of Bandalos. However, the WLS approach was conducted alongside it to compare with the results obtained from Justicia and colleagues . Completely standardized parameter estimates for factor loadings and standard errors using both approaches are displayed in , with the preferred ML estimates in large text and the comparative WLS estimates in parentheses and in smaller text beside them. We see that factor loadings are higher and standard errors are lower when using WLS estimation; however, this should be considered cautiously in light of the small sample size. Still, the ML estimates indicate loadings of 0.35 or higher for all R-SPQ-2F items, similar to those reported by Justicia and colleagues . Also of interest are the R 2 values for each item. For the completely standardized estimates, these values can be computed by squaring the estimated loading of each item and are shown in . Each of these values can be interpreted as the proportion of variance in the item response that can be accounted for by the factor. The ML approach estimates that the first factor, hypothesized to be the Deep approach, accounts for 17.1% to 45.3% of the variance in item responses, and the second factor, or hypothesized Surface approach factor, accounts for 12.8% to 41.3% of the variance in item responses. Weighted least squares R 2 estimates are also included for comparison. Model testing As is fairly common in CFA research despite controversy over its usefulness, chi-square goodness-of-fit tests were conducted for each estimation method. In this test, the null hypothesis is that the model is a good fit to the data, so we hope to see p -values greater than a significance level of 0.05 when assessing the fit of a hypothesized model. 
However, given the dependency of the chi-square test on sample size and its tendency to reject the null hypothesis even when a model fits well (i.e., an inflated probability of a Type I error), Bandalos advocates for assessing a model using multiple fit indices to account for the chi-square test’s shortcomings. Similarly, Jackson and colleagues strongly recommend including several fit indices and specifying the cutoff values for each fit index a priori. For this analysis, cutoffs based on prior research were chosen for the comparative fit index (CFI), Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Hu and Bentler suggest that CFI and TLI values of 0.95 or higher indicate good fit of a model, while values between 0.90 and 0.95 indicate acceptable fit. For RMSEA and SRMR, values ≤0.05 are indicative of good model fit, while values ≤0.08 indicate moderate but acceptable fit. Fit indices were generated for the R-SPQ-2F model under each type of estimation and are displayed in . Internal consistency of the items as they relate to the Deep and Surface approach factors was assessed by computing McDonald’s omega for each factor using the semTools package in R. Omega coefficients are reported in for each estimation method.
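The following sketch pulls the pieces described in this section together in lavaan and semTools syntax: the two-factor specification (using the standard published item key as an assumption), robust maximum likelihood with Satorra-Bentler corrections (lavaan's "MLM" estimator), WLS for comparison, the fit indices, the standardized loadings with their R-squared values, and McDonald's omega. The simulated responses are placeholders, so the numbers it produces are meaningless; with the study's actual item data, a pipeline of this shape would yield the kinds of quantities reported above.

library(lavaan)
library(semTools)

# Placeholder data standing in for the 381 complete responses (1-5 Likert items).
set.seed(1)
resp <- as.data.frame(matrix(sample(1:5, 381 * 20, replace = TRUE), ncol = 20))
names(resp) <- paste0("item", 1:20)

# Two-factor model; the item-to-factor key below is the standard published
# assignment and is an assumption here. With std.lv = TRUE the factor
# variances are fixed to 1, leaving 20 loadings + 20 residual variances +
# 1 factor covariance = 41 free parameters against 210 observed moments,
# i.e., 169 degrees of freedom, as stated in the text.
deep_items <- c(1, 2, 5, 6, 9, 10, 13, 14, 17, 18)
rspq_model <- paste0(
  "deep =~ ",    paste0("item", deep_items,                collapse = " + "), "\n",
  "surface =~ ", paste0("item", setdiff(1:20, deep_items), collapse = " + ")
)

fit_mlm <- cfa(rspq_model, data = resp, std.lv = TRUE, estimator = "MLM")  # ML with Satorra-Bentler corrections
fit_wls <- cfa(rspq_model, data = resp, std.lv = TRUE, estimator = "WLS")  # WLS for comparison; generally
                                                                           # requires much larger samples

# Chi-square test and the alternative fit indices (CFI, TLI, RMSEA, SRMR)
fitMeasures(fit_mlm, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))
fitMeasures(fit_wls, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))

# Completely standardized loadings and the per-item R-squared values
standardizedSolution(fit_mlm)
lavInspect(fit_mlm, "rsquare")

# McDonald's omega per factor (reliability() in older semTools releases;
# newer releases provide compRelSEM())
reliability(fit_mlm)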
These results yield some concerns over the reliability of the R-SPQ-2F when the instrument is administered to undergraduate A&P students. However, as discussed below, the results are at least as reliable with this population as with other populations of interest where reliability estimates have been reported. There are much stronger concerns over the validity of the results of the R-SPQ-2F when administered to undergraduate A&P students. At best, our quantitative analysis yields moderate reliability of results with this population. As indicated in , the chi-square test results under each type of model estimation indicate strong evidence (p < 0.001) that the model is not a good fit for the data. Looking to the alternative fit indices, we see that none of those calculated for the model using the WLS estimation method indicate a good fit. The “best” results are shown for the ML estimation method with the S-B adjustments for non-normality. Though the CFI and TLI fits do not indicate a good model fit, RMSEA and SRMR both indicate fit index values that correspond to “acceptable” model fits with respect to the index cutoffs specified a priori. For comparison, we consider the results of the confirmatory factor analysis by Justicia and colleagues; when taking the preferred maximum likelihood approach, our CFI values are worse than those found by the authors in their assessment of the R-SPQ-2F (0.92 for their preferred model), but our RMSEA and SRMR indices are slightly better (Justicia and colleagues reported RMSEA = 0.07 and SRMR = 0.09). Though this is certainly not evidence of a “good” model fit, the model can be deemed at least as acceptable as that reported by Justicia and colleagues. Though the analysis of the R-SPQ-2F instrument by Justicia et al. does not report measures of internal consistency, Biggs reports Cronbach’s alpha scores of 0.73 for the Deep approach factor and 0.64 for the Surface approach factor. McDonald’s omega is argued to be a similarly interpreted but more accurate measure of internal consistency than Cronbach’s alpha when performing confirmatory factor analysis on an instrument such as the R-SPQ-2F. In this regard, using either estimation method, our CFA indicates more internal consistency and thus better reliability with our population of interest than the results presented by Biggs, whose Cronbach’s alpha scores were deemed acceptable. Taken holistically, the confirmatory factor analysis indicates that the R-SPQ-2F instrument is at least as reliable with this population of undergraduate A&P students as it is reported to be in the studies by Biggs and Justicia and colleagues with their populations of interest. Because reliability is a necessary condition for validity, should the reliability of the survey be deemed insufficient by some standards based on the CFA results, the validity of the instrument would justifiably be called into question. However, the R-SPQ-2F has been used and continues to be used in education research. This continued use indicates acceptance, either explicitly or implicitly, of the reliability of the instrument based on the results reported by Biggs and Justicia et al. Even if one accepts the results of the survey as reliable based on this standard, we nonetheless have reason to believe that the two factors measured by the instrument do not truly represent deep and surface approaches to learning, calling into question the validity of the instrument, at least with this population and potentially with other populations as well.
This is cause for concern given the continued usage of the R-SPQ-2F in education research, and in A&P education research in particular. Results from the qualitative and quantitative item comparisons yielded eight items with mild misalignment concerns and four items with significant concerns (12/20 or 60% of items). The comparison of qualitative and quantitative scales raised concern, as evidence was present that student learning approaches were not distinguished by the R-SPQ-2F instrument. The review of all 20 items produced concerns in all but three items, and several of the items with concerns had multiple issues. The research team identified word interpretation issues, interactions between the course context and phrasing of items, presence of compound items, and items assigned to a specific factor or subscale that call into question construct validity. One possible explanation for the issues observed with the R-SPQ-2F in this study is the lack of recognition of the achieving approach to learning, which has been previously noted in the literature. Kember defined an achieving approach as “an approach that believes memorization is necessary to maintain a high grade, but desires to connect new information to previous knowledge”. As previously mentioned, many of the participants of this study expressed aspirations to attend professional or graduate school. This fact motivated them to achieve high grades while they desired to make additional connections to their existing knowledge. Biggs and colleagues briefly acknowledge this orientation in relation to the original SPQ, stating that “higher order factor analyses [of the original SPQ] usually associate the achieving motive and strategy with the deep approach”. However, the data presented in this paper call into question whether this association holds for the updated instrument and for this population. In fact, participants who qualitatively described a learning approach in alignment with the achieving definition were not consistently categorized by the R-SPQ-2F as adopting a deep approach to learning. Another factor to consider related to the validity of this instrument with undergraduate A&P students is the nature of the discipline itself. The participants in this study noted multiple times the need to memorize certain aspects of the course material (classified as a surface approach within the SAL literature) in order to be able to fully understand it. We categorized these responses as surface to deep approaches within the qualitative data. Michael and colleagues note that physiology is difficult for students to learn, partly because of the need for an adequate knowledge base or other prerequisite knowledge. Much of this knowledge, like names and locations of anatomical parts or various terms, can only be learned through processes or strategies that are often categorized by instructors and researchers as surface approaches. Given this information, it may also be helpful to consider the surface, achieving, and deep approaches to learning not only as context-dependent characteristics, but perhaps as traits on a continuum rather than as discrete categories or groupings. Finally, the possible issues mentioned above may stem from the employment of QMPs in the development of the R-SPQ-2F, mainly relating to the construct validity of the factors and subscales. Biggs and colleagues provide no information concerning the revision or retention of items from the longer Study Process Questionnaire.
As stated by Flake and Fried: As such, modifications…introduce uncertainty about the construct validity evidence for the interpretation of the scale score. (p. 16) As mentioned previously, there is no published information providing qualitative data to support the construct definitions as connected to the items present on the R-SPQ-2F. This gap in the literature makes it difficult to determine the overall validity of this instrument in categorizing student approaches to learning for any population. Our work indicates that the instrument does not produce valid results for the specific population of undergraduate A&P students.
Although the R-SPQ-2F is widely used and results are accepted as reliable based on previously published measures of internal consistency, questions about the validity of the instrument remain. Validity issues could be attributed to several causes, including omission of the achieving approach to learning, specific features of the biological subdisciplines of anatomy and physiology, or the employment of QMPs in the development of the instrument. Further study is needed to determine the identity of specific factors measured by the R-SPQ-2F. Limitations This work did not begin with the intent to analyze the validity and reliability of the R-SPQ-2F. The interview protocol did not probe directly for answers to the survey prompts, so important ideas and themes from the instrument may not have been detected. However, care was taken to interpret participant words at face value and only declare a misalignment when the qualitative data presented a clear disagreement with the survey prompt. Future work As previously mentioned, researchers or practitioners who wish to utilize the R-SPQ-2F should test the validity of the instrument in their population of interest prior to use. These analyses should include testing for face validity and construct validity. Alternatively, an updated instrument that measures or categorizes student learning approaches as surface or deep could be developed for populations for which the R-SPQ-2F is not valid. Given the minimally acceptable reliability of the two-factor structure of the R-SPQ-2F, additional study could clarify the constructs that are being measured. In addition, it would be worthwhile to evaluate other theoretical factor structures to test alternative models through exploratory and confirmatory factor analyses, considering the overall poor fit to the data in this study. Finally, future work could evaluate the impact of student approach to learning in specific course contexts with revised or redeveloped instruments. This could involve a study of specific teaching tools or practices in undergraduate courses.
Microfluidic design in single-cell sequencing and application to cancer precision medicine | 85452273-b66a-4589-932e-f72470010e7e | 10545941 | Internal Medicine[mh] | Both intra-patient and intra-tumor heterogeneity have profound implications for the progression and treatment of cancers. Intra-tumor heterogeneity directly influences tumor proliferation, metastasis, and treatment resistance. The microenvironment of tumors and the heterogeneity of infiltrated immune cells determine the sensitivity of immunotherapy. Thus, the idea of precision medicine, i.e., therapies and treatment plans that are tailored to the genomic and cellular specificities of a given patient, has gained increased attention. An accurate characterization of specific molecular or cellular features of cancers and the surrounding microenvironment will be needed to provide therapeutic options of maximum benefit to individual patients. Standard oncology cell sequencing involves bulk profiling of a large sample consisting of thousands to millions of cells. This technique, however, is constrained by the loss of important information about the structural and functional properties of individual cells that may drive critical tumor processes. To address this dilemma, single-cell sequencing (SCS) has been developed to enable comprehension of the complex genetic variability within individual cells. Information derived from SCS potentially has a vital function in terms of including forecasting treatment response, tracking disease progression, prognosis, anticipating treatment efficacy, and predicting emerging drug resistance . The development of SCS platforms and workflows have developed hand in hand with microfluidics-based technologies. Microfluidics has helped in particular to solve the challenge of low throughput in SCS but also offers the potential for integration and automation to improve the performance of SCS. In recent years, a number of great reviews have been published to describe the combination of microfluidics and SCS techniques. , , , , , , Here, we aim to provide a thorough description of single-cell isolation techniques based on different microfluidics design principles, as well as an overview of the complementary sequencing or profiling technologies used with these. We also highlight recent applications in precision oncology research and challenges and perspective of microfluidics-based SCS in clinical settings.
SCS can generally be described in the following steps: (1) sample acquisition; (2) cell isolation; (3) cell lysis; (4) amplification (library preparation); (5) sequencing (e.g., Illumina, Ion Torrent, Pacific Biosciences, and Oxford Nanopore technologies); and (6) cellular mono-omics or multiomics analysis. Microfluidics-based cell isolation combines small sample volume and high throughput, and it is therefore broadly employed in SCS research and in the development of commercialized products. According to different microfluidic technology principles, microfluidic chips for SCS can be categorized into two general types: devices based on microstructural entrapment and those with external forces. In this section, we outline six structures that are utilized in microfluidics-based SCS. Schematics and a summary of some of the key advantages and disadvantages of these designs are also summarized in .
Droplet-based microfluidics has become one of the most frequently used single-cell isolation methods owing to its simplicity of operation, high throughput, and low cost. Recently, there have been numerous review publications regarding the application of droplets in single-cell isolation and analysis. , , Droplets are generated by applying two immiscible fluids and could be generated by three classical geometries of microfluidic devices, which are T-junctions, co-flow, and flow focusing. The small volume of droplets co-encapsulated with cells and microbeads reduces the molecular reaction time and avoids external contamination. The aqueous phase of the droplet in SCS can also be replaced by hydrogel. Hydrogel droplet provides a 3D culture matrix to support single-cell cultures for longer periods and is flexible for downstream analysis and processing. Different microfluidics modules can subsequently be leveraged for highly flexible manipulation of droplets, such as incubation, splitting, merging, reloading, detection, and sorting. Notwithstanding the advantages of droplet microfluidics, the cells encapsulated in droplets are randomly distributed according to a Poisson distribution, risking the creation of empty droplets and droplets containing multiple cells. There are several solutions to this challenge. Fluorescence and magnetophoretic sorting have been adopted to screen the droplets themselves. Alternatively, a number of algorithms have been developed to statistically identify reads from empty or multicell droplets from sequencing datasets themselves. , In terms of droplet generation, fluorocarbon oil is the most commonly selected oil for single-cell assays because of its good permeability, and block copolymer of polyethylene glycol and perfluoropolyether (PEG-PFPE) is the most frequently chosen surfactant due to its good stability, but the risk of leakage due to the fragility of the droplets still cannot be overcome. , As such, the development of more high-performance surfactants may be meaningful for a wide range of applications of droplets.
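Because cell loading into droplets follows a Poisson distribution, the trade-off between empty droplets and multiplets can be estimated directly from the mean loading rate. The short sketch below is a generic illustration of that calculation; the loading values are arbitrary examples rather than parameters of any specific platform.

```python
import math

def droplet_occupancy(lam: float) -> dict:
    """Poisson probabilities for droplet occupancy at a mean loading of `lam` cells per droplet."""
    empty = math.exp(-lam)
    single = lam * math.exp(-lam)
    multi = 1.0 - empty - single
    return {
        "empty": empty,
        "single": single,
        "multiplet": multi,
        # Of the droplets that contain at least one cell, how many are true singlets?
        "singlet_purity": single / (1.0 - empty),
    }

# Dilute loading keeps multiplets rare at the cost of many empty droplets.
for lam in (0.05, 0.1, 0.3, 1.0):
    occ = droplet_occupancy(lam)
    print(f"lambda={lam:>4}: " + ", ".join(f"{k}={v:.3f}" for k, v in occ.items()))
```

Running this shows why typical droplet workflows load cells sparsely and then rely on sorting or computational filtering to handle the residual empty and multicell droplets.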
Microvalves for microfluidic control can be classified into four categories: active mechanical, active non-mechanical, passive mechanical, and passive non-mechanical. In general, active mechanical microvalves offer the best performance, whereas simple passive valves are more practical for applications. , Microvalves serve to isolate specific areas of the channel network, permitting the generation of reaction chambers and performing independent reactions. In order to achieve different needs, the valves are able to be opened or closed. For example, in a multilayer microfluidic structure designed by Chen and colleagues, a membrane is sandwiched between two layers, and pneumatic microvalves are able to regulate the membrane size and pressure, enabling dynamic control of the upper and lower size limits of the isolated cells. Hydrogels can also be involved in the construction of microvalves in microfluidics to regulate fluid flow by controlling the solubility of hydrogels in channels. Obst et al. developed a microfluidic hydrogel valve that is capable of visually switching on and off by ultraviolet (UV) spectroscopy. Valve-based microfluidics can achieve microfluidic large-scale integration, enabling the fabrication of microfluidic chips consisting of hundreds to thousands of pneumatic membrane valves similar to an array of flow-controlled multiplexers. This type of system allows precise manipulation of cells and reagents to automate more sophisticated analysis. Nevertheless, the higher manufacturing cost and complex operation of these devices may present barriers to widespread commercial applications.
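To give a sense of why valve-based large-scale integration scales well, the sketch below estimates how many control lines a binary multiplexer needs to address a given number of flow channels (2·log2 N lines for N channels) and which lines must be pressurized to leave a single channel open. The line-naming convention here is purely illustrative and not a standard nomenclature.

```python
import math

def control_lines_needed(n_flow_channels: int) -> int:
    """A binary multiplexer addresses N flow channels with 2*ceil(log2(N)) control lines,
    one pair of lines per address bit."""
    return 2 * math.ceil(math.log2(n_flow_channels))

def lines_to_pressurise(channel: int, n_flow_channels: int) -> list:
    """For each address bit, pressurise the control line that seals every flow channel
    whose bit differs from the selected channel, leaving only `channel` open."""
    n_bits = math.ceil(math.log2(n_flow_channels))
    pattern = []
    for b in range(n_bits):
        bit = (channel >> b) & 1
        pattern.append(f"bit{b}_closes_{1 - bit}s")  # illustrative name for the paired line
    return pattern

print(control_lines_needed(256))                  # 16 control lines can address 256 channels
print(lines_to_pressurise(5, n_flow_channels=8))  # select channel 5 of 8
```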
Hydrodynamic traps are structures that are currently popular for passively trapping cells. This device is designed in a manner that allows individual cells to be effectively trapped in a mechanical barrier by the vortices induced by a fluid flow. , When a cell occupies a typical trap, the altered dynamic flow surrounding it has increased fluid resistance, which limits the access of other cells to the trap. Carlo et al. designed a U-shaped hydrodynamic cell trap to accomplish dynamic control of individual adherent cells for array culture and fluid perfusion. Besides U-shaped traps, other geometric traps are also available, such as hook-shaped traps, ratchet filters, and multibarrier structure-type filters. Microfluidics based on traps can achieve a large number of single-cell separations, but the efficiency of trapping and trap clogging are challenges to be considered. A proposed continuous hydrodynamic channel consisting of a main channel for loading cells, a narrow trap site, and a downstream bypass partly address the problem. A compact geometry designed by Jin et al. produced faster and higher capture efficiency compared to the serpentine cell-trapping structure. Apart from these, Lipp et al. proposed a planar hydrodynamic trap structure to overcome the problem of low trap density in the chip, which is composed of two superimposed microchannels with particles being trapped in the top channel. Overall, hydrodynamics-based single-cell trapping microfluidics demands meticulous design of structures and flow rates to improve trapping efficiency while reducing mechanical damage to cells and avoiding chip clogging.
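The balance between trap and bypass resistance mentioned above can be estimated with the standard lumped-element approximation for laminar flow in rectangular channels. The sketch below compares an illustrative short trap path with a longer bypass; the dimensions are invented for demonstration and are not taken from any of the cited designs.

```python
def rect_channel_resistance(mu, L, w, h):
    """Approximate hydraulic resistance of a rectangular microchannel (w >= h),
    R = 12*mu*L / (w*h**3*(1 - 0.63*h/w)), valid for low-Reynolds laminar flow."""
    assert w >= h, "approximation assumes width >= height"
    return 12.0 * mu * L / (w * h**3 * (1.0 - 0.63 * h / w))

mu = 1e-3  # Pa*s, water at room temperature
# Illustrative geometry (metres): a short trap constriction in parallel with a long bypass.
R_trap   = rect_channel_resistance(mu, L=40e-6,   w=25e-6, h=15e-6)
R_bypass = rect_channel_resistance(mu, L=1500e-6, w=40e-6, h=30e-6)

# For parallel paths under the same pressure drop, flow splits inversely to resistance.
frac_through_trap = R_bypass / (R_trap + R_bypass)
print(f"R_trap = {R_trap:.2e} Pa*s/m^3, R_bypass = {R_bypass:.2e} Pa*s/m^3")
print(f"Fraction of flow through the empty trap: {frac_through_trap:.2f}")
```

In this toy case most of the flow passes through the empty trap, so an arriving cell follows the dominant streamline into it; once the trap is occupied and blocked, subsequent cells divert to the bypass, which is the self-limiting behavior described above.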
Contrary to hydrodynamic traps, microfluidics based on microwell arrays for single-cell separation is mainly forced by gravity. Due to simplicity and scalability, microwell arrays are well accepted in the single-cell analysis field. Microwells are capable of being produced in a variety of geometric shapes with cross-sectional round, triangular, square, and rectangular shapes. , Indeed, the shape of the microwell array has an effect on the capture efficiency. Both Park et al. and Lai et al. have proven, by comparing different shapes of microwells, that triangular microwells are a more efficient geometry for capturing cells because the water phase generates a stronger backflow in the triangle. Surface functionalization of the microwells is also a critical factor to be taken into account regarding the capture efficiency of the cells, as is the size of the microwells. Microwell chips are easy to design, versatile in structure, and flexible in operation and thus are suitable for integration with other platforms to accomplish high-throughput single-cell analysis. However, applications can be restricted by the chip size and fabrication techniques. And when facing changes in experimental conditions, microwell chips will require redesign and manufacture. Cross-contamination is another thorny issue due to the difficulty of achieving complete isolation between microwells. In addition to this, microwell chips have the limitation of it being hard to retrieve the sample.
The introduction of external forces in microfluidic devices has become an effective technique for single-cell trapping. Digital microfluidics (DMF) is a technology for actuating droplets based on the principle of electrowetting on dielectric (EWOD). It involves altering the surface wettability by applying an electrical potential to an array of electrodes underneath a hydrophobic surface, which ultimately pulls the droplet toward the activated electrode. Addressing different electrodes with a programmed sequence of applied voltages can move the droplet in different directions across the DMF chip surface. The EWOD effect manifests as a contact-angle shift, which is quantitatively characterized by the Young-Lippmann equation. DMF features outstanding flexibility and simplicity in handling picoliter-to-microliter-volume samples and is widely employed for single-cell assays. The DMF chip designed by Ruan et al. contained 10 hydrophilic sites on an array of 95 electrodes, allowing single cells to be captured on the hydrophilic sites through precise droplet manipulation. However, for this type of DMF chip, throughput is low, and its lifetime may be shortened by the high drive voltage applied to the droplet. Zhai et al. have reported an alternative DMF chip with a 3D microstructure for cell capture, combined with gas-soluble silicone oil and a fluorinated surfactant to reduce the drive voltage to 36 V. However, the complex manufacturing process and high cost of DMF chips remain significant barriers to wider adoption of these designs.
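As a rough illustration of the Young-Lippmann relation referenced above, the sketch below computes how the contact angle of a droplet would change with applied voltage for an assumed dielectric stack. The layer thickness, permittivity, interfacial tension, and initial angle are placeholder values, and real devices saturate before reaching complete wetting.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def young_lippmann_angle(theta0_deg, V, eps_r, d, gamma):
    """Contact angle under applied voltage V from the Young-Lippmann relation:
    cos(theta_V) = cos(theta_0) + eps0*eps_r*V**2 / (2*gamma*d)."""
    cos_v = math.cos(math.radians(theta0_deg)) + EPS0 * eps_r * V**2 / (2.0 * gamma * d)
    cos_v = min(cos_v, 1.0)  # crude guard; real devices saturate before full wetting
    return math.degrees(math.acos(cos_v))

# Illustrative EWOD stack: ~1 um dielectric (eps_r ~ 3), aqueous droplet in oil.
for V in (0, 20, 40, 60, 80):
    theta = young_lippmann_angle(theta0_deg=120, V=V, eps_r=3.0, d=1e-6, gamma=0.04)
    print(f"V = {V:3d} V -> contact angle ~ {theta:5.1f} deg")
```

The quadratic dependence on voltage is why thinner or higher-permittivity dielectrics, as in the 3D-microstructure chip mentioned above, allow the drive voltage to be reduced.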
Dielectrophoresis (DEP) is widely used in cellular manipulation owing to its non-invasive nature, selectivity, and high capture efficiency. In DEP, a non-uniform electric field is applied, and separation of single cells is based on the relative difference in polarization between the cell and the surrounding medium. Based on operating mode, DEP is mainly grouped into electrode-based DEP (eDEP), insulator-based DEP (iDEP), and contactless DEP (cDEP). In DEP, large field gradients can lead to joule heating, which can harm cells. In addition, because the electric field gradient extends only a short distance from the electrode or insulating structure, another critical difficulty of DEP-based single-cell isolation is low volumetric throughput, especially when low-abundance cells need to be studied. Microfluidics with 3D electrodes can overcome these challenges to some extent. Puttaswamy et al. developed a 3D electrode with a height of 70 μm to enable capture and rotation of red blood cells. However, the fact that the electrodes have to be wired to the power supply adds complexity to the microfluidic device architecture. Integrating wireless electrodes eliminates the branching microchannels otherwise needed for electrical leads, making it easier to scale DEP devices into large arrays. Li et al. developed an array of over 700 wireless bipolar electrodes (BPEs) that accomplished higher capture rates of single cells. DEP microfluidics is a highly usable platform for cell separation, but common problems include joule heating, bubble generation, and pearl-chain formation.
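The frequency dependence that underlies positive versus negative DEP can be illustrated with the Clausius-Mossotti factor of a homogeneous sphere. Real cells are usually described with shelled models, so the particle and medium properties below are simplified placeholders intended only to show the crossover behavior.

```python
import numpy as np

EPS0 = 8.854e-12  # F/m

def clausius_mossotti(freq_hz, eps_p, sig_p, eps_m, sig_m):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere.
    Positive values -> positive DEP (toward field maxima); negative -> negative DEP."""
    w = 2 * np.pi * freq_hz
    eps_p_c = eps_p * EPS0 - 1j * sig_p / w   # complex permittivity of the particle
    eps_m_c = eps_m * EPS0 - 1j * sig_m / w   # complex permittivity of the medium
    return np.real((eps_p_c - eps_m_c) / (eps_p_c + 2 * eps_m_c))

# Placeholder parameters: a cell-like particle in a low-conductivity buffer.
freqs = np.logspace(3, 8, 6)  # 1 kHz to 100 MHz
re_k = clausius_mossotti(freqs, eps_p=60, sig_p=0.3, eps_m=78, sig_m=0.01)
for f, k in zip(freqs, re_k):
    regime = "pDEP" if k > 0 else "nDEP"
    print(f"{f:9.0f} Hz  Re[K] = {k:+.2f}  ({regime})")
```

Sweeping the frequency in this way is how DEP devices are tuned so that target cells experience one regime while contaminating particles experience the other.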
Sequencing genomes, transcriptomes, epigenomes, proteomes, etc., of single cells facilitates a precise understanding of individual tumors and, therefore, achieving precision therapies. The incorporation of microfluidics with single-cell omics analysis offers several benefits, not only improving the throughput and sensitivity of cellular analysis but also reducing the possibility of contamination and bias during the amplification process. In the following sections, we discuss different sequencing modalities and the microfluidic designs that have been paired with these. Meanwhile, a summary of the partial methods and applications of SCS to cancer is shown in , and microfluidics-based SCS applications to cancer are presented in .
Single-cell DNA sequencing (scDNA-seq) provides insight into intercellular variation and heterogeneity of the genome at varying levels, including single-nucleotide variants (SNVs) and copy-number aberrations (CNAs). scDNA-seq is arguably the most direct way to reveal clonal substructures and lineages for tumor evolution, treatment, or metastasis. Sequencing the genome, however, is challenging due to the fact that there are only few copies of DNA molecules in a cell. Hence, whole-genome amplification (WGA) is a key part of scDNA-seq. The most frequent WGA methods currently utilized are multiple displacement amplification (MDA) and degenerated oligonucleotide primer-polymerase chain reaction (DOP-PCR). Newer methods include multiple annealing and looping-based amplification cycles (MALBACs) and linear amplification via transposon insertion (LIANTI). WGA generally presents several challenges, such as variability in sequencing depth, allele shedding (two alleles will not amplify at the same time), amplification bias, etc. Moreover, structural variants (SVs) of genes include deletions, duplications, insertions, and translocations, which can be difficult to recognize with the WGA approaches. Several technologies have been proposed to tackle these problems. Non-amplified Tn5-labeled scDNA-seq methodologies such as microfluidics-based DLP and DLP+ ( A) can avoid amplification based errors. Ruan et al. have developed a DMF chip that enabled the isolation of single cells with a butterfly structure of hydrodynamic traps ( B). In this design, cells were amplified by mixing them with lysate and MDA reagent via actuated droplets. The entire step of single-cell nanoliter-volume MDA processing was achieved in an integrated and non-contact manner. This could reduce amplification bias and errors in exponential amplification and provided excellent detection of CNVs and SNVs. A key contribution of scDNA-seq is to reveal the clonal evolution of drug resistance in tumor cells after targeted therapy. scDNA-seq together with microfluidics offers the possibility of achieving direct clinical relevance for the dynamic treatment of cancer. As an example, McMahon et al. performed high-throughput scDNA-seq of patients with FLT3-mutated acute myeloid leukemia (AML) who received gilteritinib clinical therapy by utilizing a commercial two-step droplet microfluidic platform (Tapestri). The results identified activation of RAS/MAPK pathway signaling by gilteritinib treatment, which is a clinically significant mechanism of drug resistance, and suggested that a combinatorial approach to enhance antileukemic cytotoxicity will be required to treat AML.
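As a schematic illustration of how CNAs are typically inferred from scDNA-seq data, the sketch below converts binned read counts into a per-cell copy-number profile by normalizing against a reference and smoothing. Production pipelines add GC correction and formal segmentation (e.g., hidden Markov models or circular binary segmentation); all numbers here are simulated.

```python
import numpy as np

def copy_number_profile(bin_counts, reference_counts, ploidy=2):
    """Rough per-bin copy-number estimate for one cell: normalise raw bin counts by a
    matched reference, anchor the median at the assumed ploidy, then smooth."""
    counts = np.asarray(bin_counts, dtype=float)
    ref = np.asarray(reference_counts, dtype=float)
    ratio = (counts / counts.sum()) / (ref / ref.sum())   # depth-normalised ratio
    cn = ploidy * ratio / np.median(ratio)                 # median bins assumed diploid
    # simple moving-median smoothing as a stand-in for proper segmentation
    k = 5
    pad = np.pad(cn, (k // 2, k // 2), mode="edge")
    smooth = np.array([np.median(pad[i:i + k]) for i in range(len(cn))])
    return np.round(smooth).astype(int)

# Toy example: 40 genomic bins with a simulated single-copy gain in bins 20-29.
rng = np.random.default_rng(1)
reference = rng.poisson(100, size=40)
tumour = rng.poisson(np.r_[np.full(20, 100), np.full(10, 150), np.full(10, 100)])
print(copy_number_profile(tumour, reference))
```

Comparing such profiles across many cells is what allows the clonal substructures and resistance-associated lineages discussed above to be reconstructed.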
Single-cell RNA sequencing (scRNA-seq) is the most popular sequencing modality in tumor research and therapy. In oncology therapeutic investigations, scRNA-seq provides a valuable perspective by analyzing the factors that determine cellular expression and functional heterogeneity within tumors. In scRNA-seq, RNA is first required to be converted into cDNA. In terms of different methods of reverse transcription and amplification, scRNA-seq can be broadly classified into three groups. Poly(A)-conjugated PCR is one option, though it is not often adopted due to reduced sequencing sensitivity and loss of low-abundance transcripts. In vitro transcription (IVT) is a linear amplification process that involves complex primers containing the T7 promoter at the 5′ end. The approach is applicable to the amplification step of cell expression by linear amplification and sequencing (CEL-seq). Lastly, reverse transcription of RNA by Moloney murine leukemia virus (MMLV) could enable synthesis of full-length cDNA by including the complete 5′ end, with reduced 3′ bias of transcription. MMLV-based reverse transcription has been utilized in Smart-Seq, Smart-Seq2, and single-cell tagged reverse transcription sequencing (STRT-seq) workflows. In order to overcome the bias problem, a unique molecular identifier (UMI) is incorporated in the reverse transcription step to barcode each mRNA molecule in the cell, enhancing the quantitative nature of scRNA-seq and improving the accuracy of the reads. In scRNA-seq, the two most common approaches are droplet-based scRNA-seq and microwell-based scRNA-seq. The accuracy and sensitivity of microwell-based scRNA-seq may be superior, but the cost of commercialization is higher than that of droplet-based approaches. InDrop, Drop-seq, and 10× Genomics Chromium are the more common droplet-based technologies due to their low cost, easy operation, and high throughput. All three platforms involve the use of barcoded bead primers to distinguish individual cells and UMIs for bias correction. Beyond the more standard droplet-based methods, a number of groups have been working to leverage alternative microfluidic design principles in their scRNA-seq workflows. Li et al. achieved transcriptional gene analysis of tumor cells with different secretion efficiencies of MMP9 by using hydrodynamic capture of cells and virtual droplets not fully encapsulated by oil for selection and downstream mRNA sequencing. This microfluidic platform, which has 648 microchambers and a single-cell capture efficiency of up to 90%, as well as the ability to retrieve cells of interest, presents a new possibility for assessing tumor malignancy in clinical diagnostics. Further, Bai et al. demonstrated an integrated DEP-trapping-nanowell-transfer (dTNT) technique to break the Poisson limit and achieve scRNA-seq with a 91.84% single-cell capture rate, 82% transfer efficiency, and > 99% bead loading ( D). Valve-based scRNA-seq has been adopted increasingly due to its potential for efficient cell capture and selection of cells of interest for analysis. Zhang et al. leveraged hydrodynamic trapping and valve-based designs to create a highly parallel scRNA-seq platform called “Paired-seq.” Both individual cells and single barcode beads were trapped in paired chambers, and then the valve and pump combined system was capable of actively mixing them to allow lysis of cells and capture of mRNA. This system provides cell utilization efficiency of up to 95% and high mRNA detection accuracy ( E).
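The role of cell barcodes and UMIs described above can be summarized in a few lines of code: reads sharing a cell barcode, gene, and UMI are collapsed to a single molecule before counting. The reads below are fabricated for illustration and ignore barcode sequencing errors, which real pipelines must correct.

```python
from collections import defaultdict

def count_umis(reads):
    """Collapse aligned reads into per (cell barcode, gene) molecule counts, counting each
    unique UMI once so that PCR duplicates do not inflate expression estimates."""
    umis = defaultdict(set)          # (cell_barcode, gene) -> set of observed UMIs
    for cell_bc, umi, gene in reads:
        umis[(cell_bc, gene)].add(umi)
    return {key: len(umi_set) for key, umi_set in umis.items()}

# Hypothetical aligned reads: (cell barcode, UMI, gene). The repeated UMI for CELL_A/EGFR
# is counted once, reflecting a single captured transcript amplified twice.
reads = [
    ("CELL_A", "AACGT", "EGFR"),
    ("CELL_A", "AACGT", "EGFR"),   # PCR duplicate
    ("CELL_A", "GGTTA", "EGFR"),
    ("CELL_B", "CCAGT", "TP53"),
]
print(count_umis(reads))   # {('CELL_A', 'EGFR'): 2, ('CELL_B', 'TP53'): 1}
```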
In another valve-based approach, Agnihotri et al. designed a three-layer microfluidic system for “bottom-up” SCS by co-encapsulating natural killer (NK) cells and tumor cells in droplets. The device selectively released cells of interest for downstream sequencing analysis through the use of microvalves ( C). In this work, visual observation of immune cells together with tumor cells, combined with selection of cells of interest for subsequent scRNA-seq, has provided new insights into immunotherapy.
scRNA-seq methods are also being developed to incorporate a temporal dimension, enabling an understanding of the dynamic responses of cells to therapeutic stimuli and tumor progression. A single-cell, metabolically labeled new RNA tagging sequencing (scNT-seq) platform developed by Qiu et al. utilizes 4-thiouridine (4sU), a nucleotide analog that undergoes a chemical reaction resulting in a T-to-C switch in the sequencing data. Lin et al. demonstrated a Well-TEMP-seq assay with the merits of low cost ( < $0.1 per cell), high throughput (8 parallel samples of thousands of cells each), and efficient cell loading ( A). This technique utilized a microfluidic chip with quasi-static hydrodynamic microwells and 4sU-labeled metabolic RNA. The authors used this method to investigate the transcriptional temporal dynamics of colorectal cancer cells exposed to a clinical drug (5-AZA-CdR) and discovered upregulation of tumor suppressor genes and downregulation of oncogenes. Another approach for temporal transcriptional sequencing is based on the introduction of mutations at specific sites through CRISPR-Cas9. Quinn et al. investigated the metastatic nature of tumor cells via temporal transcription of Cas9 on tens of thousands of cells. With this approach, they demonstrated that the metastatic capacity of tumor cells is heterogeneous and that it varies orthogonally to proliferative potential and characteristic transcriptional profiles. Molecular mechanisms underlying the metastasis-suppressive phenotype of KRT17 in vitro and in vivo were also elucidated.
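To make the metabolic-labeling idea concrete, the sketch below estimates the fraction of "new" transcripts for a gene from per-read T-to-C conversion counts. This naive thresholding is only illustrative, since published scNT-seq and Well-TEMP-seq analyses fit statistical models that account for background conversion rates; the numbers shown are invented.

```python
def new_rna_fraction(read_conversions, min_conversions=1):
    """Naive estimate of the metabolically labelled ('new') transcript fraction for one
    gene in one cell: the share of reads carrying at least `min_conversions` T-to-C
    substitutions. Real pipelines model background conversions and incomplete labelling."""
    if not read_conversions:
        return 0.0
    labelled = sum(1 for c in read_conversions if c >= min_conversions)
    return labelled / len(read_conversions)

# Hypothetical per-read T-to-C conversion counts for one gene before and after drug exposure.
baseline = [0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
treated  = [2, 0, 1, 3, 1, 0, 2, 1, 0, 2]
print("new-RNA fraction, baseline:", new_rna_fraction(baseline))
print("new-RNA fraction, treated: ", new_rna_fraction(treated))
```

Tracking such fractions across genes and time points is what allows newly induced tumor suppressors to be separated from pre-existing transcripts after drug treatment.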
The spatial structure and arrangement of cell types within tumors is essential to cancer biology. However, in most scRNA-seq methods, the tissue would have to be dissociated prior to sequencing, which leads to a loss of spatial information. Spatial transcription techniques broadly belong to two categories. , One is imaging-based methods that are associated with the utilization of high-resolution imagers and in situ reagents, including in situ sequencing (ISS)-based methods (where transcripts are amplified and sequenced in tissue) and in situ hybridization (ISH)-based methods (where imaging probes are hybridized sequentially in tissue), such as single-molecule fluorescence ISH (smFISH), multiplexed error-robust fluorescence ISH (MERFISH), and sequential fluorescence ISH+ (seqFISH+). The second group is built on next-generation sequencing (NGS) technology and mainly relies on the spatial bar-code application prior to library preparation, such as Slide-seq. , The development of commercially available spatial transcriptomics such as Visium released by 10× Genomics, as well as GeoMx and CosMx released by Nanostring, enables spatial transcriptomics to become more accessible. Hirz et al. demonstrate that by combining droplet scRNA-seq and spatial transcriptomic data from Slide-seqV2, even more comprehensive information on tumor stromal cell and microenvironment interactions can be obtained. This study characterized tumor, immune, and non-immune stromal cell changes within the tumor microenvironment (TME) at high resolution for localized prostate cancer (PCa). They discovered that a suppressive immune microenvironment was established even in low-risk primary PCa. And cytotoxicity scores were significantly greater in “hot” tumors with high T cell infiltration, indicating that more functional T cells may influence the tumor response to immunotherapy. They further identified that the use of a CCL20-blocking antibody could block the communication between tumor-inflammatory monocytes and T-regulatory cells to significantly reduce tumor growth.
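For the NGS-based spatial methods, the key data structure is a mapping from spatial barcodes to array coordinates, which lets sequencing reads be pivoted into a spot-by-gene matrix that retains tissue position. The sketch below illustrates this with made-up barcodes, genes, and coordinates.

```python
import pandas as pd

# Each spatial barcode corresponds to a known (x, y) position on the capture array.
barcode_coords = pd.DataFrame({
    "barcode": ["AAAC", "AAAG", "AACT"],
    "x": [0, 0, 1],
    "y": [0, 1, 0],
})
# Aligned reads carry the spatial barcode of the spot they were captured on.
reads = pd.DataFrame({
    "barcode": ["AAAC", "AAAC", "AAAG", "AACT", "AACT", "AACT"],
    "gene":    ["KRT17", "CCL20", "KRT17", "EPCAM", "EPCAM", "CCL20"],
})

counts = (reads.groupby(["barcode", "gene"]).size()
               .unstack(fill_value=0)
               .reset_index()
               .merge(barcode_coords, on="barcode"))
print(counts)   # one row per spot: gene counts plus its (x, y) location in the tissue
```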
Epigenetic changes are thought to contribute significantly to tumor heterogeneity in response to drugs or the environment. At present, single-cell epigenetic analysis focuses on methylation patterns, histone modifications, and chromatin state. DNA methylation of cytosine (5mC) residues can be mapped genome wide with a variety of methods. Bisulfite conversion followed by sequencing (BS-seq) is a well-established approach that is regarded as the gold standard. Histone marks are commonly mapped using chromatin immunoprecipitation sequencing (ChIP-seq), in which ChIP is performed with antibodies specific to the mark of interest. The high background noise and low efficiency of this assay make ChIP-seq challenging at the single-cell level. Notably, Grosselin's research team established a high-throughput droplet microfluidic scChIP-seq assay to characterize histone modifications with high coverage (averaging up to 10,000 loci per cell), uncovering the existence of relatively rare chromatin states in tumor samples. The assay for transposase-accessible chromatin with sequencing (ATAC-seq) uses Tn5 transposase to cleave only DNA regions that are not protected by bound proteins and is already employed to measure dynamic changes in overall chromatin structure. Although ATAC-seq does not provide a complete picture of gene regulatory mechanisms, single-cell protocols for this type of profiling have proved to be relatively scalable. Satpathy et al. carried out droplet-based scATAC-seq on over 200,000 human blood and basal cell carcinoma cells. In addition to reconstructing cell-type-specific cis- and trans-regulatory profiles and B cell and dendritic cell (DC) developmental trajectories, the authors identified regulators of therapeutically responsive T cell subtypes as well as regulatory programs that control T cell exhaustion and are shared with T follicular helper (Tfh) cells, possibly via interleukin-21 (IL-21).
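The logic behind bisulfite-based methylation calling is simple enough to sketch: unmethylated cytosines are converted and read out as T, while methylated cytosines (5mC) remain C, so the methylation level at a cytosine is the fraction of covering reads that still show C. The example below is a deliberately simplified illustration (it ignores strand and CpG context); reads and positions are invented.

```python
# Minimal sketch of bisulfite methylation calling: bisulfite converts unmethylated C
# to U (read as T after PCR), while methylated C (5mC) is protected and still reads
# as C. The per-cytosine methylation level is therefore the fraction of covering
# reads that retain a C at that reference position. Reads below are invented.

def methylation_levels(reference: str, reads: list[str]) -> dict[int, float]:
    """Return {position: fraction of reads with C} for every C in the reference."""
    levels = {}
    for pos, ref_base in enumerate(reference):
        if ref_base != "C":
            continue
        calls = [read[pos] for read in reads if pos < len(read) and read[pos] in "CT"]
        if calls:
            levels[pos] = calls.count("C") / len(calls)
    return levels

if __name__ == "__main__":
    ref = "ACGTACGTCC"
    bisulfite_reads = [
        "ACGTATGTTC",   # C at 1 methylated; Cs at 5 and 8 converted; 9 kept
        "ACGTACGTTT",   # Cs at 1 and 5 methylated; 8 and 9 converted
        "ATGTATGTCC",   # Cs at 1 and 5 converted; 8 and 9 kept
    ]
    for pos, level in methylation_levels(ref, bisulfite_reads).items():
        print(f"position {pos}: {level:.2f} methylated")
```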
Assaying cancer proteomes at the single-cell level holds tremendous potential for understanding the ultimate, functional state and workings of an individual tumor. Currently, single-cell proteomic technologies mainly rely on mass spectrometry (MS)-based approaches to identify proteolytically digested peptides. The mass cytometry by time-of-flight (CyTOF) technique combines heavy-metal labeling of antibodies with MS. However, the protein content of a single cell is extremely small. Two general approaches have been developed to overcome this challenge. One is to reduce the sample preparation volume to the nanoliter scale, for example using the nanoPOTS nanoliter dispensing system to increase peptide yields. The other is to multiplex single cells with a carrier sample through chemical labeling with tandem mass tags (TMTs) to improve MS signal and recognition. Microfluidics has contributed to the development of several cutting-edge protein profiling techniques. Similar to the other sequencing platforms discussed thus far, microfluidics has particularly contributed to the miniaturization of single-cell proteomics. As an example, Gebreyesus et al. proposed an automated, all-in-one proteomic sample preparation microfluidic chip to achieve cell capture, imaging and counting, lysis, protein digestion, desalting and peptide collection, and characterization ( B). Together with data-independent acquisition, this chip enabled profiling of 1,500 ± 131 proteins from single cells at a false discovery rate of 1%, with identified protein abundances spanning five orders of magnitude. This microfluidic workflow improved proteomic sensitivity and provided good reproducibility compared with traditional single-cell proteomics assays. Microfluidic systems have also been designed to monitor specific protein markers at the single-cell level, providing promising tools for clinical drug-response monitoring in cancer precision medicine. A sophisticated example from Abdulla et al. entailed a multilayer, integrated, multifunctional microfluidic device ( C). This platform combined inertial force-based cell sorting, a membrane filter for separation and enrichment, a gel, and a bottom-most array of microwells. It accomplished rapid isolation and accurate monitoring of cisplatin-stimulated circulating tumor cells (CTCs) and dynamic detection of EpCAM expression on CTCs from patients with metastatic breast cancer, enabling monitoring of their therapeutic response.
Combining sequencing modalities, i.e., multiomics approaches, has the potential to further reveal the complex regulatory mechanisms of tumors. For instance, combining genome and transcriptome sequencing can establish the correlation between genotype and phenotype. gDNA-mRNA sequencing (DR-seq) and genome and transcriptome sequencing (G&T-seq) are two such methods. DR-seq amplifies gDNA and cDNA by MALBAC and CEL-seq, respectively, to construct DNA and RNA libraries, and it reduces the loss of nucleic acids by eliminating the physical separation of DNA and RNA. G&T-seq involves the physical isolation of polyadenylated mRNA from DNA in cell lysates via oligo-d(T)-coated magnetic beads, an approach that may result in the loss of mRNA and DNA molecules. The application of microfluidics can partly avoid the problems of cross-contamination and limited sensitivity associated with simultaneous sequencing of DNA and RNA. Xu et al. developed DMF-DR-seq, a nanoliter-scale single-cell DNA and RNA detection platform based on DMF. DMF-DR-seq has shown lower amplification bias, higher whole-genome coverage, and better gene detectability in DNA-/RNA-seq outcomes. By combining analyses of genomic and transcriptomic variants, the authors characterized aberrant transcriptome expression in CTCs and multiple myeloma cancer cells and identified genes participating in pathological progression, including the ABC transporter genes TAP1 and TAP2, which are involved in antigen processing. Joint analysis of the transcriptome and proteome facilitates interpretation of how transcriptional diversity becomes functional phenotypic diversity. Cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) and RNA expression and protein sequencing (REAP-seq) are two methodologies that involve the simultaneous analysis of proteins and transcripts with the use of droplet microfluidics and DNA barcode-coupled antibodies. Liu et al. engineered a unique microfluidic platform, DBiT-seq, to produce 2D mosaics of tissue pixels for spatial transcriptomic analysis via cross-flow delivery of two sets of barcodes, A1-50 and B1-50, into tissue sections ( D). Tissue slides were stained by pre-introducing a mixture of 22 antibody-derived DNA tags (ADTs), each of which contains a unique barcode, allowing the co-detection of tissue transcriptomes and proteins. Xu et al. described a microfluidic platform with high cell utilization, termed multiple pairing sequence, that enabled efficient single-cell/single-bead capture and pairing based on hydrodynamic differential flow resistance. They used this technique to classify approximately 1,000 breast cancer cells of different types and characterize expression dynamics. Correlation between mRNAs and proteins was quantified using DNA-encoded antibodies and barcoded beads incubated with single cells. The findings suggested that the correlation between mRNA and protein at the single-cell level did not match the high consistency exhibited at the population level. For example, ErbB2 protein was reduced in SK-BR-3 cells after trastuzumab treatment, while ERBB2 mRNA remained unchanged. The reduction of ErbB2 protein was possibly ascribed to drug-induced protein endocytosis or ErbB2 shedding, which may prevent binding of trastuzumab and could lead to the development of drug resistance. Joint interrogation of the transcriptome and epigenome can provide insights into the evolution and dynamics of gene regulatory networks within tumors.
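The single-cell versus population-level correlation contrast described above can be reproduced with a small simulation. The sketch below (fabricated numbers, not data from the cited studies) correlates paired mRNA and protein measurements cell by cell and then again on pseudobulk averages, showing how cell-level noise can mask agreement that is clear at the population level.

```python
# Toy sketch of the mRNA-protein comparison described above: for a CITE-seq-style
# dataset with paired RNA counts and antibody-derived tag (ADT) counts per cell,
# correlation computed across single cells can be much weaker than the correlation
# of population (pseudobulk) averages. All numbers below are fabricated.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_cells, n_samples = 300, 10
sample_of_cell = rng.integers(0, n_samples, size=n_cells)

# Per-sample "true" expression level shared by mRNA and protein...
sample_level = rng.uniform(1.0, 10.0, size=n_samples)
# ...plus large cell-to-cell noise that is independent between the two modalities.
mrna = sample_level[sample_of_cell] + rng.normal(0, 3.0, size=n_cells)
protein = sample_level[sample_of_cell] + rng.normal(0, 3.0, size=n_cells)

# Single-cell level: correlate the paired measurements cell by cell.
rho_cell, _ = spearmanr(mrna, protein)

# Population level: correlate per-sample averages (pseudobulk).
mrna_bulk = np.array([mrna[sample_of_cell == s].mean() for s in range(n_samples)])
prot_bulk = np.array([protein[sample_of_cell == s].mean() for s in range(n_samples)])
rho_bulk, _ = spearmanr(mrna_bulk, prot_bulk)

print(f"single-cell Spearman rho: {rho_cell:.2f}")   # modest
print(f"pseudobulk Spearman rho: {rho_bulk:.2f}")    # high
```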
Single-cell methylation and transcriptome sequencing (scM&T-seq) is a method that involves a mild lysis protocol to prevent mixing of DNA and RNA. Methylation is analyzed by single-cell reduced-representation bisulfite sequencing (scRRBS), and RNA is analyzed by Smart-seq2. Chen et al. conducted a combined analysis of chromatin accessibility and mRNA based on droplet microfluidic co-encapsulation of Tn5 transposase-permeabilized nuclei and oligo-bearing barcoded beads. After the droplets were heated, the fragmented accessible sites and mRNA were released and harvested, producing a pair of RNA-seq and chromatin accessibility libraries linked by their shared cellular barcodes. This droplet-based single-nucleus chromatin accessibility and mRNA expression sequencing (SNARE-seq) method provides better cell cluster separation, with the ability to capture 4–5 times more accessible sites.
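The key computational step implied by "linked by their shared cellular barcodes" is a join: the RNA and accessibility libraries are processed separately, but because both carry the same cell barcode, their per-cell profiles can be merged into one multiomic record. A minimal, invented example of that join:

```python
# Minimal sketch of the SNARE-seq-style barcode linkage: per-cell RNA counts and
# per-cell chromatin-accessibility fragments are joined on the shared cell barcode;
# cells observed in only one modality are dropped. Barcodes, genes, and peak
# coordinates below are invented.

rna_counts = {
    "CELL_AAA": {"KRT17": 12, "EPCAM": 3},
    "CELL_CCG": {"KRT17": 0, "EPCAM": 9},
    "CELL_GTT": {"KRT17": 5, "EPCAM": 5},
}

atac_fragments = {
    "CELL_AAA": {"chr17:39700000-39700500": 4},
    "CELL_CCG": {"chr2:47370000-47370500": 6},
    "CELL_TTT": {"chr2:47370000-47370500": 1},   # barcode seen only in the ATAC library
}

# Join the two modalities on the shared barcode; keep only cells observed in both.
shared = sorted(rna_counts.keys() & atac_fragments.keys())
multiome = {bc: {"rna": rna_counts[bc], "atac": atac_fragments[bc]} for bc in shared}

for barcode, record in multiome.items():
    print(barcode, record)
```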
Tumors and their infiltrating environment (i.e., the TME) comprise a space with diverse biochemical and biomechanical properties as well as complex dynamics. Knowledge of tumor clonal diversity and kinetics can bring a valuable new perspective on metastasis, drug treatment, and resistance. A growing proportion of SCS studies are therefore conducted from a spatial and temporal viewpoint. Multiomic profiling of cells is another area for future growth, providing a more multidimensional and comprehensive view of tumor biology. A growing number of scMultiomics data analysis tools can help deliver novel breakthroughs in the precision treatment of tumors by accelerating the discovery of pathogenic mechanisms and therapeutic approaches for a wide range of cancers. Meanwhile, more work is necessary to validate the results of scMultiomics data in a consolidated assay. Single-cell CRISPR screening is one choice, with techniques that enable simultaneous phenotypic evaluation of target gene perturbations in combination with scMultiomics. In general, accurate and rapid analysis of massive numbers of cells at an affordable cost would be the ideal blueprint for bringing SCS technologies to applied medical contexts. Incorporating microfluidics into SCS can improve sequencing throughput and reduce cost to a certain extent. Droplet- and microwell-based SCS are two microfluidic forms of SCS that have been commercialized. The experimental cost of the most popular droplet-based platform, 10× Genomics, is approximately $0.87 per cell. Droplet size and number are very flexible and controllable, but the efficiency of droplet co-encapsulation is not ideal, and the fragility of droplets may interfere with sequencing outcomes. Comparatively, microwell-based technology can be more productive: microwell-based Smart-seq3 is priced at approximately $1.14 per cell, an average price that is at an affordable level. An additional issue is that shallow sequencing of a large number of cells (e.g., tens of thousands or more) can minimize random variation and offer a more robust approach, but the overall cost of SCS remains prohibitively high at this scale. Combining droplet- and microwell-based microfluidics with sample multiplexing technologies opens a further possibility to improve throughput and decrease cost. In parallel, the emergence of DMF as an innovative microfluidic technology brings the whole SCS workflow onto the microfluidic platform, foreshadowing feasible automation. Cell-in-library-out scRNA-seq (Cilo-seq), a DMF platform designed by Zhang et al., is reusable and low cost; more importantly, it achieves outstanding accuracy with reduced nucleic acid loss and background interference. However, the assay handles only one cell at a time, so achieving high throughput at high sensitivity remains another challenge. This is an era in which SCS technology is flourishing: the first scRNA-seq, reported in 2009, sequenced the transcriptomes of individual oocytes and embryos, whereas by 2017 the 10× Genomics technology had advanced to sequencing tens of thousands of cells. The introduction of microfluidics not only reduces costs by reducing sample volume but also helps to achieve high throughput. Integration of SCS procedures on a microfluidic platform promises to bring automation and better sequencing accuracy.
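A quick back-of-the-envelope calculation using the per-cell figures quoted above illustrates why experiments at the scale of tens of thousands of cells remain costly; the totals below ignore sequencing depth, reagent waste, and failed runs, and are purely illustrative.

```python
# Rough cost comparison based on the per-cell figures quoted in the text.
# These are assay-only estimates; sequencing depth and reagent overheads are ignored.

PER_CELL_COST = {"droplet (10x Genomics)": 0.87, "microwell (Smart-seq3)": 1.14}

for n_cells in (1_000, 10_000, 50_000):
    for platform, cost in PER_CELL_COST.items():
        print(f"{n_cells:>6} cells on {platform:<24}: ~${n_cells * cost:,.0f}")
```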
Therefore, there is a strong expectation that more automated and simplified SCS platforms will become available for clinical use in the future. Continued multidisciplinary development in molecular biology, computer engineering, and mechanical engineering will be needed to address the existing challenges and realize widespread, personalized profiling of patient cancers.
Clinical outcomes of arthroscopic all-inside anterior talofibular ligament trans-augmentation repair versus modified trans-augmentation repair for patients with chronic ankle instability | a6572df3-90ac-49a6-a86a-5d5ac5601746 | 11829408 | Surgical Procedures, Operative[mh] | Acute ankle sprains are among the most common sports-related injuries, with up to 70% of recurrent cases progressing to chronic ankle instability (CAI) owing to long-term symptoms. Patients with CAI typically experience persistent ankle pain, swelling, and functional deficits, significantly affecting their quality of life and athletic performance. Nonsurgical treatments for CAI, such as functional rehabilitation, proprioceptive training, aquatic therapy, and bracing, are widely employed but are associated with a failure rate of approximately 20% and slower recovery outcomes. When non-surgical interventions fail to achieve the desired therapeutic efficacy, surgical options are considered. These include anatomic repair, anatomic reconstruction, tenodesis procedures, and arthroscopic repair. In 2016, the International Ankle Association issued a consensus statement on CAI, offering evidence-based guidelines for understanding, preventing, and treating the condition, while outlining objectives for future research. Accompanying isolated unstable syndesmotic injury has also become one of the difficulties in treatment. In 2018, Vega et al. introduced anterior talofibular ligament (ATFL) suture augmentation repair under total arthroscopy to address cases with poor residual ligament tissue quality, achieving favorable results. With advancements in arthroscopic technology, its clinical applications have expanded, and studies increasingly support its feasibility and clinical efficacy for CAI management. One persistent challenge in CAI treatment is the detachment, rupture, or avulsion of the anterior talofibular ligament at the talar end, which complicates effective surgical repair. To address this, we propose a modified trans-augmentation repair surgical protocol, building on the principles of modified suture augmentation repair. This technique involves suturing the injured ATFL with surrounding intact tissues to enhance ligament repair and stabilise the ankle joint. However, the clinical efficacy and anatomical validation of this surgical protocol remain underexplored. This study aimed to compare the clinical outcomes of ATFL trans-augmentation (TA) repair and modified trans-augmentation (MTA) repair in patients with CAI. Outcomes were assessed using the American Orthopedic Foot and Ankle Society (AOFAS) score, visual analog scale (VAS), anterior drawer test, and patient satisfaction. We hypothesise that MTA repair will yield superior clinical outcomes compared to ATFL-TA repair. This retrospective case–cohort study aimed to compare and evaluate the clinical outcomes of ATFL TA repair and MTA repair protocols. The study adhered to the principles of the Declaration of Helsinki, and written informed consent was obtained from all patients. All surgeries were performed by the same surgeon between February 2019 and January 2021. Based on the surgical method, patients were categorised into the TA group or the MTA group.
Inclusion and exclusion criteria The inclusion criteria were as follows: (1) no prior surgical treatment of the ankle; (2) lack of significant improvement after > 6 months of nonsurgical interventions, including bracing, physical therapy, and proprioceptive training; (3) clinical diagnosis of chronic ankle instability; (4) mechanical instability confirmed via the anterior drawer test; (5) patient age under 60 years; and (6) at least 30 months of follow-up. The exclusion criteria were as follows: (1) history of fractures or prior ankle surgery, (2) abnormal skeletal development, (3) neuromuscular diseases, (4) systemic diseases such as diabetes, and (5) follow-up duration < 30 months. Surgical technique ATFL trans-augmentation repair The surgery was performed under lumbar epidural anesthesia with the patient in the supine position. The ankle joint was placed in dorsiflexion and lateral recumbency. Three approaches were established: (1) anteromedial (at the distal end of the ankle line, lateral to the third peroneal tendon); (2) anterolateral (approximately 0.5 cm distal to the ankle line and lateral to the third peroneal tendon); and (3) accessory anterolateral (approximately 0.5 cm distal to the ankle line, anterior to the fibula, and 1.0 cm from the fibular tip). A No. 0 nonabsorbable suture (Smith & Nephew, Arthrex) was looped to tighten the superior ATFL bundle. A knotless anchor (Pushlock 2.9 mm × 15 mm, Arthrex) was implanted on the talar side. A guide drill was positioned through the anterolateral approach, centered on the talar neck. Care was taken to avoid violating the subtalar joint space. The suture tension was adjusted before anchor implantation. The ankle was held in dorsiflexion and eversion during the fixation of the ATFL superior bundle, and another anchor was implanted at the fibular footprint of the ATFL. Sutures were secured and trimmed, and the wound was closed (Fig. ). Modified trans-augmentation repair The MTA repair followed the same steps as the TA repair, except for modifications in suture application. Two No. 0 non-absorbable high-strength sutures (Smith & Nephew, Arthrex) were used to loop and tighten the superior and inferior ATFL bundles as a unit. The remaining steps followed the TA repair protocol (Fig. ). Postoperative rehabilitation Both groups followed the same postoperative rehabilitation. Weeks 0–3: A cast maintained the ankle in a neutral position. Weeks 3–6: The cast was replaced with a controlled ankle-motion boot, and physical therapy commenced. Post-6 weeks: The boot was removed, and patients resumed normal daily activities based on rehabilitation progress. Clinical evaluation We conducted a detailed evaluation of patients' clinical outcomes. Examinations were performed preoperatively and postoperatively at 1 week, 1 month, 6 months, and 1 year, with annual assessments thereafter. Clinical efficacy was evaluated using the outcomes from the most recent follow-up. American Orthopedic Foot & Ankle Society score The AOFAS score is a widely used tool to assess the condition of the ankle joint. Scores range from 0 to 100, with higher scores reflecting better ankle joint functionality and health. Visual analog scale The VAS is a standardised metric for evaluating pain intensity. It is scored from 0 to 10, with higher scores indicating greater levels of pain. Anterior drawer test The anterior drawer test was used as a measure of ankle joint stability.
During the test, the patient's lower leg was positioned to hang over the edge of the examination table. A specialist stabilized the distal tibia with one hand while applying anterior force to the calcaneus. Stability was classified into four grades: grade 0 (no laxity), grade 1 (mild laxity), grade 2 (moderate laxity), and grade 3 (severe laxity). Patient satisfaction Patient satisfaction was evaluated using a standardised questionnaire, with scores ranging from 0 (completely dissatisfied) to 10 (completely satisfied). Statistical analysis Statistical analyses were performed using IBM SPSS Statistics (version 22.0; IBM, Armonk, NY, USA) and GraphPad Prism (GraphPad Software, San Diego, CA, USA). A P-value < 0.05 was considered statistically significant. The Shapiro–Wilk test was employed to evaluate the normality of the data distribution. Continuous variables, such as age, follow-up duration, body mass index (BMI), and preoperative and postoperative AOFAS and VAS scores, were analysed using the chi-square test. To minimize bias, a power analysis was conducted using PASS software (PASS package, NCSS, USA). Propensity Score Matching (PSM) was applied, with age, sex, BMI, preoperative AOFAS scores, preoperative VAS scores, and preoperative anterior drawer test results used as covariates. A 1:1 matching ratio with a caliper width of 0.05 was implemented. At a bilateral α level of 0.05 and a sample size of 73, the statistical power achieved was 90%.
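As an illustration of the matching procedure described above (not the authors' code), the following sketch fits a propensity model on simulated covariates and performs greedy 1:1 nearest-neighbor matching within a 0.05 caliper. The simulated data, covariate distributions, and the greedy matching strategy are all assumptions made for demonstration.

```python
# Hedged sketch of 1:1 propensity score matching with a caliper: a logistic model of
# group membership on the listed covariates yields a propensity score, and each MTA
# patient is greedily matched to the nearest unmatched TA patient within a 0.05
# caliper. All data below are simulated and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_ta, n_mta = 40, 33

def simulate(n, age_mu):
    # Covariates: age, BMI, preoperative AOFAS, preoperative VAS (invented distributions).
    return np.column_stack([
        rng.normal(age_mu, 8, n),   # age
        rng.normal(24, 3, n),       # BMI
        rng.normal(68, 6, n),       # preoperative AOFAS
        rng.normal(6, 1, n),        # preoperative VAS
    ])

X = np.vstack([simulate(n_ta, 34), simulate(n_mta, 36)])
group = np.array([0] * n_ta + [1] * n_mta)   # 0 = TA, 1 = MTA

# Propensity score = P(group == MTA | covariates).
ps = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)[:, 1]

caliper = 0.05
available_ta = [i for i in range(len(group)) if group[i] == 0]
matches = []
for j in np.where(group == 1)[0]:
    if not available_ta:
        break
    nearest = min(available_ta, key=lambda i: abs(ps[i] - ps[j]))
    if abs(ps[nearest] - ps[j]) <= caliper:
        matches.append((j, nearest))
        available_ta.remove(nearest)

print(f"matched pairs within caliper {caliper}: {len(matches)}")
```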
From a total of 121 orthopaedic patients treated at our hospital between February 2019 and January 2021, 79 patients with CAI were identified. Based on the inclusion and exclusion criteria, six patients were excluded: four due to arthritis and two owing to systemic diseases. Consequently, 73 patients (42 males and 31 females) were included in this study (Fig. ). Of these, 40 patients (21 males and 19 females; follow-up duration: 35.6 ± 3.7 months) were assigned to the TA group, and 33 patients (21 males and 12 females; follow-up duration: 35.7 ± 3.5 months) were assigned to the MTA group. PSM was used to ensure comparability between the two groups. There were no significant differences in age, BMI, preoperative AOFAS scores, preoperative VAS scores, or preoperative anterior drawer test scores between the two groups (Table ). Additional baseline patient information is presented in Table . The mean postoperative AOFAS score was significantly higher in the MTA group (91.0 ± 7.1) compared to the TA group (83.3 ± 9.4, P < 0.001) (Table ; Fig. ). Similarly, the mean postoperative VAS score was significantly lower in the MTA group (1.2 ± 0.4) compared to the TA group (1.4 ± 0.5, P = 0.01) (Table ; Fig. ). Patient satisfaction was also higher in the MTA group (8.6 ± 0.9) than in the TA group (8.1 ± 1.0, P = 0.02) (Table ; Fig. ). No significant differences were observed between the two groups in terms of anterior drawer test results (Table ). In the MTA group, 21 patients (63.6%) exhibited grade 1 laxity, and 12 patients (36.4%) exhibited grade 0 laxity. In comparison, the TA group had 30 patients (75%) with grade 1 laxity and 10 patients (25%) with grade 0 laxity (Table ). Postoperative complications were minimal. In the TA group, two patients experienced superficial skin infections, and one patient had sural nerve injury. In the MTA group, only one patient experienced a superficial skin infection (Table ). The primary contribution of this study lies in the development of the MTA suturing technique and the rigorous evaluation of its clinical efficacy compared with the traditional TA suturing technique. CAI remains a significant challenge in clinical practice.
There is an ongoing debate regarding whether conservative or surgical treatments should be prioritized for patients with CAI. Anatomic open Broström-Gould reconstruction is widely considered the gold standard for repairing the injured ATFL and restoring athletic performance. However, in 2020, Vega et al. introduced ankle arthroscopy in CAI surgery, achieving commendable clinical outcomes. Since then, minimally invasive arthroscopic surgery has gained traction, leading to the development of various modified techniques, such as modified Broström procedures, ATFL distal fascicle transfer repair, ligament reconstruction with InternalBrace™, and modified ATFL suture augmentation repair. In 2021, Vega et al. classified ATFL ligament injuries into four types. Although injuries at the talar end of the ATFL are relatively rare, they present unique surgical challenges. Drawing on extensive clinical experience, our research team proposed the MTA suturing technique specifically for such cases. This approach involves suturing the injured ATFL with surrounding intact tissues to achieve a more even force distribution and improved structural stability. The MTA group demonstrated superior clinical outcomes compared with the TA group, as reflected in significantly higher AOFAS scores. When compared with prior studies, such as those by Tian et al. and Matteo et al., our study's latest follow-up AOFAS score (91) was comparable, further supporting the efficacy of the MTA technique. The improvement in the AOFAS score from a preoperative average of 68.2 to 91 highlights the feasibility and practicality of this approach. Nevertheless, further anatomical and biomechanical studies are required to validate these findings. In addition to AOFAS scores, the MTA group exhibited better outcomes in VAS scores and patient satisfaction. The final follow-up VAS scores in this study (1.2) were consistent with those reported by Hu et al. (VAS: 1.5) and Tian et al. (VAS: 3.31), reaffirming the feasibility of the MTA suturing repair technique. By suturing the injured ATFL together with the surrounding intact tissues, ankle joint instability was significantly reduced, potentially explaining the low pain levels and high satisfaction among patients. The anterior drawer test revealed that most patients exhibited grade 0 laxity postoperatively, with a small number showing grade 1 laxity. This variability may be attributable to individual differences in the ATFL ligament or deviations in rehabilitation protocols. In terms of postoperative complications, both groups demonstrated favourable outcomes. The TA suturing group reported two cases of superficial wound infection and one case of peroneal nerve injury, whereas the MTA suturing group had one case of superficial wound infection. Although the overall incidence of postoperative complications was low, these findings underscore the need for continued vigilance. The MTA repair technique demonstrated superiority over the TA repair technique in AOFAS scores, VAS scores, and patient satisfaction. This may be attributed to the approach of the MTA repair technique in treating the superior and inferior bundles of the ATFL as a single entity. This approach enhances the structural stability of the repaired ligament and reduces impingement to some extent. Furthermore, existing research suggests that mechanoreceptors in the ankle joint ligaments contribute minimally to joint proprioception. Therefore, the MTA repair technique is unlikely to significantly impact ankle joint proprioception.
This study had certain limitations. First, this was a single-center, retrospective study with a limited sample size. Therefore, it is necessary to conduct further multicenter, randomized controlled studies. Second, the follow-up period was < 3 years, warranting longer-term follow-up studies. Despite these limitations, we present these findings with the aim of inspiring further research and innovation among orthopedic physicians in related fields. In summary, MTA suturing repair outperformed TA suturing repair in terms of AOFAS scores, VAS score, and patient satisfaction, demonstrating its clinical efficacy as a viable option for patients with CAI. However, future studies on the long-term outcomes are warranted. |
Comparative pathology and immunohistochemistry of Newcastle disease in domestic chicken ( | 2908f7c7-4e0b-4a8a-860f-629dd8945327 | 10219819 | Anatomy[mh] | Newcastle disease (ND) is highly pathogenic in reared chicken but less pathogenic in other birds that are important in spreading the Newcastle disease virus (NDV). It has high morbidity and mortality and can spread rapidly . NDV has widely infected bird species, such as chickens, ducks, geese, pigeons, parrots, and several other birds . Ducks and geese act as a natural reservoir that does not show clinical symptoms but can transmit the deadly disease to chickens . compared six breeds of ducks and found mallard ducks were the most susceptible and Pekin ducks were the more resistant; they found that the susceptibility of ducks to NDV decreased with age, and most deaths occurred between 15 and 30 days of age. reported that 10 domestic chicken samples collected from field cases in 2008–2009 in Bali, Indonesia, were positive to be infected by the acute NDV. NDV infection in domestic chicken, broiler chicken, and waterfowl in Aceh, Indonesia, is dominated by virulent strains . In addition, several pathogenic isolates have been isolated from reared ducks ( ; Zhang et al. , 2011; ). ND can cause damage to lymphoid tissue and macrophages. Previous studies reported that histopathologically ND causes lymphoid follicular depletion, necrosis, and apoptosis in the caecal tonsil, thymus, and bursa of Fabricius and spleen chicken . Skeletal muscle congestion, mild intestinal erosion, and mild hemorrhagic cecal tonsils in ducks . Apoptosis is a significant factor in viral pathogenesis, especially in the mechanism of clearance virus by the immune system . NDV causes apoptosis in the spleen of domestic chickens , chicken embryo fibroblast cells , and chicken macrophages . There are many things yet unknown on the cause of why ducks are more resistant to ND disease compared to chickens. Therefore, this study was designed to compare the clinical symptoms features, pathological lesions, viral distribution, and apoptosis response due to NDV in domestic chickens and Alabio ducks.
Research procedure One-day-old domestic chickens (Gallus gallus domesticus) and one-day-old Alabio ducks (Anas platyrhynchos Borneo) were reared in groups in semi-isolated cages until 6 weeks of age. Feed and drinking water were provided ad libitum. The treatment groups AC-A (domestic chicken group, n = 20) and AC-I (Alabio duck group, n = 20) were infected with the velogenic NDV isolate Ducks/Aceh Besar_IND/2013/eoAC080721 at a dose of 10⁶ ELD₅₀. The control groups K-A (domestic chicken control, n = 20) and K-I (Alabio duck control, n = 20) were inoculated with PBS. All inoculations were administered intraorbitally at a volume of 0.1 ml. Before infection, the domestic chickens and Alabio ducks tested negative for NDV antibody by the hemagglutinin inhibition test. Clinical symptoms and gross anatomy observation Clinical symptoms were observed from day 1 until day 7 post-infection (PI). Three individuals from each group were necropsied on days 1, 2, 3, 5, and 7 PI. Gross pathological changes in the proventriculus, duodenum, cecal tonsil, trachea, lung, heart, thymus, bursa of Fabricius, spleen, kidney, and brain were observed. All organ samples were cut into 1 × 1 × 0.5 cm pieces and fixed in 10% neutral buffered formalin for a minimum of 24 hours before being processed into paraffin blocks for histopathology. Histopathology examination Each organ was trimmed to 5 mm, placed in a tissue cassette, and processed in an automatic tissue processor for dehydration, clearing, paraffin infiltration, embedding, and paraffin blocking. Finally, the blocks were cut into 5 µm sections with a rotary microtome for hematoxylin and eosin or immunohistochemistry staining. Hematoxylin and eosin staining Hematoxylin and eosin staining started with deparaffinization in xylol and rehydration in ethanol. Staining was performed by submerging the preparations in Mayer's hematoxylin, followed by eosin. The tissues were then dehydrated with 96% ethanol and two changes of absolute ethanol. Clearing was performed by submerging the tissue in xylol. The last step was mounting with gum and a cover glass. Histopathological observation was performed by examining lesion severity using the following criteria: lesions spreading focally, multifocally, or diffusely were graded as mild, moderate, or severe, respectively. The examination was conducted at 100× magnification in five fields of view. Immunohistochemistry staining The immunohistochemistry staining followed the procedures recommended in the catalog from Dako North America Inc. (Dako), with several modifications. Tissue sections mounted on object glasses coated with 1% poly-L-lysine were deparaffinized in xylol and then rehydrated in ethanol. Antigen retrieval was performed by boiling the preparations in citrate buffer at 100°C for 15 minutes. Blocking of endogenous peroxidase activity was performed by submerging the preparations in 3% H₂O₂ for 35 minutes at room temperature and washing them with PBS three times for 5 minutes each. Blocking of non-specific protein binding was conducted using 10% normal fetal bovine serum for 35 minutes at room temperature, followed by three further 5-minute PBS washes. Each tissue was covered with drops of the primary antibody, a rabbit anti-NDV polyclonal antibody (1:250 in PBS); for caspase-3, the primary antibody was Polyclonal Anti-Casp3 (HPA002643, Sigma-Aldrich; 1:250 in PBS). The preparations were then incubated overnight at −5°C.
The preparations were washed with PBS three times for 5 minutes each at room temperature. They were then incubated with the secondary antibody, Dako REAL™ EnVision™/HRP, Rabbit/Mouse (K5007), for 40 minutes at room temperature and washed with PBS three times for 5 minutes each at room temperature. The preparations were then treated with Dako REAL™ DAB+ chromogen in Dako REAL™ substrate buffer (K5007) for 40 seconds at room temperature, washed with running water for 10 minutes, and washed with PBS three times for 5 minutes each at room temperature. Mayer's hematoxylin was used as the counterstain. A preparation was regarded as positive if brown-stained antigen was observed and as negative if the entire preparation appeared blue. Immunopositivity against NDV in each organ was scored as mild (1–10 immunopositive cells), moderate (11–20 immunopositive cells), or severe (more than 20 immunopositive cells). The immunohistochemistry results for NDV and caspase-3 were examined at 400× magnification in five fields of view. Data analysis The data from the clinical symptom and pathological lesion examinations were analyzed descriptively. The immunohistochemistry results for NDV were scored based on the number of immunopositive cells, while the caspase-3 immunopositive reaction was analyzed based on the percentage of caspase-3-positive area. Ethical approval The use and treatment of experimental animals in this study were approved by the animal ethics committee of the Institute for Research and Community Service, Bogor Agricultural University.
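The scoring rules above are simple enough to express directly; the minimal sketch below (invented field counts and pixel areas, not the study's data) applies the stated cell-count cut-offs for NDV immunopositivity and expresses the caspase-3 response as a positive-area percentage.

```python
# Minimal sketch of the scoring rules described above: NDV immunopositivity is graded
# from the number of immunopositive cells per 400x field, and caspase-3 is expressed
# as the percentage of positive area. Field counts and areas below are invented.

def score_ndv(immunopositive_cells: int) -> str:
    """Grade NDV immunopositivity from cells counted in one field of view."""
    if immunopositive_cells == 0:
        return "negative"
    if immunopositive_cells <= 10:
        return "mild"
    if immunopositive_cells <= 20:
        return "moderate"
    return "severe"

def caspase3_positive_area(positive_pixels: int, total_pixels: int) -> float:
    """Caspase-3 response as the percentage of positive area in a field."""
    return 100.0 * positive_pixels / total_pixels

# Five fields of view for one organ, as in the examination protocol.
ndv_counts = [3, 8, 14, 22, 11]
print([score_ndv(n) for n in ndv_counts])
print(f"{caspase3_positive_area(1_250, 50_000):.1f}% caspase-3-positive area")
```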
Clinical symptoms observation On day 1 PI, domestic chickens suffered from depression and lethargy; on day 2 PI, conjunctivitis appeared; on day 3 PI, difficulty breathing, greenish-white diarrhea, and anorexia appeared; and on days 5 and 7 PI, nervous signs such as muscle tremor, difficulty standing, and drooping wings, along with mild edema of the head, appeared. Deaths in domestic chickens started to appear on day 4 PI. The Alabio duck group only appeared depressed and mildly lethargic starting from day 5 PI. The control domestic chicken and Alabio duck groups did not show any clinical symptoms. Gross anatomy observation The control domestic chicken and Alabio duck groups showed no gross anatomical lesions. The proventriculus, duodenum, and cecal tonsil showed no lesions on day 1 PI in domestic chickens, but in Alabio ducks the proventriculus showed diffuse catarrhal exudation on the mucosal layer. The domestic chicken proventriculus showed multifocal petechiae on day 3 PI and diffuse hemorrhage on days 5 and 7 PI. The Alabio duck proventriculus showed diffuse catarrhal exudation on days 3 and 5 PI, and no lesion appeared on day 7 PI. From day 3 until day 7 PI, the duodenum appeared hemorrhagic with multifocal necrosis in domestic chickens; in contrast, no lesion appeared in Alabio ducks on any observation day. From day 3 PI until the last observation day, the cecal tonsil showed multifocal hemorrhage in domestic chickens, but no lesion appeared in Alabio ducks on any observation day. On day 5 PI, the trachea showed multifocal to diffuse congestion in domestic chickens and focal congestion in Alabio ducks. The lungs on days 1 and 3 PI showed multifocal congestion. On every following observation day, the lesions spread diffusely in domestic chickens, whereas in Alabio ducks multifocal congestion appeared only on day 1 PI. On all observation days, the thymus of domestic chickens had half of its lobes atrophied, with petechiae/hemorrhage; in contrast, in Alabio ducks, part of the lobes showed petechiae on all observation days. The spleen of domestic chickens showed congestion, swelling, and multifocal necrosis on days 1, 3, and 5 PI, followed by atrophy on the later days. In Alabio ducks, the spleen showed swelling and multifocal necrosis only on day 3 PI. The Fabricius bursa of domestic chickens appeared swollen with focal hemorrhage on day 1 PI, followed on the remaining observation days by atrophy and diffuse hemorrhage, whereas in Alabio ducks no lesion was found on any observation day. Starting from day 3 PI, the domestic chicken heart showed swelling, while the Alabio duck heart showed no gross pathological changes on any observation day. Domestic chicken kidneys appeared swollen with multifocal paleness on all observation days, while in Alabio ducks no lesion was seen on any observation day. The domestic chicken brain started to show edema and multifocal congestion from day 5 PI, while in Alabio ducks the lesion was focal. Histopathology examination The control domestic chicken and Alabio duck groups showed no histopathological lesions. The domestic chicken proventriculus on day 1 PI showed congestion, focal mononuclear cell proliferation in the mucosal layer, and necrosis and focal epithelial cell desquamation in the proventriculus glands.
On day 3 PI, the lesions spread in a multifocal pattern; on days 5 and 7 PI, epithelial cells showed necrosis with diffuse hemorrhage in the muscle layer, together with multifocal epithelial cell desquamation as well as necrosis, congestion, and multifocal mononuclear cell infiltration in the proventriculus glands. The Alabio duck proventriculus on day 1 PI showed focal epithelial cell desquamation; on days 3 and 5 PI it showed congestion, epithelial cell desquamation, and multifocal mononuclear cell proliferation, while the proventriculus glands showed multifocal necrosis, epithelial cell desquamation, and focal congestion. On day 7 PI, no lesion was seen in the proventriculus. On days 1 and 3 PI, the domestic chicken duodenum showed hemorrhage, congestion, and focal crypt epithelial cell necrosis. On all following observation days, multifocal hemorrhage with intestinal villus desquamation, diffuse crypt epithelial cell necrosis, and multifocal proliferation of crypt epithelial cells in the lamina propria were observed. The Alabio duck duodenum on days 3 to 5 PI showed multifocal crypt epithelial necrosis, congestion, hemorrhage, and focal goblet cell proliferation. On day 7 PI, there were multifocal crypt epithelial cell necrosis and focal mononuclear cell proliferation in the lamina propria. The cecal tonsil of domestic chickens on all observation days showed congestion, hemorrhage, necrosis (karyopyknosis) of crypt epithelial cells, mononuclear cell proliferation in the lamina propria, and lymphocyte depletion inside the lymphoid follicles, spreading in a multifocal pattern. The Alabio duck cecal tonsil on days 1 and 3 PI showed congestion, necrosis (karyopyknosis) of crypt cells, and multifocal mononuclear cell proliferation in the lamina propria. On days 5 and 7 PI, multifocal mononuclear cell proliferation in the lamina propria and depletion of lymphocytes in the lymphoid follicles were seen. The domestic chicken trachea on day 1 PI showed congestion, epithelial cell desquamation, focal inflammatory cell infiltration, and diffuse edema. On the following observation days, the lesions spread in a multifocal pattern. The Alabio duck trachea on all observation days showed congestion, epithelial desquamation, focal inflammatory cell infiltration, goblet cell proliferation, and diffuse edema. The domestic chicken lung on day 1 PI showed hemorrhage, congestion, edema, and multifocal mononuclear cell proliferation. On all following observation days, the lesions spread in a diffuse pattern. The Alabio duck lungs on all observation days showed congestion, edema, hemorrhage, and multifocal mononuclear cell proliferation. The domestic chicken thymus, Fabricius bursa, and spleen on days 1 and 3 PI showed lymphoid depletion, congestion, and multifocal vasculitis. On days 5 and 7 PI, the lesions spread in a diffuse pattern accompanied by cyst formation, cell lysis, and replacement of part of the tissue by connective tissue. The Alabio duck thymus, Fabricius bursa, and spleen on day 1 PI showed lymphoid depletion, congestion, and focal vasculitis. On days 3 and 5 PI, the lesions spread in a multifocal pattern, and on day 7 PI, the lesions spread in a focal pattern. The heart of domestic chickens on day 3 PI started to show edema and focal degeneration; on day 5 PI, these were accompanied by hemorrhage, congestion, and focal mononuclear cell infiltration. On day 7 PI, the lesions spread in a multifocal pattern accompanied by pericarditis and endothelial hypertrophy.
However, in Alabio ducks, the lesions spread only in a focal pattern, without pericarditis. The kidney of domestic chickens on day 1 PI showed edema, congestion, and focal hemorrhage. On day 3 PI, the lesions spread in a multifocal pattern. On every following observation day, the lesions spread diffusely, accompanied by multifocal mononuclear cell infiltration in the renal interstitium. The Alabio duck kidney showed edema and focal congestion on all observation days, and focal mononuclear cell infiltration in the renal interstitium started to appear from day 3 PI. The domestic chicken brain on days 1 and 3 PI showed neuronal degeneration, congestion, edema, and multifocal endothelial hypertrophy. On days 5 and 7 PI, these were followed by neuronal necrosis, multifocal gliosis, and focal perivascular cuffing. The Alabio duck brain on every observation day showed congestion, edema, endothelial cell hypertrophy, and multifocal gliosis. NDV distribution Immunohistochemistry staining showed that an immunopositive reaction against NDV was found in all treatment groups from day 1 until the last observation day, with severity ranging from mild to severe, whereas all control groups were immunonegative. The locations of NDV immunopositivity did not differ between domestic chickens and Alabio ducks. A positive reaction was found in epithelial cells and mononuclear inflammatory cells of the intestinal organs; in ciliated epithelial cells, mucosal-layer mononuclear cells, and the spaces of tracheal goblet cells; in parabronchial epithelial cells, pneumocytes, and inflammatory cells in the lung alveoli; in heart blood vessel endothelial cells and the urinary organs; in reticular epithelial cells of the medullary layer and mononuclear cells of the thymic cortex; in lymphoid cells of the splenic white pulp and lymphoid cells inside the lymphoid follicles of the Fabricius bursa; in plica epithelial cells and mononuclear cells undergoing depletion within the lymphoid follicles of the Fabricius bursa; and in Virchow-Robin endothelial cells, necrotic glial cells, and neurons in the brain. The caspase-3 expression in lymphoreticular organs The percentage of caspase-3-positive area in the lymphoreticular organs peaked earlier in the Alabio duck group (on day 2 PI) than in the domestic chicken group, in which the peak on average occurred after day 3 PI. Caspase-3 expression in the thymus of domestic chickens and Alabio ducks was predominantly in the medullary area and rarely in the cortex. Caspase-3 expression in the Fabricius bursa of domestic chickens and Alabio ducks was located in the plica epithelium and lymphoid follicles, while in the cecal tonsil it was located in the mucosal epithelial cells, lamina propria inflammatory cells, and crypt epithelial cells. Caspase-3 expression in the domestic chicken spleen was more dominant around the germinal centers, whereas in Alabio ducks it was more dominant within the germinal centers.
On day 1, domestic chicken suffered from depression and lethargy; on day 2, PI conjunctivitis appeared; on day 3, PI had difficulty breathing, greenish-white diarrhea, and anorexia appeared; on days 5 and 7, PI nervous symptoms such as muscle tremor, difficulty in standing, and wings dropping along with light edema on the head appeared. Death in domestic chickens started to appear on day 4 PI . Alabio duck groups only seemed to be depressed and lightly lethargic starting from day 5 PI. Control domestic chicken and Alabio duck group did not show any clinical symptoms.
The control domestic chicken and Alabio duck group showed no gross anatomy lesions. Proventriculus, duodenum, and cecal tonsil did not show lesions on day 1 PI in domestic chickens, but in Alabio ducks, proventriculus appeared to have diffuse catarrhal exudation on the mucosal layer. Proventriculus on day 3 PI showed multifocal petechiae. On days 5 and 7, PI showed diffuse hemorrhage on domestic chickens . Alabio duck proventriculus on days 3 and 5 PI showed diffuse catarrhal exudation, and no lesion appeared on day 7 PI . Duodenum on day 3 until day 7 PI appeared hemorrhagic with multifocal necrosis on domestic chickens. In contrast, in Alabio ducks, no lesion appeared on all observation days. Cecal tonsil on day 3 PI until the last observation day seemed to have a multifocal hemorrhage in domestic chicken, but in Alabio ducks, no lesion appeared on all observation days . On day 5 PI, the trachea suffered multifocal diffuse congestion on domestic chickens and focal congestion on Alabio ducks . Lungs on days 1 and 3 PI appeared abnormal from multifocal congestion. On every following observation day, the lesions spread diffusely on domestic chickens, whereas, on Alabio ducks, multifocal congestion only appeared on day 1 PI. On all observation days, the Thymus of domestic chicken appeared to have half of its lobes atrophied and had petechiae/hemorrhage. In contrast, in Alabio ducks, part of the lobus showed petechiae on all observation days. The spleen of domestic chickens on days 1, 3, and 5 PI showed congestion, swelling, and multifocal necrosis, while the days after were followed by atrophy. In Alabio ducks, the spleen showed swelling and multifocal necrosis only during day 3 PI. The Fabricius bursa of domestic chicken on day 1 PI appeared to be swelling with focal hemorrhage. On the rest of the observation days, it was followed by atrophy and diffuse hemorrhage, whereas in Alabio ducks, no lesion was found on all observation days. Starting from day 3 PI, the domestic chicken heart showed swelling, while the Alabio duck heart showed no gross pathology changes on every observation day. Domestic chicken kidneys on all observation days appeared swelled with multifocal paleness, while in Alabio ducks, no lesion was seen on any observation day. Domestic chicken brain from day 5 PI started to show edema and multifocal congestion, while in Alabio duck, the lesion was focal
The control domestic chicken and Alabio duck groups showed no histopathological lesions. The domestic chicken proventriculus on day 1 PI showed congestion and focal mononuclear cell proliferation in the mucosal layer, with necrosis and focal epithelial cell desquamation in the proventricular glands. On day 3 PI, the lesions spread in a multifocal pattern; on days 5 and 7 PI, epithelial cells showed necrosis, with diffuse hemorrhage in the muscle layer and multifocal epithelial cell desquamation, as well as necrosis, congestion, and multifocal mononuclear cell infiltration in the proventricular glands. The Alabio duck proventriculus on day 1 PI showed focal epithelial cell desquamation; on days 3 and 5 PI it showed congestion, epithelial cell desquamation, and multifocal mononuclear cell proliferation, while the proventricular glands showed multifocal necrosis, epithelial cell desquamation, and focal congestion. On day 7 PI, no lesion was seen in the proventriculus. On days 1 and 3 PI, the domestic chicken duodenum showed hemorrhage, congestion, and focal crypt epithelial cell necrosis; on all following observation days, multifocal hemorrhage with intestinal villus desquamation, diffuse crypt epithelial cell necrosis, and multifocal proliferation of crypt epithelial cells in the lamina propria were observed. The Alabio duck duodenum on days 3 to 5 PI showed multifocal crypt epithelial necrosis, congestion, hemorrhage, and focal goblet cell proliferation; on day 7 PI, there were multifocal crypt epithelial cell necrosis and focal mononuclear cell proliferation in the lamina propria. The cecal tonsil of domestic chickens on all observation days showed congestion, hemorrhage, necrosis (karyopyknosis) of crypt epithelial cells, mononuclear cell proliferation in the lamina propria, and lymphocyte depletion inside the lymphoid follicles, all spreading in a multifocal pattern. The Alabio duck cecal tonsil on days 1 and 3 PI showed congestion, necrosis (karyopyknosis) of crypt cells, and multifocal mononuclear cell proliferation in the lamina propria; on days 5 and 7 PI, multifocal mononuclear cell proliferation in the lamina propria and depletion of lymphocytes in the lymphoid follicles were seen. The domestic chicken trachea on day 1 PI showed congestion, epithelial cell desquamation, focal inflammatory cell infiltration, and diffuse edema; on the following observation days, the lesions spread in a multifocal pattern. The Alabio duck trachea on all observation days showed congestion, epithelial desquamation, focal inflammatory cell infiltration, goblet cell proliferation, and diffuse edema. The domestic chicken lung on day 1 PI showed hemorrhage, congestion, edema, and multifocal mononuclear cell proliferation; on all following observation days, the lesions spread in a diffuse pattern. Alabio duck lungs on all observation days showed congestion, edema, hemorrhage, and multifocal mononuclear cell proliferation. The domestic chicken thymus, Fabricius bursa, and spleen on days 1 and 3 PI showed lymphoid depletion, congestion, and multifocal vasculitis; on days 5 and 7 PI, the lesions spread in a diffuse pattern accompanied by cyst formation, cell lysis, and partial replacement of the tissue by connective tissue. The Alabio duck thymus, Fabricius bursa, and spleen on day 1 PI showed lymphoid depletion, congestion, and focal vasculitis; on days 3 and 5 PI the lesions spread in a multifocal pattern, and on day 7 PI the lesions spread in a focal pattern.
The heart of domestic chickens on day 3 PI started to show edema and focal degeneration; on day 5 PI, these were accompanied by hemorrhage, congestion, and focal mononuclear cell infiltration, and on day 7 PI the lesions spread in a multifocal pattern accompanied by pericarditis and endothelial hypertrophy. In Alabio ducks, by contrast, the lesions spread only in a focal pattern and without pericarditis. The kidney of domestic chickens on day 1 PI showed edema, congestion, and focal hemorrhage; on day 3 PI the lesions spread in a multifocal pattern, and on each following observation day they spread diffusely, accompanied by multifocal mononuclear cell infiltration in the interstitial area of the kidney. The Alabio duck kidney showed edema and focal congestion on all observation days, and focal mononuclear infiltration of the kidney interstitium started to appear from day 3 PI. The domestic chicken brain on days 1 and 3 PI showed neuron degeneration, congestion, edema, and multifocal endothelial hypertrophy; on days 5 and 7 PI, these were followed by neuronal necrosis, multifocal gliosis, and focal perivascular cuffing. The Alabio duck brain on every observation day showed congestion, edema, endothelial cell hypertrophy, and multifocal gliosis.
Immunohistochemical staining showed an immunopositive reaction against NDV in all treatment groups from day 1 PI until the last observation day, with severity ranging from mild to severe, while all control groups were immunonegative. The locations of the immunopositive reactions against NDV did not differ between domestic chickens and Alabio ducks. Positive reactions were found in epithelial cells and mononuclear inflammatory cells of the intestinal organs; in ciliated epithelial cells, mucosal-layer mononuclear cells, and the spaces of the tracheal goblet cells; in parabronchial epithelial cells, pneumocytes, and inflammatory cells in the lung alveoli; in endothelial cells of the heart blood vessels and in the urinary (kidney) tubules; in reticular epithelial cells of the thymic medullary layer and mononuclear cells of the thymic cortex; in lymphoid cells of the splenic white pulp and lymphoid cells inside the lymphoid follicles of the Fabricius bursa; in plica epithelial cells and mononuclear cells undergoing depletion within the lymphoid follicles of the Fabricius bursa; and in Virchow-Robin space endothelial cells, necrotic glial cells, and neurons in the brain.
The percentage of caspase-3 expression in the lymphoreticular organs peaked earlier in the Alabio duck group, on day 2 PI, than in the domestic chicken group, where it occurred on average after day 3 PI. In both domestic chickens and Alabio ducks, caspase-3 expression in the thymus was more dominant in the medullary area and rare in the cortex. In the Fabricius bursa of both species, caspase-3 was expressed in the plica epithelium and lymphoid follicles, while in the cecal tonsil it was expressed in mucosal epithelial cells, lamina propria inflammatory cells, and crypt epithelial cells. Caspase-3 expression in the domestic chicken spleen was more dominant around the germinal centers, whereas in Alabio ducks it was more dominant within the germinal centers.
The chain of pathological changes occurring in the respiratory, circulatory, gastrointestinal, urinary, and nervous systems is closely related to the clinical symptoms. For example, the conjunctivitis appearing on day 2 PI occurred because the infection route was intraorbital, causing a local immune response in the eye region. Depression, lethargy, difficulty breathing, and catarrhal exudation in the nose in the AC-A group align with the gross pathological changes of congestion and catarrhal exudation in the trachea. This is related to the increase in goblet cells initiated by viruses attaching to epithelial cells using sialic acid on the host cells as a receptor. Clinical symptoms such as greenish-white diarrhea, nervous disorders, death, and head edema appeared only in domestic chickens and were not found in Alabio ducks. Diarrhea and anorexia on day 3 PI in domestic chickens were in line with the gross pathological changes in skeletal muscle, which looked pale and emaciated. Anorexia was indicated by the depression observed in the chickens during this period, and the low feed and drinking water consumption was due to the chickens feeling sick from septicemia or viremia. The sickness in domestic chickens continued after day 3 PI and caused deaths starting from day 4 PI, followed by nervous symptoms on day 5 PI. Death with nervous disorders is a clinical manifestation of neuron degeneration, edema, congestion, thickening of blood vessel walls, perivascular cuffing, proliferation of glial cells (gliosis), and necrosis in the brain. The presence of NDV in the brain can cause vascular and neuronal damage, further provoking an inflammatory response. The replication of NDV in internal organs initiated the pathogenesis of ND in the gastrointestinal tract. The NDV was distributed from the respiratory to the gastrointestinal system, possibly through the circulatory system or directly into the chicken's internal organs. NDV replication in the gastrointestinal system is indicated by catarrhal exudation to widespread hemorrhage in the viscera due to blood vessel damage. The gross anatomical lesions that appeared in the gastrointestinal tract are in line with the histopathological lesions found: congestion, edema, epithelial cell necrosis and desquamation, proliferation of mononuclear cells, and goblet cell hyperplasia ranging from mild to severe. These findings match a previous report of goblet cell hyperplasia and severe desquamation of intestinal epithelial cells. Necrosis took the form of karyorrhectic debris and ulceration of intestinal epithelial cells. The NDV immunopositive reaction in gastrointestinal organs was also distributed with severe intensity. The immunopositive response in the gastrointestinal tract matched previous reports of reactions in inflammatory cells and gastrointestinal epithelial cells, described in the duodenum, proventriculus, and heart; in the esophagus, crop, pancreas, and proventriculus; and in the cecal tonsil. Kidney lesions also occur due to viremia, which allows NDV to spread from the respiratory or gastrointestinal tract via the blood circulation to the kidney. We assume that one of the reasons Alabio ducks are more resistant than local chickens is the lack of severe structural and histological abnormalities in the Alabio duck kidney; however, we are still unsure of the mechanism underlying the absence of these lesions.
The immunopositive reaction in the kidney is similar to previous reports describing reactions in chicken kidney tubule epithelial cells and in the cytoplasm and nucleolus of duck kidney tubule cells. The pathogenesis of a disease is also closely related to damage to the lymphoreticular organs, as they produce the immune effectors that eliminate infectious agents. This research showed that all lymphoreticular organs generally suffered changes in gross pathology and histopathology. The gross pathological and histopathological lesions generally spread mildly in the Alabio duck groups, in contrast to the immunohistochemistry results, whose distribution ranged from mild to severe. These data show that, although no severe lesion was found by gross pathology or histopathology, the viral concentration within the lymphoreticular organs was high. Across the two bird species used in this research, the severe immunopositive reactions found in the gastrointestinal and lymphoreticular organs of Alabio ducks prove that, although Alabio ducks do not show ND clinical symptoms, NDV is present in their bodies at high concentration. The differences in clinical symptoms and lesion features between domestic chickens and Alabio ducks may be caused by differences in the genes or proteins expressed by domestic chickens and Alabio ducks to fight viral infection. According to a previous report, interferon-β expression is earlier, stronger, and more intensive in duck tissue (Japanese commercial duck) than in chicken (white leghorn SPF) upon infection with virulent NDV. In chickens, retinoic acid-inducible gene I (RIG-I) has been reported to be absent, whereas it is highly expressed in the duck spleen and heart. The absence of RIG-I is hypothesized to make chickens less resistant to the influenza virus than ducks, its natural reservoir. RIG-I has been identified as a cytoplasmic sensor of RNA viruses that is important in initiating the nonspecific immune response. The caspase-3 expression data, as an indicator of apoptosis, add the information that the differences in clinical expression and lesions between domestic chickens and Alabio ducks infected with a local isolate from ducks might be caused by the difference in timing of the greatest progressive increase in the apoptotic response, which occurred on day 2 PI in Alabio ducks and on day 3 PI in domestic chickens in the thymus, Fabricius bursa, and spleen, all of which are crucial as primary and secondary lymphoid organs in fighting infections. Ideally, when an infectious agent contacts host cells, an apoptotic clearance process occurs rapidly to prevent continued viral aggression. The apoptosis percentage in the lymphoreticular organs continued to decrease regressively after reaching its peak, probably due to two conditions. In domestic chickens, histopathology on days 1 and 3 PI showed lymphoid depletion, congestion, and multifocal vasculitis; on days 5 and 7 PI, the lesions spread diffusely, accompanied by cells undergoing lysis and partial replacement of the tissue by connective tissue, and in the Fabricius bursa this process was accompanied by cyst formation. The situation was different in Alabio ducks, where the decrease in the apoptotic response at the end of the observation period was probably caused by rapid NDV clearance, which prompts earlier cell regeneration. It has previously been stated that several duck organs show increased mitosis compared to those of chickens, allowing rapid tissue regeneration.
The decline in the apoptotic response in Alabio ducks after reaching its peak aligns with the decrease in lesion severity to a focal pattern in the thymus, Fabricius bursa, and spleen. This allows the capacity of the lymphoreticular organs as defensive organs to return to normal, so the Alabio ducks were healthy again by the last observation day. In general, the difference in lesion pattern between domestic chickens and ducks infected with the velogenic NDV Ducks/Aceh Besar_IND/2013/eoAC080721 is that in domestic chickens the lesions progressively became severe until the last observation day, whereas in Alabio ducks the lesions showed improvement. In summary, the clinical symptoms, gross pathological lesions, and histopathology of ND were more severe in domestic chickens than in Alabio ducks in the treatment groups. The immunopositive reaction against NDV in domestic chickens continued to increase, while in Alabio ducks it decreased until the last observation day, especially in the gastrointestinal tract. The apoptotic response appeared earlier in Alabio ducks than in domestic chickens.
Valorizing waste streams to enhance sustainability and economics in microbial oil production

The growing interest in sustainable and environmentally friendly production methods has brought biofermentation technologies, particularly for lipid production, to the forefront of scientific research (Leman, ). This approach, which harnesses microorganisms to convert organic substrates into valuable lipids, offers a compelling alternative to traditional production methods, aligning with the global push toward environmental stewardship and circular economy principles (Masri et al., ). The primary motivation behind exploring microbial lipid production is its potential to significantly mitigate the environmental impacts commonly associated with conventional lipid extraction from agricultural crops. Traditional methods have long been scrutinized for their extensive land use, deforestation implications, and high water consumption (Mattsson et al., ). Microbial lipid production, in contrast, utilizes renewable and often waste-derived feedstocks, presenting a solution that not only addresses environmental concerns but also contributes to carbon footprint reduction (Parsons et al., ). This shift is further underscored by the increasing volume of investment flowing into the sector. A diverse range of entities is actively researching and investing in this field, pioneering new metabolic pathways and refining biofermentation techniques to enhance yield, productivity, and cost-effectiveness, reflecting the industry's trajectory toward leveraging biotechnology to meet the growing global lipid demand. However, the economic aspects of microbial lipid production, particularly its competitiveness with traditional lipid sources such as crude palm oil (CPO), remain a critical area of investigation. Koutinas et al. analyzed production of microbial oils using the yeast Rhodosporidium toruloides at an annual production capacity of 10,000T of microbial oil and, assuming zero cost for glucose, estimated a unit production cost of $3.40/kg oil ($4.20/kg adjusted for inflation) (Koutinas et al., ). Braunwald et al. studied large-scale microbial production of lipids from yeast and estimated a break-even price of $2.35/kg ($3.02/kg adjusted for inflation) (Braunwald et al., ). Parsons et al. analyzed single cell oil (SCO) production cost from the oleaginous yeast M. pulcherrima, estimating a €4–8/kg breakeven sales cost depending on the feedstock (€5–9/kg adjusted for inflation) on scaling to 10,000T/year production (Parsons et al., ). Karamerou et al. calculated a lower bound on the cost of microbial lipid production, estimating cost under "an ideal case which, while not achievable in reality, importantly would not be able to be improved on, irrespective of the scientific advances in this area" (Karamerou et al., ). Under this analysis, they estimated a lower bound on production cost of $1.81/kg ($2.10/kg adjusted for inflation) at ~8,000T/year production and $1.20/kg ($1.40/kg adjusted for inflation) at ~48,000T/year production. Most recently, Caporusso et al. analyzed production costs of microbial oils from oleaginous yeasts grown on wheat straw, finding a microbial oil production cost of about €4/kg (Caporusso et al., ).
This paper presents what is believed to be one of the most rigorous and methodologically detailed techno-economic analyses to date on the feasibility of microbial fermentation as a viable long-term alternative to conventional lipid sources. The authors' background in the development of commercial-scale biofermentation has been combined with academic rigor to identify the salient cost elements and technical challenges in commercial-scale production. In particular, this paper develops rigorous, data-driven assumptions on capital expense (by leveraging similarities between microbial biofermentation facilities and 2G ethanol facilities, as discussed in section ), operating full-time equivalents (FTEs), feedstock costs, and other operating costs at small- to large-scale production capacities. This paper also provides an exhaustive review of the key cost drivers in the fermentation process, including distributional assumptions on theoretical yield, actual yield, and titer. Economics are presented across a range of productivity scenarios, geographic regions, feedstocks, and product targets.

The development of a robust and comprehensive cost model for microbial lipid production required a broad set of assumptions ranging from strain productivity to operational expenses (OpEx). The initial phase in developing these assumptions involved an extensive literature review focused on understanding the distribution of productivity and titer metrics of the various microbial strains employed in lipid production (see section Literature Review for full details). This review covered original research papers, scientific reviews, and meta-analyses, emphasizing studies that investigated yield optimization, strain engineering, and fermentation process efficiencies. The data extracted from these publications provided crucial benchmarks for theoretical and actual yields, titers, and growth rates. Public data sources, including market reports, industry analyses, and financial records, were also mined to enrich the model with real-world cost parameters (see sections Capital Expense and Operating FTEs and ). These data provided a foundation for assumptions related to capital expenditure, OpEx, and revenue potential. Interviews with industry experts and stakeholders added depth to our understanding, offering qualitative insights that complemented the quantitative data; expert opinions particularly informed the OpEx assumptions. The culmination of these methodologies is a dynamic cost model encapsulating key inputs across various categories, as detailed in the Table below. This model allowed for the simulation of different scenarios, reflecting variations in strain productivity, geographical regions, feedstocks, plant capacities, and product markets. Based on the literature review, glucose and glycerol were modeled as biofermentation feedstocks, while corn stover, sugar beets, bread waste, cassava, and palm biomass empty fruit bunches (EFB) were modeled as additional raw material feedstocks to be converted to glucose during pre-processing. The analysis largely focuses on batch fermentation, which to our knowledge is the only process that has been successfully commercialized to date. However, the impact of continuous fermentation is also modeled, as it has previously been demonstrated in oleaginous yeasts (Abeln et al., ). This cost model provides a tool for examining the economic implications of diverse production setups and the feasibility of microbial lipid production under varying conditions. All costs are denoted in $USD.
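To make the structure of the cost model concrete, the following minimal sketch in Python shows how a per-kilogram production cost could be rolled up from the input categories in the Table. The function names and all numerical inputs are illustrative placeholders rather than the assumptions used in this study; only the 5.0% facilities factor, the 16.9% other-expense factor, and the 20-year depreciation mirror figures stated later in this paper.

```python
# Minimal sketch of a production-cost roll-up for microbial oil.
# All input values are illustrative placeholders, not the assumptions
# used in this study.

def annual_oil_output_kg(fermenter_volume_l, titer_g_per_l, batch_days,
                         utilization=0.9, recovery_efficiency=0.95):
    """Oil recovered per year from one fermentation train."""
    batches_per_year = 365 * utilization / batch_days
    oil_per_batch_kg = fermenter_volume_l * titer_g_per_l / 1000.0
    return batches_per_year * oil_per_batch_kg * recovery_efficiency

def production_cost_per_kg(output_kg, feedstock, preprocessing, media,
                           labor, recovery, capex,
                           facilities_pct=0.05, other_pct=0.169,
                           depreciation_years=20):
    """Annual operating costs plus straight-line depreciation, per kg of oil."""
    facilities = capex * facilities_pct     # energy, waste treatment, maintenance
    other = facilities * other_pct          # insurance, compliance, legal
    depreciation = capex / depreciation_years
    annual_cost = (feedstock + preprocessing + media + labor + recovery
                   + facilities + other + depreciation)
    return annual_cost / output_kg

if __name__ == "__main__":
    # Placeholder example: one 1.5M L fermenter, 58 g/L titer, 5-day batches.
    output = annual_oil_output_kg(1_500_000, 58, 5)
    cost = production_cost_per_kg(output, feedstock=2.0e6, preprocessing=1.0e6,
                                  media=0.5e6, labor=1.0e6, recovery=0.3e6,
                                  capex=30e6)
    print(f"annual output: {output / 1000:.0f} t, production cost: ${cost:.2f}/kg")
```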
Literature Review

Original research papers published within the past 15 years were assessed for their broad methodology and findings. These papers were identified through searches on notable researchers and on keywords related to fatty acid and oleochemical production from microorganisms. The works were in part chosen for their objectives, such as an aim to improve at least one of titer, yield, or productivity. Broadly, the approaches used to improve the performance of these yeasts include genetic and metabolic engineering, enzyme engineering, and process optimization. The majority of organisms in these works are lipid-producing (oleaginous) yeasts, the most commonly studied being Yarrowia lipolytica. Key measures from these papers, including strain, productivity, and yield, are listed in Table below. In the context of commercialized chemical production, most of these works are considered early feasibility studies at bench scale, with few exceeding 3 L. Consequently, the process optimization approaches are relatively simple and constrained by less sophisticated equipment. Most feeding regimens are limited to batch or simple fed-batch feeding of a single carbon source, with few works having the equipment to control pH, dissolved oxygen, and temperature. As oleaginous yeasts are known to switch from a growth phase to a stationary, lipid-producing stage, some recent papers develop multi-stage fed-batch feeding regimens. A notable exception is the set of advanced semi-continuous and continuous cultivation strategies using M. pulcherrima (Abeln & Chuck, ), which motivated an added scenario in our cost modeling. The feedstocks in these works were mostly simple sugars and first-generation feeds. Some works utilize lignocellulosic feedstocks, and many of these are motivated by a recognition that simple sugars such as glucose, while useful for R&D and strain development, are economically prohibitive, while also referencing growing examples of lignocellulosic feed utilization in bioethanol plants. The tradeoffs of these second-generation feeds are noticeable, as some key performance metrics (e.g., titer) are poor relative to glucose. This is likely due to inhibitors from the cellulosic substrate, which are not simple to address. None of these works perform direct or relative cost comparisons of alternative feedstocks to glucose, though much is referenced elsewhere and highlighted in other technoeconomic and cost modeling analyses and review papers. The full distribution of titers and yields by microbial strain is shown in Fig. below. The highest reported titers and productivities across these works are approximately 100 g/L and 1.2 g/L/hr of fatty acid methyl esters in Y. lipolytica, fed with glucose and yeast nitrogen (Qiao et al., ). Titers above 30 g/L on non-standard feedstocks were reported in the lesser-studied Trichosporon oleaginosus fed with a microalgae hydrolysate (Meo et al., ), although the productivity was significantly lower than observed with glucose feeds in Y. lipolytica. Interestingly, a productivity competitive with glucose feeds was noted in an unoptimized Rhodosporidiobolus fluvialis strain grown on crude glycerol (Poontawee & Limtong, ). The literature review was finally used to construct the various productivity scenarios for the technoeconomic analysis as follows. From these works, the highest reported titers for glucose and glycerol were 98.6 g/L and 23.6 g/L respectively, and these acted as the state-of-the-art values in the technoeconomic models (noted as ‘Current Tech High’ in Table ).
The g/g lipid yields reported in these works were then used to calculate their percent of the maximal theoretical yield, which for conversion of glucose and glycerol to palmitic acid is 0.35 g/g and 0.38 g/g, respectively. From these numbers, best-case scenario titers were calculated, representing the theoretical maximum. These values for glucose and glycerol feeds were 111.63 g/L and 26.88 g/L, respectively (‘Future Tech High’ in Table ). For additional comparison, the third quartile and the mean of these titers were used to generate the ‘Current Tech Base Case’ values (57.85 g/L and 13.73 g/L for glucose and glycerol) and the ‘Current Tech Low’ values (32.7 g/L and 7.86 g/L for glucose and glycerol), respectively. The review additionally grounded the assumptions on cycle times and plant utilization in the cost models. The smallest-scale studies in this review predominantly performed batch processes of 1 L or less and averaged about 5 days of fermentation. Larger-volume fermentations in this review, at upwards of 14 L, utilized a mixture of batch and fed-batch processes, and these larger fed-batch processes lasted upwards of 25 days. To form a comparable assessment of processes in the cost models, the number of batches and the final performance in a given period were roughly normalized by the number of media turnovers. Turnovers of a total media volume could be estimated from fed-batch studies that reported a media dilution rate. The average dilution rate of 0.14/d in these studies corresponds to about one full volume of media exchanged per week. Therefore, in comparing the reported titers possible for a given fermentation run across process types, one can consider either the titer at the end of a 5-day batch process or that reported at the end of a month-long process, which would utilize roughly an equal volume of media. For simplicity in the cost models, the total number of batches producing the performance measure of interest (e.g., titer) was therefore drawn from the 5-day batch process scenario, which would generally also apply to the same total output if modeled from month-long fed-batch processes.
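As a concrete illustration of this scenario construction, the short sketch below back-calculates the ‘Future Tech High’ titers from the best reported titers and their g/g yields, and converts the average dilution rate into a media-turnover time. The reported-yield values used here are illustrative figures back-calculated to be consistent with the numbers quoted above, not data taken from a specific study.

```python
# Sketch of the productivity-scenario construction described above.
# The reported g/g yields below are illustrative values chosen to be
# consistent with the quoted titers, not data from a specific study.

MAX_THEORETICAL_YIELD = {"glucose": 0.35, "glycerol": 0.38}  # g lipid / g substrate

def future_tech_high_titer(reported_titer_g_l, reported_yield_g_g, substrate):
    """Scale the best reported titer up to 100% of the maximum theoretical yield."""
    pct_of_max = reported_yield_g_g / MAX_THEORETICAL_YIELD[substrate]
    return reported_titer_g_l / pct_of_max

# 'Current Tech High' titers are the best reported values from the review.
print(future_tech_high_titer(98.6, 0.309, "glucose"))   # ~111.7 g/L
print(future_tech_high_titer(23.6, 0.334, "glycerol"))  # ~26.9 g/L

# One media turnover at the average reported dilution rate of 0.14/d:
print(1 / 0.14)  # ~7.1 days, i.e., roughly one full volume exchange per week
```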
Capital Expense and Operating FTEs

As noted by Caporusso et al., the capital expense of building a biofermentation plant leveraging waste streams is modeled using 2G ethanol plants as a proxy. 2G ethanol plants, also known as second-generation ethanol plants, are facilities that produce ethanol fuel from non-food sources such as agricultural waste, forest residues, or dedicated energy crops. Unlike first-generation ethanol, which is primarily produced from food crops like corn or sugarcane, second-generation ethanol production focuses on utilizing biomass materials that are not in direct competition with the food supply. The process of producing 2G ethanol involves the following steps, also shown in Fig. (Hindustan Petrolum Corporation Limited PRAJ, ):
- Feedstock collection: Biomass materials, such as corn stover, are collected or cultivated.
- Pre-treatment: The collected biomass is subjected to a mechanical, chemical, or thermal pre-treatment process, making it more accessible for further processing.
- Enzymatic hydrolysis: The pre-treated biomass is then treated with enzymes to break down the complex carbohydrates into simple sugars, such as glucose.
- Fermentation: The obtained sugars are fermented using specialized microorganisms, typically yeast or bacteria, which convert the sugars into ethanol through anaerobic fermentation.
- Recovery: The fermented solution is distilled to separate the ethanol from water and other impurities. Additional purification steps and dehydration may be employed to obtain a higher concentration of ethanol.

2G ethanol plants provide good benchmarks for capital expenses and operating FTEs for several reasons:
- Similar processing steps: Both 2G ethanol production and microbial fermentation of oils involve similar processing steps, such as pre-treatment, enzymatic hydrolysis, fermentation, separation, and purification.
- Equipment and infrastructure: Both 2G ethanol plants and microbial fermentation facilities require the same types of equipment and infrastructure, including fermentation vessels or tanks, separation and purification units, storage facilities, utilities, and waste treatment systems. The specific technical requirements may vary, but enough similarities exist in the core infrastructure to make meaningful cost comparisons and estimations and to drive similar economies-of-scale.
- Experience: 2G ethanol plants have been in operation for some time, and a large amount of public data is available on construction expenses and jobs created (FTEs).

While there are inherent differences between anaerobic and aerobic processes, particularly in terms of oxygen supply, agitation, and energy requirements, these cost differences largely impact operating expenses, which are adjusted for separately in section Other Operating Expenses. From the authors' commercial experience, the impact on capital expense of the aeration equipment for the aerobic process will be marginal, and capital expense itself is a relatively small fraction of total cost-of-goods-sold at scale. As a result, the reliance on 2G ethanol plants is a reasonable proxy when estimating cost-of-goods-sold, enables researchers to leverage a large amount of data on actual construction projects, and does not fundamentally alter the conclusions of the study. The relationship between inflation-adjusted capital expenditure and the capacity of 2G ethanol plants, based on a review of public feasibility studies, press releases on new ethanol plants, and government research studies, is shown in Fig. below (National Renewable Energy Laboratory, ). This included data points across all regions studied in the paper and commonly chosen as fermentation plant locations: the USA, the European Union (EU), and Southeast Asia (SEA). The coefficient of determination ($R^2$) of 0.9138 implies that 91.38% of the variation in capital expense can be explained by plant capacity. There is also no obvious relationship between building location and cost across the regions studied, which is expected as the majority of cost is driven by equipment and infrastructure (Hindustan Petrolum Corporation Limited PRAJ, ; National Renewable Energy Laboratory, ). In the technoeconomic analysis, the impact of capital expense on cost of goods is calculated using a 20-year depreciation. It is further assumed that for biofermentation plants running only glucose or glycerol, a 25% decrease in equipment costs can be achieved by removing preprocessing equipment, which equates to about a 14% decrease in total capital expenditure. This amount, based on our team's experience, closely matches the 23.6% reduction in equipment cost modeled in the NREL model when handling glucose rather than corn stover.
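To show how these capital-expense assumptions enter the per-kilogram cost, the sketch below applies the 20-year straight-line depreciation and the 14% CapEx reduction for glucose-only plants described above. The total capital expenditure and annual output figures are illustrative placeholders; the actual capacity-to-CapEx relationship used in the model comes from the fitted curve in the figure.

```python
# Sketch of the capital-expense assumptions described above.
# CAPEX_TOTAL and ANNUAL_OIL_KG are illustrative placeholders; the real
# capacity-to-CapEx relationship is taken from the fitted curve in the figure.

CAPEX_TOTAL = 150e6        # $, placeholder for a large-scale facility
ANNUAL_OIL_KG = 25e6       # kg of oil per year, placeholder output
DEPRECIATION_YEARS = 20

# A 25% cut in equipment cost equating to a ~14% cut in total CapEx implies
# equipment is roughly 14% / 25% = 56% of total CapEx.
implied_equipment_share = 0.14 / 0.25
print(f"implied equipment share of CapEx: {implied_equipment_share:.0%}")

def capex_per_kg(capex_total, annual_oil_kg, glucose_only=False):
    """Straight-line depreciation per kg of oil, with the 14% CapEx
    reduction applied to plants that skip feedstock preprocessing."""
    if glucose_only:
        capex_total *= 1 - 0.14
    return capex_total / DEPRECIATION_YEARS / annual_oil_kg

print(f"waste-stream plant: ${capex_per_kg(CAPEX_TOTAL, ANNUAL_OIL_KG):.2f}/kg")
print(f"glucose-only plant: ${capex_per_kg(CAPEX_TOTAL, ANNUAL_OIL_KG, True):.2f}/kg")
```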
For a biofermentation plant handling only cassava or bread waste, which requires just a grinder, a blender, and a hydrolysate centrifuge for pretreatment, it was assumed that half of these cost savings can be achieved. Strong correlations were also obtained for predicting plant FTEs based on capacity. The average salary and overhead for these employees were estimated at twice the average machine-operator salary in the respective region using public datasets, to reflect the need for skilled technicians at these plants and the team's experience of the salary premium plus overhead. Plants with fermentation capacities ranging from 50k liters (50 thousand L) up to 15M liters (15 million L, i.e., ten 1.5Ml reactors) were selected for analysis. This upper limit is comparable to a large-scale ethanol facility, as 15Ml of fermentation capacity can produce 72M gallons of ethanol per year assuming a 2-day batch time and a 10% ethanol concentration in the fermentation tanks. Beyond this, there are practical limitations to mixing efficiency as well as issues with material strength (particularly for concrete). Feedstock sourcing also becomes particularly challenging past this size, with 15Ml of fermentation capacity already requiring corn stover from 150,000 acres of farmland, assuming 3.75 dry tons of yield per acre and our base-case productivity scenario.

Other Operating Expenses

This section contains brief notes on all other assumptions made in the cost model. Inflation-adjusted raw material costs for glucose and glycerol were estimated at $500/MT and $600/MT respectively in the USA and EU, and at $400/MT and $500/MT respectively in SEA, based on literature review and public Alibaba prices, while corn stover and sugar beet were estimated at $42.50/MT and $52.70/MT respectively based on literature review (Kazi et al., ; Humbird et al., ; Liu & Chen, ; Aghazadeh et al., ; Yang & Rosentrater, ; Řezbová et al., ; Miranda et al., ; Shapouri & Salassi, ; Maung & Gustafson, ). Cassava and palm biomass (EFB) costs were estimated at $29.01/MT and $48.82/MT respectively using an internal report generated for Sime Darby Plantation Berhad and shared with the authors. Collection costs for bread waste have not been studied in detail but were estimated by the authors as twice the collection cost for palm biomass, due to the relative scale of aggregation at palm mills versus commercial bakeries. Literature review was also used to benchmark the preprocessing cost, including enzymatic hydrolysis, for converting these raw material feedstocks to glucose, which ranged from $38.10/MT for bread waste to $80.34/MT for corn stover (Humbird et al., ). The preprocessing efficiency (grams of glucose per gram of raw material feedstock) was also estimated based on a review of average feedstock compositions from the literature (Concha Olmos and Zúñiga Hansen, ; Dewi et al., ; National Renewable Energy Laboratory, ). It is also critical to note that preprocessing of these raw materials to glucose often leaves some inhibitory metabolites that impact biofermentation efficiency; these were modeled as yield drags of up to 3% based on an internal review of feedstock composition and a literature review of yield drag in ethanol production (Vanmarcke et al., ). Media, consumables, and waste stream costs were estimated using NREL's bioethanol model (National Renewable Energy Laboratory, ).
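A simple way to compare these feedstocks is the delivered cost per tonne of fermentable glucose, i.e., feedstock cost plus preprocessing cost divided by the glucose obtained per tonne of feedstock. The sketch below performs this calculation; the cassava and EFB preprocessing costs and all glucose-conversion factors are illustrative assumptions, since the composition-derived efficiencies used in the study are detailed only in the underlying references.

```python
# Comparing candidate feedstocks on a delivered-glucose basis.
# Feedstock costs and the corn stover / bread waste preprocessing costs are the
# values quoted above; the other preprocessing costs and all glucose-conversion
# factors (t glucose per t feedstock) are illustrative assumptions only.

feedstocks = {
    # name: (feedstock $/MT, preprocessing $/MT, assumed t glucose / t feedstock)
    "corn stover (USA)":      (42.50, 80.34, 0.35),
    "cassava (SEA)":          (29.01, 60.00, 0.30),   # preprocessing cost assumed
    "palm biomass EFB (SEA)": (48.82, 70.00, 0.30),   # preprocessing cost assumed
    "bread waste (EU)":       (97.64, 38.10, 0.55),   # collection = 2 x EFB cost
}

for name, (feed_cost, preproc_cost, glucose_per_t) in feedstocks.items():
    delivered = (feed_cost + preproc_cost) / glucose_per_t
    print(f"{name:24s} ~${delivered:6.0f} per tonne of glucose equivalent")
```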
The extraction of microbial oils typically involves intracellular lipid recovery, which may include mechanical cell disruption, solvent extraction, or combinations of both. These techniques bear similarities to algal oil extraction methods, as highlighted in the NREL report on algal oil extraction and conversion (National Renewable Energy Laboratory, ). While both processes involve intracellular oil recovery, the specific conditions and processing steps differ based on the microbial strains and feedstocks used. We modeled the downstream recovery of lipids after fermentation with hexane extraction, as we believe this to be the most cost-effective method at scale. Extraction and solvent costs were modeled based on literature review and the team's knowledge of commercial costs for hexane extraction of lipids from oilseeds (Parsons et al., ). Annual facilities expenses, including energy costs, waste treatment, and maintenance, were modeled at 5.0% of capital expense based on the team's experience, closely matching the 5.45% from the NREL bioethanol model (National Renewable Energy Laboratory, ). Other annual expenses, including insurance, compliance, and legal costs, were estimated at 16.9% of annual facilities costs based on the NREL bioethanol model (National Renewable Energy Laboratory, ). Corporate tax rates for the US, EU, and SEA were based on averages for those regions.

Product Revenue

Four products were included in the cost model, with the motivation for studying each end market shown in Fig. below. All four markets are large enough that price elasticity curves are not relevant at the volumes achievable at a single large-scale biofermentation facility and considered in this paper. For the CPO alternative, revenue was estimated at $875/MT based on daily settlement prices available from the Malaysian Palm Oil Council. For high-oleic cooking oil, which when conventionally produced can receive a 20% price premium over conventional vegetable oils in commodity markets, revenue was estimated at $1,006/MT. When microbial oils are used as a low-carbon-intensity biofuels feedstock oil, the carbon-saving premiums can be directly monetized in a transparent way. Prices were estimated based on offtake by US biofuels producers, using soybean oil spot prices from the Chicago Board of Trade as well as public credit prices from California's Low Carbon Fuel Standard Program, minus transportation costs by super-liner to the nearest US port. These revenues were estimated at $1,389/MT for microbial oil production in the US, $1,307/MT for production in the EU, and $1,295/MT for production in SEA. More information on these carbon subsidies can be found in Appendix . For the 75% lauric acid (C12) product, revenue was estimated at $1,120/MT based on public data from Alibaba.
Technoeconomic analysis was performed to assess the production costs of microbial oils from oleaginous yeasts, covering over 2,000 scenarios across batch and continuous fermentation, different productivity levels, fermentation capacities, regions, raw material feedstocks, and end markets (see Section for a full list of evaluated scenarios). Key results and strategic insights across these many scenarios are shared in the following sections, with all costs denoted in $USD.

Production Cost by Capacity, Region, and Feedstock

Fig. below shows the estimated production cost by region, feedstock, and plant capacity for a palm oil alternative produced from batch fermentation of glucose under our base-case productivity scenario. Feedstocks were only modeled for regions where they would be available: cassava and palm biomass feedstocks for SEA, corn stover for the USA, and sugar beet for the EU. Additionally, production costs for bread waste are only modeled for plant capacities up to 4.5M L, a decision based on our internal assessment of feedstock availability from commercial bakeries. We note that there is a significant economy-of-scale as plant capacities increase, although these cost savings taper off at the upper limits of feasible fermentation capacity. For production of a palm oil alternative from glucose in the USA, production costs range from $7.82/kg at 50k L to $3.59/kg at 15M L. We also observe that production costs in the EU are slightly lower than in the USA, while costs in SEA are significantly lower due to reduced labor and feedstock costs. For production of a palm oil alternative from glucose in SEA, production costs range from $4.99/kg at 50k L to $2.95/kg at 15M L. Fig. also shows the economic need for waste stream valorization in microbial fermentation. Across all regions, waste streams can significantly reduce production costs relative to glucose.
For production of a palm oil alternative from corn stover in the USA, production costs range from $7.38/kg at 50k L to $2.82/kg at 15M L. The lowest production costs can be achieved in SEA, with cassava allowing production at $2.49/kg at 15M L of fermentation capacity and palm biomass (EFB) allowing a production cost of $2.36/kg at 15M L. Bread waste is the lowest-cost feedstock at all capacity levels at which it is available, owing to its pre-fermentation conversion efficiency (grams of glucose per gram of feedstock) as well as the reduced capital expenditure enabled by simpler feedstock pre-processing. While corn stover in the USA and cassava and palm biomass in SEA can offer better production costs due to their additional feedstock availability and scale, bread waste in the EU at 4.5M L of fermentation capacity gives the lowest EU production cost at $2.98/kg, beating out production from sugar beets at larger scale.

Breakdown of Production Cost Drivers

Fig. shows a breakdown of cost drivers for microbial production of a palm oil alternative for six instances modeled in this technoeconomic analysis under the base-case productivity scenario. These scenarios were selected as representative of the key insights into production costs. At small-scale production (50k L of fermentation capacity) from glucose in the USA, labor is the largest cost component at 41.84%, although there is such a significant economy-of-scale on fermentation capacity versus FTEs that labor represents only 15.23% of the production cost at 1.5M L of fermentation capacity and 6.07% at 15M L. Although the capital expenditure to build the facility also offers economy-of-scale and decreases monotonically when expressed as a $/kg cost as scale increases, its percentage of the total spend does not monotonically or significantly decrease, because of the drastically better economy-of-scale on labor. For production from glucose in the USA, plant capital expense depreciation (on a 20-year time horizon) represents 11.87% of cost of goods sold at 50k L, 14.60% at 1.5M L, and 10.74% at 15M L. In general, as fermentation capacities move from small-scale to large-scale commercial, cost of goods sold goes from being dominated by labor and capital expenses to being dominated by raw material cost, with glucose representing 62.46% of production cost at 15M L capacity. To understand the impact of waste stream valorization on cost, comparisons were made between glucose and corn stover feedstocks at 1.5M and 15M L. In both instances, using a waste stream decreases the total raw material and raw material processing cost, while increasing capital expenses and facilities costs (including energy). For production from corn stover at 15M L of fermentation capacity, 46.24% of the total production cost is driven by feedstock and preprocessing costs, while the remaining cost is spread broadly across capital expense, facilities, labor, media, and recovery costs. We pay special attention to the final scenario, production from palm biomass (EFB) in SEA at 15M L of fermentation capacity, which is the lowest-cost scenario from the section Production Cost by Capacity, Region, and Feedstock. Here, labor is a tiny fraction of the total production cost at 2.70%, while raw materials represent 16.45%, pre-processing costs 27.07%, and capital expense and facilities costs 19.66% each.
Future Potential Cost Reductions Through Strain and Technology Improvement

Various productivity scenarios were studied, as described in section Literature Review. Based on the literature review of current production technology and yields achieved at bench scale, we can conservatively upper-bound the production cost at scale at $3.31/kg ('Current Tech Low') and lower-bound it at $1.64/kg ('Current Tech High'). These scenarios represent typical bench-scale titers and best-in-class bench-scale titers, respectively. Under our most aggressive scenario for future strain productivity, beyond anything that has been achieved at bench scale today, it is projected that production costs could reach as low as $1.47/kg.

Gross Margins by End Market

Table below shows the commodity sales prices and production costs from biofermentation at scale in Southeast Asia (10 x 1.5 million-liter capacity with palm biomass feedstock) for the four end markets described in section Product Revenue. While the preceding sections detailed production costs for a palm oil alternative, it is critical to note that production costs are similar across all of these end markets due to similar theoretical strain yields (grams of final product per gram of glucose) across all fatty acid profiles. In particular, after strain optimization to reach the desired lipid fatty acid profile with a similar titer and actual yield, production costs vary by just a couple of percentage points. We see from Table that even under extremely aggressive future productivity assumptions, including a higher titer than ever achieved at bench scale across our broad literature review and 100% actual yield, production costs remain significantly higher than the commodity prices for all products considered. The gross margins, calculated as $(\text{revenue} - \text{production cost}) / \text{revenue} \times 100\%$, remain negative across all scenarios and products. Specifically, the gross margins for each product range as follows:

Crude Palm Oil: gross margins range from -278.8% under the Current Tech Low scenario to -67.9% under the Future Tech High scenario.
High-Oleic Oil: gross margins range from -233.2% to -48.6%.
Low-CI Biofuels Feedstock Oil: gross margins range from -155.8% to -13.3%.
75% Lauric Acid (C12): gross margins range from -220.4% to -39.8%.

This indicates that even when commercial biofermentation of microbial oils reaches the scale of 2G ethanol production, true price competitiveness with commodity vegetable oils is unlikely to be achieved at today's prices. Moreover, combining this with the insights from the cost-driver breakdown above, we see that a broad set of underlying fundamental shifts would be required for price parity. No individual cost driver is significant enough to allow price parity on its own.
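The gross-margin ranges above follow directly from the stated formula. The short check below recomputes the crude palm oil case from the quoted price ($875/MT, i.e., about $0.875/kg) and the cost bounds of $3.31/kg and $1.47/kg; small deviations from the reported -278.8% and -67.9% reflect rounding of the published inputs, and the other products are not reproduced here because their slightly different per-product costs are not published.

```python
def gross_margin_pct(price_per_kg: float, cost_per_kg: float) -> float:
    """Gross margin in percent: (revenue - production cost) / revenue * 100."""
    return (price_per_kg - cost_per_kg) / price_per_kg * 100.0

cpo_price = 875 / 1000  # $875/MT converted to $/kg

for label, cost in [("Current Tech Low", 3.31), ("Future Tech High", 1.47)]:
    print(f"CPO, {label}: {gross_margin_pct(cpo_price, cost):.1f}%")
# Prints roughly -278% and -68%, consistent with the reported crude palm oil range.
```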
The negative gross margins above hold only when selling into these commodity markets at prevailing commodity prices. There may be an opportunity to leverage the sustainability factor for additional pricing power. Of the end markets studied, only the use of microbial oils as a low-CI biofuels feedstock directly monetizes the land-use and carbon savings. While current transportation decarbonization programs do not appear to pay enough to support commercial success, there could be better opportunities in the consumer products space. In these markets, the higher production cost of microbial oils can be economically justified by consumers' increasing willingness to spend on healthier and more sustainable food or green cosmetics (Sharma et al., ).

Potential Impact of Continuous Fermentation

We also modeled the impact of continuous or semi-continuous fermentation, as this could be enabled by certain oleaginous yeast strains such as M. pulcherrima (Abeln et al., ), although to our knowledge only batch fermentation has been successfully used in commercial biofermentation. While continuous production could enable better economics due to higher feedstock throughput and less media turnover relative to fermentation capacity, our technoeconomic analysis found that it is unlikely to significantly shift the economics. In particular, for the lowest production cost scenario of microbial oil production from palm biomass in SEA at 15M L of fermentation capacity, we modeled that continuous fermentation could reduce production costs from $2.36/kg to $2.25/kg. This production cost decrease of 4.7% is not material enough to change any of the previous conclusions.

Strategic Insights

The key strategic insights are summarized below:

- Strong economy-of-scale exists on production costs, with the cost of producing microbial oils in the USA using glucose decreasing from $7.82/kg at 50k L of fermentation capacity to $3.59/kg at 15M L of fermentation capacity.
- Valorizing waste streams is critical for lowering costs across all regions studied. Microbial oil production costs in the USA can be as low as $2.82/kg at 15M L of fermentation capacity by using corn stover, production costs in Southeast Asia can be as low as $2.36/kg at 15M L of fermentation capacity by using palm biomass (EFB), and production costs in the EU can be as low as $2.97/kg at 4.5M L of fermentation capacity by using bread waste.
- Under aggressive assumptions about future strain productivity, it is projected that production cost could reach as little as $1.47/kg under today's prices.
- Continuous fermentation could lower costs if it were successfully operationalized, but not by a material amount.
- When selling into commodity end markets, we find that gross margins are negative even under aggressive assumptions about potential future strain productivity and technology. This implies a need to sell into consumer markets where the sustainability benefits can be directly monetized to enable attractive unit economics.
Technoeconomic analysis was performed to study the production costs of microbial oils across a variety of regions, fermentation capacities, feedstocks, end markets, and productivity scenarios. It was found that production costs as low as USD$2.36/kg could be enabled at scale through waste stream valorization in Southeast Asia; however, this would still yield negative margins for the broad range of accessible commodity end-markets studied, including using the microbial oil as a low-carbon intensity biofuels feedstock oil to monetize the land-use and decarbonization impact.
This implies a need to sell into consumer markets where the sustainability benefits can be directly monetized to enable attractive unit economics. Future work should explore strategies to further reduce production costs and improve economic viability. One potential avenue is the implementation of fed-batch, high-cell-density cultivation, which could enhance productivity by increasing cell concentrations and lipid yields. While this approach presents challenges in terms of scalability and operational complexity, successful research and operation in this area could lower costs and improve the competitiveness of microbial lipid production. Additionally, advancements in strain engineering to improve yields and titers, optimization of downstream processing techniques, and innovations in fermentation technology, such as continuous fermentation or novel bioreactor designs, could contribute to cost reductions. Exploring these strategies could help bridge the gap toward achieving price parity with commodity vegetable oils and expand the applicability of microbial oils in various markets. By addressing these technological and operational challenges, there is potential to make microbial lipid production more economically feasible. Continued research and development in these areas will be critical for the future success of microbial oils as sustainable alternatives in both commodity and specialty markets.
Association of glucagon-like peptide-1 receptor agonists with cardiovascular and kidney outcomes in type 2 diabetic kidney transplant recipients

Kidney transplantation is recognized as the best treatment option for most patients with end-stage kidney disease (ESKD), as it is associated with a reduced risk of mortality and cardiovascular events, and enhanced quality of life compared to maintenance dialysis. Nevertheless, the life expectancy of kidney transplant recipients (KTRs) still falls substantially below that of the general population. Cardiovascular diseases (CVDs) are the leading cause of mortality post-transplant, driven by prevalent traditional risk factors along with complications related to chronic kidney disease (CKD), such as left ventricular hypertrophy and mineral bone disease, which often persist after the transplant. Diabetes mellitus (DM) is a major risk factor for CVDs and is also associated with a higher risk of graft failure. It remains the leading cause of ESKD in adults, with the global prevalence of DM in adult ESKD patients reaching 29.7% in 2015. Furthermore, retrospective cohort studies identified a 12–38% prevalence of pre-transplant DM. In addition to its association with increased mortality, CVDs, and graft failure, pre-transplant DM is also linked to a higher risk of impaired wound healing and infections. Current guidelines recommend screening kidney transplant candidates without a known history of DM for abnormal glucose metabolism to enable early detection and management of related complications. Post-transplant diabetes mellitus (PTDM), affecting 5–25% of patients within the first year after transplantation, is another challenge. Both pre-transplant risk factors and the use of immunosuppressants, particularly corticosteroids and calcineurin inhibitors, contribute to PTDM development. Consequently, optimizing management in KTRs with DM is of paramount importance to improve long-term outcomes.

Glucagon-like peptide-1 receptor agonists (GLP-1 RAs), a class of incretin-based therapies, are promising antidiabetic agents. Over the past decades, several randomized clinical trials (RCTs) have shown that GLP-1 RAs improve cardiovascular outcomes in patients with type 2 DM (T2DM) who have established CVDs or are at risk, as well as in individuals with CKD. The American Diabetes Association Standards of Care 2024 guidelines recommend GLP-1 RAs in patients with T2DM who have pre-existing CVDs or are at high risk. Similarly, the KDIGO 2024 CKD guideline recommends GLP-1 RAs for adults with T2DM and CKD who have not achieved individualized glycemic targets. One meta-analysis of observational studies involving 338 KTRs supports the effectiveness of GLP-1 RAs, demonstrating benefits in glycemic control, reductions in proteinuria, and weight loss, without significant interference with tacrolimus blood levels. A retrospective cohort study by Dotan et al., involving 318 patients, further revealed that treatment with GLP-1 RAs was associated with a reduced risk of major adverse cardiovascular events (MACEs) and all-cause mortality among solid organ transplant recipients, including kidney, lung, liver, and heart transplants. Additionally, Halden et al. demonstrated that GLP-1 infusion improved both insulin secretion and glucagon suppression during hyperglycemia in patients with PTDM.
These studies, however, were constrained by small sample sizes and a lack of homogeneity in the cohorts, which included recipients of multiple organ types. The 2024 international PTDM consensus suggested GLP-1 RAs for PTDM patients with obesity and/or established CVDs. Nonetheless, it also acknowledged that these agents remain underutilized in PTDM management owing to the limited transplant-specific evidence. Real-world data offer significant potential for informing and shaping confirmatory trials, enabling the exploration of questions that might otherwise go unanswered. In this study, we utilized the international TriNetX platform to assess whether GLP-1 RAs could mitigate long-term adverse outcomes in diabetic KTRs.

Data source

The study utilized data from the TriNetX database, a global collaborative network that integrates de-identified electronic medical records from various healthcare organizations (HCOs), primarily those affiliated with academic medical centers. The platform provides detailed patient information, including demographics, diagnoses (coded using the International Classification of Diseases, Tenth Revision, Clinical Modification), procedures (using the International Classification of Diseases, Tenth Revision, Procedure Coding System), medications (based on the Anatomical Therapeutic Chemical classification or RxNorm codes), along with laboratory results, clinical findings (recorded via local lab coding or Logical Observation Identifiers Names and Codes), and genomic information. We used the Global Collaborative Network, which includes 127 HCOs encompassing over 131 million individuals across 21 countries: Australia, Belgium, Brazil, Bulgaria, Estonia, France, Georgia, Germany, Ghana, Israel, Italy, Japan, Lithuania, Malaysia, Poland, Singapore, Spain, Taiwan, the United Arab Emirates, the United Kingdom, and the United States. Data were collected over a period spanning January 1, 2006, to June 1, 2023.

Ethics statement

The Western Institutional Review Board approved a waiver of informed consent for TriNetX, as the platform solely compiles and presents de-identified data in aggregate form. Additionally, the use of TriNetX for this study was approved by the Institutional Review Board of Chi-Mei Hospital (No. 11210-E01). The study adhered to the principles of the Declaration of Helsinki and followed the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines for cohort study reporting.

Study population

The inclusion criteria were: (1) age over 18 years, (2) kidney transplantation, and (3) a diagnosis of T2DM either prior to or within 3 months post-transplant. Exclusion criteria included undergoing dialysis or experiencing mortality during 1 to 3 months post-transplant, as delayed graft function rarely persists beyond 1 month. The index date and time of study entry were defined as the kidney transplant date, identified using ICD-10-CM code Z94.0 and relevant procedure codes. Consistent with the intention-to-treat design, patients were categorized as GLP-1 RA users or non-users depending on whether they received these agents within the 3 months following kidney transplant. Owing to differences in mechanism, dual glucose-dependent insulinotropic polypeptide (GIP)/GLP-1 receptor agonists were not included in the definition of GLP-1 RA users in this study. We performed 1:1 propensity score matching (PSM) to generate comparable cohorts.
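A minimal sketch of the cohort and exposure definition described above is shown below, using hypothetical column names (patient_id, transplant_date, first_glp1_rx_date, and so on) on a toy table; the actual study was built with the TriNetX query tools, so this is purely illustrative of the 3-month exposure window and the intention-to-treat grouping.

```python
import pandas as pd

# Hypothetical per-patient table; column names are illustrative, not TriNetX fields.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "age_at_transplant": [54, 67, 49],
    "transplant_date": pd.to_datetime(["2018-03-01", "2019-07-15", "2020-01-10"]),
    "t2dm_diagnosis_date": pd.to_datetime(["2015-05-01", "2019-08-01", "2010-02-20"]),
    "first_glp1_rx_date": pd.to_datetime(["2018-04-20", pd.NaT, "2021-06-01"]),
    "death_or_dialysis_within_3m": [False, False, False],
})

index_date = patients["transplant_date"]                 # index date = transplant date
window_end = index_date + pd.DateOffset(months=3)

eligible = (
    (patients["age_at_transplant"] >= 18)
    & (patients["t2dm_diagnosis_date"] <= window_end)     # T2DM before or within 3 months post-transplant
    & ~patients["death_or_dialysis_within_3m"]            # exclude early death/dialysis
)

cohort = patients[eligible].copy()
# Intention-to-treat exposure: any GLP-1 RA prescription within 3 months after transplant.
cohort["glp1_user"] = (
    cohort["first_glp1_rx_date"].notna()
    & (cohort["first_glp1_rx_date"] >= cohort["transplant_date"])
    & (cohort["first_glp1_rx_date"] <= cohort["transplant_date"] + pd.DateOffset(months=3))
)
print(cohort[["patient_id", "glp1_user"]])
```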
Covariates

Clinically relevant covariates, known to influence survival and cardiovascular or kidney outcomes, were carefully selected to ensure balanced comparisons between study groups. These included baseline characteristics such as demographics (age, gender, race), comorbidities (e.g., liver diseases, chronic lower respiratory diseases, diabetic complications, obesity, and neoplasms), medications (e.g., insulin, metformin, angiotensin-converting enzyme inhibitors (ACEIs), angiotensin II receptor blockers (ARBs), tacrolimus, cyclosporine, mycophenolate mofetil (MMF), and corticosteroids), as well as laboratory tests and physical findings (e.g., hemoglobin, glycated hemoglobin (HbA1c), lipid profile, estimated glomerular filtration rate (eGFR), and systolic blood pressure (SBP)). Detailed codes for these covariates are provided in the Supplemental Table.

Prespecified outcomes

The primary outcome was all-cause mortality. Secondary outcomes were MACEs, a composite of stroke (either ischemic or hemorrhagic), acute myocardial infarction (AMI), cardiac arrest, cardiogenic shock, or death, and major adverse kidney events (MAKEs), defined as a composite of dialysis dependence, an eGFR below 15 mL/min/1.73 m², or death. Detailed codes for the outcomes are shown in the Supplemental Table. Outcomes were tracked from 3 months after the index date to a maximum of 5 years. This 3-month window after transplant serves to mitigate reverse causality, ensuring that outcomes are more reliably attributed to GLP-1 RA use. It also enhances data reliability by avoiding potential reverse etiologies or inconsistencies in immediate post-transplant records. To mitigate protopathic or ascertainment bias, patients with MACEs prior to the study period were excluded and PSM was repeated. Additionally, potential GLP-1 RA side effects were assessed for a comprehensive safety evaluation.
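The composite endpoints can be expressed as boolean combinations over event dates. The sketch below, again with hypothetical column names, flags MACE (stroke, AMI, cardiac arrest, cardiogenic shock, or death) and MAKE (dialysis dependence, eGFR below 15 mL/min/1.73 m², or death), taking the earliest qualifying event as the event time and censoring at last follow-up, with follow-up starting 3 months after transplant as described above.

```python
import pandas as pd

# Hypothetical event dates per patient (NaT = event never observed).
df = pd.DataFrame({
    "patient_id": [1, 2],
    "followup_start": pd.to_datetime(["2018-06-01", "2019-10-15"]),   # 3 months post-transplant
    "last_followup": pd.to_datetime(["2023-06-01", "2022-01-01"]),
    "death_date": pd.to_datetime([pd.NaT, "2021-05-02"]),
    "stroke_date": pd.to_datetime([pd.NaT, pd.NaT]),
    "ami_date": pd.to_datetime(["2020-02-10", pd.NaT]),
    "cardiac_arrest_or_shock_date": pd.to_datetime([pd.NaT, pd.NaT]),
    "dialysis_date": pd.to_datetime([pd.NaT, "2020-11-20"]),
    "egfr_below_15_date": pd.to_datetime([pd.NaT, pd.NaT]),
})

mace_cols = ["stroke_date", "ami_date", "cardiac_arrest_or_shock_date", "death_date"]
make_cols = ["dialysis_date", "egfr_below_15_date", "death_date"]

for name, cols in [("mace", mace_cols), ("make", make_cols)]:
    first_event = df[cols].min(axis=1)                    # earliest qualifying event, NaT if none
    df[f"{name}_event"] = first_event.notna()
    end = first_event.fillna(df["last_followup"])         # censor at last follow-up if no event
    df[f"{name}_years"] = (end - df["followup_start"]).dt.days / 365.25

print(df[["patient_id", "mace_event", "mace_years", "make_event", "make_years"]])
```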
Prespecified subgroup, sensitivity and specificity analyses

Subgroup analyses were conducted based on gender, age, body mass index (BMI), HbA1c, DM status (pre-existing or post-transplant), post-transplant eGFR, the presence of hypertension, proteinuria, heart failure, or obesity, enrollment time, prior history of GLP-1 RA use before transplant, and the concurrent use of medications, including ACEIs/ARBs, antidiabetic agents (insulin, metformin, dipeptidyl peptidase-4 inhibitors (DPP-4is), sodium-glucose cotransporter 2 inhibitors (SGLT2is)), and immunosuppressants (steroids, cyclosporine, and tacrolimus). GLP-1 RA ever users were defined as patients who used GLP-1 RAs within 3 months prior to transplant but discontinued their use afterward. In contrast, new users were those who had not used GLP-1 RAs before transplant and initiated use only after the transplant. Persistent users were identified as individuals who used GLP-1 RAs both before and after the transplant. To further investigate the robustness of our results, we performed sensitivity analyses using the Cox proportional hazards model with alternative covariates and varied cohort exclusion criteria. Specificity analyses were conducted to evaluate the individual components of the composite outcomes. To enhance the comprehensiveness of our study, we conducted risk analyses at 1 year and 3 years post-transplant. Additionally, we expanded our comparative analysis of outcomes between two groups of KTRs: those who used GLP-1 RAs in the first three months post-transplant and continued their use from 3 to 6 months, and those who did not use GLP-1 RAs in the first three months and remained non-users from 3 to 6 months, with outcomes tracked from 6 months post-transplant. A further specificity analysis compared the outcomes of GLP-1 RA users with those of patients receiving other second-line antihyperglycemic treatments, including thiazolidinediones (TZDs), DPP-4is, or sulfonylureas (SUs). Given the established cardiovascular and kidney benefits of SGLT2is for T2DM patients with CVDs, those at high risk, and those with CKD, along with recent cohort studies demonstrating their ability to enhance survival, reduce cardiovascular events, and preserve graft function in KTRs with DM, we conducted additional analyses incorporating SGLT2is. Specifically, we compared outcomes between patients who used both GLP-1 RAs and SGLT2is and those who did not receive either medication within the first 3 months post-transplant. To evaluate the impact of GLP-1 RAs on the metabolic profile, we analyzed serial changes in HbA1c, LDL, body weight, and SBP over different time periods.

Prespecified positive and negative controls

To evaluate the validity of our approach, we conducted analyses using both positive and negative outcome and exposure controls. We selected nausea, vomiting, and diarrhea, widely recognized as adverse effects of GLP-1 RAs, as positive outcome controls. Conversely, sunburn, herniated intervertebral discs, traffic accidents, and pneumonia were chosen as negative outcome controls. For exposure controls, SGLT2is were selected as the positive control, based on evidence from previous cohort studies demonstrating their cardiovascular and kidney benefits in KTRs with DM. Topical urea, not expected to influence outcomes, was used as the negative exposure control.

Landmark analysis

To address immortal time bias, we performed a landmark analysis by refining the cohort selection period, adjusting it from the initial 3 months post-transplant to specific time points within 2, 6, 9, and 12 months post-transplant. This approach ensured that the impact of GLP-1 RAs on outcomes remained consistent across different time intervals.
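A landmark analysis of the kind described above can be sketched as follows: for each landmark time, keep only patients who are still event-free and under observation at the landmark, fix exposure according to treatment received before the landmark, and restart the outcome clock at the landmark itself. Column names are hypothetical and the exact TriNetX implementation may differ; this is a conceptual sketch of the approach.

```python
import pandas as pd

def landmark_cohort(df: pd.DataFrame, landmark_months: int) -> pd.DataFrame:
    """Return a landmark cohort with exposure fixed at the landmark and follow-up restarted there.

    Expects hypothetical columns: transplant_date, first_glp1_rx_date,
    event_date (NaT if no event), last_followup.
    """
    landmark = df["transplant_date"] + pd.DateOffset(months=landmark_months)

    # Keep patients still under observation and event-free at the landmark.
    at_risk = (df["last_followup"] > landmark) & (
        df["event_date"].isna() | (df["event_date"] > landmark)
    )
    lm = df[at_risk].copy()
    lm["landmark_date"] = landmark[at_risk]

    # Exposure is defined by GLP-1 RA use *before* the landmark (no immortal person-time).
    lm["glp1_user"] = lm["first_glp1_rx_date"].notna() & (
        lm["first_glp1_rx_date"] <= lm["landmark_date"]
    )

    # Outcome clock starts at the landmark.
    end = lm["event_date"].fillna(lm["last_followup"])
    lm["event"] = lm["event_date"].notna()
    lm["years_from_landmark"] = (end - lm["landmark_date"]).dt.days / 365.25
    return lm

# Example usage: repeat the analysis at the 2-, 6-, 9-, and 12-month landmarks,
# where `records` is a hypothetical DataFrame with the columns listed above.
# for m in (2, 6, 9, 12):
#     cohort_m = landmark_cohort(records, m)
```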
Statistical analysis

A 1:1 PSM was performed using the aforementioned covariates, utilizing a greedy nearest-neighbor algorithm with a caliper of 0.1 pooled standard deviations to minimize confounding between the groups. The balance of baseline covariates between the matched populations was evaluated using standardized mean differences, with values below 0.1 indicating that a high degree of balance was achieved. Baseline characteristics are presented as means with standard deviations (SDs) for continuous variables and counts with percentages for categorical variables. The Kaplan-Meier method was utilized to estimate overall survival and event-free survival, with the log-rank test assessing statistically significant differences between the two groups. The Cox proportional hazards model was used to calculate adjusted hazard ratios (aHRs) with 95% confidence intervals (CIs) for outcomes associated with the use of GLP-1 RAs. The proportional hazards assumption was tested using the generalized Schoenfeld approach. Cases with missing outcome data due to loss to follow-up were excluded to prevent bias and inaccuracies arising from incomplete data. To evaluate the impact of unmeasured confounders on the observed relationship between treatment and outcomes, the E-value was used; a large E-value suggests that only very strong unmeasured confounding could negate the observed treatment-outcome association. All statistical analyses were conducted using TriNetX built-in functions and R software (version 4.4.0). A two-sided p-value < 0.05 was considered statistically significant.
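The matching and E-value steps can be illustrated on synthetic data as below. The propensity model, the greedy 1:1 nearest-neighbor matching with a caliper of 0.1 times the standard deviation of the propensity score, and VanderWeele's E-value formula for a protective hazard ratio are shown as a conceptual sketch; the caliper definition and matching order used by the TriNetX platform may differ in detail. The E-value helper reproduces a value close to the 4.51 reported for the mortality aHR of 0.39, with the small gap explained by rounding of the published hazard ratio.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_caliper_match(ps_treated, ps_control, caliper):
    """Greedy 1:1 nearest-neighbor matching on the propensity score with a caliper."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

def e_value(hr: float) -> float:
    """VanderWeele's E-value for a hazard ratio (protective HRs are inverted first)."""
    rr = 1.0 / hr if hr < 1 else hr
    return rr + np.sqrt(rr * (rr - 1.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                              # synthetic covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # synthetic treatment assignment

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
caliper = 0.1 * ps.std()                                   # simplified stand-in for 0.1 x pooled SD
pairs = greedy_caliper_match(ps[treated == 1], ps[treated == 0], caliper)

print(f"matched pairs: {len(pairs)}")
print(f"E-value for aHR 0.39: {e_value(0.39):.2f}")        # about 4.57 vs the reported 4.51
```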
Baseline characteristics of the study population

We identified 35,488 adult KTRs with T2DM, with a mean [SD] age of 57.7 [12.2] years and 57.7% being men. Among these, 3,465 (9.8%) received GLP-1 RAs within 3 months post-transplant (Fig. ). No patients receiving GLP-1 RAs were excluded due to loss to follow-up, while 8 patients (0.2%) who did not receive GLP-1 RAs were excluded for this reason (Supplemental Table ). Before PSM, significant differences were observed in various covariates: GLP-1 RA users had a smaller proportion of white individuals and were more likely to receive antihypertensives, other antidiabetic agents, and HMG-CoA reductase inhibitors. They exhibited higher HbA1c, higher eGFR, higher BMI, and lower total cholesterol/LDL values. This group also had a higher prevalence of comorbidities, including hypertension, dyslipidemia, heart failure, diabetic complications, overweight and obesity, chronic lower respiratory diseases, cystic kidney disease, and neoplasms. Following 1:1 PSM, 3,297 GLP-1 RA users were matched with an equal number of control patients. After PSM, all standardized differences for the covariates were less than 0.1, indicating that good balance was achieved (Table ).

Primary and secondary outcomes

After a median follow-up of 2.5 years for the whole cohort (interquartile range (IQR), 1.4–3.6 years), 93 patients (2.6%) in the GLP-1 RA user group and 319 patients (9.0%) in the non-user group experienced all-cause mortality. Use of GLP-1 RAs was associated with a substantially lower risk of all-cause mortality, with an aHR of 0.39 (95% CI 0.31–0.50, p < 0.001). Additionally, compared with non-users, GLP-1 RA users exhibited lower risks of MACEs (7.0% vs. 12.0%; aHR, 0.66; 95% CI 0.56–0.79, p < 0.001) and MAKEs (12.3% vs. 20.3%; aHR, 0.66; 95% CI 0.58–0.75, p < 0.001) (Figs. and , Supplemental Tables – ). There were no violations of the proportional hazards assumption (Schoenfeld test p = 0.761, 0.816, and 0.697, respectively). The E-values were notably high, at 4.51 for all-cause mortality, 2.38 for MACEs, and 2.00 for MAKEs, indicating that the observed associations are unlikely to be explained away by potential confounding (Supplemental Table ).

Subgroup, sensitivity and specificity analyses

Subgroup analyses demonstrated that the benefit of GLP-1 RAs on overall survival, MACEs, and MAKEs was broadly consistent across prespecified subgroups. Notably, these advantages persisted irrespective of baseline kidney function, HbA1c levels, BMI, the timing of DM diagnosis, comorbidities, and concurrent medications, including immunosuppressants (Fig. ). Among patients with pre-transplant diabetes, outcomes aligned with the main findings. To account for advancements in medical practice, we performed an additional analysis of patients enrolled before and after 2022; the benefits of GLP-1 RAs remained consistent across both time periods, unaffected by chronological changes. The consistent results from sensitivity analyses using the Cox proportional hazards model with alternative sets of covariates and various exclusion criteria confirmed the robustness of our findings (Supplemental Table ). Additionally, GLP-1 RA users consistently demonstrated a statistically significant reduction in the risk of cardiac arrest and cardiogenic shock (aHR = 0.30, p = 0.001), dialysis dependence (aHR = 0.49, p < 0.001), and eGFR less than 15 mL/min/1.73 m² (aHR = 0.77, p < 0.001).
While there was a numerical reduction in the incidence of MI (aHR, 0.88; 95% CI 0.66–1.18, p = 0.404) and stroke (aHR, 0.86; 95% CI 0.64–1.15, p = 0.299) among GLP-1 RA users, these differences did not reach statistical significance (Fig. ). Outcome analyses conducted at 1 year and 3 years post-transplant showed consistent results. A comparative analysis of outcomes between those who used GLP-1 RAs in the first 3 months post-transplant and continued from 3 to 6 months, versus those who did not use GLP-1 RAs during either period, yielded similar results (Supplemental Table ). Similarly, outcomes were consistent when comparing patients who used both GLP-1 RAs and SGLT2is with those who did not receive either medication within the first 3 months post-transplant (Supplemental Table ). Additionally, when comparing the outcomes of GLP-1 RA users with those receiving DPP-4is, TZDs, or SUs, the results remained consistent (Supplemental Table ). Serial analyses of the metabolic profile showed consistently high HbA1c levels and body weight loss among GLP-1 RA users, whereas LDL levels and SBP did not differ significantly (Supplemental Table ). Furthermore, landmark analysis, adjusting the cohort selection period to 2, 6, 9, and 12 months post-transplant, demonstrated consistent beneficial effects of GLP-1 RAs on all-cause mortality, MACEs, and MAKEs (Supplemental Table ).

Positive and negative outcome controls

As expected, the positive outcome controls confirmed that the use of GLP-1 RAs was associated with an increased incidence of nausea/vomiting (aHR, 1.21; 95% CI 1.02–1.45, p = 0.030) and diarrhea (aHR, 1.29; 95% CI 1.10–1.50, p = 0.002). Additionally, no association was observed between the use of GLP-1 RAs and the incidence of sunburn, herniated intervertebral discs, traffic accidents, or pneumonia. Further safety evaluations revealed that the risks of suicide, depression, hypoglycemia, retinopathy, and pancreatitis were not increased with the use of GLP-1 RAs (Fig. ).

Positive and negative exposure controls

We observed that SGLT2i users experienced substantial reductions in all-cause mortality (aHR, 0.44; 95% CI 0.33–0.60, p < 0.001), MACEs (aHR, 0.68; 95% CI 0.54–0.85, p < 0.001), and MAKEs (aHR, 0.62; 95% CI 0.53–0.73, p < 0.001) among T2DM patients after kidney transplant. In contrast, topical urea did not show any significant effects (Supplemental Table ).
In this study, we observed that only a small proportion (9.8%) of adult KTRs with T2DM were treated with GLP-1 RAs within 3 months after transplant. Our analysis demonstrated that the use of GLP-1 RAs was associated with a significantly decreased risk of all-cause mortality, MACEs, and MAKEs over a median follow-up period of 2.5 years. These benefits were consistently observed across diverse subgroups, regardless of obesity, baseline kidney function, or glucose control. Notably, persistent or new users of GLP-1 RAs exhibited better survival and kidney outcomes compared to ever users, with persistent users also showing superior MACE and MAKE outcomes. While gastrointestinal side effects like nausea and vomiting were more common, there was no increased risk of hypoglycemia, suicide, or pancreatitis, underscoring the safety of GLP-1 RAs in this population. These results support the efficacy and safety of GLP-1 RAs for managing T2DM in KTRs.

Our observations regarding the efficacy of GLP-1 RAs in enhancing survival and reducing cardiovascular events are concordant with evidence from several cardiovascular outcome trials (CVOTs) conducted in the general population with T2DM, as well as a recent cohort study focused on solid organ transplant recipients. Moreover, the high incidence of MACEs, which reached 12.0% among non-users during follow-up, highlights the elevated cardiovascular risk of KTRs with T2DM, underscoring the importance of incorporating GLP-1 RAs into DM care for this specific population. The cardioprotective effects of GLP-1 RAs are thought to arise from their pleiotropic actions. This is supported by the widespread expression of GLP-1 receptors outside the pancreas, such as in the brain, lungs, heart, kidneys, liver, nervous system, endothelial cells, and immune cells. Preclinical studies indicate that GLP-1 RAs can attenuate atherosclerosis by reducing vascular inflammation, lowering oxidative stress, and inhibiting the proliferation and activation of vascular smooth muscle cells. Additionally, these agents have been shown to decrease epicardial fat deposits, reduce cardiomyocyte apoptosis, optimize cardiac energetics, and enhance myocardial glucose oxidation. The consistently high HbA1c levels observed among GLP-1 RA users in our study further reinforce the view that their benefits extend beyond glucose lowering.
Secondary endpoints in CVOTs with GLP-1 RAs have also suggested potential kidney benefits, primarily through reductions in albuminuria, with some trials reporting a mitigation of eGFR decline . The recently published FLOW trial, which specifically targeted T2DM patients with CKD and used kidney-specific primary endpoints, has solidified the efficacy of GLP-1 RAs in slowing CKD progression . Regarding their use in KTRs with T2DM, data remain sparse. A meta-analysis of nine observational studies, involving 338 KTRs with a median follow-up time of 12 months, found that treatment with GLP-1 RAs was associated with a decrease in proteinuria but did not significantly alter eGFR levels . Notably, one study included in the meta-analysis, with a follow-up period of at least 2 years, showed that GLP-1 RAs were associated with a lower risk of MAKEs, and protocol biopsy results indicated that GLP-1 RAs may promote tubular cell regeneration in the kidney graft . The significant reduction in the risk of MAKEs observed in our study could be attributed to the relatively longer follow-up period and the larger number of participants. The gastrointestinal effects associated with GLP-1 RAs, such as delayed gastric emptying, nausea, and vomiting, raise concerns about potential interference with therapeutic levels of immunosuppressants, possibly leading to graft failure. Although we were unable to assess blood drug levels in the TriNetX database, the consistent reduction in MAKEs observed in the steroid, tacrolimus, and cyclosporine subgroups provides indirect reassurance of the safety of GLP-1 RAs in KTRs, regardless of the specific immunosuppressive regimen. These findings align with previous studies, which have shown that GLP-1 RAs do not significantly affect tacrolimus trough levels or lead to acute rejection . The precise mechanisms underlying the kidney-protective effects of GLP-1 RAs remain elusive but likely include direct renal actions such as inducing natriuresis through NHE3 inhibition in the proximal tubules, suppressing the intrarenal renin–angiotensin system, and ameliorating kidney ischemia/hypoxia . These direct effects are further supported by human studies demonstrating the wide expression of GLP-1 receptors in the proximal tubules, juxtaglomerular cells, hilar and intralobular arteries, and preglomerular vascular smooth muscle cells . Additionally, GLP-1 RAs have shown potential in mitigating inflammation in diabetic kidney disease by inhibiting angiotensin II signaling, downregulating the receptor for advanced glycation end products, attenuating myelopoiesis, and promoting M2 macrophage polarization in mouse models . To our knowledge, this study represents the largest cohort to date investigating the association between GLP-1 RAs and cardiovascular and kidney outcomes in KTRs with T2DM. Additional strengths of our study include the long follow-up period and the comprehensive validation of results through various methods, such as sensitivity and specificity analyses. However, this study has several limitations. First, as with other studies using electronic health databases, the analysis of the TriNetX database relies heavily on diagnosis codes, which may introduce both misclassification bias due to coding errors and ascertainment bias from the underrepresentation of mild cases. Additionally, we were unable to evaluate the exact reasons for drug prescriptions, medication switches, or adherence.
Nevertheless, the study employed an intention-to-treat design, and our analysis was structured to ensure consistent results, thereby reducing the potential for guarantee-time (immortal time) bias . The specificity tests revealed no between-group differences for the negative control outcomes, which helps to rule out selection bias arising from knowledge of an individual’s treatment assignment. Second, although we employed PSM to balance covariates between the treated and control groups, residual confounding could not be completely avoided. However, the high E-values from our analysis suggest that the likelihood of residual confounders explaining the observed associations is very low, thereby strengthening the plausibility of our findings. Additionally, we employed various medications and biochemical data as proxies for disease severity, helping to mitigate the inherent limitations of using electronic health records. Third, the built-in statistical platform in TriNetX restricted our ability to perform more refined analyses, a limitation that cannot be entirely eliminated despite the implementation of rigorous methodologies and variable validation strategies. Fourth, the aggregate nature of the data limited our capacity to explore the severity of gastrointestinal adverse effects and their association with drug discontinuation. Fifth, previous CVOTs have suggested that the beneficial effects of GLP-1 RAs are primarily associated with long-acting agents such as liraglutide, semaglutide, and dulaglutide, rather than short-acting agents like exenatide and lixisenatide . However, the lack of detailed data on specific GLP-1 RA types limited the scope of subgroup analyses. Similarly, the limited number of records for specific doses of GLP-1 RAs precluded an assessment of dose–response effects. Sixth, we were unable to evaluate changes in insulin resistance following the use of GLP-1 RAs due to the limited availability of data on insulin resistance indices in the TriNetX database. Seventh, the TriNetX dataset lacks detailed information on the causes of death, which limits our ability to evaluate kidney-specific or cardiovascular mortality. Finally, before PSM, GLP-1 RA users exhibited a higher prevalence of comorbidities and higher prescription rates for several medications. This imbalance could lead to potential misestimation of GLP-1 RA effects when generalized to all KTRs with T2DM. This study represents the largest cohort to date showing that GLP-1 RA use in KTRs with T2DM is associated with reduced risks of mortality, as well as adverse cardiovascular and kidney outcomes, compared to nonuse. The underutilization of GLP-1 RAs presents an opportunity to improve outcomes in this high-risk population without increased complications. Further RCTs are warranted to validate these findings and to identify the specific subgroups of KTRs who would benefit most from GLP-1 RA therapy.
Automated Volumetric Milling Area Planning for Acoustic Neuroma Surgery via Evolutionary Multi-Objective Optimization | 320c8ac3-7647-4ff8-89d3-49479d965a3b | 11768615 | Surgical Procedures, Operative[mh] | Establishing a surgical channel is a critical step in acoustic neuroma surgery. During this procedure, a portion of the skull must be removed via milling or drilling to expose the surgical target for subsequent interventions . The morphology of the removed skull area significantly impacts the surgeon’s ability to access the target region effectively in later stages of the operation. Additionally, the proximity of the milling site to the surrounding structures and the volume of bone to be excised are crucial factors in assessing potential postoperative complications. Consequently, preoperative planning of milling areas enables surgeons to gain a comprehensive understanding of the surgical procedure and assess the feasibility of the intervention, thereby promoting successful operations with minimal risk and potential harm . A surgical channel refers to the space created by milling part of the temporal bone, thereby exposing intracranial lesions. Robotic surgery for mastoidectomy has been developed to enhance the stability and precision in creating surgical channels . Additional surgical navigation techniques, such as image-guided surgery and augmented reality navigation, can further assist with the bone milling procedure . For these navigation techniques, preoperative milling area planning is increasingly crucial, serving as a spatial reference to guide milling operations. However, planning such a volumetric area is challenging due to the complexity of its spatial representation and the constraints imposed by the surrounding structures. For instance, the translabyrinthine approach is a transpetrosal presigmoid surgical route used for operative treatment. It is employed to quickly access the cochleovestibular nerve and treat tumors located in the lateral skull base . In this approach, the surgical channel must navigate around critical vessels and nerves , damage to which could lead to severe complications such as cerebrospinal fluid leaks and facial nerve dysfunction . The current automated planning algorithms under such constraints predominantly address cochlear implants and stereoelectroencephalography (SEEG) , where only a linear trajectory is required, leaving the planning of complex volumetric areas an unresolved issue. Practically, preoperative planning of milling areas largely depends on manual planning by surgeons, which takes 35 minutes on average and often lacks comprehensive consideration of the constructability of the planned milling area and the subsequent surgical procedures , limiting the effectiveness of preoperative planning in acoustic neuroma surgeries. To address the aforementioned challenges, this study proposes a set of methods for modeling volumetric milling areas and the surrounding spatial constraints to enable automatic optimization. We aim to achieve automated planning for milling areas while ensuring the safety, constructability, and accessibility of the surgical targets in subsequent operations. We first employ high-resolution local segmentation masks of the key surrounding structures and deformable registration to construct the spatial distribution of critical structures, serving as constraints for subsequent planning algorithms. 
By combining the spatial constraints imposed by key structures with the physical dimensions of the surgical tools, the maximum milling area is generated as the initial input for optimization. Subsequently, a series of deformation fields is designed to control the volumetric milling area, using the procedure of accessing the surgical target as the foundational model. This method simplifies the complex three-dimensional volumetric area into a set of continuous parameters, while the boundaries formed by adjacent structures are converted into numerical constraints on these parameters. Finally, target accessibility, potential risk of injury, and the constructability of the surgical channel are quantitatively evaluated, and an evolutionary multi-objective optimization algorithm is applied to automatically generate a surgical plan with optimal safety and feasibility. In detail, our method can generate a surgical plan in 5840.1 ± 279.9 s, reducing potential damage to the scala vestibuli by 29.8%, improving the milling boundary smoothness by 78.3%, and increasing target accessibility by 26.4%. This paper makes the following main contributions:
- A set of spatial modeling methods for parameterizing irregular volumetric milling areas;
- Evaluation methods for milling areas that are critical to the constructability of the surgical channel;
- The first automated planning algorithm that enables volumetric spatial planning within a feasible amount of time.
Currently, most surgical planning studies in neurosurgery emphasize trajectory planning, such as electrode insertion for deep brain stimulation and inner ear access for cochlear implantation . In these cases, the planning target can be simply represented by an entry point on the skull and a target point within the brain. This representation limits the target definition space and allows each trajectory to be straightforwardly evaluated, thereby facilitating the application of optimization algorithms to these planning problems. In contrast, defining an irregular volumetric area presents a far more complex challenge. The inability to accurately model the planning target not only makes the problem unsuitable for automatic optimization but also complicates the evaluation of the planned surgical channel. Therefore, current research primarily employs heuristic methods to generate milling area plans, avoiding the need to explicitly parameterize an irregular volume. Popovic et al. simplified this problem by neglecting the thickness of the skull, allowing them to project the three-dimensional contours of a tumor directly onto the adjacent skull surface to define the resection area. However, this method is unsuitable for temporal regions, where the bone thickness varies considerably and critical structures, such as vessels and nerves, are present. In a different approach, McBrayer et al. pre-planned surgical channels on an atlas with segmented critical structures. Deformable volume registration was then applied to map the surgical plan from the atlas onto the new CT data. However, this method overlooks how the planned milling area impacts the constructability of the surgical channel and subsequent surgical procedures. Although the milling area may be well planned in the atlas, the deformation process can reduce its feasibility in real-world operations. Aghdasi et al. and Rajesh et al. enhanced this method by creating multiple plans on the atlas. Unlike McBrayer et al.'s study, segmentation of the critical structures was also performed on the new CT data, with each milling plan evaluated in relation to the surrounding structures to identify the least risky option. Nonetheless, these approaches still lack personalized consideration of the surgical targets and a quantitative assessment of the plan's feasibility.
3.1. Problem Definition

Planning target: As illustrated in , surgeons must remove part of the skull $C_{mill}$ to expose the surgical target $T$ during acoustic neuroma surgery. This removal creates the surgical channel $C_{channel}$, composed of the milled skull, the internal cavities $C_{in}$, and the external space $C_{ex}$, defined as $C_{channel} = C_{ex} \cup C_{in} \cup C_{mill}$.

Spatial constraints: While the skull forms the definition space for milling operations, the surrounding critical structures impose certain limits on this area. Based on their clinical significance, the surrounding structures can be categorized into those that must remain intact, $H = \cup_{i=1}^{I} C_{h_i}$, such as the sinus dura and the facial nerve, and those that should be minimally damaged, $S = \cup_{j=1}^{J} C_{s_j}$, such as the scala vestibuli.

Evaluation: During the planning of the milling area, the potential risks $Q_k(C_{mill})$, $k \in K$, relevant to the design of the area need to be considered. Key evaluations include the difficulty of constructing the planned surgical channel $Q_c$, how difficult it is for the surgical tools to reach the surgical targets after bone removal $Q_a$, and how much injury to the structures $S$ is caused by the milling procedure $Q_i$. The comprehensive planning problem can thus be formulated as optimizing the milling area $C_{mill}$ while ensuring it does not intersect with the critical structures, $C_{mill} \cap C_{h_i} = \varnothing$, aiming to minimize the surgical risks $Q_k(C_{mill})$, $k = 1, \ldots, K$.

3.2. The Automated Milling Area Planning Pipeline

However, the extremely high complexity of defining volumetric milling areas can result in impractical optimization times. To address this, we propose the framework illustrated in for automated planning. Initially, a template-based approach was employed to segment the high-resolution risk structures $S$ and $H$ surrounding the milling areas. Subsequently, potential entry points $K$ on the skull surface and target points $T$ on the surgical target were identified. For each target $t \in T$, the maximum feasible milling area $C_{mill}^{(0)}(t)$ that contributes to the surgical procedure was calculated, adhering to the constraints of the critical structures $H$. This process generated a maximum potential milling region, which served as the boundary for optimization: $C_{mill}^{(0)} = \cup_{t \in T} C_{mill}^{(0)}(t)$. To control these volumetric areas, a set of deformation control functions $h_t(p_i^t)$ was designed. By projecting each $C_{mill}^{(0)}(t)$ onto a control plane and applying $h_t$, the volumetric milling area could be modulated using the parameters defined by these functions. This approach inherently transformed the complex spatial constraints into numerical boundaries for parameter adjustments. Lastly, we evaluated patient injury $Q_i$, constructability $Q_c$, and target accessibility $Q_a$ for the surgical channel under the parameter set $P = \{p_i^t\}$. Multi-objective optimization was then employed to identify the most suitable surgical plan.

(1)
$$
\begin{aligned}
& C_{mill} = \bigcup_{t \in T} h_t\!\left(C_{mill}^{(0)}(t)\right) \\
& \min \; Q_c(C_{mill}),\; Q_a(C_{mill}),\; Q_i(C_{mill}) \\
& \text{s.t.}\;\; P = \{\, p_i^t \in (0, 1],\; i = 1, \ldots, I,\; t = 1, \ldots, T \,\}
\end{aligned}
$$

The following part of this paper details the proposed methods.
In detail, Section 3.3 describes the method for critical structure segmentation, Section 3.4.1 the method for generating the maximum possible milling area, Section 3.4.2 the method for volumetric milling area parameterization, and Section 3.4.3 the milling area evaluations and the implementation of multi-objective optimization, while Section 3.5 introduces a set of methods for computational acceleration that enable automatic planning within a reasonable amount of time.

3.3. Critical Structure Segmentation

Segmentation of the critical structures surrounding the milling area, such as the sinus dura, the facial nerve, and the scala tympani, in preoperative CT scans is essential for establishing the spatial constraints that inform the milling area's design. However, this segmentation task is challenging for several reasons: the dimensions of certain critical structures approach the resolution limit of clinical CT devices (0.625 mm), soft tissues can be difficult to discern in CT images, and there is considerable variation in the sizes of different critical structures. These complexities pose significant challenges for deep-learning-based segmentation methods . Therefore, we employed the template-based segmentation pipeline illustrated in to achieve robust segmentation of the critical structures beyond the imaging resolution limitations. Specifically, two cranial CT templates were developed (see ): one encompassing the full cranial area, $F$, at a resolution similar to that used in standard clinical practice (0.625 mm × 0.625 mm × 0.625 mm), and another covering only the local region, $L$, around the lateral skull base with a finer spatial resolution (0.125 mm × 0.125 mm × 0.125 mm). The full cranial template $F$ served to establish the spatial connection between the target CT volumes and the high-resolution local template. To enhance the generalizability of template $F$, it was generated by registering and averaging multiple clinical CT volumes using the deformable registration method proposed in Avants et al.'s study, with mutual information (MI) employed as the optimization metric during registration. Concurrently, the local template $L$ was designed to provide a precise reference for accurate segmentation. To accomplish this, we utilized the high-resolution temporal bone segmentation dataset from Sieber et al.'s study, averaging it to form $L$. Moreover, to connect these two spaces, anatomical landmarks, including the stylomastoid foramen, the geniculate ganglion, the top of the head of the malleus, the footplate of the stapes, and the arcuate eminence, were annotated on both templates, represented as $M_F$ and $M_L$. For segmentation of a new CT volume $V$, the volume was first aligned with the full-head template $F$ through deformable registration to transfer the anatomical landmarks into the new volume space, yielding $M_V$. Subsequently, rigid-body registration between $M_L$ and $M_V$ was performed using singular value decomposition (SVD) to achieve an initial alignment between $L$ and $V$. This was followed by deformable registration between the CT volumes in $L$ and $V$, generating the non-rigid transformation $D$ mapping the high-resolution template $L$ to $V$. Finally, the deformation field was applied to transfer the segmentation labels $L_{seg}$ into the new volume space, $D(L_{seg})$, generating the spatial constraints $H$ and $S$ for subsequent planning.
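As an illustration of the SVD-based rigid initialization step (aligning the local-template landmarks $M_L$ to the transferred landmarks $M_V$), the sketch below implements the standard Kabsch least-squares solution with NumPy. It is a generic sketch of that step rather than the authors' implementation, and the variable names are placeholders.

```python
import numpy as np

def rigid_from_landmarks(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst
    landmarks (both N x 3 arrays), computed via SVD (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage with annotated landmark arrays M_L and M_V (both N x 3):
# R, t = rigid_from_landmarks(M_L, M_V); M_L_aligned = M_L @ R.T + t
```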
This approach ensures that the segmented critical structures retain the high resolution of the local template, while structural integrity and smoothness are maintained through the deformation-based method.

3.4. Automated Milling Area Planning

3.4.1. Maximum Milling Area Generation

To define a boundary for milling area planning and establish a foundation for quantifying the spatial constraints posed by critical structures, a maximum permissible milling area was first generated based on the structures $H$ that must remain intact during surgery. Prior to generating this area, the surgical target $T$ was manually annotated, while potential entry points $K$ were automatically identified from the skull surface near the surgical target. For the construction of a feasible surgical channel, each milled voxel $s \in C_{mill}$ should contribute to the surgical procedures following channel creation. Modeling the surgical tool as a cylinder for simplicity, $s$ should satisfy the condition that there exist an entry point $k \in K$ and a target point $t \in T$ such that the cylindrical region connecting $k$ and $t$ with radius $r$ encompasses $s$:

(2) $d(s, l(k, t)) < r$

where $l$ denotes the linear trajectory between $k$ and $t$, and $d(s, l)$ represents the distance from the spatial point $s$ to the trajectory $l$. Consequently, the maximum permissible milling area can be heuristically represented as the union of all potential trajectories of the surgical tools that avoid interaction with the critical structures:

(3) $C_{mill}^{(0)} = \bigcup_{k \in K,\, t \in T} I(k, t) \cdot C(k, t)$

Here, $I(k, t)$ indicates whether the surgical tool can access $t$ through $k$ without damaging any critical structure:

(4)
$$
I(k, t) =
\begin{cases}
1, & d(p_n^i, l(k, t)) > r + d_{risk}^i, \;\; \forall p_n^i \in C_{h_i}, \; \forall i \\
0, & \text{otherwise}
\end{cases}
$$

where $d_{risk}^i$ denotes the empirical minimum safe distance that the surgical tools should maintain from each critical structure. In our implementation, a safe distance of 0.5 mm was enforced for the scala tympani, the scala vestibuli, the malleus, the incus, the stapes, the chorda tympani, the tympanic drum, and the carotis, while 1.5 mm was set for the sinus dura and 2.5 mm for the facial nerve. Meanwhile, $C(k, t)$ in Equation (3) represents the cylindrical spatial region covered by the surgical tool:

(5) $C(k, t) = \{\, s \mid d(s, l(k, t)) < r,\; \vec{ks} \cdot \vec{kt} > 0,\; \vec{ts} \cdot \vec{tk} > 0 \,\}$

Although this method effectively generates the maximum permissible milling area while ensuring the validity of each milled voxel, the volumetric region is controlled by a set of unordered point pairs $\{(k, t)\}$, which is unsuitable for further refinement. Therefore, we further categorize $\{(k, t)\}$ based on each spatial element $t$ within the surgical target:

(6) $C_{mill}^{(0)} = \bigcup_{t \in T} C_{mill}^{(0)}(t), \quad C_{mill}^{(0)}(t) = \bigcup_{k \in K} I(k, t) \cdot C(k, t)$

Because the critical structures are continuous, all feasible entry points $k$ for a single target form a continuous distribution $K(t)$ on the skull surface. Therefore, the maximum milling area can be expressed as the set of each spatial element in the surgical target paired with all feasible entry points relevant to that target, namely $\{(t, K(t)) \mid t \in T\}$. The continuity of $K(t)$ makes it eligible for parameterization.
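A minimal sketch of the feasibility test $I(k, t)$ is shown below: the tool axis from $k$ to $t$ is sampled and the clearance $r + d_{risk}$ is checked against a precomputed signed distance field for each keep-intact structure (the SDF acceleration described in Section 3.5). The array layouts, the shared voxel units of the 0.25 mm grid, and the sampling density are illustrative assumptions, and sampling the axis only approximates the exact point-to-line distance of Equation (4).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def trajectory_is_safe(k, t, sdfs, r, d_risk):
    """Approximate I(k, t): True if the tool axis from entry k to target t keeps a
    clearance of r + d_risk[name] from every keep-intact structure. Points, radii,
    and SDF values are assumed to share the voxel units of the 0.25 mm grid;
    `sdfs` maps structure name -> signed distance field (positive outside)."""
    k, t = np.asarray(k, float), np.asarray(t, float)
    n = int(np.ceil(np.linalg.norm(t - k))) + 2        # roughly one sample per voxel
    pts = np.linspace(k, t, n)                         # (n, 3) samples along the axis
    for name, sdf in sdfs.items():
        dist = map_coordinates(sdf, pts.T, order=1, mode="nearest")
        if np.any(dist <= r + d_risk[name]):
            return False
    return True
```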
Moreover, to ensure accuracy in milling area planning, the preoperative CT scans, critical structure segmentations, entry points, and target points were uniformly resampled to a spatial resolution of 0.25 mm prior to maximum milling area generation.

3.4.2. Volumetric Area Parameterization

The generation of the maximum milling area described above organized the irregular volumetric region into target points and the corresponding feasible entry points for each target, represented by $\{(t, K(t)) \mid t \in T\}$. However, the irregular skull surface on which $K(t)$ is distributed presents challenges for complete parameterization. To address this issue, we selected a control plane $S$ parallel to the approach direction of the surgical operations (the A-S plane in RAS space). Each connection between $t$ and $k$, where $k \in K(t)$, was then extended along the vector $\vec{tk}$ to intersect $S$ at a point $g$. The generated point $g$ serves as a unique substitute for $k$, transforming the irregular distribution on the skull surface into an area selection function $B_{t,S}$ on the plane $S$:

(7) $B_{t,S}(x, y) = 1 \;\Leftrightarrow\; I(S(x, y), t) > 0$

where $S(x, y)$ represents the 3D spatial position of the point $(x, y)$ on the control plane. Since $K(t)$ is continuously distributed over the skull surface, the corresponding $B_{t,S}(x, y)$ forms a single connected component on the control plane. This property ensures that each point $k$ within a subregion $B'$ of $B_{t,S}$ (i.e., $B'(x, y) \le B_{t,S}(x, y)$ for all $x, y$) meets the spatial constraints imposed by the structures $H$. To regulate the area $B_{t,S}$ for optimization purposes, we developed a control function that transforms $B_{t,S}$ into a subset of itself. For convenience, $B_{t,S}(x, y)$ was first converted into the polar-coordinate form $B_{t,S}(\rho, \theta)$, centered at its mass center. Subsequently, a control function $\sigma_t(\theta)$, $\theta \in (0, 2\pi]$, $\sigma_t(\theta) \in (0, 1]$, was applied to modify $B_{t,S}(\rho, \theta)$ (see ):

(8) $h_t(B_{t,S})(\rho, \theta) = B_{t,S}(\rho / \sigma_t(\theta), \theta)$

where $h(\cdot)$ represents the deformation operation. Given the distribution of the critical structures, the shapes of $B_{t,S}$ are predominantly convex; hence, the deformed area $h_t(B_{t,S})$ remains a subset of $B_{t,S}$ in most cases. Nevertheless, a subset restriction was applied to the deformation process to guarantee adherence to the spatial constraints:

(9) $h_t'(B_{t,S})(\rho, \theta) = B_{t,S}(\rho / \sigma_t(\theta), \theta) \cdot B_{t,S}(\rho, \theta)$

Finally, the proposed function $\sigma_t(\theta)$ was parameterized using $N$ control points $\{(\theta_i, \sigma_i)_t \mid \theta_i = i/N \cdot 2\pi\}$ uniformly distributed along $\theta$. The value of $\sigma_t$ at each $\theta$ was then determined through spline interpolation of the surrounding control points and constrained to the range $(0, 1]$. This approach allows for complete parameterization of the milling area:

(10) $C_{mill}^{(opt)}(\{(\theta_i, \sigma_i)_t\}) = \bigcup_{t \in T} \bigcup_{g \in S} B_{t,S}(\rho / \sigma_t(\theta), \theta) \cdot C(k(g), t)$

In this way, the entire volumetric area can be fully governed by the parameter set $\{(\theta_i, \sigma_i)_t\}$, consisting of $N \cdot N_T$ parameters within the range $(0, 1]$, each with an initial value of 1 for optimization. Here, $N$ denotes the number of control points for a single control function, and $N_T$ represents the number of target points considered during optimization.
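The sketch below illustrates the radial deformation $h_t'$ of Equations (8)–(9) on a binary control-plane mask: a periodic spline through $N$ control values $\sigma_i \in (0, 1]$ is evaluated at each pixel's polar angle, and the radial coordinate is rescaled accordingly. The grid handling, interpolation scheme, and nearest-neighbour resampling are illustrative choices rather than the authors' exact implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def deform_region(B: np.ndarray, sigma_ctrl) -> np.ndarray:
    """Apply h'_t to a 2D boolean region B using N control values sigma_ctrl
    (each in (0, 1]) placed at angles theta_i = 2*pi*i/N around the mass center."""
    N = len(sigma_ctrl)
    theta_i = 2 * np.pi * np.arange(N + 1) / N
    spline = CubicSpline(theta_i, np.r_[sigma_ctrl, sigma_ctrl[:1]], bc_type="periodic")
    h, w = B.shape
    cy, cx = np.array(np.nonzero(B)).mean(axis=1)          # mass center of the region
    yy, xx = np.mgrid[0:h, 0:w]
    rho = np.hypot(yy - cy, xx - cx)
    theta = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi)
    sigma = np.clip(spline(theta), 1e-3, 1.0)
    # B'(rho, theta) = B(rho / sigma(theta), theta): sample B at the scaled radius
    src_y, src_x = cy + (rho / sigma) * np.sin(theta), cx + (rho / sigma) * np.cos(theta)
    inside = (src_y > -0.5) & (src_y < h - 0.5) & (src_x > -0.5) & (src_x < w - 0.5)
    out = np.zeros_like(B, dtype=bool)
    out[inside] = B[np.round(src_y[inside]).astype(int), np.round(src_x[inside]).astype(int)]
    return out & B                                          # subset restriction of Eq. (9)
```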
Furthermore, the proposed modeling method, together with initialization using the maximum milling area, transforms the complex spatial constraints imposed by the surrounding critical structures into a numerical constraint on the parameters.

3.4.3. Milling Area Evaluation and Optimization

To achieve the optimal milling plan, we implemented a series of quantitative assessments evaluating the feasibility of surgical channels under different parameter settings, including the accessibility of the surgical target after channel construction $Q_a$, the potential injury to bone and the surrounding structures $Q_i$, and the complexity of constructing the surgical channel $Q_c$.

Surgical target accessibility $Q_a$: In Mo's study , a series of evaluation methods for surgical channel construction was proposed, including an intrinsic measure of spatial accessibility $\mu_a(C)$:

(11) $\mu_a(C) = \frac{1}{V_T} \sum_{\delta_t \in T} A_{\delta_t}^{C}$

Here, $A_{\delta_t}^{C}$ represents the difficulty of accessing a specific target $\delta_t$ through the constructed surgical channel $C$. We therefore adopted this as the metric for surgical target accessibility:

(12) $Q_a(C_{mill}) = -\mu_a(C_{mill})$

Injury $Q_i$: During surgical channel construction, minimizing the damage to both the skull and the surrounding critical structures $S$ is crucial. Therefore, the proportion of damaged volume within each tissue was adopted as a measure of the injury sustained during bone milling:

(13) $Q_i(C_{mill}) = \lambda_{bone} \cdot \dfrac{V_{C_{mill}}}{V_{C_{bone}}} + \sum_{j=1}^{J} \lambda_j \cdot \dfrac{V_{C_{mill} \cap C_{s_j}}}{V_{C_{s_j}}}, \qquad \lambda_{bone} + \sum_{j=1}^{J} \lambda_j = 1$

where $\lambda_j$ and $\lambda_{bone}$ denote the weights assigned to the injury of the different critical structures and the bone, respectively.

Surgical channel constructability $Q_c$: In Mo's study , a metric was also proposed for evaluating the global spatial compactness $\mu_{gsc}(C)$ of the constructed surgical channel:

(14) $\mu_{gsc}(C) = \sum_{\delta_a \in C} \sum_{\delta_b \in C} \dfrac{1}{d_{\delta_a, \delta_b}}, \qquad \delta_a \neq \delta_b$

where $\delta_a$ and $\delta_b$ represent distinct spatial elements within the surgical channel $C$, and $d_{\delta_a, \delta_b}$ is the Euclidean distance between these locations; if an obstacle exists between $\delta_a$ and $\delta_b$, $d_{\delta_a, \delta_b}$ is set to $\infty$. Generally, $\mu_{gsc}(C)$ indicates the compactness of the surgical channel. A more compact channel offers greater flexibility for adjusting the milling tool's orientation during construction, facilitating easier channel creation. Besides channel compactness, the boundary smoothness also affects the feasibility of accurately constructing the channel. Balasubramanian et al.'s study introduced the spectral arc length $\mu_{sa}$ for smoothness assessment. However, this metric is less suitable for three-dimensional boundaries. Given that the surgical channel is primarily defined by the target points and their associated entry points, the contours of the feasible entry points on the control plane $B_{t,S}$ determine the smoothness of the entire channel boundary. Thus, this evaluation can be conducted on the two-dimensional control plane:

(15) $\mu_{sa}(C) = \mu_{sa}\!\left( L'\!\left( 1 - \prod_{t \in T} \left(1 - B_{t,S}\right) \right) \right)$

where $L'(\cdot)$ represents the first derivative of the contour of a two-dimensional region. By combining these two evaluations, the constructability of the planned surgical channel can be determined:

(16) $Q_c(C_{mill}) = -\,\mu_{gsc}(C_{mill}) / \mu_{gsc}(C_{mill}^{(0)}) \cdot e^{\mu_{sa}(C_{mill})}$

Subsequently, the NSGA-III genetic optimization algorithm was employed to iteratively refine the milling area, aiming for enhanced accessibility, safety, and constructability.
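A skeleton of how such a three-objective search can be set up with pymoo's NSGA-III implementation is shown below. The evaluate_milling_area callback, the number of parameters, the reference-direction partitioning, and the generation budget are assumptions made for illustration; the text does not specify these settings.

```python
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

class MillingPlanProblem(ElementwiseProblem):
    """Each decision vector holds the sigma control values in (0, 1];
    evaluate_fn(x) must return the tuple (Q_a, Q_i, Q_c), all to be minimized."""
    def __init__(self, n_params, evaluate_fn):
        super().__init__(n_var=n_params, n_obj=3, xl=1e-3, xu=1.0)
        self._evaluate_fn = evaluate_fn

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.asarray(self._evaluate_fn(x), dtype=float)

ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
algorithm = NSGA3(ref_dirs=ref_dirs, pop_size=len(ref_dirs))
# res = minimize(MillingPlanProblem(160, evaluate_milling_area), algorithm,
#                ("n_gen", 100), seed=1, verbose=True)
# res.X holds the Pareto-optimal parameter sets, res.F the objective values.
```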
Two solutions were selected from the Pareto frontier using the compromise programming algorithm and the pseudo-weight algorithm . Finally, the solution with the maximum $\mu_{gsc}(C)$ was chosen as the final result.

3.5. Automated Planning Acceleration

Although the proposed volumetric area modeling methods greatly simplify the definition space of the target milling area and make it compatible with optimization algorithms, the need to iterate over the entry points $K$ and surgical targets $T$, along with calculating the distances between the surgical tools and the three-dimensional critical structures at high spatial resolution, results in substantial time consumption. This section therefore introduces several methods (see ) aimed at accelerating the automated planning procedure.

Signed distance field (SDF) of critical structures: In generating the maximum milling area, substantial computation was required to determine the distances between the surgical tools and the surrounding critical structures. Due to the high resolution of the critical structures and the dense spatial distribution of the distance queries, neither point-by-point iteration over the structures $H$ nor k-d tree searches across the tool trajectories proved efficient enough for this task. To address this, three-dimensional signed distance fields $\{u_{sdf}^{(i)}\}$ were precomputed for each structure. In subsequent calculations, SDF lookups replaced the time-intensive nearest-neighbour and distance calculations, and GPU acceleration was utilized to enhance efficiency.

Early termination strategy for the traversal process: When calculating the valid entry points $K(t)$ for each target point, the contribution of a new effective entry point $k \in K$ to the surgical channel $C_{mill}^{(0)}(t)$ gradually decreases over the course of the iteration. Moreover, the physical size of the surgical tool is much larger than the spatial resolution of the entry points. Therefore, the contribution of each newly traversed entry point $k$ was calculated as:

(17) $G_t(k) = \dfrac{C_{mill,t}(\{k\}_{last} + k) - C_{mill,t}(\{k\}_{last})}{d(k, t) \cdot \pi r^2}$

where $C_{mill,t}(\cdot)$ denotes the volume of the milling area for target $t$ generated from the given set of entry points and $\{k\}_{last}$ is the set of previously traversed entry points. When $G_t$ consistently falls below $5 \times 10^{-4}$, it is assumed that the maximum surgical channel has nearly been established, and the iteration process is terminated. After projecting $K(t)$ onto the control plane to obtain $B_{t,S}$, the pixelized effective area was processed with a morphological closing operation using a kernel of physical size $r$ to ensure continuity and integrity.

Control of the target area's resolution and effective target extraction: During surgery, only the surface of the target needs to be exposed to allow access to the target, and the flexibility of the surgical target allows surgeons to reach the full target even with partial surface exposure. Meanwhile, a large number of target points increases the number of optimization parameters, slowing the convergence process. Therefore, the target points were filtered to retain only the effective points on the surface. First, target points that were not oriented toward the approach direction were excluded. For each target point $t \in T$, the normal vector at this point, $n_t$, was estimated. For any potential entry point $k \in K$, the angle between the normal and the tool trajectory was calculated as $\theta(t, k) = \arccos(n_t \cdot \vec{tk})$. The count of entry points $k$ satisfying $\theta(t, k) < \pi/2$ was recorded as $r_t$. A target point $t$ was deemed effective if it could be accessed from sufficiently many entry points: $r_t > \mathrm{mean}(r)$ and $r_t > 0.05 \times \mathrm{Num}(K)$.
The resulting effective target points form $T^{(1)}$. After that, the accessibility of each $t \in T^{(1)}$ was further verified under the constraints imposed by $H$. To expedite this process, $K$ was resampled with a spatial resolution of $2r$, and the iteration terminated once any valid $k$ was identified. This step further refined the target points into $T^{(2)}$. Given the convexity of the surgical target and the spatial distribution of the critical structures, $T^{(2)}$ is fully accessible if all the points on its boundary are reachable. Therefore, the target area $T^{(2)}$ was further compressed to its contour line $T^{(3)}$. Finally, $T^{(3)}$ was arranged in clockwise order along the boundary of $T^{(2)}$ as a closed curve and downsampled to $T^{(4)}$ using $r$ as the spatial interval to ensure a partial overlap between adjacent access trajectories. By refining the target points in this way, approximately 20 target points remained for subsequent optimization, while the validity of the planned surgical channel could still be ensured. With 8 parameters per control function $\sigma_t$, only about 160 parameters were required to control the irregular volumetric milling area.
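To make the first acceleration strategy above concrete, the sketch below shows a common way to precompute a signed distance field from a binary structure mask using SciPy's Euclidean distance transform. The spacing value and the label_map usage in the comment are illustrative assumptions rather than details from the text.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask: np.ndarray, spacing=(0.25, 0.25, 0.25)) -> np.ndarray:
    """Signed distance field of a binary structure mask (True inside): positive
    outside the structure, negative inside, in the physical units of `spacing`.
    Precomputing these fields turns every clearance query into an array lookup."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask, sampling=spacing)   # distance to the structure
    inside = distance_transform_edt(mask, sampling=spacing)     # depth within the structure
    return outside - inside

# e.g. one field per keep-intact structure:
# u_sdf = {name: signed_distance_field(seg == label) for name, label in label_map.items()}
```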
Planning target: As illustrated in , surgeons must remove part of the skull C m i l l to expose the surgical target T during acoustic neuroma surgery. This removal creates the surgical channel C c h a n n e l , composed of the milled skull, the internal cavities C i n , and external space C e x , defined as C c h a n n e l = C e x ∪ C i n ∪ C m i l l . Spatial constraints: While the skull forms the definition space for milling operations, the surrounding critical structures impose certain limits on this area. Based on their clinical significance, the surrounding structures can be categorized into those that must remain intact H = ∪ i = 1 I C h i , such as the sinus dura and the facial nerve, and those that should be minimally damaged S = ∪ j = 1 J C s j , such as the scala vestibuli. Evaluation: During the planning of the milling area, the potential risks Q k ( C m i l l ) , k ∈ K relevant to the design of the area need to be considered. Key evaluations include the difficulty of constructing the planned surgical channel Q c , how difficult it is for the surgical tools to reach the surgery targets after bone removal Q a , and how much injury to the structures S is caused by the milling procedure Q i . The comprehensive planning problem can thus be formulated as optimizing the milling area C m i l l while ensuring it does not intersect with critical structures C m i l l ∩ C h i = ⌀ , aiming to minimize surgical risks Q k ( C m i l l ) , k = 1 , … , K .
However, the extremely high complexity of defining volumetric milling areas can result in impractical optimization times. To address this, we propose the framework illustrated in for automated planning. Initially, a template-based approach was employed to segment the high-resolution risk structures S and H surrounding the milling areas. Subsequently, potential entry points on the skull surface K and target points T on the surgical target were identified. For each target t ∈ T , the maximum feasible milling area C m i l l ( 0 ) ( t ) that contributes to the surgical procedure was calculated, adhering to the constraints of critical structures H . This process generated a maximum potential milling region, which served as the boundary for optimization: C m i l l ( 0 ) = ∪ t ∈ T C m i l l ( 0 ) ( t ) . To control these volumetric areas, a set of deformation control functions h t ( p i t ) was designed. By projecting each C m i l l ( 0 ) ( t ) onto a control plane and applying h t , the volumetric milling area could be modulated using the parameters defined by these functions. This approach inherently transformed the complex spatial constraints into numerical boundaries for parameter adjustments. Lastly, we evaluated patient injury Q i , constructability Q c , and target accessibility Q a for the surgical channel under the parameter set P = p i t . Multi-objective optimization was then employed to identify the most suitable surgical plan. (1) C m i l l = ⋃ t ∈ T h t ( C m i l l ( 0 ) ( t ) ) min Q c ( C m i l l ) , Q a ( C m i l l ) , Q i ( C m i l l ) s . t . P = { p i t ∈ ( 0 , 1 ] , i = 1 , … , I , t = 1 , … , T } The following part of this paper details the methods proposed in this paper. In detail, includes the method for critical structure segmentation, includes the method for generating the maximum possible milling area, includes the method for volumetric milling area parameterization, and includes the milling area evaluations and the implementation of multi-objective optimization, while includes a set of methods for computing acceleration to enable automatic planning within a reasonable amount of time.
Segmentation of the critical structures, such as the sinus dura, the facial nerve, and the scala tympani, surrounding the milling area in preoperative CT scans is essential for establishing the spatial constraints that inform the milling area’s design. However, this segmentation task is challenging due to several factors: the dimensions of certain critical structures approach the resolution limit of clinical CT devices (0.625 mm), soft tissues can be difficult to discern in CT images, and there is considerable variation in the sizes of different critical structures. These complexities pose significant challenges for deep-learning-based segmentation methods . Therefore, we employed the template-based segmentation pipeline illustrated in to achieve robust segmentation of the critical structures beyond the imaging resolution limitations. Specifically, two cranial CT templates were developed (see ): one encompassing the full cranial area, F , at a resolution similar to that used in standard clinical practice (0.625 mm × 0.625 mm × 0.625 mm), and another covering only the local region, L , around the lateral skull base area with a finer spatial resolution (0.125 mm × 0.125 mm × 0.125 mm). The full cranial template F served to establish the spatial connection between the target CT volumes and the high-resolution local template. To enhance the generalizability of template F , F was generated by registering and averaging multiple clinical CT volumes using the deformable registration method proposed in Avants et al. ’s study. Mutual information (MI) was employed as the optimization metric during the registration process. Concurrently, the local template L was designed to provide a precise reference for accurate segmentation. To accomplish this, we utilized the high-resolution temporal bone segmentation dataset from Sieber et al. ’s study, averaging it to form L . Moreover, to connect these two different spaces, anatomical landmarks, including the stylomastoid foramen, the geniculate ganglion, the top of the head of the malleus, the footplate of the stapes, and the arcuate eminence, were annotated on both templates, represented as M F and M L . For segmentation of a new CT volume V , it was first aligned with the full-head template F through deformable registration to transfer the anatomical landmarks into the new volume space, namely M V . Subsequently, rigid body registration between M L and M V was performed using Singular Value Decomposition (SVD) to achieve an initial alignment between L and V . This was followed by deformable registration between the CT volumes in L and V , generating the non-rigid transformation D mapping the high-resolution template L to V . Finally, the deformation field was applied to transfer the segmentation labels L s e g into the new volume space D ( L s e g ) , generating the spatial constraints H , S for subsequent planning. This approach ensures that the segmented critical structures retain the high resolution of the local template, while structural integrity and smoothness are maintained through the deformation-based method.
3.4.1. Maximum Milling Area Generation To define a boundary for milling area planning and establish a foundation for quantifying the spatial constraints posed by critical structures, a maximum permissible milling area was first generated based on the structures that must remain intact during surgery H . Prior to generating this area, the surgical target T was manually annotated, while potential entry points K were automatically identified from the skull surface near the surgical target. For the construction of a feasible surgical channel, each milled voxel s ∈ C m i l l should contribute to the surgical procedures following channel creation. Simplifying this by modeling the surgical tool as a cylinder, s should meet the condition that an entry point k ∈ K and a target point t ∈ T exist such that the cylindrical region connecting k and t with the radius r encompasses s : (2) d ( s , l ( k , s ) ) < r where l denotes the linear trajectory between k and t , and d ( s , l ) represents the distance from the spatial point s to the trajectory l . Consequently, the maximum permissible milling area can be heuristically represented as the union of all potential trajectories of the surgical tools that avoid interaction with the critical structures: (3) C m i l l ( 0 ) = ⋃ k ∈ K , t ∈ T I ( k , t ) · C ( k , t ) Here, I ( k , t ) represents whether the surgical tool will damage any critical structure when accessing t though s : (4) I ( k , t ) = 1 d ( p n i , l ( k , t ) ) > r + d r i s k i , ∀ p n i ∈ C h i , ∀ i 0 e l s e where d r i s k i denotes the empirically minimum safe distance that the surgical tools should maintain from each critical structure. In our implementation, a safe distance of 0.5 mm was enforced for the scala tympani, the scala vestibuli, the malleus, the incus, the stapes, the chorda tympani, the tympanic drum, and the carotis, while 1.5 mm was set for the sinus dura, and 2.5 mm was set for the facial nerve. Meanwhile, C ( k , t ) in Equation represents the cylindrical spatial area covered by the surgical tool: (5) C ( k , t ) = { s ∣ d ( s , l ( k , t ) ) < r , k s → · k t → > 0 , t s → · t k → > 0 } Although this method effectively generates the maximum permissible milling area while ensuring the validity of each milled voxel, the volumetric region is controlled by a set of unordered point pairs, { ( k , t ) } , which is unsuitable for further refinement. Therefore, we categorize { ( k , t ) } further based on each spatial element t within the surgical target: (6) C m i l l ( 0 ) = ⋃ t ∈ T C m i l l ( 0 ) ( t ) , C m i l l ( 0 ) ( t ) = ⋃ k ∈ K I ( k , t ) · C ( k , t ) Because the critical structures are continuous, all feasible entry points k for a single target form a continuous distribution K ( t ) on the skull surface. Therefore, the maximum milling area can be expressed as the set of each spatial element in the surgical target paired with all feasible entry points relevant to that target, namely { ( t , K ( t ) ) ∣ t ∈ T } . The continuity of K ( t ) makes it eligible for parameterization. Moreover, to ensure accuracy in milling area planning, the preoperative CT scans, critical structure segmentations, entry points, and target points were uniformly resampled to a spatial resolution of 0.25 mm prior to maximum milling area generation. 3.4.2. 
Volumetric Area Parameterization The previous generation of the maximum milling area organized the irregular volumetric region into target points and corresponding feasible entry points for each target, represented by { ( t , K ( t ) ) ∣ t ∈ T } . However, the irregular skull surface on which K ( t ) is distributed presents challenges for complete parameterization. To address this issue, we selected a control plane S parallel to the approach direction of the surgical operations (the A-S plane in RAS space). Each connection between t and k , where k ∈ K ( t ) , was then extended along the vector t k → to intersect with S at point g . The generated point g then serves as a unique substitute for k , transforming the irregular distribution on the skull surface into a function for area selection B t , S on the plane S : (7) B t , S ( x , y ) = 1 ⇔ I ( S ( x , y ) , t n ) > 0 where s ( x , y ) represents the 3D spatial position of a point ( x , y ) on the control plane. Since K ( t ) is continuously distributed over the skull surface, the corresponding B t , S ( x , y ) forms a single connected component on the control plane. This property ensures that each point k within a subregion B ′ of B t , S ( B ′ ∣ B ′ ( x , y ) < B t , S ( x , y ) , ∀ x , y ) meets the spatial constraints imposed by structure H . To regulate the area B t , S for optimization purposes, we developed a control function to transform B t , S into a subset. For convenience, B t , S ( x , y ) was initially converted into the polar coordinate form B t , S ( ρ , θ ) , centered at its mass center. Subsequently, a control function σ t ( θ ) , θ ∈ ( 0 , 2 π ] , σ t ( θ ) ∈ ( 0 , 1 ] was applied to modify B t , S ( ρ , θ ) (see ): (8) h t ( B t , S ) ( ρ , θ ) = B t , S ( ρ / σ t ( θ ) , θ ) where h ( · ) represents the deformation operation. Given the distribution of the critical structures, the shapes of B t , S are predominantly convex. Hence, the deformed area h ( B t , S ) generally remains a subset of B t , S in most cases. Nevertheless, a subset restriction was applied to the deformation process to ensure adherence to the spatial constraints: (9) h t ′ ( B t , S ) ( ρ , θ ) = B t , S ( ρ / σ t ( θ ) , θ ) · B t , S ( ρ , θ ) Finally, the proposed function σ t ( θ ) was parameterized using N control points { ( θ i , σ i ) t ∣ θ i = i / N · 2 π } uniformly distributed along θ . The value of σ t at each θ was then determined through spline interpolation of the surrounding control points and constrained to the range ( 0 , 1 ] . This approach allows for complete parameterization of the milling area: (10) C m i l l ( o p t ) ( { ( θ i , ρ i ) t } ) = ⋃ t ∈ T ⋃ k ∈ S B t , S ( ρ / σ t ( θ ) , θ ) · C ( k ( g ) , t ) In this way, the entire volumetric area can be fully governed by the parameter set { ( θ i , ρ i ) t } , consisting of N · N T parameters within the range ( 0 , 1 ] , with an initial value of 1 for optimization. Here, N denotes the number of control points for a single control function, and N T represents the number of target points considered during optimization. Furthermore, the proposed modeling method, along with initialization using the maximum milling area, transforms the complex spatial constraints imposed by the surrounding critical structures into a numerical constraint on the parameters. 3.4.3. 
Milling Area Evaluation and Optimization To achieve the optimal milling plan, we implemented a series of quantitative assessments, evaluating the feasibility of surgical channels at different parameter settings, including the accessibility of the surgical target after channel construction Q a , the potential injury to bone and the surrounding structures Q i , and the complexity of constructing the surgical channel Q c . Surgery target accessibility Q a : In Mo ’s study, a series of evaluation methods for surgical channel construction was proposed, including an intrinsic measure of the spatial accessibility μ a ( C ) : (11) μ a ( C ) = 1 V T ∑ δ t ∈ T A δ t C Here, A δ t C represents the difficulty of accessing a specific target δ t through the constructed surgical channel C . Therefore, we adopted this as the metric for surgical target accessibility: (12) Q a ( C m i l l ) = − μ a ( C m i l l ) Injury Q i : During surgical channel construction, minimizing the damage to both the skull and the surrounding critical structures S is crucial. Therefore, the proportion of damaged volume within each tissue was adopted as a measure to evaluate the injury sustained during bone milling: (13) Q i ( C m i l l ) = λ b o n e · V C m i l l V C b o n e + ∑ j = 1 J λ j · V C m i l l ∩ C s j V C s j , λ b o n e + ∑ j = 1 J λ j = 1 where λ j and λ b o n e denote the weights assigned to the injury of different critical structures. Surgical channel constructability Q c : In Mo ’s study, a metric was also proposed for evaluating the global spatial compactness μ g s c ( C ) of the constructed surgical channel: (14) μ g s c ( C ) = ∑ δ a ∈ C ∑ δ b ∈ C 1 d δ a , δ b , δ a ≠ δ b where δ a and δ b represent distinct spatial elements within the surgical channel C , and d δ a , δ b is the Euclidean distance between these locations. If an obstacle exists between δ a and δ b , d δ a , δ b is set to ∞ . Generally, μ g s c ( C ) indicates the compactness of the surgical channel. A more compact channel offers greater flexibility for adjusting the milling tool’s orientation during construction, facilitating easier channel creation. Besides channel compactness, the boundary smoothness also affects the feasibility of accurately constructing the channel. Balasubramanian et al. ’s study introduced the spectral arc length μ s a for smoothness assessment. However, this metric is less suitable for three-dimensional boundaries. Given that the surgical channel is primarily defined by the target points and associated entry points, the contours of feasible entry points on the control plane B t S determine the smoothness of the entire channel boundary. Thus, this evaluation can be conducted on the two-dimensional control plane: (15) μ s a ( C ) = μ s a L ′ 1 − ∏ t ∈ T ( 1 − B t S ) where L ′ ( · ) represents the first derivative of the contour of a two-dimensional region. By combining these two evaluations, the constructability of the planned surgical channel can be determined: (16) Q c ( C m i l l ) = − μ g s c ( C m i l l ) / μ g s c ( C m i l l 0 ) · e μ s a ( C m i l l ) Subsequently, the NSGA-III genetic optimization algorithm was employed to iteratively refine the milling area, aiming for enhanced accessibility, safety, and constructability. Two solutions were selected from the Pareto frontier using the compromise programming algorithm and the pseudo-weight algorithm . Finally, the solution with the maximum μ g s c ( C ) was chosen as the final result.
To define a boundary for milling area planning and establish a foundation for quantifying the spatial constraints posed by critical structures, a maximum permissible milling area was first generated based on the structures that must remain intact during surgery H . Prior to generating this area, the surgical target T was manually annotated, while potential entry points K were automatically identified from the skull surface near the surgical target. For the construction of a feasible surgical channel, each milled voxel s ∈ C m i l l should contribute to the surgical procedures following channel creation. Simplifying this by modeling the surgical tool as a cylinder, s should meet the condition that an entry point k ∈ K and a target point t ∈ T exist such that the cylindrical region connecting k and t with the radius r encompasses s : (2) d ( s , l ( k , s ) ) < r where l denotes the linear trajectory between k and t , and d ( s , l ) represents the distance from the spatial point s to the trajectory l . Consequently, the maximum permissible milling area can be heuristically represented as the union of all potential trajectories of the surgical tools that avoid interaction with the critical structures: (3) C m i l l ( 0 ) = ⋃ k ∈ K , t ∈ T I ( k , t ) · C ( k , t ) Here, I ( k , t ) represents whether the surgical tool will damage any critical structure when accessing t though s : (4) I ( k , t ) = 1 d ( p n i , l ( k , t ) ) > r + d r i s k i , ∀ p n i ∈ C h i , ∀ i 0 e l s e where d r i s k i denotes the empirically minimum safe distance that the surgical tools should maintain from each critical structure. In our implementation, a safe distance of 0.5 mm was enforced for the scala tympani, the scala vestibuli, the malleus, the incus, the stapes, the chorda tympani, the tympanic drum, and the carotis, while 1.5 mm was set for the sinus dura, and 2.5 mm was set for the facial nerve. Meanwhile, C ( k , t ) in Equation represents the cylindrical spatial area covered by the surgical tool: (5) C ( k , t ) = { s ∣ d ( s , l ( k , t ) ) < r , k s → · k t → > 0 , t s → · t k → > 0 } Although this method effectively generates the maximum permissible milling area while ensuring the validity of each milled voxel, the volumetric region is controlled by a set of unordered point pairs, { ( k , t ) } , which is unsuitable for further refinement. Therefore, we categorize { ( k , t ) } further based on each spatial element t within the surgical target: (6) C m i l l ( 0 ) = ⋃ t ∈ T C m i l l ( 0 ) ( t ) , C m i l l ( 0 ) ( t ) = ⋃ k ∈ K I ( k , t ) · C ( k , t ) Because the critical structures are continuous, all feasible entry points k for a single target form a continuous distribution K ( t ) on the skull surface. Therefore, the maximum milling area can be expressed as the set of each spatial element in the surgical target paired with all feasible entry points relevant to that target, namely { ( t , K ( t ) ) ∣ t ∈ T } . The continuity of K ( t ) makes it eligible for parameterization. Moreover, to ensure accuracy in milling area planning, the preoperative CT scans, critical structure segmentations, entry points, and target points were uniformly resampled to a spatial resolution of 0.25 mm prior to maximum milling area generation.
The previous generation of the maximum milling area organized the irregular volumetric region into target points and corresponding feasible entry points for each target, represented by { ( t , K ( t ) ) ∣ t ∈ T } . However, the irregular skull surface on which K ( t ) is distributed presents challenges for complete parameterization. To address this issue, we selected a control plane S parallel to the approach direction of the surgical operations (the A-S plane in RAS space). Each connection between t and k , where k ∈ K ( t ) , was then extended along the vector t k → to intersect with S at point g . The generated point g then serves as a unique substitute for k , transforming the irregular distribution on the skull surface into a function for area selection B t , S on the plane S : (7) B t , S ( x , y ) = 1 ⇔ I ( S ( x , y ) , t n ) > 0 where s ( x , y ) represents the 3D spatial position of a point ( x , y ) on the control plane. Since K ( t ) is continuously distributed over the skull surface, the corresponding B t , S ( x , y ) forms a single connected component on the control plane. This property ensures that each point k within a subregion B ′ of B t , S ( B ′ ∣ B ′ ( x , y ) < B t , S ( x , y ) , ∀ x , y ) meets the spatial constraints imposed by structure H . To regulate the area B t , S for optimization purposes, we developed a control function to transform B t , S into a subset. For convenience, B t , S ( x , y ) was initially converted into the polar coordinate form B t , S ( ρ , θ ) , centered at its mass center. Subsequently, a control function σ t ( θ ) , θ ∈ ( 0 , 2 π ] , σ t ( θ ) ∈ ( 0 , 1 ] was applied to modify B t , S ( ρ , θ ) (see ): (8) h t ( B t , S ) ( ρ , θ ) = B t , S ( ρ / σ t ( θ ) , θ ) where h ( · ) represents the deformation operation. Given the distribution of the critical structures, the shapes of B t , S are predominantly convex. Hence, the deformed area h ( B t , S ) generally remains a subset of B t , S in most cases. Nevertheless, a subset restriction was applied to the deformation process to ensure adherence to the spatial constraints: (9) h t ′ ( B t , S ) ( ρ , θ ) = B t , S ( ρ / σ t ( θ ) , θ ) · B t , S ( ρ , θ ) Finally, the proposed function σ t ( θ ) was parameterized using N control points { ( θ i , σ i ) t ∣ θ i = i / N · 2 π } uniformly distributed along θ . The value of σ t at each θ was then determined through spline interpolation of the surrounding control points and constrained to the range ( 0 , 1 ] . This approach allows for complete parameterization of the milling area: (10) C m i l l ( o p t ) ( { ( θ i , ρ i ) t } ) = ⋃ t ∈ T ⋃ k ∈ S B t , S ( ρ / σ t ( θ ) , θ ) · C ( k ( g ) , t ) In this way, the entire volumetric area can be fully governed by the parameter set { ( θ i , ρ i ) t } , consisting of N · N T parameters within the range ( 0 , 1 ] , with an initial value of 1 for optimization. Here, N denotes the number of control points for a single control function, and N T represents the number of target points considered during optimization. Furthermore, the proposed modeling method, along with initialization using the maximum milling area, transforms the complex spatial constraints imposed by the surrounding critical structures into a numerical constraint on the parameters.
To achieve the optimal milling plan, we implemented a series of quantitative assessments evaluating the feasibility of the surgical channel at different parameter settings, including the accessibility of the surgical target after channel construction ($Q_a$), the potential injury to bone and the surrounding structures ($Q_i$), and the complexity of constructing the surgical channel ($Q_c$).

Surgical target accessibility $Q_a$: In Mo's study, a series of evaluation methods for surgical channel construction was proposed, including an intrinsic measure of spatial accessibility $\mu_a(C)$:

(11) $\mu_a(C) = \frac{1}{V_T} \sum_{\delta_t \in T} A_{\delta_t}^{C}$

Here, $A_{\delta_t}^{C}$ represents the difficulty of accessing a specific target $\delta_t$ through the constructed surgical channel $C$. Therefore, we adopted this as the metric for surgical target accessibility:

(12) $Q_a(C_{mill}) = -\mu_a(C_{mill})$

Injury $Q_i$: During surgical channel construction, minimizing the damage to both the skull and the surrounding critical structures is crucial. Therefore, the proportion of damaged volume within each tissue was adopted as a measure of the injury sustained during bone milling:

(13) $Q_i(C_{mill}) = \lambda_{bone} \cdot \frac{V_{C_{mill}}}{V_{C_{bone}}} + \sum_{j=1}^{J} \lambda_j \cdot \frac{V_{C_{mill} \cap C_s^j}}{V_{C_s^j}}, \qquad \lambda_{bone} + \sum_{j=1}^{J} \lambda_j = 1$

where $\lambda_j$ and $\lambda_{bone}$ denote the weights assigned to the injury of the different critical structures and of the bone, respectively.

Surgical channel constructability $Q_c$: In Mo's study, a metric was also proposed for evaluating the global spatial compactness $\mu_{gsc}(C)$ of the constructed surgical channel:

(14) $\mu_{gsc}(C) = \sum_{\delta_a \in C} \sum_{\delta_b \in C} \frac{1}{d_{\delta_a, \delta_b}}, \quad \delta_a \neq \delta_b$

where $\delta_a$ and $\delta_b$ represent distinct spatial elements within the surgical channel $C$, and $d_{\delta_a,\delta_b}$ is the Euclidean distance between these locations. If an obstacle exists between $\delta_a$ and $\delta_b$, $d_{\delta_a,\delta_b}$ is set to $\infty$. Generally, $\mu_{gsc}(C)$ indicates the compactness of the surgical channel; a more compact channel offers greater flexibility for adjusting the milling tool's orientation during construction, facilitating easier channel creation. Besides channel compactness, boundary smoothness also affects the feasibility of accurately constructing the channel. Balasubramanian et al.'s study introduced the spectral arc length $\mu_{sa}$ for smoothness assessment; however, this metric is less suitable for three-dimensional boundaries. Given that the surgical channel is primarily defined by the target points and their associated entry points, the contours of the feasible entry points on the control plane $B_{t,S}$ determine the smoothness of the entire channel boundary. Thus, this evaluation can be conducted on the two-dimensional control plane:

(15) $\mu_{sa}(C) = \mu_{sa}\!\left( L'\!\left( 1 - \prod_{t \in T} (1 - B_{t,S}) \right) \right)$

where $L'(\cdot)$ represents the first derivative of the contour of a two-dimensional region and $1 - \prod_{t \in T}(1 - B_{t,S})$ is the union of the entry regions. By combining these two evaluations, the constructability of the planned surgical channel can be determined:

(16) $Q_c(C_{mill}) = -\,\frac{\mu_{gsc}(C_{mill})}{\mu_{gsc}(C_{mill}^{(0)})} \cdot e^{\mu_{sa}(C_{mill})}$

Subsequently, the NSGA-III genetic optimization algorithm was employed to iteratively refine the milling area, aiming for enhanced accessibility, safety, and constructability. Two solutions were selected from the Pareto frontier using the compromise programming algorithm and the pseudo-weight algorithm. Finally, the solution with the maximum $\mu_{gsc}(C)$ was chosen as the final result.
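The paper reports running NSGA-III through PyMOO and then selecting a compromise solution from the Pareto front. A minimal wiring sketch of that setup is shown below; note that $Q_a$ and $Q_c$ are already negated in Equations (12) and (16), so all three objectives are minimized. The objective hook `my_plan_evaluator`, the equal weights, the reference-direction settings, and the population and generation counts are assumptions for illustration only, not values reported by the authors.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.util.ref_dirs import get_reference_directions
from pymoo.optimize import minimize
from pymoo.decomposition.asf import ASF

def my_plan_evaluator(params):
    """Placeholder objective hook: in the real pipeline this would rebuild the
    milling area from the control parameters and return (Q_a, Q_i, Q_c) from
    Eqs. (11)-(16). Dummy surrogates are used here so the sketch runs."""
    qa = -params.mean()
    qi = params.sum() / params.size
    qc = -(1.0 - params.var())
    return qa, qi, qc

class MillingAreaProblem(ElementwiseProblem):
    """One decision variable per control value sigma_i of each target's control
    function, all constrained to (0, 1]; three objectives to minimise."""
    def __init__(self, n_targets=20, n_ctrl=8):
        super().__init__(n_var=n_targets * n_ctrl, n_obj=3, xl=1e-3, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = list(my_plan_evaluator(np.asarray(x)))

ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
res = minimize(MillingAreaProblem(), NSGA3(ref_dirs=ref_dirs, pop_size=92),
               ("n_gen", 60), seed=1, verbose=False)

# Compromise programming: pick one solution from the normalised Pareto front.
F = (res.F - res.F.min(axis=0)) / (np.ptp(res.F, axis=0) + 1e-12)
best = ASF().do(F, 1.0 / np.array([1 / 3, 1 / 3, 1 / 3])).argmin()
x_opt = res.X[best]
```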
Although the proposed volumetric area modeling method greatly simplifies the definition space of the target milling area and makes it compatible with optimization algorithms, the need to iterate over the entry points $K$ and surgical targets $T$, along with calculating the distances between the surgical tool and the three-dimensional critical structures at high spatial resolution, results in substantial time consumption. Therefore, this section introduces several methods (see ) aimed at accelerating the automated planning procedure.

Signed distance field (SDF) of critical structures: In generating the maximum milling area, substantial computation was required to determine the distances between the surgical tool and the surrounding critical structures. Due to the high resolution of the critical structures and the dense spatial distribution of the distance queries, neither point-by-point iteration over the structures $H$ nor k-d tree searches across the tool trajectories proved efficient enough for this task. To address this, three-dimensional signed distance fields $\{u_{sdf}^{(i)}\}$ were precomputed for each structure. In subsequent calculations, SDF lookup replaced the time-intensive nearest-distance computations, and GPU acceleration was utilized to enhance efficiency.

Early termination of the traversal process: When calculating the valid entry points $K(t)$ for each target point, the contribution of a new effective entry point $k \in K$ to the surgical channel $C_{mill}^{(0)}(t)$ gradually decreases throughout the iteration procedure. Moreover, the physical size of the surgical tool is much larger than the spatial resolution of the entry points. Therefore, the contribution of each newly traversed entry point $k$ was calculated as:

(17) $G_t(k) = \frac{C_{mill,t}(\{k\}_{last} + k) - C_{mill,t}(\{k\}_{last})}{d(k,t) \cdot \pi r^2}$

When $G_t$ consistently falls below $5 \times 10^{-4}$, it is assumed that the maximum surgical channel has nearly been established, and the iteration is terminated. After projecting $K(t)$ onto the control plane to obtain $B_{t,S}$, the pixelized effective area was processed with a morphological closing operation using a kernel of physical size $r$ to ensure continuity and integrity.

Control of the target area's resolution and effective target extraction: During surgery, only the surface of the target needs to be exposed to allow access to the target, and the flexibility of the surgical target allows surgeons to reach the full target even with partial surface exposure. Meanwhile, a large number of target points increases the number of optimization parameters, slowing convergence. Therefore, the target points were filtered to retain only effective points on the surface. First, target points that were not oriented toward the approach direction were excluded. For each target $t \in T$, the normal vector at this point, $n_t$, was estimated. For any potential entry point $k \in K$, the angle between the normal and the tool trajectory was calculated as $\theta(t,k) = \arccos(n_t \cdot \vec{tk})$. The count of $k$ satisfying $\theta(t,k) < \pi/2$ was recorded as $r_t$. A target point $t$ was deemed effective if it could be accessed from multiple entry points: $r_t > \mathrm{mean}(r)$ and $r_t > 0.05 \times \mathrm{Num}(K)$. The resulting effective target points form $T^{(1)}$. After that, the accessibility of each $t \in T^{(1)}$ was further verified under the constraints imposed by $H$.
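A minimal sketch of the SDF precomputation and GPU lookup idea is given below. It assumes SciPy's Euclidean distance transform for building the field and uses CuPy only for batched nearest-voxel lookups (NumPy would work identically on CPU); the dummy mask, spacing, and origin are placeholders, and the authors' actual implementation may differ.

```python
import numpy as np
import cupy as cp
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask, spacing):
    """SDF of a binary structure mask in mm: positive outside, negative inside."""
    outside = distance_transform_edt(~mask, sampling=spacing)
    inside = distance_transform_edt(mask, sampling=spacing)
    return (outside - inside).astype(np.float32)

def min_clearance(points_mm, origin, spacing, sdf):
    """Nearest-voxel SDF lookup for a batch of trajectory sample points (mm)."""
    idx = cp.asarray(np.round((points_mm - origin) / spacing)).astype(cp.int64)
    for d in range(3):                                   # clamp to the volume
        idx[:, d] = cp.clip(idx[:, d], 0, sdf.shape[d] - 1)
    return float(sdf[idx[:, 0], idx[:, 1], idx[:, 2]].min())

# Example with a dummy structure mask on an isotropic 0.25 mm grid.
spacing, origin = np.full(3, 0.25), np.zeros(3)
mask = np.zeros((64, 64, 64), dtype=bool)
mask[30:34, 30:34, 30:34] = True
sdf_gpu = cp.asarray(signed_distance_field(mask, spacing))
samples = np.array([[8.0, 8.0, 8.0], [7.75, 8.0, 8.25]])
print(min_clearance(samples, origin, spacing, sdf_gpu))
```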
To expedite this process, $K$ was resampled at a spatial resolution of $2r$, and the iteration terminated once any valid $k$ was identified. This step further refined the target points into $T^{(2)}$. Given the convexity of the surgical target and the spatial distribution of the critical structures, $T^{(2)}$ is fully accessible if all the points on its boundary are reachable. Therefore, the target area $T^{(2)}$ was further compressed to its contour line $T^{(3)}$. Finally, $T^{(3)}$ was arranged in clockwise order along the boundary of $T^{(2)}$ as a closed curve and downsampled to $T^{(4)}$ using $r$ as the spatial interval, ensuring a partial overlap between adjacent access trajectories. After this refinement, approximately 20 target points remained for subsequent optimization, while the validity of the planned surgical channel could still be ensured. With 8 parameters per control function $\sigma_t$, only about 160 parameters were required to control the irregular volumetric milling area.
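The arc-length downsampling from $T^{(3)}$ to $T^{(4)}$ could look like the following sketch, which resamples an ordered closed boundary curve at intervals of roughly one tool radius; the linear interpolation scheme is an assumption.

```python
import numpy as np

def downsample_closed_curve(points, spacing_r):
    """Resample an ordered, closed boundary curve (N x 3) so that neighbouring
    target points are roughly one tool radius apart, giving adjacent access
    trajectories a partial overlap."""
    closed = np.vstack([points, points[:1]])             # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    n_keep = max(int(arc[-1] // spacing_r), 3)
    sample_at = np.linspace(0.0, arc[-1], n_keep, endpoint=False)
    return np.stack([np.interp(sample_at, arc, closed[:, d]) for d in range(3)], axis=1)
```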
To comprehensively evaluate the proposed automated milling area planning algorithm, an experiment was conducted on six clinical CT volumes, comparing the results with those of manual planning. The dataset included three left-ear and three right-ear cases. Acoustic neuromas were used as the surgical targets in the planning process, with eight parameters controlling the maximum milling area for each target point. Additionally, the safe distances $d_{risk}$ were set to 1.5 mm for the sinus dura, 2.5 mm for the facial nerve, and 0.5 mm for the other critical structures, based on guidance from an experienced surgeon. The time consumption and the evaluation metrics for the surgical channel were recorded, and are reported below together with the comparison against manual planning.
The time consumption was measured on a graphics workstation with an Intel Core i9-13900K CPU, an NVIDIA GeForce RTX 4090 GPU, and Ubuntu 20.04.1 LTS. Multi-objective milling area optimization was implemented using PyMOO , while the parallelizable components of the algorithm were implemented using CuPy for GPU acceleration. The time consumption for each step of the algorithm is shown in . On average, 5840.1 ± 279.9 s was required for the entire planning procedure. Of this duration, 416.9 ± 10.7 s was spent on critical structure segmentation, 2958.5 ± 203.1 s on maximum milling area generation, and 2464.7 ± 74.4 s on milling area optimization.
The quantitative evaluation for each metric before and after milling area optimization is presented in . Additionally, the generated target points for optimization, the planned milling volume, and the distance distribution from the volume surface to the critical structures are visualized in . In general, target points oriented toward the approach direction of the surgical operations and the boundary used for the milling area were extracted from the labeled surgical target. The distribution of the distance to the surrounding critical structures on the surface of the planned surgical channel before and after optimization shows a substantial reduction in the areas near the critical structures after optimization. Quantitatively, the average number of voxels within a 1 mm distance from the critical structures decreased by 71.3% across all CT volumes. The quantitative evaluation metrics for the planned surgical channel, including surgical target accessibility μ a ( C m i l l ) , structure damage Q i ( C m i l l ) , global spatial compactness μ g s c ( C m i l l ) , and boundary smoothness μ s a ( C m i l l ) , were recorded before and after optimization (see ). Generally, the planned milling area is better with a higher value for μ a , a lower value for Q i , and a higher value for μ s a , while μ g s c is better when it is higher under the same milling volume and will increase as the milling volume increases. Following the optimization process, μ a ( C m i l l ) , Q i ( C m i l l ) , and μ s a ( C m i l l ) improved by 1.4%, 18.9%, and 3.5%, respectively, while μ g s c ( C m i l l ) decreased by 32.3%. This reduction in μ g s c ( C m i l l ) is proportional to the volume size. During optimization, the milling area was minimized to reduce structural damage, which consequently lowered μ g s c ( C m i l l ) .
To verify the feasibility of the milling area planned using the proposed method further, we compared the planning results against those from manual planning performed by surgeons. An experienced surgeon was recruited to manually plan the milling area based on CT images. The planning procedure was conducted using 3D Slicer software with the segmentation editor. To ensure a fair comparison, the surgeon did not have access to the automated planning results, while the same critical structure segmentations were provided. Additionally, since the surgeon primarily used a slice-by-slice planning strategy, a Gaussian filter with a radius of 0.25 mm was applied to the planned results to fill any gaps in the volume. A comparison between manual and automated planning is shown in . Generally, the automated planning effectively avoided high-risk areas within 1 mm of the critical structures. In each case, high-risk areas within 0.5 mm from the critical structures existed for the manual planning. The minimal distances from the planned milling volume to each critical structure are shown in , averaged across all six datasets. Compared to the manual planning results, the minimal distances were increased by 0.94 ± 1.24 mm, 0.80 ± 0.43 mm, 0.90 ± 0.41 mm, 0.52 ± 0.48 mm, 0.79 ± 0.64 mm, and 0.51 ± 0.25 mm for the scala tympani, the sinus dura, the carotis, the malleus, the incus, the stapes, the facial nerve, the chorda tympani, and the outer ear canal, respectively. Average increases of 72.0%, 86.3%, 112.1%, 56.5%, 83.7%, and 50.6% were achieved. Quantitative evaluations of both the manual and automated planning results were also compared. Unlike the previous comparison of the results before and after multi-objective optimization, the injury evaluation Q i was divided into two categories—damage to the temporal bone and damage to the scala vestibuli—since damage to the scala vestibuli is a critical metric in clinical cases. The evaluation results are presented in . Comparatively, damage to the scala vestibuli was reduced by 29.8%, while the milling volume increased by 29.3% with the automated algorithm. Additionally, the channel boundary smoothness was improved by 78.3%, and the global spatial compactness was improved by 26.4%. The corresponding constructability of the milling area was improved by more than 2.95 times in all cases. The performance in terms of target accessibility was similar for manual and automated planning. To further compare manual and automated planning, an experienced surgeon was recruited to rate both planning results. This surgeon was different from the one who performed the manual planning. To ensure fairness, the surgeon was not informed about whether each plan was manual or automated. All 12 planned results were randomized in sequence for evaluation, and the critical structure segmentations were provided for reference. During evaluation, the surgeon could freely view each result on both 2D slices and 3D volumes using 3D Slicer. After examining each result, the surgeon completed a questionnaire with three questions focusing on injury risk, constructability, and accessibility: Q1: Is the planned surgical channel likely to damage vascular, nerve, or any other risk structures during bone milling? Q2: Is it possible to successfully construct the planned surgical channel using common neurosurgical procedures such as milling? Q3: After the removal of the corresponding bone tissue, is it possible to successfully achieve the surgical target? 
Each question was rated on a scale from 1 to 5, with 1 being the lowest and 5 the highest score. The results of this evaluation are presented in . On average, the manual planning results scored 4.2, 5, and 4.8 for the three questions, while the automated results scored 4.7, 4.3, and 4.3. The surgeon noted that manual planning posed a higher risk to the surrounding structures, potentially due to less comprehensive consideration of the spatial relationship between the milling area and the adjacent structures. Additionally, the surgeon reported that the automated planning results were clinically feasible.
In this paper, we propose an automated planning method that uses optimization to obtain a safe and executable surgical plan for the milling area in mastoidectomy. To accomplish this, a template-based approach was employed to segment high-resolution critical structures surrounding the target operation area, using two templates for segmentation via a global-to-local registration process. Following segmentation, the maximum milling area was defined by simulating access to the target during the surgical procedure. Re-categorization was then applied to the elements within the maximum milling area, with potential entry points projected onto a control plane to simplify the complex expressions caused by the irregular structural surfaces. Next, a set of control functions was introduced to deform the maximum milling area into a subset in a parameterized manner. Finally, the milling area's constructability, the post-milling accessibility of the surgical target, and the injury risk to the patient were quantitatively evaluated, and multi-objective optimization was employed to derive an optimal milling plan that met both the safety and feasibility requirements.

In general, the proposed method represents the irregular volumetric area as a combination of multiple cylinders aligned with the physical dimensions of the surgical tools, extending from the entry points to the target. The access trajectories are organized by each target point and are deformed on the entry side, ensuring the effectiveness of each spatial element, even within the deformed surgical channel. Additionally, defining the control functions with control points and interpolation preserves the volume's smoothness. By adopting the maximum milling area as a deformation reference, the spatial constraints are transformed into numerical limitations on the control parameters, making the irregular volumetric area suitable for optimization algorithms. Among previous works on volumetric milling area planning, Popovic et al. neglected the thickness of the skull in their heuristic generation, while McBrayer et al., Aghdasi et al., and Rajesh et al. pre-planned the milling area on an atlas and mapped it to new CT data to obtain patient-specific milling areas. To our knowledge, our work is the first to successfully adopt optimization methods in this procedure for more feasible milling area planning.

Aided by the computation acceleration strategies adopted in this study, the proposed method could plan the milling area fully automatically within 5840.1 ± 685.7 s. Given the long interval between CT scanning and surgery, this time consumption already satisfies clinical requirements. Moreover, it is still possible to move the algorithm towards real-time application. The time consumption of the current algorithm mainly lies in the generation of the maximum milling area and the multi-objective optimization. In both procedures, the spatial relationships between different targets must be calculated repeatedly; this workload is largely parallel and could be accelerated by a more powerful GPU. Moreover, the current implementation is based on Python, which carries considerable overhead. Implementing the same method in C++ and CUDA, with careful organization to reduce repetitive calculations and overhead, could significantly reduce the time consumption.
Compared to the manual planning results, our proposed method reduced the damage to the scala vestibuli by 29.8% and increased the distance from the milling area to the surrounding critical structures by percentages ranging from 50.6% to 112.1%, significantly enhancing the safety of the milling procedure. Additionally, the boundary smoothness was improved by 78.3% and the global spatial compactness by 29.8% when automated planning was used, making the planned milling area easier to construct during surgery. As the surgeon primarily adopted a slice-by-slice planning strategy, comprehensive consideration of the spatial relationships between different slices was lacking, leading to reduced constructability of the manually planned volume. In contrast, our method generates a volume fully controlled by the entry and target points, theoretically ensuring that each spatial element is accessible during milling operations. In addition to the quantitative evaluation, a subjective assessment by an experienced surgeon was included. Although the automated plan requires more bone removal, the surgeon observed that the planned volumes maintained a safer distance from the surrounding structures and were feasible for real surgical cases. In most acoustic neuroma cases, the critical structures surrounding the lesion area remain unaffected; therefore, the generalizability of the proposed method will not be influenced by the level of the tumor lesion. However, evaluation on a more diverse dataset is still needed in future work. Although this paper primarily focuses on mastoidectomy, the volumetric area modeling method is not limited to this clinical case. This approach (using entry and target points for volume representation, and a maximum-area plus parameterization strategy for deformation) could also be applied in other situations where complex shapes and constraints need to be quantitatively represented.

Meanwhile, several limitations of the proposed method need to be addressed in future studies. In the current implementation, the surgical tool was modeled as a cylinder to simplify the calculation, which introduces certain limitations. For most neurosurgical tools, the three-dimensional spatial constraints can be simplified using a cylindrical bounding box. However, a volumetric representation of the surgical tool may be required when a more accurate description of the tool's shape is needed, which would potentially increase the computational cost of the proposed method; how to handle the spatial constraints in such cases remains an open question. Moreover, the proposed method depends on high-quality preoperative CT data and segmentation labels, and how to obtain a precise distribution of the critical structures when imaging quality is limited also needs to be addressed. How to integrate automated planning into the clinical workflow is another critical problem: surgeons need to be aware of the possible risks of a milling plan and understand the evaluation metrics used during automated planning, and a mechanism for incorporating surgeons' clinical experience during planning will be needed to ensure the safety of the surgery.
This paper introduced a modeling method for irregular volumetric areas, enabling automated planning of the milling area in acoustic neuroma surgery. By pre-generating the maximum milling area based on the spatial constraints imposed by the surrounding critical structures, the milling area was fully parameterized using several control functions and a deformation process. Additionally, the complex spatial constraints were translated into numerical limitations on the parameters. The milling area was evaluated using multiple metrics, including accessibility, constructability, and injury, while multi-objective optimization was employed to achieve the optimal planning results. Overall, the proposed planning procedure was able to generate a milling plan within a reasonable time, while the risks during bone milling were significantly reduced. The automatically planned milling areas also demonstrated smoother boundaries, enhancing their feasibility for real clinical applications. This modeling approach is applicable not only to acoustic neuroma surgery but also to other cases where complex relationships between a volumetric target and the surrounding structures need to be considered.
|
Electrophysiology-Guided Genetic Characterisation Maximises Molecular Diagnosis in an Irish Paediatric Inherited Retinal Degeneration Population | ab0452d2-d74c-4100-a4f8-21511317dad7 | 9033125 | Physiology[mh] | Inherited retinal degenerations (IRDs) account for over one third of the underlying causes of blindness in the paediatric population . IRDs are a clinically and genetically inhomogeneous group of progressive blinding genetic diseases that can present at any stage from birth through to late middle age. There are over 300 genes identified to date that are associated with isolated or syndromic IRDs making accurate molecular diagnosis challenging. Patients with IRDs often experience long delays seeing an average of eight clinicians over 7 years prior to reaching a definitive diagnosis . The current standard of care necessitates a harmonised clinical and genetic diagnosis to access novel disease-modifying treatments . Clinical diagnoses can be classified according to various features including (a) cell type (i.e., cone vs. rod vs. combined dysfunction); (b) primarily targeted retinal region (i.e., macula vs. periphery vs. panretinal); (c) age of onset (i.e., congenital vs. juvenile vs. adult-onset); (d) severity of visual dysfunction (e.g., mild vs. moderate vs. severe); (e) cadence (i.e., stationary vs. progressive); and (f) absence or presence of extra-ocular clinical involvement (i.e., isolated vs. syndromic forms). Non-retinal ophthalmic features include juvenile cataract , sensory nystagmus and refractive error . Syndromic IRDs may involve other sensory deficits (e.g., sensorineural hearing loss and retinitis pigmentosa (RP) in up to 18% of children with IRD, i.e., Usher syndrome, OMIM#276900) or require physician management for other body systems (e.g., diabetes mellitus or renal disease in Bardet–Biedl syndrome, OMIM#209900) . The historical mainstay of IRD treatment has been predominantly supportive (i.e., correction of refractive error, mobility support, educational support) . With the approval of Luxturna TM (Novartis AG, Basel, Switzerland) , the first gene therapy for IRD (biallelic RPE65 -associated retinopathy), approved by the US Food and Drug Administration and European Medicines Agency, the relevance of clarifying the genetic aetiology of paediatric (and adult) IRD has been substantiated. There are multiple gene therapy clinical trials underway or planned for other IRD conditions including choroideraemia ( CHM , OMIM*300390), achromatopsia ( CNGA3 , OMIM*60053; CNGB3 , OMIM*605080), non-syndromic RP ( RPGR , OMIM*312610; MERTK , OMIM*604705), X-linked retinoschisis ( RS1 , OMIM*300839), Leber congenital amaurosis (LCA, CEP290 , OMIM*610142) and Usher syndrome ( MYO7A , OMIM*276903). In childhood, many ‘stationary’ IRDs show preservation of retinal architecture, while some ‘progressive’ IRDs have not yet resulted in significant retinal destruction/atrophy, providing a window of opportunity for intervention (e.g., gene therapy) . Thus, early confirmation of genotype is critical to gain access to these disease-modifying therapies. Genetic diagnosis also aids family risk determination where families may yet be incomplete so pedigrees with genetically determined IRD can be appropriately evaluated, investigated, and counselled. To date, the majority of patients recruited to the Irish national IRD programme (Target 5000) have been adults . 
Herein, we describe the pathway used to identify and genetically investigate a cohort of children and adolescents with IRD attending a tertiary care paediatric ophthalmology department, the clinical and genetic outcomes, and the therapeutic implications of the first systematic coordinated round of paediatric genetic testing for IRDs in Ireland.
The database of all patients ( n = 1045) undergoing electrophysiology (EP, full field electroretinography (ERG), multifocal ERG, and/or visually evoked potentials (VEP)) at a tertiary-referral paediatric ophthalmology department (Children’s University Hospital, Temple Street, Dublin, Ireland) between February 2006 and April 2020 were screened for features consistent with IRD. A total of 87 reports (8.3%) consistent with IRDs were identified and a retrospective chart review was conducted to confirm a clinical phenotype of IRD. Due to limitations from the SARS-CoV-2-19 pandemic, a “telegenetics” approach was adopted to reduce repeated hospital exposures, where possible. Patients or parents were contacted by phone to obtain family history and to consent for genetic testing. All parents/guardians and/or patients (if meeting assent criteria) provided informed consent/assent for genetic testing. This study was approved by the institutional review board of Mater Misericordiae University Hospital, Dublin, Ireland, and abides by the tenets of the Declaration of Helsinki. A saliva sample kit for genetic analysis (Oragene DNA OG-500/OGD-500, DNA GenoTek Inc., Ottawa, ON, Canada) was sent directly to the patient’s home address to minimise additional hospital exposures. A panel-based next generation sequencing (pNGS) analysis of 351 IRD-implicated genes (nuclear and mitochondrial genome) was performed by an accredited laboratory (Blueprint Genetics, Helsinki, Finland) . All relevant variants were reported in HGNC nomenclature, compared against a reference genome (GRCH37/HG19), and confirmed with Sanger sequencing as per the commercial laboratory’s protocols. Findings from pNGS were discussed in a multidisciplinary team (MDT) setting comprising ophthalmologists, clinical and molecular geneticists, and genetic counsellors to ensure correlation of the phenotype with genotype prior to feedback of genetic results to the patients/parents by the genetic counsellor. Assessed clinical data included demographics, age of onset, ophthalmic and systemic symptoms/diagnoses, family history, and comprehensive ophthalmic examination (i.e., LogMAR visual acuity (VA), cycloplegic retinoscopy, and dilated slit lamp biomicroscopy and/or indirect ophthalmoscopy). Where possible, colour fundus photography, fundus autofluorescence and optical coherence tomography were captured to further characterise and stage the retinal phenotype. In cases of syndromic IRD, relevant input from other expert medical specialities (e.g., neurodevelopmental, metabolic, cardiology, renal, etc.) was sought.
A total of 87 individuals were identified as having electrophysiology features consistent with IRD (e.g., rod and/or cone photoreceptor dysfunction). See and . Out of the 87 individuals, 70 patients (80.5%) from 57 pedigrees agreed to or were available for genetic testing. Mean patient age at time of testing was 13.94 years, with 64.3% male and 35.7% female.
Quantifiable VA measurements were possible in 88.6% (62/70) of cases (i.e., children able to reliably respond to LogMAR letters/pictures), with the majority of children who were unable to give formal VA readings having a diagnosis of LCA (62.5%, 5/8). The most severe visual disturbance was noted in patients with LCA, Stargardt disease (STGD) and cone dystrophy (CD). and provides a more detailed analysis. The variable VA noted in individuals with LCA, STGD, CD, and RCD may be attributable to severity/stage of disease associated with differing ages and VA is not always the most reliable marker of disease severity on IRD (e.g., early visual field loss in RP).
Refractive error was recorded for 75.7% of cases . High myopia is strongly associated with congenital stationary night blindness (CSNB) which is in keeping with the literature . Astigmatism was also greatest in CSNB. A total of 41.5% of patients had ≥6D ametropia making refractive error the most common treatable feature in paediatric IRD.
No patients had keratoconus or glaucoma. Five patients had evidence of cataract; one patient with ACHM had Mittendorf dot cataract and three patients (16.7%) with RCD had bilateral posterior subcapsular cataract at a mean age of 18.5 years. One patient with XLRS had history of unilateral rhegmatogenous retinal detachment (RRD) repair (pars plana vitrectomy with scleral buckle) and subsequently underwent cataract surgery at age 16. RRD was detected in only one eye of this total paediatric IRD cohort (0.7% of eyes or 1.4% of patients, all diagnoses). One patient with BVMD had evidence of choroidal neovascular complex in the right eye requiring intravitreal anti-vascular endothelial growth factor injection with good visual result. Cystic macular lesions (CML) were noted in all cases with XLRS ( n = 7) . No patients with non-XLRS diagnoses had CML though detection of this finding may have been impacted by limited facility for detailed multimodal imaging (MMI, i.e., optical coherence tomography, OCT) in young patients. A total of 11 patients (15.9%) had amblyopia, calculated using a ≥2-line interocular VA discrepancy, though this does not account for the bilateral stimulus deprivation amblyopia of severe early onset IRD affecting VA. provides a more detailed analysis.
Of 70 patient samples from 57 pedigrees undergoing pNGS, causative genetic variants were detected for 60 patients (85.7%) from 47 pedigrees (82.5%). The most common genetic aetiologies were RS1 (11.7%, n = 7), CNGB3 (10%, n = 6), ABCA4 (8.3%, n = 5), RPGR (8.3%, n = 5), and NYX (6.7%, n = 4). See . In total, 63 variants in 27 IRD-associated genes were identified with 90.5% ( n = 57) being pathogenic (American College of Medical Genetics, ACMG class 5, 65.1%, n = 41) or likely pathogenic (ACMG class 4, 25.4%, n = 16) variants and only 9.5% ( n = 6) being variants of unknown significance (VUS, ACMG class 3) requiring further work to assess degree of deleteriousness on protein function (e.g., in silico analysis) or segregation analysis. There were 11 novel variants (17.5%) detected . The most common inheritance pattern for genetically resolved cases was autosomal recessive (AR, 55%, n = 33) with X-linked recessive (XL, 30%, n = 18) and autosomal dominant (AD, 15%, n = 9) less frequent. A total of 33.3% ( n = 11) of recessive disease was due to homozygous mutations, with the remainder explained by compound heterozygous variants in the relevant gene. Of 10 unresolved patients with clinically AR IRD, a single allele was detected in 5 patients (50%, ACMG class 5 in two cases, class 4 in one case, and class 3 in two cases). Further studies including whole gene sequencing (i.e., single allele cases) and/or whole exome/genome sequencing (i.e., no candidate genetic variants) are planned to resolve these cases . Of the 60 genetically resolved IRD patients, 5% ( n = 3) are eligible for approved therapies (i.e., Luxturna™, RPE65 ) and 38.3% ( n = 23) are eligible for clinical trial-based gene therapies including CEP290 ( n = 2), CNGA3 ( n = 3), CNGB3 ( n = 6), RPGR ( n = 5), and RS1 ( n = 7). pNGS revealed additional findings in ocular-implicated genes in 33 patients with 15 patients having more than one additional variant identified ; however, these variants were mainly ACMG class 3 (VUS) and did not correlate well with the clinical phenotype. As whole exome testing was not conducted, secondary findings (i.e., reportable non-retinal genes of significance to ongoing patient welfare) were not included in the analysis .
This is the first systematic coordinated genetic assessment of a paediatric IRD cohort in Ireland which resulted in an 85.7% genetic resolution rate from first-tier pNGS testing (with 90.5% of variants classified as pathogenic or likely pathogenic), superior to our adult cohort (~70% genetically resolved) and other pNGS IRD studies . Similar positive correlation between paediatric age group and likelihood of obtaining genetic diagnosis has been previously noted . This is likely due to (1) selective genetic testing only of patients with abnormal EP and (2) lack of advanced acquired disease mimicking IRD (as seen in adult populations) . In childhood-onset IRD, some phenotypes are specific for mutations in a single gene or a relatively small subset of genes, e.g., XLRS ( RS1 ), LCA (including RPE65, CEP290, AILP1, RDH12, CRX and CRB1 , among others), and CSNB (including NYX , TRPM1 and CACNA1F , among others) . EP may be particularly helpful in refining the clinical diagnosis, thus narrowing the scope of the genetic spectrum of interest, e.g., 1., electronegative ERG in a male with macular retinoschisis suggests RS1 genotype, while, e.g., 2., CSNB can be categorised by EP subtypes (i.e., Riggs vs. Schubert–Bornschein types). In patients with broader, less specific clinical diagnoses (e.g., retinal dystrophy or rod-cone dystrophy), a larger list of potential genetic aetiologies must be considered (i.e., >100). Thus, there is a lower chance of receiving a molecular diagnosis and greater chance of uncovering misleading non-causative variants in other IRD-associated genes ; it is these cases in which pNGS is most effective. In our cohort, 81.3% of patients diagnosed with RCD and 70.6% of patients diagnosed with CD (excluding STGD) received a conclusive genetic diagnosis, demonstrating the power of a broader genetic scope (i.e., pNGS of 351 genes in this case) in resolving heritable diseases with heterogeneous genetic makeup. Associated ophthalmic features (e.g., cataract, CML) were found in a minority of patients in this cohort (14.3%). This figure may increase as patients enter adulthood (i.e., later stage of the disease process) as detection/monitoring of subtle findings (i.e., CML) may improve with greater cooperation with detailed examination (e.g., nystagmus, photophobia, fundus contact lens) and MMI (e.g., OCT) . Although refractive error may increase with age in healthy eyes , in IRD, retinal feedback on eye growth is impaired due to defocus and the inherent retinal dysfunction, which may accelerate axial elongation of the globe . Although likely an adaptive strategy to maximise vision, this may lead to acquired sequelae including angle closure (high hyperopia) and retinal breaks, detachment and/or atrophy (high myopia) . Uncorrected refractive errors can cause ametropic amblyopia of their own accord in addition to the underlying retinal dysfunction of IRD and thus core measures such as accurate refractive correction and amblyopia treatment (e.g., patching) remain the cornerstone of care to prevent acquired visual loss in children with IRD. In a paediatric cohort, early genetic diagnosis empowers patients and their families to make decisions regarding their care. Genetically guided prognosis allows educational, employment, and supportive planning for the long term. Ocular supports such as refraction, low vision aids, and mobility training can be introduced early to maximise function. 
In scenarios where young families may yet be incomplete, genetic counselling is particularly important in making informed reproductive decisions. The need to achieve an accurate genetic diagnosis for IRD patients is increasingly important as novel gene therapies are in clinical trials for a growing number of aetiologies, and a molecular diagnosis is often a prerequisite for access to clinical trials and approved treatments. In this cohort, 43.3% of the genetically resolved children (26 of the 60 who tested positive) meet the criteria for gene therapies (approved or in clinical trials). Although the long-term efficacy of once-off gene therapy for IRD is not known, access to disease-modifying treatments prior to the onset of structural damage to the retina (i.e., photoreceptor/RPE atrophy) will likely become the standard of care for IRDs, along with refractive correction and supportive measures. This research group plans to participate in upcoming phase I/II/III gene therapy clinical trials for RPGR, CNGB3, and USH2A, from which paediatric (and adult) patients in these pedigrees may directly benefit.
This study was carried out during the global SARS-CoV-2-2019 pandemic and thus some patients/parents were unwilling/unable to travel for in-person clinical re-assessment. Therefore, some cases ( n = 17) were excluded and phenotypic information is not current for all cases. This also limited opportunities to perform MMI on all patients, though the younger of the cohorts may not have been amenable regardless. Alternate clues and techniques must be employed, including modified EP testing setups more tolerable/appropriate for young children . Limitations of this study also include its retrospective nature and the low number of patients, an issue with many observational studies of rare diseases. However, we feel that this study represents the paediatric IRD population in Ireland, capturing ~27% of the estimated 365 Irish population ≤18 years with IRD. Only patients having been referred for electrophysiology were screened and thus patients with more severe ocular or syndromic (i.e., intellectual disability) phenotypes may have been excluded due to electrophysiology non-completion. Inherited vitreoretinopathies, which are largely monogenic (e.g., Stickler syndrome, OMIM#108300) were not captured by the EP screening approach described here as frank rod and/or cone dystrophy would not be apparent. This may bias the demographic data output from this study, describing a certain subset of IRDs. Definitive statements are difficult to make with small numbers of genetically heterogeneous diseases; however, the data herein support the use of an EP-guided clinically validated pNGS approach to the investigation of paediatric IRD to maximise positive gene detection. Further work is ongoing to assess an additional cohort of the paediatric population with inconclusive EP findings (excluded in the first round) suspected to have IRD. This includes repeat EP and clinical assessment prior to genetic testing (pNGS).
This study represents a vital initial step in the genetic characterisation of children with IRD to empower them with prognosis, genetic counselling (family planning tools/evidence), and access to novel therapies, where available. Visual electrophysiology is a useful objective adjunct to history, examination, and MMI to clinically diagnose IRD in children who may have difficulty completing a full battery of typical phenotypic characterisation tests. The early introduction of genetic testing in the diagnostic/care pathway for children with phenotypic (i.e., clinical and/or electrophysiologic) findings suggestive of IRD is critical for genetic counselling of these families prior to upcoming gene therapy trials.
|
The clinical performance of robotic assisted navigation system versus conventional freehand technique for percutaneous transthoracic needle biopsy | 99dd8a7a-9611-47c5-826c-fe29b5e99163 | 11836355 | Surgical Procedures, Operative[mh] | Recently, statistics from relevant organizations show that as of 2022, the incidence of lung cancer has once again surpassed that of breast cancer to become the cancer with the highest incidence and mortality rate in the world . For lung cancer patients, early and accurate diagnosis is the key to treatment. In recent years, with the popularization and development of low-dose computed tomography (CT), lung cancer screening by low-dose CT in high-risk populations and even in routine clinical practice has become a worldwide consensus . Because of this, more suspicious lung lesions have been detected in clinical practice. Currently, lung biopsy can be used to determine the benignity or malignancy of the lesion and subsequent genetic testing can be used to determine the patient’s treatment plan. The three commonly used methods are transbronchoscopic biopsy, percutaneous transthoracic needle biopsy (PTNB), and surgery. Surgery is more invasive, costly, and complex than the other two methods. For central lesions, transbronchoscopic biopsy possesses more advantages, whereas for peripheral lesions, it is less diagnostic and more limited , . Therefore, with the development of minimally invasive fields such as interventional radiology, PTNB has gradually become a widely accepted maneuver in clinical practice, especially for peripheral lesions – . The biggest advantage of lung puncture biopsy is the ability to accurately puncture the lesion under the guidance of CT. Compared with the traditional surgical operation, image-guided interventions therapy are inexpensive, less invasive and have fewer complications. However, the traditional method of PTNB is mainly freehand puncture, which is dependent on the experience of the operator. During the procedure, the operator needs to adjust the direction of the needle several times, and multiple CT scans are required to assess the needle tip position, which greatly elevates the risk of radiation injury to the patient as well as potential complications. In order to improve the accuracy of puncture and reduce the complication rate, many robotic navigation-assisted puncture systems for CT-guided percutaneous puncture have been created , . Previous studies have shown that robotic navigation systems are feasible, safe, and effective for assisted puncture of phantom, swine livers and kidneys models compared to manual puncture – . Based on these previous studies, in order to compare the accuracy and safety of robotic navigation system-assisted puncture with freehand puncture in thoracic and abdominal lesions, a clinical trial was conducted at our center, enrolling a total of 60 patients, which demonstrated that robotic navigation system-assisted CT-guided thoracic and abdominal lesion puncture improves the accuracy and safety of the punctures . However, the small sample size of our previous study resulted in the inclusion of fewer patients with PTNB. The presence of respiratory motion in the lungs makes the process of freehand puncture procedure complex and difficult . In this study, we aim to evaluate the clinical performance of this robot-assisted navigation system in assisting PTNB compared to conventional freehand puncture. Study procedures This study was conducted in accordance with the Declaration of Helsinki. 
This study was approved by the institutional review board (JD-LK-2022-163-01), and written informed consent was obtained from all patients. Procedures were performed at a single institution (The Second Affiliated Hospital of Soochow University) between January 2022 and June 2024. Fifty-five consecutive patients undergoing robotic assisted PTNB formed a "robotic group" in which data were collected prospectively. Patients who underwent freehand puncture from January 2022 to June 2024 were retrospectively collected and matched 1:1 with the robotic group using propensity score matching (PSM) to derive the freehand group. Study participants The inclusion and exclusion criteria were the same for both groups. Inclusion criteria were (i) patients requiring PTNB after multidisciplinary consultation, (ii) age > 18 years, and (iii) ECOG (performance status) score ≤ 1. Exclusion criteria were (i) patients who had not discontinued anticoagulant and/or antiplatelet medications 5 days prior to the procedure, (ii) ECOG score > 3, (iii) poor patient compliance, (iv) age < 18 years, and (v) pregnancy. Procedures All patients received local infiltration anesthesia, and all underwent PTNB guided by 16-row helical CT. All biopsies were performed with a core needle in a coaxial cutting needle system using an 18 G puncture needle (OptiMed 1399-1210, 18 G-100/150 mm). For the robotic group, we utilized a commercially available robotic-assisted navigation system (TH-S1) obtained from TrueHealth Medical Technology Co. Ltd. in Hengqin, China. This system holds approval from the National Medical Products Administration (NMPA) as a class III medical device. The system comprises a photoelectric navigation system, a surgical planning system, and a robotic arm positioning and puncture system (Fig. ), specifically designed for interventional procedures. The operational principle of the robotic-assisted navigation system for preoperative lung nodule localization is as follows: (I) The patient's preoperative CT scan is imported into the surgical planning system, enabling the reconstruction of a comprehensive 3D model encompassing the pulmonary nodules, vessels, bronchi, bone structures, and skin. (II) The 3D model is automatically registered with the patient's position information obtained through the photoelectric navigation system, ensuring accurate alignment. (III) Based on the location of the lesion in the 3D model, a simulated puncture path is generated after selecting the entry point and target location. (IV) The robotic arm positioning and puncture system precisely positions the puncture path, including the puncture site, needle direction, and depth, within the surgical space. (V) A puncture needle is then inserted manually, and biopsy sampling is performed after a CT scan verifies the needle tip position. In the traditional manual CT-guided percutaneous puncture procedure, an experienced operator determines the needle insertion trajectory based on the initial CT image. The entry point is identified using the CT scan frame laser line to indicate the axial position. Because the operator chooses the needle angle based on experience, the needle's direction and depth must be adjusted repeatedly, guided by multiple CT scans, to ensure a safe and accurate puncture.
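The planning step described above, selecting an entry point and a target on the 3D model and then orienting the arm along the resulting trajectory, reduces to simple vector geometry. The Python sketch below is a minimal, hypothetical illustration of that calculation; it is not the TH-S1 vendor software, and the coordinates are invented for demonstration.

```python
import numpy as np

def plan_straight_path(entry_mm, target_mm):
    """Compute direction, insertion depth, and angulation for a straight needle path.

    entry_mm, target_mm: (x, y, z) points in the CT coordinate frame, in millimetres
    (hypothetical values chosen for illustration).
    """
    entry = np.asarray(entry_mm, dtype=float)
    target = np.asarray(target_mm, dtype=float)
    vector = target - entry
    depth = float(np.linalg.norm(vector))        # needle insertion depth (mm)
    direction = vector / depth                   # unit vector along the planned path
    # Craniocaudal angulation: angle between the path and the axial (x-y) plane.
    axial_angle = float(np.degrees(np.arcsin(abs(direction[2]))))
    return direction, depth, axial_angle

# Invented skin entry point and lesion-edge target from a planning CT.
direction, depth, angle = plan_straight_path((120.0, 85.0, -40.0), (95.0, 140.0, -62.0))
print(f"depth = {depth:.1f} mm, craniocaudal angulation = {angle:.1f} degrees")
```

A clinical planning system would additionally check the proposed path against the segmented ribs, vessels, and bronchi before the arm is positioned; that collision check is omitted here for brevity.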
Outcome measures Feasibility Number of CT scans: the number of CT scans required for the needle tip to reach a position that meets the feasibility criteria. Procedure timing (PT): the time elapsed from the initial localizing CT scan to the post-biopsy CT scan performed to check for complications. Number of punctures: the number of needle passes required for the needle tip to reach a position that meets the feasibility criteria. In the robotic group, if the needle tip position was not considered adequate for biopsy after the first insertion, the operator could adjust the position by planning a new path. After a second failed adjustment, the procedure was considered unsuccessful and converted to freehand puncture. Generalizability Generalizability is defined as the clinical performance of PTNB using the robotic-assisted navigation system by different interventional radiologists. Safety Safety was evaluated as the number of major adverse events attributable to needle insertion, graded according to a common surgical classification of complications. Grade I: Any deviation from the normal postoperative course without the need for pharmacological treatment or surgical, endoscopic, or radiological intervention. Grade II: Requiring pharmacological treatment with drugs other than those allowed for grade I complications; blood transfusions and total parenteral nutrition are also included. Grade III: Requiring surgical, endoscopic, or radiological intervention. Grade IV: Life-threatening complication requiring IC/ICU management. Statistical analysis Data were analyzed using GraphPad Prism 9.5.0 (GraphPad, San Diego, Calif). Measurements are expressed as mean ± standard deviation ($\bar{x} \pm s$). All data were checked for normality using the Shapiro–Wilk test. Normally distributed data were compared between the two matched groups with paired t tests and across multiple groups with one-way ANOVA; non-normally distributed data were compared with Wilcoxon signed rank tests. Count data were expressed as rates or component ratios, and comparisons between groups were made using the chi-square test. P < 0.05 was considered statistically significant. Propensity score matching (PSM) The PSM process was implemented using the PSM extension for SPSS. With use of the robotic-assisted navigation system for PTNB as the dependent variable, and age, sex, BMI, lesion location, lesion size, and lesion edge-to-skin distance as covariates, matching was performed with a 1:1 nearest-neighbor method in which each patient in the robotic group was matched to the patient in the freehand group with the most similar propensity score. A caliper was defined to ensure the quality of the matches, and the standardized differences of the covariates between groups were compared before and after matching; the closer the standardized differences are to 0 after matching, the more satisfactory the matching result. When the absolute value of the standardized difference is less than 0.1 (10%), the balance of variables between groups is considered acceptable.
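The PSM procedure described here (and applied later with a caliper of 0.15) was run with the SPSS extension. Purely as an illustration, the sketch below shows the same idea in Python: a logistic-regression propensity score, greedy 1:1 nearest-neighbour matching within a caliper, and a standardized-mean-difference balance check. The function and column handling are assumptions of this example, not the study's code, and categorical covariates such as sex or lesion location would need numeric encoding first.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_and_check_balance(df, treat_col, covariates, caliper=0.15):
    """1:1 greedy nearest-neighbour matching on a logistic-regression propensity score,
    followed by a standardized-mean-difference (SMD) balance check (< 0.1 ~ balanced)."""
    X = df[list(covariates)].to_numpy()
    y = df[treat_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)

    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for i, row in treated.iterrows():
        if controls.empty:
            break
        dist = (controls["ps"] - row["ps"]).abs()
        j = dist.idxmin()
        if dist.loc[j] <= caliper:       # accept only matches inside the caliper
            pairs.append((i, j))
            controls = controls.drop(j)  # matching without replacement

    matched = df.loc[[idx for pair in pairs for idx in pair]]
    smd = {}
    for c in covariates:
        a = matched.loc[matched[treat_col] == 1, c]
        b = matched.loc[matched[treat_col] == 0, c]
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        smd[c] = abs(a.mean() - b.mean()) / pooled_sd
    return matched, smd
```

Here the caliper is applied on the propensity-score scale; the paper does not state which scale its 0.15 caliper refers to.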
Baseline characteristics of the two groups of patients before and after matching After excluding 5 patients who could not proceed because of technical errors during puncture, 50 patients were included in the robotic group as the test group. A total of 200 patients who underwent freehand puncture at our center from January 2022 to June 2024 were screened, and 170 patients with complete data were retained as the control group. Baseline characteristics (age, sex, BMI, lesion location, lesion size, and lesion edge-to-skin distance) did not differ significantly between the two groups. Nevertheless, to approximate a randomized comparison as closely as possible, we applied PSM with a 1:1 nearest-neighbor matching method and a caliper value of 0.15, which yielded the 50 freehand patients most closely matched to the robotic group. The balance of the covariates was also markedly improved by PSM. The baseline characteristics of patients before and after matching are shown in Table . Main results A summary of the technical outcomes of the robotic versus freehand procedures is shown in Table . Technical success Technical success was achieved in all procedures in both groups: the needle tip reached the edge of the lesion, tissue was obtained, and a definitive pathologic result was reached in every case. No case in the robotic group was converted to freehand puncture.
Example images from a typical robotic assisted PTNB procedure are provided in Fig. . Feasibility Compared with the freehand group, the robotic group required fewer needle passes to reach an acceptable position (1.4 ± 0.8 vs. 2.3 ± 1.5, P < 0.05), had a shorter procedure time (15.33 ± 5.47 vs. 20.43 ± 9.78, P < 0.05), and underwent fewer CT scans (3.80 ± 1.22 vs. 5.75 ± 2.12, P < 0.05) (Fig. ). Generalizability The test group involved three interventional radiologists with different levels of experience: an attending with 20 years of experience, an attending with 10 years of experience, and a resident with 5 years of experience. The number of needle passes, the procedure time, the number of CT scans, and the radiation dose received by the patient were recorded for each interventional radiologist, and statistical analysis showed no significant differences between them (Table ). Safety In the robotic group there were eight adverse events: six pneumothoraces, of which four were managed conservatively (grade I) and two required chest drain insertion (grade III), and two cases of slight postoperative pain. In the freehand group there were fifteen adverse events, of which fourteen were pneumothoraces (ten treated conservatively, grade I; four requiring chest drain insertion, grade III) and one was a hemorrhage requiring emergency resuscitation (grade IV). There was no statistically significant difference between the two groups in the overall number of adverse events (P > 0.05), but the subgroup analysis of pneumothorax showed a statistically significant difference (P = 0.05) (Fig. ).
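Treating the pneumothorax counts above as a 2 × 2 table (6 of 50 robotic vs. 14 of 50 freehand), the subgroup comparison can be approximated with a chi-square test, as in the sketch below; whether the P value lands at about 0.05 or above it depends on whether a continuity correction is applied, so this is an illustration rather than an exact replication of the authors' analysis.

```python
from scipy.stats import chi2_contingency

# Rows: robotic, freehand; columns: pneumothorax, no pneumothorax (counts from the text).
table = [[6, 44],
         [14, 36]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # roughly chi2 = 4.00, p ~ 0.046 without correction
```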
PTNB is widely used for the early diagnosis of suspected malignant tumors because of its simplicity and effectiveness. However, inaccurate localization may lead to repeated scanning and puncture attempts, exposing the patient to additional radiation and a higher risk of complications. To tackle this problem, a novel robotic-assisted navigation system was developed with the aim of improving puncture accuracy, shortening the procedure time, reducing the complication rate, and decreasing the radiation exposure to the patient. In this study, we evaluated the clinical performance of this robot-assisted navigation system in assisting PTNB compared with conventional freehand puncture. Our study included a larger sample size than previous reports on robotic navigation systems. In a previous study by Erica S. Alexander et al. on robotic navigation-assisted lung biopsy, lesion size was not standardized between the two groups at baseline; in this study, we used PSM to ensure comparable baselines for lesion size and lesion edge-to-skin distance, which strengthened the comparison. The traditional approach to CT-guided PTNB is mainly freehand puncture with stepwise needle advancement. This method leads to longer procedure times and increased radiation exposure for the patient, as multiple CT scans are needed to determine the needle tip position during the procedure. In this study, the robotic group required a shorter procedure time to bring the needle to an acceptable position than the freehand group, and the patients underwent fewer CT scans. The traditional technique, which relies heavily on the operator's experience, uses stepwise needle advancement and requires repeated intraoperative adjustments of the needle angle. This can create multiple puncture tracks, markedly increasing the risk of complications.
A retrospective study of 10,568 percutaneous lung biopsies showed that the occurrence of pneumothorax was significantly higher with two puncture tracks than with one (P < 0.001), and that the frequency of pneumothorax requiring catheter drainage increased significantly with three puncture tracks (P < 0.001). Another retrospective study of serious complications such as pneumothorax and/or parenchymal hemorrhage after CT-guided transthoracic biopsy found that the most important risk factor for pneumothorax was the number of needle insertions, with more insertions associated with a higher risk. In this study, the robotic group required significantly fewer puncture attempts to achieve an acceptable coaxial needle position before biopsy than the freehand group. There was also a reduction in the number of adverse events (8 vs. 15), although this difference was not statistically significant. Among the complications of PTNB, pneumothorax was the most common, as reported in related studies. We therefore conducted a subgroup analysis specifically for pneumothorax, which showed a statistically significant reduction in the robotic group compared with the freehand group, a result superior to those reported in previous related studies. Several studies have shown that accuracy decreases when novice operators perform freehand PTNB. One of the major advantages of a robotic-assisted navigation system is that it can reduce the negative consequences of limited operating experience, but relevant studies on this topic have been lacking. In this study, we therefore included, for the first time, three interventional radiologists with varying levels of experience in robot-assisted PTNB: one attending with 20 years of experience, another with 10 years, and a resident with 5 years of experience. Statistical analysis showed no significant differences between operators in the number of needle passes required to reach the target position, the procedure time, the number of CT scans, or the radiation dose to the patient when using robot-assisted PTNB. This further confirms that robot-assisted PTNB reduces the impact of operator experience. In addition, contrary to conventional practice, we allowed the patients to breathe freely and quietly during the procedure. Breath-holding was not required for two reasons: first, the reproducibility of the diaphragm position between breath holds is poor, and second, the deep breaths that follow a breath hold may result in a pinprick laceration of the pleura. A relevant study also reported that the performance and complication rates of PTNB performed during calm breathing are acceptable. Furthermore, asking the patient to breathe steadily, regularly, and comfortably increases cooperation, reduces nervousness, improves puncture efficiency, and reduces complications caused by poor cooperation. At present, the robotic-assisted navigation systems on the market fall into two main categories: electromagnetic navigation systems and optical navigation systems.
Electromagnetic navigation systems operate on electromagnetic principles, using sensors to detect the position and orientation of medical instruments such as puncture or guide needles; an example is the CorPath ® system. These systems do not rely on external visual guidance, allowing precise positioning even in deep body tissues. However, electromagnetic navigation systems are susceptible to interference from metals or other magnetic sources, which can cause technical malfunctions that make the puncture unfeasible, especially in a CT environment. In a previous study, 8 of 26 patients who required electromagnetic navigation-assisted puncture experienced technical malfunctions during the procedure. Optical navigation systems, by contrast, use optical sensors such as cameras, lasers, or infrared devices to track the position of the target; an example is the OpticNav ® system. They offer high precision, clear real-time feedback, and intuitive operation, are suitable for surface-visible areas, and do not rely on complex sensor equipment. In this study we used an optical navigation system which, despite being compatible with the CT environment, has one significant drawback: the camera and optical markers must not be obstructed. Among the 55 patients who required optical navigation system-assisted PTNB, 5 experienced technical malfunctions that prevented the robotic arm from reaching the preset position, necessitating replanning or conversion to freehand puncture. Although clearly better than the results reported for electromagnetic navigation, this represents poorer system usability than in previous studies of optical navigation. On analysis of the causes, in 2 patients the robot experienced calibration and control errors during the operation; this reflects a design flaw that is expected to be eliminated as the technology is updated. In the other 3 cases, we attributed the failures to the patient's respiratory amplitude and the navigation system's inability to recognize the optical markers, leading to planning failures. On further analysis, all three patients were female, and their predisposition to chest breathing produced pronounced chest surface motion during respiration; the increased respiratory amplitude caused fluctuations large enough to prevent the navigation system from accurately detecting the markers. Unlike rigid, static structures such as the brain and bones, the lung moves with respiration, so robot-assisted navigation systems may make errors during lung puncture. Preoperative image data cannot be aligned in real time with the patient's lung anatomy because respiratory motion changes the position of the lesion, which makes the widespread use of navigation techniques in lung puncture challenging. We will conduct a further study of the patterns of respiratory displacement and respiratory motion in various regions of the lung in order to refine the respiratory gating technique for puncture navigation. A limitation of this study is the retrospective data collection for the freehand group, which resulted in the loss of some important information, such as CT scan parameters; as a consequence, the intraoperative radiation dose to the patients could not be measured, even though the number of CT scans was significantly lower in the robotic group than in the freehand group.
Additionally, we used PSM to control for confounding factors and to ensure that baseline characteristics, such as lesion size and puncture path length, were similar between the two groups. Although this approach closely simulates a randomized controlled scenario, it still has limitations compared with a randomized controlled trial. This real-world clinical study confirms that robotic-assisted PTNB is feasible and safe in clinical practice. Compared with the conventional freehand technique, the robot-assisted navigation system reduces the number of needle passes, the number of CT scans, and the rate of adverse events such as pneumothorax, while achieving comparable accuracy. It also reduces the influence of operator experience on PTNB and can therefore be widely adopted in clinical practice. |
Comparison of Publication of Pediatric Probiotic vs Antibiotic Trials Registered on ClinicalTrials.gov | 6a65748c-9612-49b3-b425-f1adb1ede102 | 8501398 | Pediatrics[mh] | Probiotics represent a rapidly growing commercial industry valued at approximately US $15 billion in 2013. With growth estimated to be 7% per annum, there has been a concerted effort to document clinical effectiveness , , through the conduct of randomized clinical trials that are summarized in numerous meta-analyses. , , , , , The latter inform guideline recommendations, , which often support probiotic use with the caveat that the evidence is weak owing to the inclusion of studies with a high risk of bias. , , Although many early studies that influence the conclusions of meta-analyses reported benefits associated with probiotic use, recent multicenter trials often contradict these results, , , , , and they are beginning to inform the conclusions of reviews. For example, the 2010 Cochrane review of probiotic use in acute infectious diarrhea (63 studies) concluded that probiotics “have clear beneficial effects in shortening the duration and reducing stool frequency in acute infectious diarrhoea.” (p2) However, the 2020 update (82 studies) states that “probiotics probably make little or no difference to the number of people who have diarrhoea lasting 48 hours or longer, and we are uncertain whether probiotics reduce the duration of diarrhoea.” (p2) Publication bias occurs when the direction or strength of a finding influences the likelihood, timing, language, and journal of publication. This phenomenon may explain the discordant conclusions of probiotic meta-analyses and the results of recent large, multicenter clinical trials. Because 25% to 50% of registered trials remain unpublished after study completion, meta-analyses may reach incorrect conclusions if the included data misrepresent the effects, benefits, and risks of the intervention. , Publication bias is a particular concern in fields where industry-funded studies are common because they are associated with study discontinuation, nonpublication, and more favorable reporting of results and conclusions. , The International Committee of Medical Journal Editors has embraced the use of registries, such as ClinicalTrials.gov, and since 2005 have required trial registration before participant enrollment as a prerequisite for publication. We therefore used this data source to compare the proportion of registered trials that are published between those evaluating a probiotic relative to those evaluating an antibiotic. Our secondary objective was to determine whether exposure status (ie, probiotic or antibiotic), trial result, or funding source were independently associated with publication status and whether study design elements, journal impact factor, and interval from study completion to publication differed by exposure status. We hypothesized that a lower proportion of registered probiotic trials are published compared with antibiotic trials.
Study Design and Setting This cross-sectional study used publicly available aggregate data; thus, institutional review board approval was not required. The study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline. The study included trials registered in ClinicalTrials.gov with start dates from July 1, 2005, to June 30, 2016. Established in 1999, ClinicalTrials.gov is an online, publicly available registry that includes details on interventions, outcomes, results, and funding sources. The start date was selected to correspond with the 2005 International Committee of Medical Journal Editors requirement that trials be registered in advance of publication. The end date was chosen to allow for a sufficient interval between registration and study publication. Eligible studies met the following criteria: (1) the included participants were younger than 18 years (identified as children in the database); (2) the participants were randomized to at least 2 alternate interventions; and (3) the study evaluated a probiotic or 1 of the 5 most commonly prescribed antibiotics in children (eTable 1 in the ). , Antibiotics were selected to serve as the referent standard because these medications are no longer patent protected, and the financial implications of trial results would be minimal for industry stakeholders. The registry search (eTable 2 in the ) was undertaken with the assistance of a medical librarian (D.L.L.) using search terms for probiotics ( Lactobacillus or probiotic or Saccharomyces or Enterococcus or Streptococcus or Acidophilus ) and antibiotics ( azithromycin or amoxicillin–clavulanic acid or amoxicillin or cefdinir or cephalexin ). These terms were exploded to include synonyms through the ClinicalTrials.gov advanced search function (eTables 1 and 3 in the ). Our search was limited to interventional trials. We did not apply any study purpose, language, result, or recruitment status restrictions. After the initial search, data were downloaded from ClinicalTrials.gov via comma-separated values export for review. Eligibility Screening All searches were updated and finalized as of September 14, 2020. Two authors (M.R. and K.L.) independently performed the search and evaluated studies for eligibility. After completion of independent screening and removal of duplicates, disagreements were resolved via consultation with a third reviewer (S.B.F.). Data Extraction and Definitions Two reviewers from a group of 3 (M.R., K.L., or N.L.) extracted data independently into a standardized database that was piloted with the first 10 eligible studies and subsequently refined before completing extraction for the remaining studies. If discrepancies occurred, the 2 reviewers attempted to resolve them; if discrepancies remained, a third reviewer (S.B.F.) was consulted. Study completion was the date when participants were no longer being examined or treated (ie, final visit had occurred), as stated in the ClinicalTrials.gov registry. When this data field was incomplete, the estimated date of completion was extracted. Study duration was the interval from study start to completion dates. 
Data extractors classified the study purpose based on the following predefined criteria: therapeutic studies used an investigational agent to improve a specified medical condition; prophylactic studies evaluated investigational agent use for preventative purposes; safety/tolerability studies determined adverse effects associated with the investigational agent; and basic science studies were designed to assess pharmacokinetic properties. Data extractors classified the comparison group as placebo, standard care if the study described comparison to standard of care treatment, separate intervention if the comparison was another therapeutic agent, or alternate dosing if the trial compared alternative methods of administration, timing, or dosing of the intervention. Blinding was present if any level of masking was reported on ClinicalTrials.gov. Trials are categorized by the registry into the following population age groups: premature infants (gestational age <36 weeks), term infants (aged 0-15 months), all ages (15 months to 18 years), and children and adults (includes participants older than 18 years). We categorized funding source according to the study sponsor disclosed on ClinicalTrials.gov. If any industry funding was disclosed, the study was categorized as industry funded. Published trials had the following additional data extracted: whether findings for all primary outcomes were reported, direction of results, date, and journal of publication. We classified a study finding as positive if the results of the primary outcome, as stated in ClinicalTrials.gov, were statistically significant (ie, based on reported P values and/or 95% CIs). For noninferiority trials, if the investigational agent was found to be noninferior, the results were classified as positive. If there were multiple primary outcomes and the results were not uniformly positive or negative, these studies were classified as mixed. The publication interval was the difference between the study completion date (actual or estimated) and the date of publication. Journal impact factor was determined using the 2019 edition of Journal Citation Reports (Clarivate Analytics). Outcomes Primary Outcome Publication status, the primary outcome, was assessed in accordance with previous work in the field. First, the ClinicalTrials.gov registry was examined to identify whether an article citation was provided. If a citation was not identified, a PubMed search was performed using the ClinicalTrials.gov NCT number. If a publication was not located, PubMed was subsequently searched using the name of the principal investigator and other identifying descriptors (eg, population, intervention, and study outcomes). If a matching publication was not identified, the strategy was repeated using Google Scholar. All searches were updated and finalized as of September 9, 2020. Secondary Objectives We sought to determine whether study intervention (ie, probiotic vs antibiotic), outcome (ie, positive vs negative vs mixed), or funding source (ie, industry vs nonindustry) were independently associated with publication status. In addition, we aimed to determine whether study design elements (ie, blinding, study purpose, multicenter vs single-center), impact factor of the journal in which the study was published, and the interval from study completion to publication differed by study intervention type. These factors were selected as reflections of publication quality. 
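The positive/negative/mixed classification described above is a deterministic rule over the primary-outcome results. The short Python sketch below restates that rule for illustration only; the study applied it manually to ClinicalTrials.gov records, and the list-of-booleans input is an assumption of this example.

```python
def classify_trial(primary_outcomes):
    """Classify a trial from its primary outcomes.

    primary_outcomes: list of booleans, one per primary outcome, where True means
    the result was statistically significant (or, for a noninferiority trial,
    that noninferiority was demonstrated).
    """
    if all(primary_outcomes):
        return "positive"
    if not any(primary_outcomes):
        return "negative"
    return "mixed"

print(classify_trial([True, True]))    # positive
print(classify_trial([True, False]))   # mixed
```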
Sample Size Sample size calculations were performed using the Russ Lenth Java applet for determining power and sample size. We assumed a worst-case scenario and estimated that 50% of registered probiotic trials would be published. Based on a survey of 9 colleagues (3 pediatric emergency medicine physicians; 3 pediatric infectious disease specialists; and 3 gastroenterologists), the median clinically meaningful difference in the proportion of studies published was set at 15%. Using a 2-sided α of .05, a minimum of 135 studies were required in each cohort to reject the null hypothesis with 80% power. Statistical Analysis We report trial characteristics by exposure status (ie, probiotic or antibiotic) using descriptive statistics. Between-group comparisons for study design elements were performed using the Pearson χ 2 test or the Fisher exact test if the number of studies in any single category was less than 10. For nonnormally distributed data, we used the Mann-Whitney test to compare 2 independent samples. Interrater reliability regarding publication status was determined using the Cohen κ value. For the primary outcome of publication status, we compared antibiotic and probiotic trials using the Pearson χ 2 test. For the secondary outcomes, we conducted an a priori planned multiple logistic regression analysis to assess the association between publication status (dependent variable) and the type of intervention (ie, probiotic vs antibiotic) including the following a priori identified covariates: funding source, blinding, and study purpose. We compared features of study design, outcomes, and funding source between groups using the Pearson χ 2 test. We used the Mann-Whitney test to compare impact factor of the journal in which the study was published and time to publication for the 2 groups. Statistical analyses were completed on October 16, 2020, using SPSS Statistics Subscription Build, version 1.0.0.1461 (IBM Corp). Statistical significance was set at 2-sided P < .05.
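As an illustration of the planned multivariable analysis, the sketch below fits a logistic regression of publication status on intervention type, funding source, blinding, and study purpose using statsmodels. The data are synthetic and the column names are assumptions; this is not the study's dataset or code (the authors used SPSS), and the coefficients it produces have no bearing on the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # roughly the number of registered trials analysed

# Hypothetical extraction table: one row per registered trial (synthetic data).
trials = pd.DataFrame({
    "antibiotic": rng.integers(0, 2, n),                   # 1 = antibiotic, 0 = probiotic
    "industry":   rng.integers(0, 2, n),                   # any disclosed industry funding
    "blinded":    rng.integers(0, 2, n),
    "purpose":    rng.choice(["therapeutic", "prophylactic", "other"], n),
})
# Simulated outcome: antibiotic trials somewhat more likely to be published.
logit_p = -0.2 + 0.7 * trials["antibiotic"] - 0.1 * trials["industry"]
trials["published"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("published ~ antibiotic + industry + blinded + C(purpose)",
                  data=trials).fit(disp=False)
print(np.exp(model.params))   # adjusted odds ratios; 'antibiotic' is the term of interest
```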
We identified 466 unique probiotic and 209 unique antibiotic trials in ClinicalTrials.gov. After screening, 265 (56.9%) probiotic and 136 (65.1%) antibiotic records were retained for a total of 401 studies . Fifty-eight probiotic trials reported their status as withdrawn, terminated, or unknown compared with 18 antibiotic trials (21.9% vs 13.2%; difference, 8.7% [95% CI, 0.5%-15.8%]). Probiotic trials were of shorter duration (median, 537 [IQR, 306-973] days) compared with antibiotic trials (median, 1096 [IQR, 668-1642] days; P = .001). Probiotic trials were more likely to be placebo-controlled compared with antibiotic trials (205 [77.4%] vs 55 [40.4%]; difference, 36.9% [95% CI, 26.9%-46.1%]), whereas antibiotic trials were more likely to use a separate intervention (38 [27.9%] vs 18 [6.8%]; difference, 21.2% [95% CI, 13.4%-30.0%]) or dosing regimen (24 [17.6%] vs 3 [1.1%]; difference, 16.5% [95% CI, 10.6%-23.8%]) as comparison groups. Results were more commonly available in ClinicalTrials.gov for the antibiotic studies (39 [28.7%] vs 17 [6.4%]; difference, 22.3% [95% CI, 14.4%-30.7%]) . Primary Outcome Interreviewer publication status classification agreement for antibiotic and probiotic studies was outstanding, with κ values of 0.94 (95% CI, 0.90-0.98) and 0.82 (95% CI, 0.75-0.89), respectively. A greater proportion of antibiotic studies were published compared with probiotic studies (83 [61.0%] vs 119 [44.9%]; difference, 16.1% [95% CI, 5.8%-25.9%]). Secondary Outcomes After adjustment for industry funding, blinding, and study purpose, studies evaluating an antibiotic were more likely to be published (odds ratio, 2.1 [95% CI, 1.3-3.4]) . Of the 83 published antibiotic studies, 56 (67.5%) reported statistically significant results, compared with 58 of the 119 published probiotic trials (48.7%; difference, 18.7% [95% CI, 4.9%-31.4%]). The proportion of industry-funded trials reporting statistically significant benefits was higher for antibiotic trials (10 of 13 [76.9%]) compared with probiotic trials (15 of 34 [44.1%]; difference, 32.8% [95% CI, 1.1%-54.1%]). A greater proportion (38 of 119 [31.9%]) of published probiotic trials were industry funded compared with published antibiotic trials (10 of 83 [12.0%]; difference, 19.9% [95% CI, 8.3%-30.2%]). Antibiotic trials, compared with probiotic trials, were more likely to be therapeutic (114 of 136 [83.8%] vs 117 of 265 [44.2%]; difference, 39.7% [95% CI, 30.4%-47.5%]) and multicenter (46 of 136 [33.8%] vs 46 of 265 [17.4%]; difference, 16.5% [95% CI, 7.5%-25.7%]) . A greater percentage of published antibiotic trials were multicenter in design (28 of 79 [35.4%]) compared with published probiotic trials (22 of 104 [21.2%]; difference, 14.2% [95% CI, 1.2%-27.2%]) . The median impact factor of the journals in which the articles were published was lower for the probiotic trials than antibiotic trials (3.0 [IQR, 2.3-4.2] vs 7.2 [IQR, 2.8-20.5]; P < .001) (eTable 4 in the ). The median time to publication did not differ between the probiotic and antibiotic trials (801 [IQR, 550-1183] vs 683 [IQR, 441-1036] days; P = .24). Exploratory Analyses Industry-funded trials were more likely to involve premature or term infants compared with non–industry-funded trials (45 of 106 [42.5%] vs 73 of 295 [24.7%]; difference, 17.7% [95% CI, 7.3%-28.3%]). 
Antibiotic trials with statistically significant results were more likely to be multicenter (24 of 58 [41.4%]) compared with statistically significant probiotic trial findings (7 of 48 [14.6%]; difference, 26.8% [95% CI, 9.6%-41.6%]). Although the proportion of antibiotic trials has not changed over time (through 2010, 67 of 136 [49.3%]; after 2010, 69 of 136 [50.7%]; difference, 1.2% [95% CI, −13.7% to 16.0%]), the proportion of registered probiotic trials that were published has increased (through 2010, 91 of 264 [34.5%]; after 2010, 173 of 265 [65.3%]; difference, 31.0% [95% CI, 18.5%-42.3%]).
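Readers who want to check the unadjusted primary comparison (83 of 136 antibiotic vs 119 of 265 probiotic trials published) can do so with a simple difference-in-proportions calculation, sketched below with a Wald-type 95% CI; the published interval (5.8%-25.9%) was presumably computed with a slightly different method, so the values match only approximately.

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions with a Wald-type 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Published antibiotic vs probiotic trials (counts from the results above).
diff, lo, hi = prop_diff_ci(83, 136, 119, 265)
print(f"difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# about 16.1% (95% CI roughly 6.0% to 26.3%), close to the reported 16.1% [5.8%-25.9%]
```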
In this study, we determined that antibiotic trials registered on ClinicalTrials.gov from July 1, 2005, to June 30, 2016, were more than twice as likely to be published compared with probiotic trials registered during the same period. Although characteristics such as funding source, study purpose, and blinding differed between probiotic and antibiotic studies, none of these were independently associated with publication status. The impact factor of the journal in which the studies were published was higher for the antibiotic studies.

Earlier evaluations of publication bias in the probiotic literature identified selective publication using statistical methods such as funnel plots and Egger intercept tests. One study reported that studies with negative findings were less likely to be published. The presence of significant publication bias led to a cautious interpretation of results that was reflected in the conclusions. We are unaware of any studies that have assessed publication bias in a field of research (eg, probiotics) by comparing publications with those of another field (eg, antibiotics). Although industry funding has been reported to influence the likelihood of publication, the only prior evaluation of this concern in the probiotic literature identified no such association. In our study, although industry funding was significantly greater among probiotic studies, it was not independently associated with publication status.

In general, pediatric studies are less likely to be published compared with studies that focus on adults. A study conducted in 2012 that used a design similar to that of our study reported that only 29% of included pediatric trials resulted in publication. Furthermore, those authors found that industry-funded pediatric trials were less likely to be published compared with those funded by the National Institutes of Health. However, there does appear to be a trend toward increasing publication of pediatric studies. A review of abstracts accepted to the Pediatric Academic Societies’ annual meetings from 1992 to 1995 found that 41% of abstracts were unpublished as of the year 2000. When this analysis was repeated 15 years later, only 28% of studies presented at this same meeting from 2008 to 2011 remained unpublished as of 2017. Larger sample sizes and registration on ClinicalTrials.gov are 2 factors that may explain the higher publication proportions.

Although we determined that pediatric probiotic studies are less likely to be published compared with pediatric antibiotic studies, our results do not pinpoint the underlying explanation. Although others have demonstrated a bias toward the publication of beneficial results using the US Food and Drug Administration registry and results database, no such data are available for probiotic studies, which are rarely conducted with the intention of applying for marketing approval or a labeling change. Thus, it is possible that the unpublished probiotic studies represent negative trials and that the higher level of selective publication in this field renders the results of meta-analyses unreliable. Selective publication is possible in all fields of research, and this can lead to inaccurate estimates of the effectiveness of an intervention. Selective publication wastes limited resources and the contributions of study participants and hinders the advance of medical knowledge. By altering the assessment of efficacy, selective publication can lead to inappropriate medication use and recommendations.
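The Egger intercept test mentioned above can be illustrated with a short, generic sketch: regress each trial's standardized effect (effect divided by its standard error) on its precision (the reciprocal of the standard error); an intercept far from zero is taken as a signal of small-study effects, one common signature of selective publication. The effect sizes below are invented for illustration and do not come from the studies cited in this discussion.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical log odds ratios and standard errors for a set of published trials
# (illustrative values only, not taken from any cited meta-analysis).
effects = np.array([-0.45, -0.60, -0.30, -0.80, -0.25, -0.95, -0.15, -0.70])
ses = np.array([0.20, 0.25, 0.15, 0.40, 0.12, 0.50, 0.10, 0.35])

# Egger's regression: standardized effect on precision; a non-zero intercept
# suggests asymmetry of the funnel plot (small-study effects).
standardized = effects / ses
precision = 1.0 / ses
model = sm.OLS(standardized, sm.add_constant(precision)).fit()
intercept, intercept_p = model.params[0], model.pvalues[0]
print(f"Egger intercept = {intercept:.2f} (p = {intercept_p:.3f})")
```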
Attempts to study selective publication are hindered by the unavailability of data from unpublished trials. Although a variety of tests are available to detect selective reporting bias, none reliably detect or exclude the possibility of bias.

Strengths and Limitations
The strengths of our study include the large number of registered trials included for analysis, which provides adequate power for our analyses. We designed our study to provide a large interval between study registration and assessment of publication status, allowing sufficient time for studies to be completed and published. Our protocol for determining publication status and high interobserver reliability ensure confidence in labeling studies as unpublished. Our study is unique in that it compares selective publication across types of medications. This approach enables us to conclude that publication bias may be a greater concern in probiotic studies compared with those evaluating antibiotics in children. A limitation of our study is the inability to determine the direction of results of trials that were unpublished and unavailable in ClinicalTrials.gov. Although the inclusion of conference abstracts in systematic reviews is a controversial topic, we did not include conference abstract publication in the definition of our primary outcome, which was focused on peer-reviewed journal publication indexed in PubMed or Google Scholar. We used only 1 comparison group (pediatric antibiotic trials); ideally, subsequent analyses will include other drug classes so that similar comparative analyses can place this concern in the proper context. We also cannot conclusively determine the reason for nonpublication.
The findings of this cross-sectional study suggest that registered pediatric probiotic trials are less likely to be published than registered pediatric antibiotic studies. Although further research is needed to establish the underlying study characteristics that explain this finding, our results raise a concern regarding the accuracy of meta-analyses that rely on published data to reach their conclusions. These results highlight the importance of closing the loopholes that allow studies to be conducted without their findings ever being published. This evidence should drive regulatory measures to ensure access to unpublished trial data and the provision of reasons for nonpublication.
Narratives of most significant change to explore experiences of caregivers in a caregiver-young adolescent sexual and reproductive health communication intervention in rural south-western Uganda

Sexual and Reproductive Health (SRH) education for young adolescents (10–14 years) is essential for mitigating negative SRH outcomes during adolescence. Currently, HIV/AIDS, other sexually transmitted infections (STIs) and teenage pregnancy are on the rise among young adolescents in sub-Saharan Africa (SSA). School-based sexuality education strategies have been the main avenue wherein adolescents receive comprehensive SRH education. However, in many countries, this is assumed to be foreign and in conflict with acceptable traditional family norms, and has thus had a negative reception from cultural and religious institutions as well as from teachers tasked with the role of delivering comprehensive sexuality education (CSE). Leveraging the role of caregivers as the primary providers of SRH information for young adolescents is an alternative public health strategy.

Caregiver-adolescent communication around SRH is particularly important for realizing adolescent rights to access SRH information. It equips adolescents with the knowledge, skills, and attitude changes that enable them to make responsible SRH choices, including delay of sexual debut and choosing protected sex. However, parental SRH communication in sub-Saharan Africa is marred by intergenerational conflict. This is a potential source of tension between the caregiver and young adolescent. The cultural and religious contexts of families and communities further hinder SRH communication. In Uganda, communicating sexuality issues to adolescents was formerly the role of paternal aunts and uncles. However, the change of family structure from an extended to a nuclear family has shifted the role of the primary providers of SRH information away from extended family to primary caregivers (i.e., mothers and fathers). Resource-limited settings present an additional unique context in which caregiver-child communication occurs. Adolescents in these settings are disproportionately at higher risk of adverse SRH outcomes, including unintended pregnancies, exposure to sexually transmitted infections (STIs) and vulnerability to sexual violence.

The dynamics of caregiver-adolescent communication comprise an interplay of influences such as the knowledge, attitudes, comfort level, values and beliefs of caregivers. This interplay in turn influences not only how they convey SRH information but also how adolescents receive it. At an interpersonal level, relationship quality and parenting styles between the caregiver and the adolescent influence communication. Parenting styles, specifically authoritarian ones, tend to position young adolescents as passive recipients of SRH information, while a warm and loving relationship between the adolescent and caregiver is a stronger foundation for good communication. Gender dynamics also influence SRH communication. Same-sex communication is more common than communication with the opposite sex. Communication between mothers and daughters is more common than between fathers and their sons or fathers and daughters. Current evidence acknowledges the need to engage caregivers through providing both informational support and strategies for communication.
These could help improve caregiver competence, efficacy, and comfort in conveying SRH information to young adolescents. Such support and strategies should be family-centric, responsive to cultural and community values, and should take into account a breadth of information if they are to improve communication effectively. However, current evidence indicates that most parent-child communication interventions are devoid of context-specific relevance and fail to establish how these strategies materialize within broader aspects of SRH communication and parenting styles. Interventions to improve parent and child communication on SRH in settings similar to Uganda have shown positive results in improving SRH outcomes, especially when initiated during early adolescence.

We developed a parent-only, multi-session intervention targeting caregivers of young adolescents with the primary aim of providing caregivers with knowledge about the SRH of young adolescents, imparting behavioural skills, instilling attitude change, and fostering good communication skills and positive parent-child relationships. This paper presents findings from the post-intervention evaluation examining the effectiveness and outcomes of the intervention using most significant change (MSC) stories. These findings indicate broader experiences and changes that occurred due to the intervention beyond the specific communication on SRH (which will be presented in a separate quantitative evaluation paper).

The study ran from October 2021 to November 2021 among caregivers selected from 6 villages in Rwebishekye parish, Mbarara district, south-western Uganda. Among an estimated population of 6,061 people, the study community comprised approximately 1,520 households, of which 29% were headed by women.

Description of the intervention
The intervention was embedded within a larger project conducted in south-western Uganda to improve caregiver and young adolescent communication on SRH. The baseline status of caregiver-young adolescent SRH communication is provided in separate publications. Using Appreciative Inquiry (AI), a participatory research approach, we jointly developed culturally sensitive parent-child communication guidelines over which the parents would have ownership. This framework allowed us to shift the current paradigm away from solely preventing negative behaviours among adolescents and towards empowering caregivers to enhance their developmental assets and abilities to make positive, informed decisions. The AI method worked in four stages: 1) Discovery: identify what works well through community-based participatory research (CBPR) activities and baseline data collection in the formative phase; 2) Dream: envision changes that would work well in the future in collaboration with the community stakeholders and a community advisory board (CAB), whose role was to routinely provide feedback on the most impactful practices of parent and young adolescent (YA) communication (initially, the CAB collaborated with the research team to interpret the barriers and facilitators to parent-young adolescent communication); 3) Design: develop the intervention in collaboration with the stakeholders, who would review the modules and provide input and feedback on their pertinence to the study community to ensure age, cultural and religious appropriateness; 4) Destiny: implement the proposed intervention in collaboration with the community members and evaluate its effectiveness.
The intervention had three components, namely: 1) the content; 2) training of community facilitators; and 3) training of caregivers. We collectively developed training modules designed for caregivers of young adolescents, which community leaders, paired with research team members, then delivered over the course of 15 weeks in small-group sessions of 10–20 participants distributed across the six villages in the community. The training for caregivers was delivered in two phases, in April/May and September/October 2021. For each training phase, groups of parents received training two days a week (based on their preference) and each session lasted 1–2 hours. Pre- and post-assessments of knowledge and attitude changes were done at the beginning and end of each training phase. Participatory learning approaches including discussions, storytelling, group work and role plays were used. The training sessions were designed to build the parenting skills of caregivers, enable parents to address structural barriers to SRH communication, improve the communication skills of parents around SRH, improve parental knowledge of young adolescent SRH, and address attitudes and normative beliefs concerning young adolescent SRH, all in order to improve caregiver comfort with SRH discussions.

Evaluation of the intervention
We evaluated the intervention using the most significant change (MSC) technique, a form of participatory monitoring and evaluation involving the collection and selection of stories of change. MSC specifically helps participants describe the changes and identify expected and unexpected changes brought by the intervention. The typical MSC process includes 1) defining the domains of change; 2) deciding on how and when to collect the change stories; 3) collecting the significant change stories; 4) selecting the most significant change stories; and 5) verifying the stories. The process can be adapted depending on the nature and context of the intervention. While a typical MSC activity would involve the selection of stories through a hierarchical process within different levels of authority, we chose to analyse all the stories collected in the study, given that each of them presents a unique experience. Prior to the collection of the significant change stories, four domains of change were predefined, based on the project’s objectives: a) caregiver and young adolescent communication on SRH issues; b) caregiver knowledge and attitudes towards adolescent SRH; c) parenting skills; and d) personal life and family. However, we also considered domains of change that emerged during analysis of the stories.

Selection of study participants and story collection
We purposively selected caregivers of young adolescents from the communities where the intervention was implemented, identified by the community leaders in the respective villages. Three trained researchers collected stories in the local dialect of the area (Runyankore-Rukiga) one month after concluding the implementation of the intervention. We used a narrative approach to explore the experiences of study participants based on the four domains of change (a–d, above). The researchers were trained in qualitative research methods, including the most significant change technique. After consenting participants, the researchers collected the stories at a convenient location in the community. Each story collection session lasted between 10 and 30 minutes. The stories were audio-recorded to capture participants’ narratives verbatim.
However, at the end of each interview, the researchers probed and reflected back on the story to seek clarity on certain aspects as reported by the participants. The audio files were transcribed verbatim in the local language and then translated into English. We then convened a plenary focus group of nine participants to discuss and verify whether the stories accurately reflected changes in the community.

Data analysis
We adopted a thematic analysis approach to interpret the change stories, using a hybrid of deductive and inductive analysis to generate themes from the data. Based on the six stages of thematic analysis suggested by Braun and Clarke, the researchers iteratively read the change stories to gain a deeper understanding of the data until we reached saturation. This allowed us to examine the different types of changes beyond the set domains of change and thus gain a full picture of the impact of the intervention. We applied an initial coding framework to the significant change stories to study fragments of data, including words, lines and segments, to understand their meaning. We followed this with focused coding, where the initial codes were clustered into the a priori domains of change already mentioned, and we noted emergent themes. Below, we present the results of both emergent and a priori thematic areas.

Ethical consideration
Ethical clearance was obtained from the Research Ethics Committee of Mbarara University of Science and Technology (15/05-19) and the Uganda National Council of Science and Technology (UNCST) (SS 5108). All participants provided written informed consent to participate in the study during enrolment. Permission was also obtained to allow the stories to be recorded. Privacy and confidentiality were maintained during the data collection process. This was achieved by selecting a convenient location for the interview within the community where participants felt comfortable expressing their views. Each participant was assigned a unique identification number and no identifiers were associated with the participants’ stories.
Thirty caregivers were enrolled in the study. These comprised 8 (26.7%) male caregivers and 22 (73.3%) female caregivers. Their median age was 45 years. A summary of the results is presented below in line with the domains of change.

Improved caregiver-young adolescent SRH communication

Overall improvement in communicating SRH topics
Parents reported communicating about SRH topics such as general health and body hygiene, menstruation and menstrual hygiene, wet dreams, body changes and the implications of these changes. In some instances, caregivers communicated to their children about the availability of STI screening services as well as contraception for adolescents. They also reported having discussions around pregnancy and other negative consequences of engaging in risky sexual practices. Some narratives indicated that pubertal changes were a trigger for communicating about SRH risks like unwanted pregnancy or HIV. Caregivers reported engaging their children in discussions by encouraging them to approach them in case of any concerns, especially those related to menarche.
A female caregiver shared her story of change, emphasizing not just the topics discussed with her children, but also how relaxed she felt as she communicated.

“My children had not started engaging in sex, but I took time and talked to them about STDs and HIV. I asked them if they knew about AIDS and if they know that it kills. One of my children said she had cut herself and asked if that meant she was going to suffer from HIV/AIDS. I told her that it is possible to get AIDS through cuts, but the most important thing right now is to protect themselves especially now that they have already started menstruating because they can become pregnant and also suffer from HIV/AIDS. Once, we were weeding the garden, I told them that ‘do you know that this person almost died?’ They asked me the cause which I said was AIDS. I also told them that condoms can prevent one from getting pregnant. I talk to them freely because we are now used to each other. They ask me questions based on what they know from school, which was not the case in the past because I used to fear telling them about such. The more they ask me the more I use the chance to explain to them and get a lot to talk about. They can only ask you questions when they don’t fear you.” (Female caregiver)

However, SRH communication was still marred by threats, as when caregivers reported using scare tactics to pass on SRH information. Discussions with boys revolved around warnings against engaging in sexual relationships or else facing the consequences of unwanted pregnancies, such as being incarcerated.

Comfort with SRH communication
Caregivers reported being more comfortable and less anxious or embarrassed discussing SRH with their young adolescents after the intervention. Moreover, their children were less fearful when approaching them about SRH and sensitive issues regarding their sexuality. Having developed warmer and friendlier relationships with their caregivers after the intervention, adolescents could approach them even about the risk of sexual violence in the community, in addition to experiences of bodily changes like menarche. Adolescents could readily seek material support during menstruation, such as pads and knickers, which had not been the case before the intervention. Caregivers recalled their prior experiences of SRH communication with their older adolescents as being uncomfortable. A parent narrated being previously embarrassed to discuss some sensitive topics such as family planning and condoms, but with the intervention, they were more comfortable. A female caregiver shared her experience before and after the intervention:

“At the start of the training, I would feel shy and wonder how I would mention some difficult terms in front of my children. But now, all is well because we are very close to each other. This is because of the strategies we were taught to use to approach our children. Now they tell me all their problems without fear and I also guide them accordingly.” (Female caregiver)

Gender-sensitive SRH communication
Caregivers highlighted challenging cultural norms around gender roles and cross-gender communication. Some caregivers acknowledged they could communicate with their child on SRH regardless of gender differences. This was mainly reported by the male participants. A male caregiver shared his experience of talking to his daughter about menstruation in the absence of her mother and supporting her with her menstruation needs.
Another acknowledged that despite being a man, the new communication skills acquired from the intervention enabled him to talk to his daughter without having to go through her mother first.

“I had learned and obtained the experience on how to communicate with my daughter without any fear of being a man. This had actually always been hard for me to do before I took part in the study. This has helped very much in preventing her from the bad conduct that was developing.” (Male caregiver 1)

“Before, I never cared so much about my child, or even never cared that she had become an adolescent. I didn’t even understand much about adolescence. But ever since the intervention started, I learned that there’s a lot a male parent contributes to the growth of a child, and when I understood this, I started paying attention to my daughters, especially this one who is ten years… Recently she got into her first period and her mother wasn’t around but she came running and told me ‘while I was urinating, I urinated blood’. I quickly went and bought pads for her. This all happened because I had talked to her about her menstruation just like we were taught during the intervention.” (Male caregiver 2)

A female caregiver also appreciated having learnt that it is important to talk to her son about SRH, specifically his pubertal changes, as well as the fact that boys are also at risk of SRH problems if they engage in risky sexual behaviour.

Transformation in knowledge and attitudes towards SRH
Post-intervention, parents acknowledged their role as the primary SRH communicators with their young adolescents. They appreciated the significant role caregivers play in helping young adolescents navigate pubertal changes. Prior to the study, they believed that sexuality education was solely provided at school, while concurrently expressing their uncertainties about the accuracy of the SRH information provided there. Caregivers now acknowledge their cardinal role in interpreting SRH information for their young adolescent children. Additionally, they appreciate the fact that they can competently talk about SRH with young adolescents of the opposite sex without having to go through their partners. Fathers appreciate being able to talk to their daughters, while mothers appreciate being able to talk to their sons about their SRH matters. The stories also indicated increased knowledge among caregivers around different aspects of young adolescent SRH. One caregiver disclosed that before the intervention, he did not pay attention to his children as they entered adolescence, holding onto the belief that they were still too young to receive SRH information. After the training, caregivers reported being more attentive to their children’s pubertal changes. One caregiver appreciated learning from the intervention that her young adolescent is at risk of getting pregnant, since she previously believed that only older adolescents were capable of getting pregnant. Learning this prompted her to talk to her child about bodily changes such as menarche at age 12 years and the possibility of getting pregnant even then.

“As a parent I never knew that a girl of 12 years can go into her menstrual periods or become pregnant. I always looked at her as a child but after attending the training, we now sit together and I tell them everything, even if one is [only] 13 years old. I tell her that ‘boys can impregnate you.’ I go ahead to tell her about her body’s changes.”
(Female caregiver)

Caregivers also expressed a more positive attitude towards communicating sensitive SRH issues such as condom use. Several caregivers also mentioned learning more about how to handle instances of sexual violence. One caregiver revealed learning about the process to undertake in case of such violence, such as the need for the adolescent to be examined by a health worker and the provision of preventive therapy for HIV. They also learned that young adolescents can be screened for STIs and that adolescents also need information on these services.

“My most interesting topic and what I learned the most is a case where a young girl who stays with the grandmother or mother or relative is sent for something but is raped along the way. We always knew that the first thing is to run to the LC1 (Local Council) and police and report the case. But we instead found out that rushing this child to the hospital would save her from many things like pregnancy [and] HIV/AIDS. I loved this so much, because we didn’t know about it as village people.” (Female caregiver aged 45 years)

Adopting positive parent and child relationships to improve SRH communication

Positive parenting skills
Caregivers indicated an improved responsiveness to their children by acknowledging that their role goes beyond giving birth to a child to include nurturing them and having a positive and friendly relationship with them. Caregivers reported more warmth and support towards their children after the intervention. Caregivers transformed their relationships with their children from authoritarian and neglectful parenting to more affective and hands-on parenting styles. Caregivers adopted a friendlier and more composed approach to relating with their children, who in turn reciprocated that warmth of feeling. This was manifested through reports of prioritizing spending more time with their children, treating them with affection and communicating with them in a calmer manner. As mentioned above, they also began to pay greater attention to the physical needs of their children, such as sanitary pads and underwear for girls, to prevent them from obtaining these through transactional relationships that may put them at risk. Adolescents began to talk openly about their personal lives with their caregivers, which was not the case prior to the intervention. Due to the friendly atmosphere, adolescents no longer sought information from third parties in the community but readily approached their caregivers without fear. Moreover, caregivers admitted that this friendly atmosphere had helped to erase their fear of talking about SRH with their children. Some caregivers reported adopting techniques for creating comfortable and friendly environments for conversations with their children, such as taking advantage of meal times in the living room. These improvements also carried over into relationships between caregivers and their non-biological children.

Encouraging children’s agency and negotiating good behaviour
Caregivers described how, by changing their attitudes to create a less fearful environment and by allowing open and comfortable discussions on different issues including sexual and reproductive health, they can now have friendly conversations with their children, listening to their opinions and allowing them the opportunity to make decisions at the household level.
Young adolescents began to participate independently in domestic activities, taking on roles and executing them with minimum supervision from their caregivers. Caregivers reported abandoning punitive approaches, including corporal punishment and arguing with their children, some of which they had adopted from how they themselves were parented, in favour of negotiating good behaviour through open and positive communication and the use of listening skills. They also reported providing social rewards to young adolescents who exhibited good behaviour. This has enabled them to monitor their children in a fearless environment: where they spend their time, whom they spend it with, and how they spend their day. This has earned them a positive change in their children’s behaviour, allowing the children to be more open and transparent about what they do in their parent’s absence.

“I always make some popcorns at home and tell them ‘whoever comes back home early will eat the popcorn and those that will come late will find that the popcorn is finished.’ This makes them come back home early. Also, because I am close with my children, they now tell me the challenges they face. I also tell them that ‘you see you have matured and your breasts have developed, so if you allow a boy to touch your breasts or sleep with him, you will become pregnant, get HIV and die.’” (Female caregiver)

Transformation of gendered parental roles in parent-child relationships
Caregivers revealed a change in attitude towards parenting roles. They now understand the notion of equal parenting roles regardless of the issue and the sex of the child. This was mainly reported among male caregivers. A male caregiver reported previously using third-party communication, i.e., communicating with his children through his wife; after the intervention, he can now talk directly to his children, especially his daughter. Additionally, caregivers have reformed traditional gender attitudes and begun socialising male young adolescents to take on household roles traditionally meant for females.

“My boy is 12 years old and I see that he has changed positively because he now knows that boys can also peel plantains [plantain is a traditional food in Uganda; before cooking it must be peeled and, traditionally, it is the role of the girl to complete this task]. Peeling (plantain) is not for just girls! A boy can also mop the house.” (Female caregiver)

Personal and family life
Participants reported a warmer relationship with their partners and overall improved harmony in families. Couples can communicate with each other on different matters, especially those regarding their children and the household. They reported being able to sit and plan together, which was not the case previously, when individuals abandoned their parenting roles to their partners, creating tensions within households. The training on entrepreneurship and financial literacy empowered parents with new skills on how to improve their household income and support their children.

“Before I used to say, ‘I earned the money myself and I can use it the way I want since I am the family head, without consulting my wife.’ If I got like UGX 150,000, I used to buy 3 kgs of meat for my family to also enjoy, not knowing that my wife maybe could have a loan somewhere and could be paid using this same money and the meat waits until next Saturday.
But after the study sessions, I now engage my wife in planning for the money that I get so that in case she has something bigger than what I had to spend this money on, then we go for that first, and this study has helped me so much with doing this.” (Male caregiver)

Community support and community level changes
Adoption of collective community child-rearing practices was a key change at the community level. Parents reported that they had spread the knowledge they acquired to other children in the community whose caregivers were not part of the intervention. Caregivers also reported making a resolution to improve community connectivity and create community support systems for parenting. With the intervention having been conducted amidst the COVID-19 restrictions, participants reported that the community did not have incidents of unwanted pregnancies, in contrast to many other communities. This was attributed to the intervention.
More so, their children were less fearful when approaching them about SRH and sensitive issues regarding their sexuality. Having developed warmer and friendlier relationships with the young adolescents after the intervention, adolescents could approach caregivers even about the risk of sexual violence in the community, in addition to experiences of bodily changes like menarche. Adolescents could ably seek material support during menstruation like pads and knickers which had not been the case before the intervention. Caregivers recalled their prior experiences of SRH communication with their older adolescents as being uncomfortable. A parent narrated being previously embarrassed to discuss some sensitive topics such as family planning and condoms, but with the intervention, they were more comfortable. A female caregiver shares her experience prior and after the intervention; “ At the start of the training , I would feel shy and wonder how I would mention some difficult terms in front of my children . But now , all is well because we are very close to each other . This is because of the strategies we were taught to use to approach our children . Now they tell me all their problems without fear and I also guide them accordingly . ” (Female caregiver) Gender-sensitive SRH communication Caregivers highlighted challenging cultural norms around gender roles and cross-gender communication. Some caregivers acknowledged they could communicate to their child on SRH regardless of gender differences. This is mainly reported by the male participants. A male caregiver shared his experience of talking to his daughter about menstruation in the absence her mother and supporting her with her menstruation needs. Another acknowledged that despite being a man, the new communication skills acquired from the intervention enabled him to talk to his daughter without having to go through her mother first. “ I had learned and obtained the experience on how to communicate with my daughter without any fear of being a man . This had actually always been hard for me to do before I took part in the study . This has helped very much in preventing her from the bad conduct that was developing . ” (Male caregiver 1) “ Before , I never cared so much about my child , or even never cared that she had become an adolescent . I didn’t even understand much about adolescence . But ever since the intervention started , I learned that there’s a lot a male parent contributes to the growth of a child , and when I understood this , I started paying attention to my daughters , especially this one who is ten years…Recently she got into her first period and her mother wasn’t around but she came running and told me “while I was urinating , I urinated blood” . I quickly went and bought pads for her . This all happened because I had talked to her about her menstruation just like we were taught during the intervention . ” (Male caregiver 2) A female caregiver also appreciates that she has learnt that it is also important to talk to her son about SRH specifically their pubertal changes as well as the fact that they are also at risk of SRH problems if they engage in risky sexual behaviour. Transformation in knowledge and attitudes towards SRH Parents acknowledged their role as the primary SRH communicators with their young adolescents; post-intervention. They appreciated the caregivers’ significant role in helping young adolescents navigate pubertal changes. 
Prior to the study, they believed that sexuality education was solely provided at school, while concurrently expressing their uncertainties on the accuracy of the SRH information provided there. Caregivers now acknowledge their cardinal role in deciphering SRH information for their young adolescent children. Additionally, they appreciate the fact that they can competently talk about SRH with young adolescents of the opposite sex without having to go through their partners. Fathers appreciate being able to talk to their daughters while mothers appreciate being able to talk to their sons about their SRH matters. The stories also indicated increased knowledge among caregivers around different aspects of young adolescent SRH. One caregiver disclosed that before the intervention, he did not pay attention to his children as they entered adolescence, holding onto the belief that they were still too young to receive SRH information. After the training, they reported being more attentive to their children’s pubertal changes. One caregiver appreciated learning from the intervention that their young adolescent is at risk of getting pregnant since she previously believed that only older adolescents were capable of getting pregnant. Learning this prompted her to talk to her child about bodily changes such as menarche at age 12 years and the possibility of getting pregnant even then. “ As a parent I never knew that a girl of 12 years can go into her menstrual periods or become pregnant . I always looked at her as a child but after attending the training , we now sit together and I tell them everything , even if one is [only] 13 years old . I tell her that “boys can impregnate you . ” I go ahead to tell her about her body’s changes . ” (Female caregiver) Caregivers also expressed a more positive attitude towards communicating sensitive SRH issues such as condom use. Several caregivers also mentioned learning more about how to handle instances of sexual violence. One caregiver revealed learning about the process to undertake in case of such violence, such as the need for the adolescent to be examined by a health worker and the provision of preventive therapy for HIV. They also learned that young adolescents can be screened for STIs and that adolescents also need information on these services. “ My most interesting topic and what I learned the most is a case where a young girl who stays with the grandmother or mother or relative is sent for something but is raped along the way . We always knew that the first thing is to run to the LC1 (Local Council) and police and report the case . But we instead found out that rushing this child to the hospital would save her from many things like pregnancy [and] HIV/AIDS . I loved this so much , because we didn’t know about it as village people . ” (Female caregiver aged 45 years) Parents reported communicating about SRH topics like general health and body hygiene, menstruation and menstruation hygiene, wet dreams, body changes and the implications of these changes. In some instances, caregivers communicated to their children about the availability of STI screening services as well as contraception for adolescents. They also reported having discussions around pregnancy and other negative consequences of engaging in risky sexual practices. Some narratives indicated that pubertal changes were a trigger for communicating about SRH risks like unwanted pregnancy or HIV. 
Caregivers reported engaging their children in discussions by encouraging them to approach them in case of any concerns, especially those related to menarche. A female caregiver shared her story of change, emphasizing not just the topics discussed with her children, but also how she felt relaxed as she communicated. “ My children had not started engaging in sex , but I took time and talked to them about STDs and HIV . I asked them if they knew about AIDS and if they know that it kills . One of my children said she had cut herself and asked if that meant she was going to suffer from HIV/AIDS . I told her that it is possible to get AIDS through cuts , but the most important thing right now is to protect themselves especially now that they have already started menstruating because they can become pregnant and also suffer from HIV/AIDS . Once , we were weeding the garden , I told them that ‘do you know that this person almost died ? ’ They asked me the cause which I said was AIDS . I also told them that condoms can prevent one from getting pregnant . I talk to them freely because we are now used to each other . They ask me questions based on what they know from school , which was not the case in the past because I used to fear telling them about such . The more they ask me the more I use the chance to explain to them and get a lot to talk about . They can only ask you questions when they don’t fear you . ” (Female caregiver) However, SRH communication was still marred by threats as when caregivers reported the use of scare tactics to pass on SRH information. Discussions with boys revolved around warnings against engaging in sexual relationships or else facing the consequences of unwanted pregnancies like being incarcerated. Caregivers reported being more comfortable and less anxious or embarrassed discussing SRH with their young adolescents after the intervention. More so, their children were less fearful when approaching them about SRH and sensitive issues regarding their sexuality. Having developed warmer and friendlier relationships with the young adolescents after the intervention, adolescents could approach caregivers even about the risk of sexual violence in the community, in addition to experiences of bodily changes like menarche. Adolescents could ably seek material support during menstruation like pads and knickers which had not been the case before the intervention. Caregivers recalled their prior experiences of SRH communication with their older adolescents as being uncomfortable. A parent narrated being previously embarrassed to discuss some sensitive topics such as family planning and condoms, but with the intervention, they were more comfortable. A female caregiver shares her experience prior and after the intervention; “ At the start of the training , I would feel shy and wonder how I would mention some difficult terms in front of my children . But now , all is well because we are very close to each other . This is because of the strategies we were taught to use to approach our children . Now they tell me all their problems without fear and I also guide them accordingly . ” (Female caregiver) Caregivers highlighted challenging cultural norms around gender roles and cross-gender communication. Some caregivers acknowledged they could communicate to their child on SRH regardless of gender differences. This is mainly reported by the male participants. 
A male caregiver shared his experience of talking to his daughter about menstruation in the absence of her mother and supporting her with her menstruation needs. Another acknowledged that despite being a man, the new communication skills acquired from the intervention enabled him to talk to his daughter without having to go through her mother first.

“I had learned and obtained the experience on how to communicate with my daughter without any fear of being a man. This had actually always been hard for me to do before I took part in the study. This has helped very much in preventing her from the bad conduct that was developing.” (Male caregiver 1)

“Before, I never cared so much about my child, or even never cared that she had become an adolescent. I didn’t even understand much about adolescence. But ever since the intervention started, I learned that there’s a lot a male parent contributes to the growth of a child, and when I understood this, I started paying attention to my daughters, especially this one who is ten years… Recently she got into her first period and her mother wasn’t around but she came running and told me “while I was urinating, I urinated blood”. I quickly went and bought pads for her. This all happened because I had talked to her about her menstruation just like we were taught during the intervention.” (Male caregiver 2)

A female caregiver also appreciated learning that it is important to talk to her son about SRH, specifically about his pubertal changes and the fact that boys are also at risk of SRH problems if they engage in risky sexual behaviour. Overall, parents acknowledged their role as the primary SRH communicators with their young adolescents post-intervention and appreciated the significant role caregivers play in helping young adolescents navigate pubertal changes.
Positive parenting skills

Caregivers indicated an improved responsiveness to their children by acknowledging that their role goes beyond giving birth to the child to include nurturing them and having a positive and friendly relationship with them. Caregivers reported more warmth and support towards their children after the intervention. Caregivers transformed their relationships with their children from authoritarian and neglectful parenting to more affective and hands-on parenting styles. Caregivers adopted a friendlier and more composed approach to relating with their children, who in turn reciprocated that warmth of feeling. This was manifested through reports of prioritizing spending more time with their children, treating them with affection and communicating with them in a calmer manner. As mentioned above, they also began to pay greater attention to the physical needs of their children, such as sanitary pads and underwear for girls, to prevent them from obtaining these through transactional relationships that may put them at risk. Adolescents began to openly talk about their personal lives with their caregivers, which was not the case prior to the intervention. Due to the friendly atmosphere, adolescents no longer sought information from third parties in the community but readily approached their caregivers without fear. Moreover, caregivers admitted that this friendly atmosphere had helped to erase their fear of talking about SRH with their children. Some caregivers reported adopting techniques for creating comfortable and friendly environments in which to have conversations with their children, such as taking advantage of meal times in the living room. These improvements also carried over into relationships between caregivers and their non-biological children.

Encouraging children’s agency and negotiating good behaviour

Caregivers described how, by changing their attitudes to create a less fearful environment and allowing open and comfortable discussions on different issues including sexual and reproductive health, they can now have friendly conversations with their children, listening to their opinions and allowing them the opportunity to make decisions at the household level. Young adolescents began to participate independently in domestic activities, taking on roles and executing them with minimum supervision from the caregivers.
They reported abandoning punitive approaches, including corporal punishment and arguing with their children, some of which were adopted from how they themselves were parented, and taking up negotiating good behaviour through open and positive communication while employing listening skills. They also reported experiences of providing social rewards to young adolescents who exhibit good behaviour. This has enabled them to monitor their children, especially their movements, and to establish where they spend their time, who they spend their time with and how they spend their day, in a fearless environment. This has earned them a positive change in their children’s behaviour, allowing the children to be more open and transparent about what they do in the absence of their parent.

“I always make some popcorns at home and tell them ‘whoever comes back home early will eat the popcorn and those that will come late will find that the popcorn is finished.’ This makes them come back home early. Also, because I am close with my children, they now tell me the challenges they face. I also tell them that “you see you have matured and your breasts have developed, so if you allow a boy to touch your breasts or sleep with him, you will become pregnant, get HIV and die.” (Female caregiver)

Transformation of gendered parental roles in parent-child relationships

Caregivers revealed a change in attitude towards parenting roles. They now understand the notion of equal parenting roles regardless of the issue at hand and the sex of the child. This was mainly reported among male caregivers. A male caregiver reported previously relying on third-party communication, i.e., communicating with his children through his wife. After the intervention, he could talk to his children directly, especially his daughter. Additionally, caregivers have reformed traditional gender attitudes and begun socialising male young adolescents to take on household roles traditionally meant for females.

“My boy is 12 years old and I see that he has changed positively because he now knows that boys can also peel plantains [plantain is a traditional food in Uganda; before cooking, it must be peeled, and traditionally it is the role of the girl to complete this task]. Peeling (plantain) is not for just girls! A boy can also mop the house.” (Female caregiver)

Personal and family life

Participants reported a warmer relationship with their partners and overall improved harmony in families. Couples can communicate with each other on different matters, especially those regarding their children and the household. They reported being able to sit and plan together, which was not the case previously, when individuals abandoned their parenting roles to their partners, creating tensions within households. The training on entrepreneurship and financial literacy empowered parents with new skills on how to improve their household income and support their children.

“Before I used to say, ‘I earned the money myself and I can use it the way I want since I am the family head, without consulting my wife.’ If I got like UGX 150,000, I used to buy 3kgs of meat for my family to also enjoy, not knowing that my wife maybe could have a loan somewhere and could be paid using this same money and the meat waits until next Saturday. But after the study sessions, I now engage my wife in planning for the money that I get so that in case she has something bigger than what I had to spend this money on, then we go for that first, and this study has helped me so much with doing this.
” (Male caregiver)

Community support and community level changes

Adoption of collective community child-rearing practices was a key change at the community level. Parents reported that they had spread the knowledge they acquired to other children in the community whose caregivers were not part of the intervention. Caregivers also reported making a resolution to improve community connectivity and create community support systems for parenting.
Having conducted the intervention amidst the Covid-19 restrictions, participants report that the community did not have incidents of unwanted pregnancies in contrast to many other communities. This was attributed to the intervention. This qualitative evaluation study assessed the effectiveness of a community-based intervention aimed at improving caregiver-young adolescent communication on SRH. The most significant change stories highlight an improvement in caregiver-young adolescent SRH communication as a result of the intervention. Specific improvements included; caregivers broadening the range of topics discussed with young adolescents, an improvement in caregiver knowledge of young adolescent SRH, and increased comfort discussing SRH with young adolescents. Improvement in parent-child relationships and the adoption of positive parenting practices was also a very salient finding from the significant change stories. Overall, the intervention addressed key contextual and underlying issues affecting SRH communication beyond the generic and superficial parameters constantly cited as key influencers of SRH communication such as socioeconomic status, gender, level of knowledge among others. This intervention is unique in a setting where the sexuality of young people is a sensitive matter due to religious and cultural dispositions, making open discussions about sex taboo . Findings prior to the development of this intervention demonstrated that the level of SRH communication increases with greater comfort around SRH discussions . This intervention addressed underlying issues affecting SRH communication including parenting styles and strained parent-child relationships that mediate between comfort around having SRH discussions and the actual experiences of SRH communication. Many participants reported feeling more comfortable and open in their discussions about SRH after participating in the intervention. Some caregivers reported feeling more at ease discussing sensitive topics such as contraception. In this study, SRH encompassed ten broad SRH topics: general health and body hygiene; menstruation and menstruation hygiene; nocturnal emissions in boys; HIV/AIDS and other STIs; handling sexual pressure; sexual conduct; having babies and birth control; romantic relationships; condoms; and sexual violence and reporting. Prior evidence indicated that belief in the importance of SRH communication was an important mediator for parent-young adolescent SRH communication . However, despite reports of increased SRH communication, the stories of change indicated that discussions remained narrowly focused on avoiding potential negative consequences of sex, such as HIV/AIDS, other STIs, and unwanted pregnancies. The conversations are still marred by warnings, threats and spread of fear as is illustrated in previous findings . Indeed, the conversations seemed to avoid explicit discussions of ways to mitigate or prevent these risks. The persistent lack of comprehensiveness in SRH discussions can be explained by the societies’ cultural and religious norms in the study setting that prohibit open discussions surrounding sex . On the other hand, general health and body hygiene as well as menstruation and menstruation hygiene are prominently discussed. Overall, caregivers not only acknowledged the benefit of discussing SRH with young adolescents but importantly expressed willingness to have these discussions after the intervention despite the dearth in the breadth of the topics. 
The stories of change indicated an increased knowledge of the SRH of young adolescents. Knowledge of SRH is an important factor as far as SRH communication is concerned. Caregivers who had a high SRH knowledge were more likely to have adolescents who adopted positive SRH behaviours . Lack of knowledge prior to the intervention was tied to beliefs about the timing of SRH communication, and the onset of puberty; many caregivers felt that the age category (10–14 years) was too young to initiate conversations around sex . Participants reported gaining a better understanding of SRH topics and feeling more informed about the risks and benefits of sexual behaviour. Gender norms have an influence on the content of SRH discussions. The stories of change reflected a transformation of gender norms and roles during SRH communication. Prior findings on gender and its effects on SRH communication revealed that SRH communication was most common between mothers and their daughters, rather than between mothers and their sons or fathers and their sons . Previous studies have also revealed the traditional role of mothers as the primary socializing agents as far as SRH is concerned . In this intervention, stereotypes around gender roles were left behind, especially among the men who initially believed that discussions around SRH topics such as menstruation were the role of the female caregiver. Male caregivers revealed affirmative experiences in adopting these roles of talking to their daughters. One of the most notable effects of the intervention is the great improvement in parent-child relationships. Caregivers acknowledged their initial deficiencies in parenting as well as the strained relationships they had with their children, and how this intervention successfully addressed these challenges. Changes in parenting styles and the enhancement of parent-child relationships have been indicative of promoting good and open SRH communication . Positive parent-child relationships and closeness fosters open relationships, and thus open and comfortable discussions between caregivers and young adolescents about SRH . Further, a warm relationship between the caregiver and young adolescents, builds agency and promotes decision-making skills in sexual relationships. Studies show that agency, or more specifically, motivational autonomy, is a critical resource in attaining positive reproductive outcomes for adolescents . Not coincidentally, this agency also facilitates adolescents’ rejection of traditional gender norms , norms which caregivers also attempted to challenge as a result of the intervention. Community-wide interventions on caregiver-child communication around SRH have shown promising results among caregivers of adolescents . It was therefore imperative to utilize the MSC technique; a participatory approach to evaluate this intervention. The MSC technique as an evaluation approach in this study augments the quantitative findings of the end-line evaluation for this project. The MSC technique specifically helps to identify underlying as well as unexpected changes brought about by the intervention beyond the set domains of change . Previous literature indicates that MSC may not be used as a stand-alone method of evaluation given that it may not capture certain types of changes or may help to capture changes that would rather not be captured using a different approach . Our evaluation has some limitations. 
Stories from 30 participants may not necessarily represent the views of all the other community members who participated in the intervention. Moreover, the participants were selected from a single community, which may limit the generalizability of the findings. As a participatory research project, this study required a high level of rapport building. Given the project’s duration in the community and the frequent collaboration with community members, this rapport may have introduced significant social desirability bias into participants’ responses: participants were overwhelmingly appreciative of the project, offered mostly positive stories of change and were less critical of the intervention. Additionally, the stories report the immediate and short-term effects of the intervention rather than its long-term effects, which remain to be studied. The MSC technique in this study diverges from the original design, especially in the story selection process. Nevertheless, the narrative nature of the evaluation allows us to capture first-hand, relatively open and spontaneous expressions of our participants’ experiences of intervention-inspired changes, in addition to those underlying changes that may not be captured using other, more structured evaluation approaches.

This study assessed the effectiveness of a community-based intervention aimed at improving caregiver-young adolescent communication. Our findings suggest that a community-based multi-session training program for caregivers of young adolescents produced positive effects on SRH communication. Caregivers adopted skills not only directly relating to SRH communication but also skills known to mediate SRH communication, such as parenting styles, knowledge of and attitudes towards SRH, and comfort around having SRH discussions. Future evaluations should explore the long-term impacts of interventions of this nature. Implementation studies should look for ways to scale up community-based interventions in the wider population to test their application in different settings, especially as these concern policies and guidelines on parenting in resource-limited, small communities with cultural and religious taboos around discussing sex. More specifically, such research can focus on empowering parents to promote decision-making skills and motivational autonomy among their adolescent children. Additional future interventions can focus on best practices to normalize open discussions around SRH and dispel deeply ingrained beliefs around sexuality that have been shown to deter positive SRH outcomes for adolescents. |
Estetrol: From Preclinical to Clinical Pharmacology and Advances in the Understanding of the Molecular Mechanism of Action | fe30fc19-3c51-4be7-b4e0-3730d7b4cb0b | 10293541 | Pharmacology[mh] | Estetrol (E4) was recently marketed as the estrogenic component of a new combined oral contraceptive (COC) in combination with the progestin drospirenone (DRSP). It is also currently in late stage clinical development for use as a menopausal hormone therapy (MHT). These advancements for E4 in women’s health are the culmination of efforts to characterize the pharmacological activity and safety profile of this natural estrogen. In addition to the primary therapeutic targets, namely the prevention of pregnancy and the alleviation of vasomotor symptoms (VMS), the pharmacological characterization of E4 has been extended to many other tissues and biological responses. The use of estrogens in the context of contraception and menopause is associated with unwanted effects, including an increased risk of breast cancer and venous thromboembolism (VTE). The role of estrogens and estrogen receptors (ERs) in the breast is well described. Estradiol (E2) physiologically stimulates postnatal mammary gland development . The proliferative rate in normal breast epithelium from women exposed to an estroprogestative combination is significantly higher compared with non-users . Breast cancer is the most commonly diagnosed cancer and about 70% of breast tumors express ERs. In these cancers, estrogen acts as mitotic agent and growth factor promoting tumor growth. Several epidemiological studies have linked the use of hormonal contraception or MHT to an increased risk of developing breast cancer . Besides the ER-mediated effects on cell proliferation, the production of highly reactive metabolites has also been described to play a role in estrogen-induced carcinogenesis. The oxidative metabolism of estrogen leads to the formation of catechol and quinone species that can react with the DNA to create adducts that can give rise to mutations and therefore contribute to the development of tumors . This mechanism of breast carcinogenesis is independent of ER. Accumulation of estrogen-DNA adducts was detected in human breast cancer cells and in human samples (breast tumor tissue, urine and serum) . The liver plays a critical role in hemostasis as it is the primary source of the majority of coagulation factors, anticoagulant proteins and constituents of the fibrinolytic system . The VTE risk in hormonal contraception users is a rare but serious adverse effect that is due to the strong impact of estrogens on the liver, and in particular on the synthesis of hepatic coagulation factors triggering a shift towards a prothrombotic state . The estrogen dosage, the nature of the estrogenic and progestogenic components in combined contraceptives, as well as the route of administration are some of the parameters that can influence the risk of VTE associated with hormonal contraception . Based on the adverse effects reported for other estrogens, special attention was given to deciphering the impact of E4 on the breast and the liver with a particular focus on hemostasis parameters. Preclinical and clinical data suggest that E4 has a more selective pharmacological profile compared with other estrogens. E4 confers adequate estrogenic effects in uterovaginal tissues and bone as well as on cardiovascular and central nervous systems, while it has an overall limited impact on hepatic parameters, including on the hemostasis balance. 
In addition, data obtained in preclinical models suggest that E4 may have a differential impact on breast proliferation and carcinogenesis compared with other estrogens. From a molecular perspective, the interaction of E4 with the ERs has been extensively characterized. The focus of the current review is to provide an overview of the work that led to the characterization of the pharmacological properties of E4 as well as an insight into the recent advances made in the understanding of the molecular mechanisms of action driving its tissue-selective activity and ultimately underlying its favorable benefit–risk ratio. E4 was first discovered and identified by Egon Diczfalusy at the Karolinska Institute in Stockholm in 1965 . E4 belongs to the family of natural estrogens with estrone (E1), estradiol (E2) and estriol (E3). Structurally, E4 has four hydroxyl groups (Fig. ). E4 is produced by the human fetal liver during pregnancy and reaches the maternal circulation as indicated by increasing E4 concentrations in maternal plasma and urine throughout pregnancy. Different studies have shown a consistent steady rise of E4 in maternal plasma during pregnancy to levels up to 1.2 ng/mL at term. Fetal E4 levels are reported to be over 10 times higher than maternal levels . To date, the physiological function of E4 during pregnancy remains unclear. However, the physiological exposure to relatively high concentrations of E4 during pregnancy suggests a good tolerability of the compound. After its discovery, preclinical research studies were conducted with E4. The first experimental data described E4 as a weak estrogen compared with the reference estrogen E2, showing a moderate affinity for the ERs . It was also shown that E4 was able to induce a number of biological changes in the rat uterus, revealing its estrogenic activity . The potential use of E4 as an indicator of fetal well-being was investigated in various studies, but due to the large intra- and interindividual variation of maternal E4 levels, this appeared not to be feasible , and further research into E4 was subsequently abandoned. In the 2000s, scientific interest in E4 was rekindled with the goal of exploring its potential therapeutic use in women’s health. In terms of pharmacokinetic properties, E4 has a high oral bioavailability and a long half-life in humans, in contrast to other natural estrogens such as E2 . Metabolism also highly differentiates E4 from other estrogens. In vitro reaction phenotyping studies were conducted to evaluate the role of drug-metabolizing enzymes in the metabolism of E4. Furthermore, to obtain a comprehensive understanding of the metabolic behavior of E4 in humans, metabolite profiling by mass spectrometry was performed, with samples collected during a phase I trial in which participants received an oral dose of radiolabeled-E4. Cytochrome P450 (CYP) enzymes do not play a major role in the metabolism of E4 , and, instead, E4 undergoes phase II metabolism with the production of inactive conjugated metabolites. Human metabolite profiling showed that the main metabolites observed in plasma after oral administration are E4-16-glucuronide, E4-3-glucuronide and E4-glucuronide-sulfate. E4 is not converted back into other active estrogens such as E3, E2 or E1 and is therefore considered as a terminal end-product of estrogen metabolism . 
In order to gain insight into the pharmacological profile of E4, this compound was tested in a large panel of preclinical in vitro and in vivo models and then in clinical trials involving women of reproductive age as well as postmenopausal women. Current knowledge on the pharmacological activity of E4 includes data on the prevention of pregnancy, as well as the alleviation of menopausal symptoms. The biological responses induced by E4 on uterovaginal tissues, bones, the cardiovascular system, and the breast, and in regard to glucose metabolism, lipid profile, hepatic proteins and hemostasis balance, were also investigated and are presented below.

Prevention of Pregnancy

While the inhibition of ovulation is primarily induced by the progestin contained in a COC, the estrogenic component assists the progestin in its contraceptive activity and provides adequate cycle control. The efficacy of E4 to inhibit ovulation was first assessed and confirmed in rats, showing that the anti-ovulatory effect of E4 was dose-dependent, with two administrations per day of 0.3 mg/kg E4 effectively inhibiting ovulation in cycling rats. In this experiment, the relative potency of E4 was about 18-fold lower than the synthetic estrogen EE. Based on these preclinical data, it was concluded that E4 was a good candidate to be the estrogenic component of a COC. A phase II dose-finding pilot study was conducted to evaluate the efficacy of different doses of E4 (5–20 mg) in combination with a progestin (levonorgestrel or DRSP) in suppressing the pituitary-ovarian axis and ovulation in healthy premenopausal women for three consecutive cycles. Participants receiving EE 20 µg/DRSP 3 mg served as a reference group. The compounds were well tolerated, and all treatments resulted in inhibition of ovulation. Inhibition of ovarian activity was more pronounced in the highest E4 dose group and was very similar to that observed for the EE/DRSP group. Another published clinical trial that included healthy young women with proven ovulatory cycles further demonstrated adequate ovulation inhibition and ovarian function suppression for the combination of E4 15 mg/DRSP 3 mg in a 24/4-day regimen for three consecutive cycles. None of the participants using E4/DRSP ovulated during E4/DRSP use, while the subsequent return of ovulation occurred, on average, 15.5 days after treatment discontinuation. Two comparable pivotal phase III clinical studies, conducted in North America (NCT02817841) and Europe/Russia (NCT02817828), assessed the contraceptive efficacy of the combination E4/DRSP. In the trial conducted in North America evaluating 1674 women aged between 16 and 35 years for 13 cycles, the overall and method-failure pregnancy rates were evaluated using the Pearl Index (PI) and life-table analysis. A PI of 2.65, a method-failure PI of 1.43, and a 13-cycle life-table pregnancy rate of 2.1% were reported, indicating that E4/DRSP is an effective method of contraception. The trial conducted in Europe and Russia that included 1353 women aged 18–35 years who used E4/DRSP for 13 cycles also showed high contraceptive efficacy, with a low PI of 0.47 pregnancies/100 woman-years. This PI value is similar to that of marketed DRSP-containing COCs such as Yaz® and Yasmin®. A pooled analysis of both phase III studies further demonstrated that E4/DRSP is an effective oral contraceptive overall and, importantly, also across subgroups based on age, contraceptive history and body mass index.
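For orientation, the PI values quoted above follow the conventional definition of the Pearl Index as the number of on-treatment pregnancies per 100 woman-years of exposure; expressed as a formula (a standard formulation given for context, not a description of the exact statistical methodology used in these trials):

\[
\mathrm{PI} \;=\; \frac{\text{number of pregnancies} \times 100}{\text{woman-years of exposure}} \;=\; \frac{\text{number of pregnancies} \times 1300}{\text{number of 28-day cycles of exposure}}
\]

Illustratively, 5 pregnancies observed over 13,000 treatment cycles (equivalent to 1000 woman-years) would correspond to a PI of 0.5.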
The COC consisting of E4 15 mg/DRSP 3 mg is now approved and marketed in different territories, including Europe, Russia, the US, Canada and Australia.

Alleviation of Vasomotor Symptoms

The efficacy of E4 to alleviate hot flushes was investigated in an experimental animal model considered representative of menopausal VMS. This experimental model consists of recording the thermal responses in the tail skin of morphine-dependent ovariectomized rats after morphine withdrawal by administration of naloxone. E4 was efficacious in alleviating hot flushes and suppressed the increase in tail skin temperature in a dose-dependent manner. In this model, the equipotent dose of E4 was 10 times higher than EE, suggesting that the potency of E4 may be lower than that of EE, although only one dose of EE was tested. A dose-finding phase II clinical trial (E4Relief—NCT02834312) was conducted to select the effective dose of E4 for the treatment of VMS in postmenopausal women. A total of 257 postmenopausal women aged 40–65 years, presenting with at least seven moderate to severe hot flushes per day or at least 50 moderate to severe hot flushes per week, received a daily dose of E4 (2.5, 5, 10 or 15 mg) for a period of 12 weeks. During that period, the efficacy of E4 in alleviating VMS was assessed by recording (in an e-diary) the frequency and severity of hot flushes, with statistical analysis performed at weeks 4 and 12. The frequency of hot flushes decreased with all tested E4 doses, with the most pronounced changes observed in the E4 15 mg group. The difference in the percentage change of weekly hot flush frequency was significant for the E4 15 mg group versus placebo at both week 4 and week 12. The decrease in severity of hot flushes was significantly more pronounced for E4 15 mg than for placebo at both week 4 and week 12. With the other doses having failed to promote statistically significant effects versus placebo, E4 15 mg was considered to be the minimum effective daily oral dose for the treatment of VMS.

Effects of E4 on Uterovaginal Tissues

The ability of E4 to bind to the rat uterine ER, albeit with a lower binding affinity than E2, was originally demonstrated in 1976. Subsequently, a study in 1979 that evaluated the uterine response to E4 following subcutaneous administration of the compound to immature rats showed that E4 influenced the uterine weight, luminal fluid volume and protein content. The estrogenic action of E4 in the uterus has been confirmed in more recent preclinical studies. In ovariectomized female rats treated daily orally for 7 days, E4 1 mg/kg/day and 3 mg/kg/day induced a statistically significant increase in uterine wet weight compared with the vehicle group. The potency of E4 was estimated to be approximately 20-fold lower than EE in this rat model. An acute treatment with E4 in ovariectomized mice induced uterotrophic effects and changes in uterine gene expression. Luminal epithelial height and stromal height were significantly increased by subcutaneous administration of E4 1 mg/kg. Accordingly, epithelial proliferation measured by Ki67 staining was also increased in mice treated with E4. The expression of a set of uterine genes known to be regulated by estrogen was evaluated in ovariectomized mice 6 h after a treatment with E4, and this transcriptomic analysis revealed that all E2-responsive genes in the uterus were also modulated by E4. In most cases, a 100-times higher dose of E4 was necessary to mimic the transcriptional effect induced by E2.
The gene expression profile and the histological changes induced by concomitant treatment with E2 and E4 was similar to the profile induced by E2 alone . The estrogenic activity of E4 was also shown in the vagina in preclinical models. A modified Allen–Doisy test conducted in ovariectomized female rats showed that E4 induced vaginal cornification in a dose-dependent manner after 5 days of oral treatment (E4 0.1, 0.3, 1 or 3 mg/kg) . In ovariectomized mice, morphological and functional changes in the vagina were observed after chronic treatment with E4 (subcutaneous minipumps releasing 1 or 6 mg/kg/day), including an increase in vaginal weight, an increase in vaginal epithelial proliferation and epithelial height, as well as an increase in vaginal lubrication after cervical vaginal stimulation . The endometrium plays a central role in the uterine bleeding process. One of the purposes of including an estrogen in a COC is to counterbalance the effects of the progestin on the endometrium, thereby providing good cycle stability and an acceptable bleeding pattern. The fact that a reduction of the estrogen dose in COCs or the use of progestin-only pills often results in bleeding irregularities clearly illustrates this role for the estrogenic component . A regular and predictable bleeding profile is an important factor influencing COC choice, acceptability and adherence. Bleeding data from different clinical trials highlight the favorable and highly predictable bleeding pattern with limited unscheduled bleeding/spotting for the combination of E4/DRSP . A pooled analysis of two phase III trials including bleeding data from over 3400 participants showed that the use of the E4 15mg/DRSP 3mg COC in a 24/4-day treatment regimen is associated with a regular and predictable bleeding pattern . This further demonstrates the adequate estrogenic activity of E4 on the endometrium as well as its capacity to counterbalance the effects of the progestin to stabilize the endometrium and offer a good cycle control. In postmenopausal women receiving oral E4 alone (2.5, 5, 10 or 15 mg) for a period of 12 weeks, the endometrial thickness increased during treatment in a dose-dependent manner. While the mean endometrial thickness at baseline was 2.5 mm and was comparable among groups, a mean endometrial thickness of 3.9 mm (E4 2.5 mg) to 6.2 mm (E4 15 mg) was reported at week 4. The endometrial thickness remained stable until week 12 for all groups except the E4 15 mg group, for which the mean endometrial thickness increased to 7.9 mm. However, no endometrial hyperplasia was observed in any of the treatment groups. The endometrial thickness normalized and returned to baseline levels (3.2 mm) following progestin treatment (10 mg dydrogesterone daily for 14 days) at study completion . In the same trial with postmenopausal women, the effects of oral E4 were also evaluated on vaginal cytology, genitourinary syndrome of menopause, and health-related quality-of-life. The different outcomes included the vaginal epithelial cell maturation index, maturation value, vaginal pH, the genitourinary syndrome of menopause score (vaginal dryness, vaginal pain associated with sexual activity, vaginal irritation/itching, dysuria; reported in an e-diary) and the Menopause Rating Scale (MRS) at baseline and at week 12. Overall, E4 promoted estrogenic effects in the vagina and decreased signs of atrophy, confirming that E4 is a promising treatment option for these menopausal symptoms. 
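The maturation value referred to above is commonly derived from the vaginal maturation index, i.e., the relative proportions of parabasal, intermediate and superficial epithelial cells in a vaginal smear. A standard weighting, given here for orientation rather than as the exact scoring used in the trial, is:

\[
\mathrm{MV} \;=\; 1.0 \times (\%\ \text{superficial cells}) \;+\; 0.5 \times (\%\ \text{intermediate cells}) \;+\; 0.0 \times (\%\ \text{parabasal cells})
\]

so a shift away from parabasal and intermediate cells towards superficial cells raises the maturation value, consistent with improved vaginal estrogenization.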
Regarding vaginal cytology, a decrease in parabasal and intermediate cells and an increase in superficial cells was observed at week 12 in all E4 groups compared with baseline, indicating improved vaginal estrogenization, with a significant effect in the E4 15 mg group. Additionally, the maturation value was increased in all E4 groups. Vaginal pH decreased in all E4 groups and slightly increased in the placebo group. In terms of self-reported genitourinary symptoms, compared with placebo, the intensity score at week 12 significantly decreased for vaginal dryness (in the E4 15 mg group) and vaginal pain (in the E4 5, 10 and 15 mg groups), while the changes observed for irritation/itching and dysuria were not significant. Of note, this trial was designed with a primary goal of assessing the effect of E4 on VMS as opposed to focusing specifically on genitourinary symptoms. The MRS score decreased in all E4 treatment groups after 4 and 12 weeks of treatment, with the most pronounced effects in the E4 15 mg group, highlighting an improvement in terms of quality of life and well-being.

Effects of E4 on Bone Metabolism

Although an effect on human osteoblastic cell proliferation was not detected in vitro, in vivo studies have suggested that E4 may play a beneficial role in the maintenance of bone mass. A preclinical bone study conducted in ovariectomized female rats (a model of postmenopausal osteoporosis) showed that an oral treatment of E4 (0.1, 0.5 or 2.5 mg/kg/day) for 4 weeks significantly prevented the ovariectomy-related increase in osteocalcin levels, and improved bone mineral density and content, while also increasing bone strength. These bone-sparing effects induced by E4 were dose-dependent, although, similar to other studies, the potency of E4 was lower than for EE. In healthy women of reproductive age using E4/DRSP for three consecutive cycles, no imbalance in bone markers was observed. In line with the preclinical data, in a multiple-rising-dose study with postmenopausal women, E4 treatment induced changes in bone turnover markers, including a substantial dose-dependent decrease in osteocalcin levels, suggesting a preventative effect on bone loss. In the phase II E4Relief trial (NCT02834312), in which postmenopausal women received E4 2.5, 5, 10, 15 mg or placebo daily for 12 weeks, changes in bone turnover markers (osteocalcin and type 1 collagen C-terminal telopeptide [CTX-1]) were evaluated at week 12 compared with baseline and versus placebo. CTX-1 levels significantly decreased from baseline in the E4 5 mg, 10 mg and 15 mg groups. In the analysis versus placebo, the decrease was significant in the E4 10 mg and 15 mg groups. The impact of E4 (5, 10 and 15 mg groups) on osteocalcin after 12 weeks of treatment was not significant from baseline but was significant versus placebo. While this effect is consistent with the role of estrogens in bone remodeling and supports the potential beneficial effect of E4 in osteoporosis, additional clinical data (including long-term bone marker measurements, bone density scan and fracture data) are needed to validate this effect. Further evidence regarding the benefits of E4 on the bone came from a phase II study evaluating a high dose of E4 (40 mg) in male patients with advanced prostate cancer requiring androgen deprivation therapy (ADT), where E4 was being evaluated as an add-on to ADT to improve the efficacy and adverse effects of ADT, including ADT-induced bone loss.
Therefore, the secondary endpoints of the study included the assessment of bone metabolism (osteocalcin and type I collagen telopeptide). While bone metabolism markers increased in the group receiving luteinizing hormone-releasing hormone agonist alone (48% for osteocalcin and 151% for CTX-1) at week 24, those turnover parameters decreased significantly from baseline in the group cotreated with E4.

Effects of E4 on the Cardiovascular System

It is known that estrogens modulate cardiovascular physiology and function, and as such the impact of E4 has been thoroughly assessed in preclinical models of different cardiovascular functions, including nitric oxide (NO) production, vasodilation, endothelial healing, atherosclerosis, neointimal proliferation and hypertension prevention. To date, the evidence on these cardiovascular functions is limited to preclinical data, with no clinical data available yet.

Nitric Oxide Production and Vasodilation

Endothelial NO is a key player for vascular function and vasodilation and is a known target of estrogens. In vitro, E4 induced rapid NO release and stimulated endothelial NO synthase (eNOS) activation and expression in human umbilical vein endothelial cells (HUVECs). However, E4 was significantly less effective compared with E2. When E4 was combined with E2, E4 antagonized NO synthesis induced by pregnancy-like E2 concentrations. However, E4 did not impede the induction of NO synthesis induced by lower E2 concentrations. These data support that E4 may be a regulator of NO synthesis in human endothelial cells. In a mouse model of carotid artery, E4 used at different dose levels (0.3, 1 and 6 mg/kg/day) failed to stimulate eNOS activation or endothelial NO production, while E2 was able to promote these two responses. When E4 was used in combination with E2, E4 antagonized the effects induced by E2 in mouse carotid artery. The combination E4+E2 therefore failed to promote eNOS activation and NO production in this experimental model. Based on the antagonistic activity of E4 in the presence of E2 on NO release described above, lower cardiovascular effects (such as vasodilation) could be expected in the presence of E4. Importantly, several studies have confirmed that NO production, which is essential for adequate vasodilation and endothelial function, is controlled by multiple factors besides estrogens. The regulation of vascular tone by endothelium-derived NO is mediated by multiple controlling mechanisms, including physical factors such as an increase in shear stress or a reduction in temperature, as well as by neurohumoral mediators through the activation of specific endothelial cell membrane receptors. The main physiological driver of NO production is shear stress, and estrogens are considered to play a limited role in the regulation of endothelium-derived NO production and subsequent physiological vasodilation. The impact of E4 on shear stress was evaluated in an ex vivo model of flow-mediated vasodilatation. Chronic treatment with E4 promoted the occurrence of flow-mediated arteriolar remodeling in ovariectomized mice after an increase in blood flow, demonstrating that the presence of E4 did not impair NO-mediated vasodilation. Moreover, E4 was shown to induce vasodilation of animal arteries by a specific mechanism distinct from NO production, whereby E4 induced the vasodilation of ewe uterine arteries at high concentrations.
It also induced ex vivo relaxing responses in eight different vascular beds: rat uterine, aorta, carotid, mesenteric, pulmonary, renal, middle cerebral and septal coronary arteries. The vasodilation induced by E4 in rat arteries was ER-dependent, since it was abrogated by the ER antagonist ICI 182,780. Blockade of eNOS by Nω-nitro-L-arginine methyl ester (an NO synthase inhibitor) blunted the E2-mediated, but not E4-mediated, relaxing response, demonstrating that E2, but not E4, induced vasodilation by stimulating eNOS activity. Overall, this study shows that E4 induced relaxation of precontracted rat arteries via both an endothelium-dependent mechanism and a guanylate cyclase mechanism. In conclusion, NO production is not the only mechanism eliciting the beneficial impact of estrogens on the vasculature. The lack of E4-induced eNOS activity and NO release observed in some but not all experimental models should not be associated with any vascular safety concerns.

Endothelial Healing

The preclinical model of endothelial healing is usually used to assess the vascular protective effects of a compound. The acceleration of endothelial healing by estrogens is considered a vasculoprotective action. A recent study demonstrated that chronic treatment with E4 (subcutaneous pellet) was able to accelerate endothelial healing after carotid artery injury in ovariectomized mice. The quantitative analysis of re-endothelialized areas, performed 5 days after endovascular injury, showed an increase of 30% in endothelial regeneration in control mice compared with day 0, and an increase of about 80% in mice treated with E4. It was previously reported in another study published by the same group that E4 was not able to promote endothelial healing in the mouse carotid artery model. In the experimental model used for that study, both the artery media and endothelium were injured by electrocoagulation (perivascular injury) and the endothelial regeneration process was evaluated 3 days post-injury by quantification of the re-endothelialized area. In these conditions, no effect was observed with E4, regardless of the dose levels used (0.3, 1 or 6 mg/kg/day). Davezac et al. showed that, in contrast, a model of specific endothelial destruction of the carotid artery, preserving smooth muscle cells, does not lead to the same results. When the injury is limited to the artery endothelium and the underlying layer of vascular smooth muscle cells stays intact, E4 is able to accelerate endothelial healing (re-endothelialization) after artery injury, highlighting that smooth muscle cells are necessary for E4 to mediate this endothelial function in mice. These conflicting results, at first glance, illustrate the crucial importance of the preclinical models and experimental conditions when interpreting data. A recent study evaluating the impact of estrogens used in oral contraceptives on human endothelial function showed that E4 (10⁻⁹ to 10⁻⁷ M) significantly enhanced migration of HUVECs using scratch and Boyden chamber assays. The effect of E4 on endothelial migration was comparable with the effect of EE, suggesting comparable vascular remodeling and regeneration capacity.

Atherosclerosis Prevention

The impact of E4 on the prevention of atheroma was assessed in low-density lipoprotein receptor-deficient (LDLr⁻/⁻) mice fed a high-cholesterol diet, a well-described model to investigate the atheroprotective effects of estrogens.
E4 used at 0.6 or 6 mg/kg/day in the diet for 12 weeks prevented lipid deposition and reduced atheroma deposits in the aortic sinus of ovariectomized LDLr⁻/⁻ mice by up to almost 80% in a dose-dependent manner. E4 also decreased the total plasma cholesterol in these mice.

Neointimal Hyperplasia Prevention

Neointimal hyperplasia refers to post-intervention (e.g., after mechanical atherosclerosis treatment) pathological vascular remodeling due to the proliferation and migration of vascular muscle cells into the tunica intima layer. Neointimal hyperplasia can ultimately result in vascular wall thickening and in a reduction of the lumen diameter, which in turn leads to vascular insufficiency and restenosis. In a mouse model of femoral artery mechanical injury, E4 prevented neointimal hyperplasia by a direct inhibitory effect on the proliferation and migration of vascular smooth muscle cells but not by acting on endothelial cells. Morphometric analysis showed that, 28 days after the injury, the mice treated with E4 exhibited a reduced neointima/media ratio.

Hypertension Prevention and Arteriolar Remodeling Promotion

Additional vasculoprotective actions were described after chronic treatment with E4, including the prevention of angiotensin II-induced hypertension, which is a major risk factor for cardiovascular diseases, and the restoration of arteriolar flow-mediated remodeling, which has a major role in the homeostasis of tissue perfusion. In that study, flow-mediated remodeling was evaluated in mesenteric arteries isolated from ovariectomized mice treated with vehicle or E4 over 2 weeks. The arterial diameter was measured in response to stepwise increases in pressure in mesenteric arteries submitted to high flow or to normal flow. The effect of E4 on angiotensin II treatment was evaluated in ovariectomized female mice implanted with osmotic minipumps delivering angiotensin II or a combination of angiotensin II and E4 for 1 month, with systolic blood pressure being measured weekly. E4 prevented angiotensin II-induced hypertension and favored flow-mediated remodeling.

Effects of E4 on the Breast

Sex steroids promote the growth of certain hormone-dependent tissues and tumors. Efforts have been made to characterize the impact of E4 on breast epithelial cell proliferation and breast cancer growth in preclinical models and preliminary clinical trials.

Normal Breast Epithelial Cell Proliferation

In vitro exposure of normal human breast epithelial cells for 96 h to 10 nM E2 elicited a maximal cell proliferation increase of about 60%. At the same concentration, E4 did not increase human breast epithelial cell proliferation. A 100 times higher concentration of E4 (1 µM) was necessary to stimulate proliferation to the same extent as E2. To evaluate the effect of E4 on the mammary gland, prepubertal ovariectomized mice were treated orally with different dose levels of E4 (0.3, 1, 3 or 10 mg/kg/day) or with E2 (1 mg/kg/day) for 14 days, after which mammary glands were collected and epithelial cells isolated. The level of epithelial proliferation assessed by the expression of cyclin D1 and Ki67 mRNA was significantly lower in mice treated with E4 (at any dose level) compared with mice treated with E2, suggesting a lower proliferative effect for E4.

Breast Cancer Growth

E4 also exhibits a lower potency than E2 to induce human breast cancer cell growth. Liu et al.
Liu et al. investigated the impact of different estrogens, including E2 and E4, on the proliferation of the ER-positive breast cancer cell line ZR-75-1 in vitro. All estrogens tested caused a significant stimulation of cell proliferation. At the lowest concentration (10⁻¹⁰ M), E4 had a significantly lower stimulatory effect than E2, while at higher concentrations (≥10⁻⁹ M) E2 and E4 stimulated cell proliferation to the same extent. In another assay, using MCF-7 cells transfected with PGRMC1, E4 was also significantly less active than E2 in promoting cell proliferation: at 10⁻¹⁰ M, E2 increased the proliferation rate by about 160%, whereas E4 induced an increase of only 50% compared with the control condition; at higher concentrations (≥10⁻⁹ M), the same proliferative effect (about +160%) was elicited by E2 and E4. In another study, a 1000-fold higher concentration of E4 was needed to promote MCF-7 and MCF-7/BOS cell growth in vitro to the same extent as E2, confirming the weaker potency of E4 for inducing human breast cancer cell growth compared with E2 in vitro.

Estrogen supplementation is necessary for MCF-7 cells to grow and form tumors in vivo. To determine whether E4 could achieve the same effect as E2 in this model, ovariectomized immunodeficient mice implanted with MCF-7 cells received daily oral treatment with E4 (0.5, 1, 3 or 10 mg/kg/day) or E2 (3 mg/kg/day). After 5 weeks of treatment, E2 promoted tumor growth, with tumor weights fivefold higher than in the untreated group. No significant difference was observed between the untreated control group and mice treated with E4 0.5 mg/kg/day; E4 was as efficient as E2 in promoting tumor growth only at the highest dose level of 10 mg/kg/day, confirming the lower potency of E4 for inducing breast cancer growth compared with E2 in vivo. The effect of combined treatment with E2 and E4 on MCF-7 tumor growth was also analyzed: ovariectomized mice implanted with MCF-7 cells and a subcutaneous E2 pellet received daily oral treatment with E4 (1, 3 or 10 mg/kg/day) for 5 weeks. Under these conditions, E4 attenuated E2-induced tumor growth in a dose-dependent manner, and exposure to the combination of E2 + E4 decreased tumor volume and tumor weight by approximately 50% compared with mice exposed to E2 alone. This antagonistic effect of E4 in the presence of E2 was also observed for MCF-7 cell proliferation in vitro and became maximal when E4 was at least 100 times more concentrated than E2. In a broader preclinical study combining genetically engineered mouse models, human cell line xenografts and patient-derived xenografts of authentic hormone-dependent breast tumors, the authors showed a limited effect of E4 on breast cancer growth in vivo when used at doses similar to the therapeutic levels required for contraception or menopause.

Breast Cancer Migration and Invasion

Breast cancer cell movement requires remodeling of the actin cytoskeleton, a process involving estrogen-mediated signaling pathways. Interestingly, it has been demonstrated that E4 acts as a weak estrogen on breast cancer cell migration and invasion. The effects of E4 on its own, or in the presence of E2, were tested on T47-D breast cancer cell migration and invasion of three-dimensional matrices. Exposure of T47-D cells to E4 weakly stimulated migration and invasion in comparison with E2. In addition, E4 decreased the extent of movement and invasion induced by E2.
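Throughout these comparisons, the fold-differences in concentration or dose quoted above (for example, the 100- to 1000-fold higher E4 concentrations needed to match E2 in the proliferation assays) can be read as rough estimates of relative estrogenic potency, conventionally expressed as the ratio of equi-effective concentrations (e.g., EC50 values). The relation below is a generic illustration of this convention, not a calculation reported in the cited studies:

\[
\text{relative potency of E4 versus E2} \;\approx\; \frac{\mathrm{EC}_{50}(\mathrm{E2})}{\mathrm{EC}_{50}(\mathrm{E4})}
\]

On this reading, requiring a 100-fold (or 1000-fold) higher E4 concentration to reproduce the E2 response corresponds to a relative potency on the order of 1% (or 0.1%) in that particular assay; the same logic applies to the equipotent-dose comparisons with EE reported elsewhere in this review.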
Clinical Data

The effect of 14 days of preoperative treatment with E4 20 mg/day on tumor proliferation markers was investigated in a preoperative window trial in 30 pre- and postmenopausal women with ER+ early breast cancer. E4 had a significant proapoptotic effect on tumor tissue, whereas Ki67 expression (a marker of cell proliferation) remained unchanged in both pre- and postmenopausal women. The efficacy of high doses of E4 in postmenopausal patients with pretreated, locally advanced and/or metastatic ER+/HER2− breast cancer was assessed in a phase IB/IIA dose-escalation study in which successive cohorts of three patients received oral E4 20, 40 or 60 mg/day for 12 weeks. Five of nine patients completing 12 weeks of E4 treatment showed objective antitumor effects, as evaluated by computed tomography according to the Response Evaluation Criteria in Solid Tumors (RECIST), with stabilization of the disease in four patients and one complete response. The complete response was seen with the 20 mg dose, and stabilization of the disease was observed in one patient in the 20 mg group and in three patients treated with 40 mg.

Lipid Profile, Carbohydrate Metabolism and Metabolic Disorders

The use of hormone therapy can affect metabolic markers such as total cholesterol, low-density lipoprotein cholesterol (LDLc), high-density lipoprotein cholesterol (HDLc), triglycerides and glucose levels. The effect of E4 on the pathophysiological consequences of a Western diet (42% kcal fat, 0.2% cholesterol) was evaluated in mice. Weekly body weight measurements showed that chronic treatment with E4 reduced body weight gain and protected mice against Western diet-induced obesity. After 7 weeks of Western diet feeding, E4 improved glucose tolerance, and at the end of the protocol fasting glucose levels were significantly lower in E4-treated mice. A reduced accumulation of subcutaneous, perigonadal and mesenteric adipose tissue was also observed in the E4-treated group. In addition, disorders associated with obesity, such as atherosclerosis and steatosis, were prevented in mice fed a Western diet and treated with E4. Furthermore, E4 induced a lower accumulation of lipids in the liver; accordingly, the expression of genes involved in lipid metabolism, including cholesterol metabolism and lipoprotein assembly, was decreased in the livers of E4-treated mice compared with control mice. The study concluded that E4 prevents Western diet-induced obesity by increasing locomotor activity and energy expenditure.

In postmenopausal women receiving E4 2.5, 5, 10 or 15 mg daily for 12 weeks in the E4Relief trial (NCT02834312), absolute changes from baseline in triglyceride levels were minimal in all study groups and were not significant compared with placebo. HDLc increased from baseline in all E4 groups, while no increase was observed in the placebo group. An increase from baseline was observed for LDLc in the E4 2.5 and 5 mg groups and for total cholesterol in the E4 2.5, 5 and 10 mg groups; none of these changes was significantly different from placebo. Regarding glucose metabolism, no significant change from baseline was observed in fasting glucose, while significant decreases in insulin resistance and hemoglobin A1c were seen in the E4 10 and 15 mg groups, respectively, suggesting improved glucose tolerance. The impact of E4 in combination with a progestin on lipid metabolism was evaluated in healthy women.
Participants (healthy women aged 18–35 years) received E4/DRSP, E4/LNG or EE/DRSP (as a comparator) for three consecutive cycles. Only minor effects on lipoproteins were observed in the E4 groups, and the effects on triglycerides in the E4 groups were significantly smaller than in the EE group, demonstrating that E4-containing COCs have a limited effect on lipid metabolism. The effect of the combination E4 15 mg/DRSP 3 mg on metabolic parameters, including lipid profile and carbohydrate metabolism, after six treatment cycles was then evaluated in healthy subjects. The study included two frequently used EE-containing COCs as comparators, one with LNG and one with DRSP, to validate changes related to the estrogen component. E4/DRSP had a minimal impact on lipid parameters; the largest effect was observed for triglycerides (+24.0%), which was smaller than with EE/LNG (+28.0%) and EE/DRSP (+65.5%). With E4/DRSP, no significant changes from baseline were observed for LDLc, total cholesterol, the HDLc/LDLc ratio, or lipoprotein A. Carbohydrate parameters, including fasting insulin and glucose, C-peptide and HbA1c, remained relatively stable in all treatment groups. Oral glucose tolerance test (OGTT) glucose and insulin concentrations varied substantially, with no remarkable differences between treatments. Changes in carbohydrate parameters were minimal, pointing towards a negligible impact on glycemic control. Taken together, these data indicate a low impact of E4 on lipid and carbohydrate metabolism. The COC containing E4 15 mg/DRSP 3 mg is also associated with a favorable effect on body weight control.

Effects of E4 on Liver Proteins and Hemostasis Balance

Sex hormone binding globulin (SHBG) is an estrogen-responsive protein produced by the liver that reflects the overall estrogenic impact of a compound on the liver. Moreover, plasma SHBG levels can modify the plasma distribution of natural steroid ligands. The effect of E4 on SHBG production has been evaluated in vitro in human HepG2 cells and in human Hep89 cells overexpressing ERα, and compared with the effect of other estrogens. Exposure to E4 (0.1–1000 nM) for 24, 48 or 72 h did not stimulate SHBG production in either cell line. In contrast, a significant dose-dependent increase in SHBG production was observed after exposure to other estrogens such as EE, E2 and E3. These in vitro data suggest that E4 is less likely to modulate plasma SHBG levels. In a clinical trial including 49 postmenopausal women, treatment with escalating doses of E4 (2–40 mg) for 28 days induced a dose-dependent increase in SHBG levels. Across the different E4 doses, only the 10 mg E4 group elicited an increase in SHBG levels similar to that of the 2 mg E2-valerate group (59% and 62%, respectively), suggesting a lower potency of E4 for stimulating SHBG production. In the E4Relief trial (NCT02834312) conducted in postmenopausal women, a dose-dependent increase in SHBG levels compared with baseline was also observed in the E4-treated groups (+10.3%, +23.3%, +61.8% and +99.4% for E4 2.5, 5, 10 and 15 mg, respectively). Moreover, the combination E4/DRSP given to healthy women for six cycles had a significantly smaller impact (+55%) on SHBG production than the combination EE/DRSP (+251%).
In conclusion, although oral treatment with E4 increases SHBG production in clinical trials, in contrast to the absence of effect reported in in vitro assays, the effect of E4 remains small compared with other estrogens, suggesting a lower estrogenic effect on the liver.

Regarding coagulation risk, preclinical studies showed that ovariectomized female mice chronically treated with E4 exhibited a prolonged tail-bleeding time and were protected from arterial and venous thrombosis in vivo. In addition, E4 treatment decreased ex vivo thrombus growth on collagen under arterial flow conditions. To assess the effects of the E4-containing COC on hemostasis parameters, healthy women received the combination E4/DRSP, or EE/LNG or EE/DRSP as comparators, for six cycles. Activated protein C resistance is observed in COC users, and this functional assay is used to assess the thrombogenic potential of COCs. The median change in endogenous thrombin potential (ETP)-based activated protein C resistance (APCr) at cycle 6 was +30% for E4/DRSP, +165% for EE/LNG and +219% for EE/DRSP. Changes in hemostasis parameters, including anticoagulant and fibrinolytic proteins, after six cycles of E4/DRSP were smaller than or similar to those observed for EE/LNG, whereas much more pronounced changes were observed with EE/DRSP. Absolute changes from baseline in hemostasis parameters were also minimal in postmenopausal women receiving E4 alone for 12 weeks. The thrombin generation assay is used as a marker of hypercoagulability and of the risk of VTE. A comparative assessment of the impact of E4/DRSP versus EE/LNG or EE/DRSP on thrombin generation was conducted with data collected from trial NCT02957630, in which thrombograms and thrombin generation parameters were extracted for each subject at baseline and after six cycles of treatment. E4 in combination with DRSP had no impact on thrombin generation, in contrast to EE-containing products, which induce the production of procoagulant factors and a decrease in the synthesis of anticoagulant factors, thereby shifting hemostasis towards a prothrombotic state. It can therefore be concluded that EE-containing products induce a prothrombotic environment, whereas E4 exhibits a neutral profile on hemostasis (Fig. ). A pooled analysis of data from two phase III trials including 3417 participants showed that the combination E4/DRSP is associated with an overall favorable safety profile; a single case of VTE was reported, which resolved without sequelae after anticoagulant treatment. While the available hemostasis data described above suggest that the E4/DRSP COC may be associated with a lower VTE risk, this will need to be demonstrated in a larger population in postauthorization safety studies.

While the inhibition of ovulation is primarily induced by the progestin contained in a COC, the estrogenic component assists the progestin in its contraceptive activity and provides adequate cycle control. The ability of E4 to inhibit ovulation was first assessed and confirmed in rats: the anti-ovulatory effect of E4 was dose-dependent, with two administrations per day of 0.3 mg/kg E4 effectively inhibiting ovulation in cycling rats. In this experiment, the relative potency of E4 was about 18-fold lower than that of the synthetic estrogen EE. Based on these preclinical data, it was concluded that E4 was a good candidate for the estrogenic component of a COC.
A phase II dose-finding pilot study evaluated the efficacy of different doses of E4 (5–20 mg) in combination with a progestin (levonorgestrel or DRSP) in suppressing the pituitary–ovarian axis and ovulation in healthy premenopausal women over three consecutive cycles. Participants receiving EE 20 µg/DRSP 3 mg served as a reference group. The compounds were well tolerated, and all treatments resulted in inhibition of ovulation. Inhibition of ovarian activity was most pronounced in the highest E4 dose group and was very similar to that observed in the EE/DRSP group. Another published clinical trial, which included healthy young women with proven ovulatory cycles, further demonstrated adequate ovulation inhibition and ovarian function suppression with the combination E4 15 mg/DRSP 3 mg given in a 24/4-day regimen for three consecutive cycles. None of the participants ovulated during E4/DRSP use, and ovulation subsequently returned, on average, 15.5 days after treatment discontinuation.

Two comparable pivotal phase III clinical studies, conducted in North America (NCT02817841) and in Europe/Russia (NCT02817828), assessed the contraceptive efficacy of the combination E4/DRSP. In the North American trial, which evaluated 1674 women aged 16–35 years over 13 cycles, the overall and method-failure pregnancy rates were evaluated using the Pearl Index (PI) and life-table analysis. A PI of 2.65, a method-failure PI of 1.43 and a 13-cycle life-table pregnancy rate of 2.1% were reported, indicating that E4/DRSP is an effective method of contraception. The trial conducted in Europe and Russia, which included 1353 women aged 18–35 years who used E4/DRSP for 13 cycles, also showed high contraceptive efficacy, with a low PI of 0.47 pregnancies/100 woman-years. This PI value is similar to that of marketed DRSP-containing COCs such as Yaz® and Yasmin®. A pooled analysis of both phase III studies further demonstrated that E4/DRSP is an effective oral contraceptive overall and, importantly, also across subgroups based on age, contraceptive history and body mass index. The COC consisting of E4 15 mg/DRSP 3 mg is now approved and marketed in several territories, including Europe, Russia, the USA, Canada and Australia.
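For orientation, the Pearl Index values quoted above express contraceptive failure as the number of pregnancies per 100 woman-years of exposure. When exposure is recorded in 28-day cycles, as in these trials, the index is usually computed as shown below; the worked numbers are purely hypothetical and are not taken from the trials discussed here:

\[
\mathrm{PI} \;=\; \frac{\text{number of on-treatment pregnancies} \times 1300}{\text{number of evaluable 28-day cycles}}
\]

For example, 5 hypothetical on-treatment pregnancies accumulated over 13,000 evaluable cycles would give PI = (5 × 1300)/13,000 = 0.5 pregnancies per 100 woman-years, the same order of magnitude as the value reported in the European/Russian trial. The life-table (cumulative) pregnancy rate reported alongside the PI is derived differently, from a time-to-event analysis over the 13 cycles.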
The efficacy of E4 in alleviating hot flushes was investigated in an experimental animal model considered representative of menopausal VMS. This model consists of recording the thermal responses in the tail skin of morphine-dependent ovariectomized rats after morphine withdrawal induced by administration of naloxone. E4 was efficacious in alleviating hot flushes and suppressed the increase in tail skin temperature in a dose-dependent manner. In this model, the equipotent dose of E4 was 10 times higher than that of EE, suggesting that the potency of E4 may be lower than that of EE, although only one dose of EE was tested. A dose-finding phase II clinical trial (E4Relief, NCT02834312) was then conducted to select the effective dose of E4 for the treatment of VMS in postmenopausal women. A total of 257 postmenopausal women aged 40–65 years, presenting with at least seven moderate to severe hot flushes per day or at least 50 moderate to severe hot flushes per week, received a daily dose of E4 (2.5, 5, 10 or 15 mg) for a period of 12 weeks. During that period, the efficacy of E4 in alleviating VMS was assessed by recording the frequency and severity of hot flushes in an e-diary, with statistical analyses performed at weeks 4 and 12. The frequency of hot flushes decreased with all tested E4 doses, with the most pronounced changes observed in the E4 15 mg group. The difference in the percentage change in weekly hot flush frequency was significant for E4 15 mg versus placebo at both week 4 and week 12, and the decrease in hot flush severity was also significantly more pronounced for E4 15 mg than for placebo at both time points. As the other doses failed to produce statistically significant effects versus placebo, E4 15 mg was considered the minimum effective daily oral dose for the treatment of VMS.

The ability of E4 to bind to the rat uterine ER, albeit with a lower binding affinity than E2, was originally demonstrated in 1976. Subsequently, a 1979 study evaluating the uterine response to subcutaneously administered E4 in immature rats showed that E4 influenced uterine weight, luminal fluid volume and protein content. The estrogenic action of E4 in the uterus has been confirmed in more recent preclinical studies. In ovariectomized female rats treated orally once daily for 7 days, E4 1 mg/kg/day and 3 mg/kg/day induced a statistically significant increase in uterine wet weight compared with the vehicle group; the potency of E4 was estimated to be approximately 20-fold lower than that of EE in this rat model. Acute treatment with E4 in ovariectomized mice induced uterotrophic effects and changes in uterine gene expression. Luminal epithelial height and stromal height were significantly increased by subcutaneous administration of E4 1 mg/kg, and epithelial proliferation measured by Ki67 staining was correspondingly increased in E4-treated mice. The expression of a set of uterine genes known to be regulated by estrogen was evaluated in ovariectomized mice 6 h after treatment with E4; this transcriptomic analysis revealed that all E2-responsive genes in the uterus were also modulated by E4, although in most cases a 100-fold higher dose of E4 was necessary to mimic the transcriptional effect induced by E2. The gene expression profile and the histological changes induced by concomitant treatment with E2 and E4 were similar to those induced by E2 alone.

The estrogenic activity of E4 has also been shown in the vagina in preclinical models. A modified Allen–Doisy test conducted in ovariectomized female rats showed that E4 induced vaginal cornification in a dose-dependent manner after 5 days of oral treatment (E4 0.1, 0.3, 1 or 3 mg/kg). In ovariectomized mice, morphological and functional changes in the vagina were observed after chronic treatment with E4 (subcutaneous minipumps releasing 1 or 6 mg/kg/day), including increases in vaginal weight, vaginal epithelial proliferation and epithelial height, as well as an increase in vaginal lubrication after cervical vaginal stimulation.

The endometrium plays a central role in the uterine bleeding process. One of the purposes of including an estrogen in a COC is to counterbalance the effects of the progestin on the endometrium, thereby providing good cycle stability and an acceptable bleeding pattern; the fact that reducing the estrogen dose in a COC, or using a progestin-only pill, often results in bleeding irregularities clearly illustrates this role of the estrogenic component. A regular and predictable bleeding profile is an important factor influencing COC choice, acceptability and adherence.
Bleeding data from different clinical trials highlight the favorable and highly predictable bleeding pattern, with limited unscheduled bleeding/spotting, of the combination E4/DRSP. A pooled analysis of two phase III trials, including bleeding data from over 3400 participants, showed that use of the E4 15 mg/DRSP 3 mg COC in a 24/4-day treatment regimen is associated with a regular and predictable bleeding pattern. This further demonstrates the adequate estrogenic activity of E4 on the endometrium, as well as its capacity to counterbalance the effects of the progestin, stabilize the endometrium and provide good cycle control.

In postmenopausal women receiving oral E4 alone (2.5, 5, 10 or 15 mg) for a period of 12 weeks, endometrial thickness increased during treatment in a dose-dependent manner. While the mean endometrial thickness at baseline was 2.5 mm and comparable among groups, a mean endometrial thickness of 3.9 mm (E4 2.5 mg) to 6.2 mm (E4 15 mg) was reported at week 4. Endometrial thickness remained stable until week 12 in all groups except the E4 15 mg group, in which the mean endometrial thickness increased to 7.9 mm. However, no endometrial hyperplasia was observed in any treatment group, and endometrial thickness normalized and returned to baseline levels (3.2 mm) following progestin treatment (10 mg dydrogesterone daily for 14 days) at study completion.

In the same trial in postmenopausal women, the effects of oral E4 were also evaluated on vaginal cytology, the genitourinary syndrome of menopause, and health-related quality of life. The outcomes included the vaginal epithelial cell maturation index, the maturation value, vaginal pH, the genitourinary syndrome of menopause score (vaginal dryness, vaginal pain associated with sexual activity, vaginal irritation/itching and dysuria, reported in an e-diary), and the Menopause Rating Scale (MRS) at baseline and at week 12. Overall, E4 promoted estrogenic effects in the vagina and decreased signs of atrophy, confirming that E4 is a promising treatment option for these menopausal symptoms. Regarding vaginal cytology, a decrease in parabasal and intermediate cells and an increase in superficial cells were observed at week 12 in all E4 groups compared with baseline, indicating improved vaginal estrogenization, with a significant effect in the E4 15 mg group; the maturation value (see the note following this paragraph) increased in all E4 groups. Vaginal pH decreased in all E4 groups and slightly increased in the placebo group. In terms of self-reported genitourinary symptoms, the intensity score at week 12 decreased significantly compared with placebo for vaginal dryness (in the E4 15 mg group) and vaginal pain (in the E4 5, 10 and 15 mg groups), while the changes observed for irritation/itching and dysuria were not significant. Of note, this trial was designed primarily to assess the effect of E4 on VMS rather than to focus specifically on genitourinary symptoms. The MRS score decreased in all E4 treatment groups after 4 and 12 weeks of treatment, with the most pronounced effects in the E4 15 mg group, indicating an improvement in quality of life and well-being.
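As background to the vaginal cytology endpoints above, the maturation value summarizes the shift in the proportions of the three vaginal epithelial cell types into a single score. The weighting shown below is the commonly used convention and is given here for orientation only; the exact scoring scheme applied in the trial is not detailed in this review:

\[
\mathrm{MV} \;=\; 1.0 \times \%\,\text{superficial cells} \;+\; 0.5 \times \%\,\text{intermediate cells} \;+\; 0 \times \%\,\text{parabasal cells}
\]

This yields a score ranging from 0 (fully atrophic epithelium) to 100 (fully estrogenized epithelium), so that the reported decrease in parabasal and intermediate cells and increase in superficial cells in the E4 groups translates directly into a higher maturation value.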
Although an effect on human osteoblastic cell proliferation was not detected in vitro, in vivo studies have suggested that E4 may play a beneficial role in the maintenance of bone mass. A preclinical bone study conducted in ovariectomized female rats (a model of postmenopausal osteoporosis) showed that oral treatment with E4 (0.1, 0.5 or 2.5 mg/kg/day) for 4 weeks significantly prevented the ovariectomy-related increase in osteocalcin levels and improved bone mineral density and content, while also increasing bone strength. These bone-sparing effects of E4 were dose-dependent although, as in other studies, the potency of E4 was lower than that of EE. In healthy women of reproductive age using E4/DRSP for three consecutive cycles, no imbalance in bone markers was observed. In line with the preclinical data, in a multiple-rising-dose study in postmenopausal women, E4 treatment induced changes in bone turnover markers, including a substantial dose-dependent decrease in osteocalcin levels, suggesting a preventative effect on bone loss. In the phase II E4Relief trial (NCT02834312), in which postmenopausal women received E4 2.5, 5, 10 or 15 mg or placebo daily for 12 weeks, changes in the bone turnover markers osteocalcin and type 1 collagen C-terminal telopeptide (CTX-1) were evaluated at week 12 compared with baseline and versus placebo. CTX-1 levels decreased significantly from baseline in the E4 5, 10 and 15 mg groups; in the analysis versus placebo, the decrease was significant in the E4 10 and 15 mg groups. The impact of E4 (5, 10 and 15 mg groups) on osteocalcin after 12 weeks of treatment was not significant versus baseline but was significant versus placebo. While this effect is consistent with the role of estrogens in bone remodeling and supports a potential beneficial effect of E4 in osteoporosis, additional clinical data (including long-term bone marker measurements, bone density scans and fracture data) are needed to validate this effect. Further evidence regarding the benefits of E4 on bone came from a phase II study evaluating a high dose of E4 (40 mg) in male patients with advanced prostate cancer requiring androgen deprivation therapy (ADT), in which E4 was evaluated as an add-on to ADT to improve the efficacy and the adverse effects of ADT, including ADT-induced bone loss. The secondary endpoints of that study therefore included the assessment of bone metabolism markers (osteocalcin and type I collagen telopeptide). While these markers increased at week 24 in the group receiving a luteinizing hormone-releasing hormone agonist alone (by 48% for osteocalcin and 151% for CTX-1), they decreased significantly from baseline in the group cotreated with E4.
However, E4 did not impede the induction of NO synthesis induced by lower E2 concentrations . These data support that E4 may be a regulator of NO synthesis in human endothelial cells. In a mouse model of carotid artery, E4 used at different dose levels (0.3, 1 and 6 mg/kg/day) failed to stimulate eNOS activation or endothelial NO production, while E2 was able to promote these two responses. When E4 was used in combination with E2, E4 antagonized the effects induced by E2 in mouse carotid artery. The combination E4+E2 therefore failed to promote eNOS activation and NO production in this experimental model . Based on the antagonistic activity of E4 in the presence of E2 on NO release described above, lower cardiovascular effects (such as vasodilation) could be expected in the presence of E4. Importantly, several studies have confirmed that NO production, which is essential for adequate vasodilation and endothelial function, is controlled by multiple factors besides estrogens. The regulation of vascular tone by endothelium-derived NO is mediated by multiple controlling mechanisms, including physical factors such as an increase in shear stress or reduction in temperature, as well as by neurohumoral mediators through the activation of specific endothelial cell membrane receptors. The main physiological driver of NO production is shear stress and estrogens are considered to play a limited role in the regulation of endothelial-derived NO production and subsequent physiological vasodilation. The impact of E4 on shear stress, was evaluated in an ex vivo model of flow-mediated vasodilatation. Chronic treatment with E4 promoted the occurrence of flow arteriolar remodeling in ovariectomized mice after an increase in blood flow, demonstrating that the presence of E4 did not impair the NO-mediated vasodilation . Moreover, E4 was shown to induce vasodilation of animal arteries by a specific mechanism distinct from NO production, whereby E4 induced the vasodilation of ewe uterine arteries at high concentrations . It also induced ex vivo relaxing responses in eight different vascular beds: rat uterine, aorta, carotid, mesenteric, pulmonary, renal, middle cerebral and septal coronary arteries. The vasodilation induced by E4 in rat arteries was ER-dependent since it was abrogated by the ER antagonist ICI 182 780. Blockade of eNOS by Nω-nitro- l -arginine methyl ester (an NO synthase inhibitor) blunted the E2-mediated, but not E4-mediated, relaxing response, demonstrating that E2, but not E4, induced vasodilation by stimulating eNOS activity. Overall, this study shows that E4 induced relaxation of precontracted rat arteries via both an endothelium-dependent mechanism and a guanylate cyclase mechanism . In conclusion, NO production is not the only mechanism eliciting the beneficial impact of estrogens on vasculature. The lack of E4-induced eNOS activity and NO release observed in some but not all experimental models should not be associated with any vascular safety concerns. Endothelial Healing The preclinical model of endothelial healing is usually used to assess the vascular protective effects of a compound. The acceleration of endothelial healing by estrogens is considered as a vasculo-protective action. A recent study demonstrated that chronic treatment with E4 (subcutaneous pellet) was able to accelerate endothelial healing after carotid artery injury in ovariectomized mice. 
The quantitative analysis of re-endothelialized areas, performed 5 days after endovascular injury, showed an increase of 30% of endothelial regeneration in control mice compared with day 0, and an increase of about 80% in mice treated with E4 . It was previously reported in another study published by the same group that E4 was not able to promote endothelial healing in the mouse carotid artery model . In the experimental model used for that study, both the artery media and endothelium were injured by electrocoagulation (perivascular injury) and the endothelial regeneration process was evaluated 3 days post-injury by the quantification of the re-endothelialized area. In these conditions, no effect was observed with E4, regardless of the dose levels used (0.3, 1 or 6 mg/kg/day) . Davezac et al. showed that in contrast, a model of specific endothelial destruction of the carotid artery, preserving smooth muscle cells, does not lead to the same results . When the injury is limited to the artery endothelium and when the underlining layer of vascular smooth muscle cells stays intact, E4 is able to accelerate the endothelial healing (re-endothelialization) after artery injury, highlighting that smooth muscle cells are necessary for E4 to mediate this endothelial function in mice. These conflicting results, at first glance, illustrate the crucial importance of the preclinical models and experimental conditions when interpreting data. A recent study evaluating the impact of estrogens used in oral contraceptives on human endothelial function showed that E4 (10 −9 to 10 −7 M) significantly enhanced migration of HUVECs using scratch and Boyden chamber assays. The effect of E4 on endothelial migration was comparable with the effect of EE, suggesting comparable vascular remodeling and regeneration capacity . Atherosclerosis Prevention The impact of E4 on the prevention of atheroma was assessed in low-density lipoprotein receptor-deficient (LDLr −/− ) mice fed a high-cholesterol diet, a well-described model to investigate the atheroprotective effects of estrogens. E4 used at 0.6 or 6 mg/kg/day in the diet for 12 weeks prevented lipid deposition and reduced atheroma deposits in the aortic sinus in ovariectomized LDLr −/− mice up to almost 80% in a dose-dependent manner. E4 also decreased the total plasma cholesterol in these mice . Neointimal Hyperplasia Prevention Neointimal hyperplasia refers to post-intervention (e.g . after mechanical atherosclerosis treatment), pathological, vascular remodeling due to the proliferation and migration of vascular muscle cells into the tunica intima layer. Neointimal hyperplasia can ultimately result in vascular wall thickening and in a reduction of the lumen diameter, which in turn leads to vascular insufficiency and restenosis. In a mouse model of femoral artery mechanical injury, E4 prevented neointimal hyperplasia by a direct inhibitory effect on the proliferation and migration of vascular smooth muscle cells but not by acting on endothelial cells. Morphometric analysis showed that 28 days after the injury, the mice treated with E4 exhibited a reduced neointima/media ratio . Hypertension Prevention and Arteriolar Remodeling Promotion Additional vasculoprotective actions were described after chronic treatment with E4, including the prevention of angiotensin II-induced hypertension, which is a major risk factor of cardiovascular diseases, and the restoration of arteriolar flow-mediated remodeling, which has a major role in the homeostasis of tissue perfusion . 
In that study, flow-mediated remodeling was evaluated in mesenteric arteries isolated from ovariectomized mice treated with vehicle or E4 over 2 weeks. The arterial diameter was measured in response to stepwise increases in pressure in mesenteric arteries submitted to high flow or to normal flow. The effect of E4 on angiotensin II treatment was evaluated in ovariectomized female mice implanted with osmotic minipumps delivering angiotensin II or a combination of angiotensin II and E4 for 1 month, with systolic blood pressure being measured weekly. E4 prevented angiotensin II-induced hypertension and favored flow-mediated remodeling . Endothelial NO is a key player for vascular function and vasodilation and is a known target of estrogens . In vitro, E4 induced rapid NO release and stimulated endothelial NO synthase (eNOS) activation and expression in human umbilical vein endothelial cells (HUVECs). However, E4 was significantly less effective compared with E2. When E4 was combined with E2, E4 antagonized NO synthesis induced by pregnancy-like E2 concentrations. However, E4 did not impede the induction of NO synthesis induced by lower E2 concentrations . These data support that E4 may be a regulator of NO synthesis in human endothelial cells. In a mouse model of carotid artery, E4 used at different dose levels (0.3, 1 and 6 mg/kg/day) failed to stimulate eNOS activation or endothelial NO production, while E2 was able to promote these two responses. When E4 was used in combination with E2, E4 antagonized the effects induced by E2 in mouse carotid artery. The combination E4+E2 therefore failed to promote eNOS activation and NO production in this experimental model . Based on the antagonistic activity of E4 in the presence of E2 on NO release described above, lower cardiovascular effects (such as vasodilation) could be expected in the presence of E4. Importantly, several studies have confirmed that NO production, which is essential for adequate vasodilation and endothelial function, is controlled by multiple factors besides estrogens. The regulation of vascular tone by endothelium-derived NO is mediated by multiple controlling mechanisms, including physical factors such as an increase in shear stress or reduction in temperature, as well as by neurohumoral mediators through the activation of specific endothelial cell membrane receptors. The main physiological driver of NO production is shear stress and estrogens are considered to play a limited role in the regulation of endothelial-derived NO production and subsequent physiological vasodilation. The impact of E4 on shear stress, was evaluated in an ex vivo model of flow-mediated vasodilatation. Chronic treatment with E4 promoted the occurrence of flow arteriolar remodeling in ovariectomized mice after an increase in blood flow, demonstrating that the presence of E4 did not impair the NO-mediated vasodilation . Moreover, E4 was shown to induce vasodilation of animal arteries by a specific mechanism distinct from NO production, whereby E4 induced the vasodilation of ewe uterine arteries at high concentrations . It also induced ex vivo relaxing responses in eight different vascular beds: rat uterine, aorta, carotid, mesenteric, pulmonary, renal, middle cerebral and septal coronary arteries. The vasodilation induced by E4 in rat arteries was ER-dependent since it was abrogated by the ER antagonist ICI 182 780. 
Blockade of eNOS by Nω-nitro- l -arginine methyl ester (an NO synthase inhibitor) blunted the E2-mediated, but not E4-mediated, relaxing response, demonstrating that E2, but not E4, induced vasodilation by stimulating eNOS activity. Overall, this study shows that E4 induced relaxation of precontracted rat arteries via both an endothelium-dependent mechanism and a guanylate cyclase mechanism . In conclusion, NO production is not the only mechanism eliciting the beneficial impact of estrogens on vasculature. The lack of E4-induced eNOS activity and NO release observed in some but not all experimental models should not be associated with any vascular safety concerns. The preclinical model of endothelial healing is usually used to assess the vascular protective effects of a compound. The acceleration of endothelial healing by estrogens is considered as a vasculo-protective action. A recent study demonstrated that chronic treatment with E4 (subcutaneous pellet) was able to accelerate endothelial healing after carotid artery injury in ovariectomized mice. The quantitative analysis of re-endothelialized areas, performed 5 days after endovascular injury, showed an increase of 30% of endothelial regeneration in control mice compared with day 0, and an increase of about 80% in mice treated with E4 . It was previously reported in another study published by the same group that E4 was not able to promote endothelial healing in the mouse carotid artery model . In the experimental model used for that study, both the artery media and endothelium were injured by electrocoagulation (perivascular injury) and the endothelial regeneration process was evaluated 3 days post-injury by the quantification of the re-endothelialized area. In these conditions, no effect was observed with E4, regardless of the dose levels used (0.3, 1 or 6 mg/kg/day) . Davezac et al. showed that in contrast, a model of specific endothelial destruction of the carotid artery, preserving smooth muscle cells, does not lead to the same results . When the injury is limited to the artery endothelium and when the underlining layer of vascular smooth muscle cells stays intact, E4 is able to accelerate the endothelial healing (re-endothelialization) after artery injury, highlighting that smooth muscle cells are necessary for E4 to mediate this endothelial function in mice. These conflicting results, at first glance, illustrate the crucial importance of the preclinical models and experimental conditions when interpreting data. A recent study evaluating the impact of estrogens used in oral contraceptives on human endothelial function showed that E4 (10 −9 to 10 −7 M) significantly enhanced migration of HUVECs using scratch and Boyden chamber assays. The effect of E4 on endothelial migration was comparable with the effect of EE, suggesting comparable vascular remodeling and regeneration capacity . The impact of E4 on the prevention of atheroma was assessed in low-density lipoprotein receptor-deficient (LDLr −/− ) mice fed a high-cholesterol diet, a well-described model to investigate the atheroprotective effects of estrogens. E4 used at 0.6 or 6 mg/kg/day in the diet for 12 weeks prevented lipid deposition and reduced atheroma deposits in the aortic sinus in ovariectomized LDLr −/− mice up to almost 80% in a dose-dependent manner. E4 also decreased the total plasma cholesterol in these mice . Neointimal hyperplasia refers to post-intervention (e.g . 
after mechanical atherosclerosis treatment), pathological, vascular remodeling due to the proliferation and migration of vascular muscle cells into the tunica intima layer. Neointimal hyperplasia can ultimately result in vascular wall thickening and in a reduction of the lumen diameter, which in turn leads to vascular insufficiency and restenosis. In a mouse model of femoral artery mechanical injury, E4 prevented neointimal hyperplasia by a direct inhibitory effect on the proliferation and migration of vascular smooth muscle cells but not by acting on endothelial cells. Morphometric analysis showed that 28 days after the injury, the mice treated with E4 exhibited a reduced neointima/media ratio . Additional vasculoprotective actions were described after chronic treatment with E4, including the prevention of angiotensin II-induced hypertension, which is a major risk factor of cardiovascular diseases, and the restoration of arteriolar flow-mediated remodeling, which has a major role in the homeostasis of tissue perfusion . In that study, flow-mediated remodeling was evaluated in mesenteric arteries isolated from ovariectomized mice treated with vehicle or E4 over 2 weeks. The arterial diameter was measured in response to stepwise increases in pressure in mesenteric arteries submitted to high flow or to normal flow. The effect of E4 on angiotensin II treatment was evaluated in ovariectomized female mice implanted with osmotic minipumps delivering angiotensin II or a combination of angiotensin II and E4 for 1 month, with systolic blood pressure being measured weekly. E4 prevented angiotensin II-induced hypertension and favored flow-mediated remodeling . Sex steroids promote the growth of certain hormone-dependent tissues and tumors. Efforts have been made to characterize the impact of E4 on breast epithelial cell proliferation and breast cancer growth in preclinical models and preliminary clinical trials. Normal Breast Epithelial Cell Proliferation In vitro exposure of normal human breast epithelial cells for 96 h with 10 nM E2 elicited a maximal cell proliferation increase of about 60%. At the same concentration, E4 did not increase human breast epithelial cell proliferation. A 100 times higher concentration of E4 (1 µM) was necessary to stimulate the proliferation to the same extent as E2 . To evaluate the effect of E4 on mammary gland, prepubertal ovariectomized mice were treated orally with different dose levels of E4 (0.3, 1, 3 or 10 mg/kg/day) or with E2 (1 mg/kg/day) for 14 days, after which mammary glands were collected and epithelial cells isolated. The level of epithelial proliferation assessed by the expression of cyclin D1 and Ki67 mRNA was significantly lower in mice treated by E4 (at any dose levels) compared with mice treated with E2, suggesting a lower proliferative effect for E4 . Breast Cancer Growth E4 also exhibits a lower potency than E2 to induce human breast cancer cell growth. Liu et al. investigated the impact of different estrogens, including E2 and E4, on proliferation of the ER-positive breast cancer cell line ZR 75-1 in vitro. All estrogens tested caused a significant stimulation of cell proliferation. At the lowest concentration (10 −10 M), E4 had a significantly lower stimulatory effect than E2, while at higher concentrations (≥10 −9 M), E2 and E4 stimulated cell proliferation to the same extent . In another assay using MCF-7 cells transfected with PGRMC1, E4 was also significantly less active than E2 in promoting cell proliferation. 
At 10 −10 M, E2 increased the proliferation rate by about 160%, while E4 induced an increase of only 50% compared with the control condition. At higher concentrations (≥10 −9 M), the same proliferative effect (about +160%) was elicited by E2 and E4 . In another study, a 1000 times higher concentration of E4 was needed to promote MCF-7 and MCF-7/BOS cell growth in vitro to the same extent as E2, confirming the weaker potency of E4 to induce human breast cancer cell growth compared with E2 in vitro . An estrogen supplementation is necessary for the growth of MCF-7 and the formation of tumor in vivo. To determine if E4 could achieve the same effect as E2 in this model, ovariectomized immunodeficient mice implanted with MCF-7 cells received a daily oral treatment of E4 (0.5, 1, 3 or 10 mg/kg/day) or E2 (3 mg/kg/day). After 5 weeks of treatment, E2 promoted tumor growth, with tumor weights being fivefold higher compared with the untreated group. No significant difference was observed between the untreated control group and mice treated with E4 0.5 mg/kg/day. Indeed, E4 was as efficient as E2 in promoting tumor growth only at the highest dose level of 10 mg/kg/day, confirming the lower potency of E4 to induce breast cancer growth compared with E2 in vivo . The effect of a combined treatment with E2 and E4 on MCF-7 tumor growth was also analyzed, whereby ovariectomized mice implanted with MCF-7 cells and with a subcutaneous E2 pellet received a daily oral treatment of E4 (1, 3 or 10 mg/kg/day) for 5 weeks. In these conditions, E4 attenuated E2-induced tumor growth in a dose-dependent manner. Exposure to the combination of E2 + E4 decreased the tumor volume and tumor weight by approximately 50% compared with mice exposed to E2 alone. This antagonistic effect of E4 in the presence of E2 was also observed for MCF-7 cell proliferation in vitro. This effect became maximal when E4 was at least 100 times more concentrated than E2 . In a broader preclinical study combining genetically engineered mouse models, human cell line xenografts and hormone-dependent authentic breast tumor patient-derived xenografts, the authors showed a limited effect of E4 on breast cancer growth in vivo when used at doses similar to the therapeutic levels required for contraception or menopause . Breast Cancer Migration and Invasion Breast cancer cell movement requires a remodeling of the actin cytoskeleton, involving estrogen-mediated signaling pathways. Interestingly, it has been demonstrated that E4 acts as a weak estrogen on breast cancer cell migration and invasion. The effects of E4 on its own or in the presence of E2 were tested on T47-D breast cancer cell migration and invasion of three-dimensional matrices. Exposure of T47-D cells to E4 weakly stimulated migration and invasion in comparison with E2. In addition, E4 decreased the extent of movement and invasion induced by E2 . Clinical Data The effect of 14 days of preoperative treatment with E4 20 mg/day on tumor proliferation markers was investigated in a preoperative window trial in 30 pre- and postmenopausal women with ER+ early breast cancer. E4 had a significant proapoptotic effect on tumor tissue, whereas Ki67 expression (a marker of cell proliferation) remained unchanged in both pre- and postmenopausal women . 
The efficacy of high doses of E4 in postmenopausal patients with pretreated, locally advanced, and/or metastatic ER+/HER2− breast cancer was assessed in a phase IB/IIA, dose-escalation study in which successive cohorts of three patients received E4 20, 40 or 60 mg/day for 12 weeks by oral administration. Five of nine patients completing 12 weeks of E4 treatment showed objective antitumor effects, as evaluated by computer tomography scanning according to the Response Evaluation Criteria in Solid Tumors (RECIST) criteria, with stabilization of the disease in four patients and one complete response. The complete response was seen with the 20 mg dose, and stabilization of the disease was observed in one patient in the 20 mg group and three patients treated with 40 mg . In vitro exposure of normal human breast epithelial cells for 96 h with 10 nM E2 elicited a maximal cell proliferation increase of about 60%. At the same concentration, E4 did not increase human breast epithelial cell proliferation. A 100 times higher concentration of E4 (1 µM) was necessary to stimulate the proliferation to the same extent as E2 . To evaluate the effect of E4 on mammary gland, prepubertal ovariectomized mice were treated orally with different dose levels of E4 (0.3, 1, 3 or 10 mg/kg/day) or with E2 (1 mg/kg/day) for 14 days, after which mammary glands were collected and epithelial cells isolated. The level of epithelial proliferation assessed by the expression of cyclin D1 and Ki67 mRNA was significantly lower in mice treated by E4 (at any dose levels) compared with mice treated with E2, suggesting a lower proliferative effect for E4 . E4 also exhibits a lower potency than E2 to induce human breast cancer cell growth. Liu et al. investigated the impact of different estrogens, including E2 and E4, on proliferation of the ER-positive breast cancer cell line ZR 75-1 in vitro. All estrogens tested caused a significant stimulation of cell proliferation. At the lowest concentration (10 −10 M), E4 had a significantly lower stimulatory effect than E2, while at higher concentrations (≥10 −9 M), E2 and E4 stimulated cell proliferation to the same extent . In another assay using MCF-7 cells transfected with PGRMC1, E4 was also significantly less active than E2 in promoting cell proliferation. At 10 −10 M, E2 increased the proliferation rate by about 160%, while E4 induced an increase of only 50% compared with the control condition. At higher concentrations (≥10 −9 M), the same proliferative effect (about +160%) was elicited by E2 and E4 . In another study, a 1000 times higher concentration of E4 was needed to promote MCF-7 and MCF-7/BOS cell growth in vitro to the same extent as E2, confirming the weaker potency of E4 to induce human breast cancer cell growth compared with E2 in vitro . An estrogen supplementation is necessary for the growth of MCF-7 and the formation of tumor in vivo. To determine if E4 could achieve the same effect as E2 in this model, ovariectomized immunodeficient mice implanted with MCF-7 cells received a daily oral treatment of E4 (0.5, 1, 3 or 10 mg/kg/day) or E2 (3 mg/kg/day). After 5 weeks of treatment, E2 promoted tumor growth, with tumor weights being fivefold higher compared with the untreated group. No significant difference was observed between the untreated control group and mice treated with E4 0.5 mg/kg/day. 
Indeed, E4 was as efficient as E2 in promoting tumor growth only at the highest dose level of 10 mg/kg/day, confirming the lower potency of E4 to induce breast cancer growth compared with E2 in vivo . The effect of a combined treatment with E2 and E4 on MCF-7 tumor growth was also analyzed, whereby ovariectomized mice implanted with MCF-7 cells and with a subcutaneous E2 pellet received a daily oral treatment of E4 (1, 3 or 10 mg/kg/day) for 5 weeks. In these conditions, E4 attenuated E2-induced tumor growth in a dose-dependent manner. Exposure to the combination of E2 + E4 decreased the tumor volume and tumor weight by approximately 50% compared with mice exposed to E2 alone. This antagonistic effect of E4 in the presence of E2 was also observed for MCF-7 cell proliferation in vitro. This effect became maximal when E4 was at least 100 times more concentrated than E2 . In a broader preclinical study combining genetically engineered mouse models, human cell line xenografts and hormone-dependent authentic breast tumor patient-derived xenografts, the authors showed a limited effect of E4 on breast cancer growth in vivo when used at doses similar to the therapeutic levels required for contraception or menopause . Breast cancer cell movement requires a remodeling of the actin cytoskeleton, involving estrogen-mediated signaling pathways. Interestingly, it has been demonstrated that E4 acts as a weak estrogen on breast cancer cell migration and invasion. The effects of E4 on its own or in the presence of E2 were tested on T47-D breast cancer cell migration and invasion of three-dimensional matrices. Exposure of T47-D cells to E4 weakly stimulated migration and invasion in comparison with E2. In addition, E4 decreased the extent of movement and invasion induced by E2 . The effect of 14 days of preoperative treatment with E4 20 mg/day on tumor proliferation markers was investigated in a preoperative window trial in 30 pre- and postmenopausal women with ER+ early breast cancer. E4 had a significant proapoptotic effect on tumor tissue, whereas Ki67 expression (a marker of cell proliferation) remained unchanged in both pre- and postmenopausal women . The efficacy of high doses of E4 in postmenopausal patients with pretreated, locally advanced, and/or metastatic ER+/HER2− breast cancer was assessed in a phase IB/IIA, dose-escalation study in which successive cohorts of three patients received E4 20, 40 or 60 mg/day for 12 weeks by oral administration. Five of nine patients completing 12 weeks of E4 treatment showed objective antitumor effects, as evaluated by computer tomography scanning according to the Response Evaluation Criteria in Solid Tumors (RECIST) criteria, with stabilization of the disease in four patients and one complete response. The complete response was seen with the 20 mg dose, and stabilization of the disease was observed in one patient in the 20 mg group and three patients treated with 40 mg . The use of hormone therapy can impact metabolic markers such as total cholesterol, low-density lipoprotein cholesterol (LDLc), high-density lipoprotein cholesterol (HDLc), triglycerides and glucose levels. The effect of E4 on the pathophysiological consequences of a Western diet (42% kcal fat, 0.2% cholesterol) was evaluated in mice. Weekly body weight measurements showed that chronic treatment with E4 reduced body weight gain and protected mice against Western diet-induced obesity. After 7 weeks of Western diet feeding, E4 improved glucose tolerance in mice. 
At the end of the protocol, fasting glucose levels were significantly lower in mice treated with E4. A reduced accumulation of subcutaneous, perigonadal and mesenteric adipose tissue was also observed in the E4-treated group. In addition, disorders associated with obesity, such as atherosclerosis and steatosis, were prevented in mice fed a Western diet and treated with E4. Furthermore, E4 induced a lower accumulation of lipids in the liver. Accordingly, the expression of genes involved in lipid metabolism, including cholesterol metabolism and lipoprotein assembly, was decreased in the liver of E4-treated mice compared with control mice. The study demonstrated that E4 prevents Western-induced obesity by increasing locomotor activity and energy expenditure . In postmenopausal women receiving E4 2.5, 5, 10 or 15 mg daily for 12 weeks in the E4Relief trial (NCT02834312), absolute changes from baseline in triglyceride levels were minimal in all study groups and were not significant when compared with placebo. HDLc increased from baseline in all E4 groups, while no increase was observed in the placebo group. An increase from baseline was observed for LDLc in the E4 2.5 and 5 mg groups and for total cholesterol in the E4 2.5, 5 and 10 mg groups. None of these changes were significantly different when compared with placebo. Regarding glucose metabolism, no significant change in fasting glucose level was observed from baseline. A significant decrease was seen in insulin resistance and hemoglobin A1c in the E4 10 and 15 mg groups, respectively, suggesting an improved glucose tolerance . The impact of E4 in combination with a progestin on lipid metabolism was evaluated in healthy women. Participants (healthy women aged between 18 and 35 years) received E4/DRSP, E4/LNG or EE/DRSP as a comparator for three consecutive cycles. Minor effects on lipoproteins were observed in the E4 groups and the effects on triglycerides in the E4 groups were significantly lower compared with the EE group, demonstrating that E4-containing COCs have a limited effect on lipid metabolism . The effect of the combination of E4 15 mg/DRSP 3 mg on metabolic parameters, including lipid profile and carbohydrate metabolism, after six treatment cycles was then evaluated in healthy subjects . The study included two frequently used EE-containing COCs as comparators, one with LNG and one with DRSP, to validate changes related to the estrogen component. E4/DRSP had a minimal impact on lipid parameters; the largest effect was observed for triglycerides (+24.0%), which was less compared with EE/LNG (+28.0%) and EE/DRSP (+65.5%). With E4/DRSP, no significant changes from baseline were observed for LDL-C, total cholesterol, the HDL-C/LDL-C ratio, and lipoprotein A. Carbohydrate parameters, including fasting insulin and glucose, C-peptide and HbA1c, remained relatively stable in all treatment groups. Oral glucose tolerance test (OGTT) glucose and insulin concentrations varied substantially with no remarkable treatment differences. Changes in carbohydrate parameters were minimal, pointing towards a negligible impact on glycemic control . Taken together, these data tend to demonstrate a low impact of E4 on lipids and carbohydrate metabolism. The COC with E4 15 mg/DRSP 3 mg is associated with a favorable effect on body weight control . Sex hormone binding globulin (SHBG) is an estrogen-responsive protein produced by the liver that reflects the overall estrogenic impact of a compound on the liver . 
Moreover, the plasma levels of SHBG can modify the plasma distribution of natural steroid ligands. The effect of E4 on the production of SHBG has been evaluated in vitro in human HepG2 cells and human Hep89 cells overexpressing ERα and was compared with the effect of other estrogens. Exposure to E4 (0.1–1000 nM) for 24, 48 or 72 h did not stimulate the production of SHBG in either cell line. In contrast, a significant dose-dependent increase in SHBG production was observed after exposure to other estrogens such as EE, E2 and E3 . These in vitro data may indicate that E4 is less likely to modulate the plasma levels of SHBG. In a clinical trial including 49 postmenopausal women, treatment with escalating doses of E4 (2–40 mg) for 28 days induced a dose-dependent increase in SHBG levels. Across the different E4 doses, only the 10 mg E4 group elicited an increase in SHBG levels similar to that of the 2 mg E2-valerate group (59% and 62%, respectively), suggesting a lower potency of E4 in stimulating SHBG production . In the E4Relief trial (NCT02834312) conducted in postmenopausal women, a dose-dependent increase in SHBG levels compared with baseline was also observed in E4-treated groups (+10.3%, +23.3%, +61.8% and +99.4% for E4 2.5, 5, 10 and 15 mg, respectively) . Moreover, the combination E4/DRSP given to healthy women during six cycles had significantly less impact (+55%) than the combination EE/DRSP (+251%) on SHBG production . In conclusion, although an effect on SHBG production is noted after oral treatment with E4 in clinical trials, in contrast to the absence of effect reported in in vitro assays, the effect of E4 remained small compared with other estrogens, suggesting a lower estrogenic effect on the liver. Regarding coagulation risks, preclinical studies showed that ovariectomized female mice receiving chronic E4 treatment exhibited a prolonged tail-bleeding time and were protected from arterial and venous thrombosis in vivo. In addition, E4 treatment decreased ex vivo thrombus growth on collagen under arterial flow conditions . To assess the effects of the COC containing E4 on hemostasis parameters, healthy women received the combination E4/DRSP, EE/LNG or EE/DRSP as comparators during six cycles. Activated protein C resistance is observed in COC users, and this functional assay is used to assess the thrombogenic potential of COCs . The median change of endogenous thrombin potential (ETP)-based activated protein C resistance (APCr) at cycle 6 was +30% for E4/DRSP, +165% for EE/LNG, and +219% for EE/DRSP. Changes in hemostasis parameters, including anticoagulant proteins and fibrinolytic proteins, after treatment with six cycles of E4/DRSP were smaller than or similar to those observed for EE/LNG. However, much more pronounced changes were observed with EE/DRSP . Absolute changes from baseline for hemostasis parameters were also minimal in postmenopausal women receiving E4 alone for 12 weeks . The thrombin generation coagulation assay is used as a marker of hypercoagulability and risk of VTE . A comparative assessment of the impact of E4/DRSP and EE/LNG or EE/DRSP on thrombin generation was conducted. Data were collected from trial NCT02957630, in which thrombograms and thrombin generation parameters were extracted for each subject at baseline and after six cycles of treatment.
It was shown that E4 in combination with DRSP does not have any impact on thrombin generation, in contrast to EE-containing products, which induce the production of procoagulant factors and a decrease in the synthesis of anticoagulant factors, and therefore a shift towards a prothrombotic state . It can therefore be concluded that EE-containing products induce a prothrombotic environment while E4 exhibits a neutral profile on hemostasis (Fig. ). A pooled analysis of data from two phase III trials including 3417 participants showed that the combination of E4/DRSP is associated with an overall favorable safety profile. A single case of VTE was reported, which resolved without sequelae after anticoagulant treatment . While the available hemostasis data described above suggest that the E4/DRSP COC may be associated with a lower VTE risk, this will need to be demonstrated in a larger population in postauthorization safety studies. Estrogen Signaling Pathways The pleiotropic effects of estrogens are mainly mediated by ER alpha (ERα) and beta (ERβ), each encoded by separate genes, ESR1 and ESR2 , respectively, located on different chromosomes. ERα and ERβ belong to the nuclear receptor protein family and mainly function as ligand-dependent transcription factors. ERs contain two transactivation functional domains, AF-1 and AF-2. After ligand binding to ERs, an ordered sequence of events takes place to regulate the transcription of estrogen-responsive genes. The binding of estrogen to the ligand-binding domain (LBD) of the ER induces conformational changes in the receptor. After the dimerization and recruitment of coregulators, the estrogen/ER complexes translocate into the nucleus and directly bind to the estrogen-responsive element (ERE) in the promoter region of target genes to modulate their transcription. This ERE-dependent process is referred to as the classical genomic pathway. Alternatively, ERs can also regulate the transcription of genes without any direct interaction with the DNA. In this case, ERs act as co-activators for other DNA-binding transcription factors, leading to the indirect binding of ERs to regulatory DNA sequences, such as AP-1 or Sp1 sequences. This mechanism of action enables the transcription of genes that do not harbor ERE sequences in their promoter region. This pathway is referred to as the non-classical genomic pathway. Aside from genomic signaling, estrogens can elicit non-genomic events, also commonly referred to as extranuclear or membrane-initiated signaling. The non-genomic effects of estrogens are mediated via a pool of ERs present at the plasma membrane or in the cytoplasm. The membrane-bound ERα undergoes a post-translational palmitoylation on cysteine 447 (451 in mice). This modification is necessary for ERα localization at the cell membrane, where it is associated with caveolin-1. Ligand binding to membrane ER leads to rapid activation of different signaling pathways, including the mitogen-activated protein kinase (MAPK) or phosphatidylinositol-3 kinase (PI3K) pathways, and the subsequent production or modulation of second messengers such as cAMP, calcium mobilization or NO synthesis, which in turn directly influence various cell functions. The non-genomic effects usually occur within seconds or minutes after estrogenic treatment. Another membrane receptor termed G protein-coupled estrogen receptor 1 (GPER, also referred to as GPR30) has been more recently described to contribute to the physiological and pathological effects promoted by estrogens .
While the genomic and non-genomic pathways induced by estrogens play specific roles in the regulation of transcription and in rapid signaling, respectively, and modulate biological processes independently, they also interact in a still poorly described way. The activation of kinase signaling cascades can ultimately induce the phosphorylation and activation of transcription factors, including ERs themselves and coregulators, and therefore indirectly regulate gene expression. This interplay between both signaling pathways can thus result in enhanced transcriptional activity and cellular responses. Conversely, the genomic pathway can modulate the transcription of genes involved in the non-genomic signaling. Cellular responses and biological processes induced by estrogens are usually thought to be a convergence of both genomic and non-genomic pathways . A schematic representation of genomic and non-genomic signaling pathways is presented in Fig. . Interaction of E4 with ERα and ERβ E4 selectively binds to both ERα and ERβ, with a 4 to 5-fold higher binding affinity for ERα. The binding affinity of E4 for ERα is at least 25-fold lower compared with E2 . When the crystal structures of the ERα LBD complexed with E4 or E2 were compared, both were found to be very similar in their overall conformation. In addition, the ligands were perfectly superimposable and interacted equally with residues within the ligand-binding pocket. Moreover, similar to the E2-ERα complex, the E4-ERα complex binds to the key coactivator protein SRC3 . A further functional characterization of E4-ERα has been performed through a coregulator recruitment assay, comparing the binding pattern of ERα to 154 coregulator motifs induced by E2 and E4. The pattern of coregulator recruitment induced by E4 was very similar to that elicited by E2, but E4 was less potent than E2 in inducing this recruitment pattern . E4 Genomic Signaling Pathways E4 induces transcriptional activity via both ERα and ERβ. The impact of E4 on the activation and binding of ERα to the ERE was investigated with a luciferase reporter gene assay based on T47D-KBluc cells (breast cancer cell line) in the presence of increasing concentrations of E2 or E4. Like E2, E4 stimulated ERE transactivation in these cells, although with a 100- to 1000-fold lower potency compared with E2. E4 failed to antagonize the effects of E2 on the induction of ERE transactivation . These data are consistent with the lower ERα binding affinity of E4 compared with E2. The contribution of AF-1 and AF-2 to the classical genomic actions induced by E4 was evaluated in HepG2 and HeLa cell systems. As previously described for E2, both AF-1 and -2 are involved in this action in a cell type-dependent manner . The capacity of E4 to promote non-classical genomic effects was also confirmed by measuring the expression of genes that do not harbor an ERE in their promoter region: BRCA1 and CCND1. These genes are thought to be regulated by the recruitment of ERα to AP-1 or Sp1 sites in their promoter region. E4 was able to upregulate the expression of these genes , demonstrating the capacity of E4 to induce classical and non-classical genomic effects.
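The reported 100- to 1000-fold difference in ERE-transactivation potency can be stated compactly. Assuming a standard sigmoidal (Hill-type) concentration–response model (an illustrative formalization rather than the model actually fitted in the cited assay), the response to a ligand concentration $C$ and the relative potency of E4 versus E2 are

$$
E(C) = E_{\min} + \frac{(E_{\max} - E_{\min})\, C^{\,n}}{EC_{50}^{\,n} + C^{\,n}},
\qquad
RP_{\mathrm{E4/E2}} = \frac{EC_{50}(\mathrm{E2})}{EC_{50}(\mathrm{E4})} \approx 10^{-3}\text{–}10^{-2},
$$

i.e., reaching a given level of ERE activation requires roughly 100–1000 times more E4 than E2, consistent with its lower ERα binding affinity.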
Several estrogen-induced biological responses have been described to be dependent on the nuclear activation of ERα, including uterine epithelial proliferation , vaginal epithelial proliferation and lubrication , prevention of bone demineralization , the cardioprotective effect observed in response to estrogens , and the actions of estrogens on energy balance and glucose homeostasis . As described in the different sections above, E4 is able to induce these responses to the same extent as other estrogens, confirming its capacity to activate the ERα nuclear pathway. E4 Non-genomic Signaling Pathways E4 has been shown to induce rapid extranuclear effects on the ERK1/2 and PI3K/AKT pathways in MCF-7 cells in vitro or in MCF-7 tumors collected from mice after 5 weeks of treatment. E4 increased the phosphorylation of ERK1/2 in a fast and transient manner, with maximal activation seen after 5 min . However, in this study, the receptor responsible for the extranuclear effects induced by estrogens was not identified and the signal of the immunohistochemistry staining in tumors was not quantified. The interaction between ERα and the tyrosine kinase Src leading to the extranuclear complex ERα:Src is a well-described aspect of ERα activation at the membrane . While E2 was found to promote this interaction, E4 was much less efficient in inducing the ERα:Src interaction. In addition, when administered together, the combination of E2 + E4 totally abrogated the interaction between ERα and Src . The treatment of breast cancer cells with estrogens is associated with ERα membrane translocation and the rapid formation of specialized cell membrane structures through activation of the actin-binding protein moesin. This process is responsible for rapid changes in cell membrane morphology, leading to cell migration and invasion . The treatment of T47-D cells with E4 stimulated migration and invasion to a much lower extent than E2. When E4 was added to E2, an inhibition of the actin remodeling induced by E2 was seen. E4 decreased the extent of movement and invasion induced by E2 . E4 was tested on the activation of eNOS in mouse aortae by measuring eNOS phosphorylation and NO production, which are thought to be exclusively dependent on membrane ERα signaling after estrogen treatment . While E2 rapidly induced eNOS phosphorylation and NO production, E4 failed to produce these effects. Furthermore, when coadministered, E4 inhibited the stimulatory action of E2 on these endothelial actions. Altogether, the data indicate that in specific cell types such as breast cancer cells and endothelial cells, E4 presents a specific profile of ERα activation by inducing only ERα nuclear actions and preventing ERα membrane actions. Furthermore, a possible contribution of the membrane receptor GPER to E4-induced breast cancer cell growth has been suggested since G15, a GPER antagonist, partially decreased E4-induced MCF-7 cell growth . The effect of E4 on endothelial cell migration has also been reported to be driven by GPER-dependent mechanisms .
Available clinical data indicate that E4 has a more selective pharmacological profile compared with other estrogens, reflected by a low estrogenic impact on the liver, including on SHBG production, hemostasis parameters and lipid profile. Preclinical data also suggest that E4 may have a differential effect on breast epithelial cells and breast cancer cells compared with other estrogens. The biological effects induced by E4 were shown to be primarily driven via ERα. E4 is able to activate the nuclear ERα signaling pathway to the same extent as other estrogens to induce biological responses. However, in different cell types, E4 displays a specific profile of ERα activation that uncouples nuclear and membrane activation. In breast cancer cells, E4 poorly induces the extranuclear interaction between ERα and Src and poorly induces the ERα-dependent activation of moesin. Both genomic and non-genomic actions of ERα play pivotal roles and work in concert to induce breast cancer cell proliferation and survival . The interaction between the MAPK pathway and ERα has, for example, been described to promote a proneoplastic transcriptional network in the mammary gland . Furthermore, the extranuclear signaling between ERα and Src is reported to play an important role in ER+ breast cancer and to constitute a potential new therapeutic target in breast cancer . The molecular mode of action of E4 may therefore support the lower impact of E4 on breast cell proliferation and breast cancer growth observed in preclinical studies. In addition to the ER-mediated effects on breast cell proliferation, the production of highly reactive estrogen metabolites also contributes to the risk of breast carcinogenesis . CYP enzymes do not play a major role in the metabolism of E4, suggesting that E4 metabolism does not generate reactive metabolites and that E4 might be devoid of this carcinogenesis pathway, contrary to other estrogens. This hypothesis needs to be verified in dedicated studies. While the precise mechanisms behind the modulation of hemostasis parameters by estrogens are not fully understood, several studies suggest that estrogen metabolism might be linked to the higher risk of VTE seen among estrogen users.
Women using oral but not transdermal MHT have an increased risk of VTE, suggesting that the hepatic first-pass effect of oral estrogens might be involved. Oral MHT results in a substantial increase in plasma E1 concentration, and studies showed that E1 levels correlated with peak thrombin generation in women using oral MHT. The effect of E1, the main metabolite of oral E2, on thrombin generation may therefore provide an explanation for the higher thrombotic risk seen in women using oral MHT . Another study indicated that the thrombotic risk may be modulated by the expression of CYP enzymes involved in the hepatic metabolism of estrogens. Carriers of the CYP3A5*1 allele exhibit a high expression of CYP3A5 and present a higher thrombotic risk with oral estrogen compared with non-carriers . This suggests that the formation of hydroxylated estrogen derivatives could be involved in the exacerbated hormone response in the liver. Since CYP enzymes are not of importance in the metabolism of E4 and since E4 is not converted back into E1, the metabolic particularities of E4 may support its lower impact on the hemostasis balance compared with other estrogens. The selective pharmacological profile of E4 is also characterized by a low impact on lipid profile. Using transgenic mice expressing the ligand-binding domain of ERα exclusively at the plasma membrane, a study showed that exposure to propyl-pyrazole-triol, a selective ERα agonist, influenced the expression of many genes involved in lipid synthesis and lipid content in the liver (cholesterol, triglycerides and fatty acids). These data indicate that membrane-localized ERα is able to regulate some metabolic responses, at least in the liver, through a mechanism independent of the nuclear ERα pathway . The differential effect of E4 on the membrane ERα may therefore also play a role in the hepatic aspects and, more specifically, on the lipid profile. Collectively, pieces of evidence suggest that the molecular mode of action of E4 and the different metabolism of E4 may provide explanations for its selective pharmacological profile. E4 is the estrogenic component of a recently marketed COC in combination with the progestin DRSP. E4 is also under development for use as an MHT. The pharmacological characterization conducted in the framework of these developments indicates that E4, alone or in combination with a progestin, offers therapeutic efficacy for the prevention of pregnancy and alleviation of menopausal symptoms. E4 elicits an adequate estrogenic activity in uterovaginal tissues and has the potential to prevent bone loss, as shown by preliminary clinical data. In addition, preclinical studies highlighted that E4 exerts beneficial actions on different cardiovascular functions. While the use of oral estrogens can cause unwanted effects due to their impact on non-target tissues, E4 seems to display a more selective pharmacological profile. This includes a low estrogenic impact on the liver and the hemostasis balance, suggestive of a lower thrombotic risk. Although sex steroids can promote the growth of certain hormone-dependent tissues and tumors due to their hormonal action, preclinical evidence suggests that E4 may be associated with a lower risk of breast carcinogenesis compared with other estrogens. Epidemiological studies will be essential to verify these hypotheses and to confirm the improved safety profile of E4. The main pharmacological properties of E4 described in this review are summarized in Fig. . 
While further elucidation of the possible mechanisms will provide a deeper understanding, current data suggest that the molecular mode of action and the different metabolism of E4 may support its selective pharmacological profile and therefore its favorable benefit–risk ratio. |
Exploring perceived learning effectiveness in virtual reality health communication through the lens of construal level theory | f6bf7e0a-59e6-46de-af94-f6b45b9212c0 | 11360774 | Health Communication[mh] | The rapid evolution of technology has ushered in innovative forms of communication, such as virtual reality (VR). VR refers to computer-generated simulations that enable individuals to interact with artificial sensory environments . The global adoption of VR devices is increasing, with the VR market projected to increase from US$12.26 billion in 2022 to US$28.84 billion in 2026 . Additionally, VR headset sales reached 16.44 million units in 2021 and are anticipated to increase to 34 million units by 2024 . Due to its immersive and interactive nature, VR holds significant potential as an effective communication tool . Consequently, it is being increasingly utilized as a communication medium across various sectors, including healthcare, where its value is forecasted to increase from US$2.33 billion in 2022 to US$25.22 billion in 2030, reflecting a 34.90% growth rate . In healthcare, practitioners leverage computer-generated VR imagery to assist clients in visualizing specific medical conditions, comprehending the mechanisms of therapies, and better understanding their bodies and potential treatments . Given the increasing popularity of VR and its application in healthcare communication, it is important to understand how it can be effectively applied to health communication to ensure that it enhances the learning effectiveness of the communicated messages. Research on human–computer interactions has emphasized the importance of psychological distance in technology-mediated communication. Anchored in construal level theory, psychological distance refers to the degree of abstraction in an individual's experience with immersive technology . According to this theory, people interpret stimuli based on mental representations shaped by the perceived psychological distance in their interaction with these stimuli . The construal level theory posits four subdimensions of psychological distance: temporal, social, spatial, and hypothetical . Additional subdimensions, such as technical distance and emotional distance, have been proposed in other studies . Mitigating psychological distance enhances users’ relationships with computer devices and increases usage intentions . In interpersonal contexts, experiences characterized by low psychological distance, indicative of healthy dyadic relationships, have been demonstrated to intensify engagement . Numerous studies have applied the construal level theory to elucidate how psychological distance shapes outcomes in technological contexts such as social media, human–robot interaction, and human–computer interaction . Despite the theory's significance in elucidating technology usage outcomes, its application in the VR domain remains underexplored, rendering the impact of psychological distance on engagement in VR contexts not to be fully understood. VR has distinct affordance which necessitates developing an understanding of its applicability in the VR context. First, VR offers unparalleled immersive experiences which allow users to feel physically present in the virtual environment, which can significantly reduce psychological distances . This immersion can make abstract health risks feel immediate and personal, thereby enhancing user engagement and motivation. 
Secondly, unlike traditional media, VR can create a sense of presence and embodiment and enable users to visualize and experience hypothetical health scenarios in a highly realistic manner . This ability to manipulate temporal, spatial, and social distances within a controlled virtual environment makes VR a powerful tool for health communication and allows messages to be framed in ways that resonate more deeply with users' perceptions and emotions. These distinct affordances of virtual reality necessitate the examination of its applicability in the health communication context. In health communication in particular, the need for personalization and emotional engagement is paramount . VR's immersive nature can facilitate empathy by placing users in scenarios that closely mimic real-life experiences, such as witnessing the progression of a disease or the benefits of a healthy lifestyle firsthand . This immersive experience can make health risks and preventive measures more tangible and urgent . Moreover, VR can simulate the impact of health behaviors over time, providing a vivid, experiential understanding that is difficult to achieve through other media . Considering the distinctiveness of the VR and health communication contexts, this study applies the construal level theory to VR health communication to investigate the effect of psychological distance on engagement. The existing body of research on technology-mediated communication underscores the pivotal role of engagement in influencing communication effectiveness in these contexts . When individuals are actively engaged in communication, they are more likely to internalize information effectively, as engagement captures attention, enhances comprehension, and facilitates information retention . Research in interactive technology contexts has particularly emphasized the significance of flow, immersion, and presence as integral components of engagement . Immersion allows users to interact with and manipulate objects naturally and intuitively . Flow, achieved by maintaining a balance between the challenge of the virtual environment and the individual's skill level, ensures deep concentration and enjoyment . Additionally, presence, facilitated through realistic sensory inputs such as graphics, spatial audio, and haptic feedback, creates a ‘convincing illusion’ of physical presence in the virtual world . Despite the crucial role of engagement in technology-mediated communication, relatively few studies have empirically examined how engagement shapes communication effectiveness in VR . Furthermore, scholars have advocated for additional empirical research to foster a deeper understanding of engagement in VR communication . In response to this gap in empirical evidence on the role of engagement in VR communication, this study proposes that psychological distance negatively impacts engagement, and that engagement positively affects perceived learning effectiveness. This study thus addresses the following questions: (1) how does psychological distance affect engagement in health communication in the VR context? (2) does engagement enhance perceived learning effectiveness of health communication in the VR context? Across two empirical studies, this research establishes that psychological distance and engagement shape perceived learning effectiveness in VR health communication.
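Study 1, summarized next, applies the DEMATEL technique to separate cause factors from effect factors among the psychological-distance dimensions. For readers unfamiliar with that terminology, the following minimal Python sketch shows one common DEMATEL formulation; the direct-influence ratings are invented for illustration and are not the study's data, and the resulting grouping is purely illustrative.

```python
import numpy as np

# Hypothetical expert ratings of how strongly each psychological-distance
# dimension influences the others (0 = none ... 4 = very high). Illustrative only.
labels = ["temporal", "spatial", "social", "hypothetical", "technical", "emotional"]
A = np.array([
    [0, 1, 1, 2, 1, 1],
    [2, 0, 3, 1, 2, 3],
    [2, 3, 0, 1, 2, 3],
    [1, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [3, 3, 3, 2, 1, 0],
], dtype=float)

# Normalize the direct-influence matrix (a common convention divides by the largest row sum).
D = A / A.sum(axis=1).max()

# Total-relation matrix: T = D (I - D)^(-1)
T = D @ np.linalg.inv(np.eye(len(A)) - D)

r = T.sum(axis=1)  # influence each dimension dispatches
c = T.sum(axis=0)  # influence each dimension receives

for name, prominence, relation in zip(labels, r + c, r - c):
    group = "cause" if relation > 0 else "effect"
    print(f"{name:12s}  prominence={prominence:5.2f}  relation={relation:+5.2f}  -> {group} factor")
```

Dimensions with a positive relation score (r − c) form the cause group, and those with a negative score form the effect group, which is the sense in which the results below speak of cause and effect factors.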
The results of the DEMATEL study showed that three dimensions of psychological distance (emotional, spatial, and social distance) acted as cause factors for perceived learning effectiveness, whereas temporal, technical, and hypothetical distance were effect factors. The structural equation modeling study revealed that the dimensions of psychological distance had negative effects on flow and presence and that immersion and presence positively affected perceived learning effectiveness. The findings of this study also confirmed that presence mediated the effect of psychological distance on perceived learning effectiveness. This study contributes to the field of VR communication research in several ways. First, applying the construal level theory sheds light on the psychological factors that shape the effectiveness of VR communication. Given the novelty of VR devices, existing research has focused predominantly on the determinants of their adoption rather than their effective usage . Second, the study underscores the crucial role of user engagement in VR communication. While engagement is recognized as a determinant of communication effectiveness in technology-mediated contexts , its role in VR communication has yet to be explored. By demonstrating the mediating role of engagement, this study emphasizes that audience engagement is pivotal in shaping the effectiveness of VR-mediated learning. Furthermore, the study highlights the importance of psychological distance in enhancing VR communication, an aspect largely overlooked by existing research. The concept has been examined in contexts other than VR communication . However, the differences in affordances across communication media necessitate the examination of the construct in VR communication. The findings also emphasize the need for reducing psychological distance and enhancing engagement in VR communication. The rest of the paper is structured as follows: the next section presents the literature review and hypothesis development, followed by the methodology, results, and discussion for both studies, the theoretical implications, the practical implications, and the limitations and directions for future research. Construal level theory of psychological distance According to construal level theory, individuals' preferences and evaluations of external stimuli are influenced by psychological distance . Psychological distance refers to an individual's perception of whether something is close or far from the self . When an object is perceived as distant from the self, it is represented at a higher construal level, as its mental representation requires greater abstraction. In contrast, if the object is perceived as close to the self, its mental representation requires a lower-level abstraction and is represented at a lower construal level . The levels of abstraction differ for near and distant objects because it is easier to obtain extensive information for nearer objects [as compared to distant objects], reducing the cognitive effort required to form representations about an object . Prior research indicates that psychological distance encompasses several interrelated subdimensions. The classical construal level theory literature suggests four dimensions of psychological distance: temporal, spatial, social, and hypothetical .
Temporal distance pertains to individuals' perception of the interval between an object and future occurrences . A higher temporal distance suggests an event will likely occur much later . Spatial distance involves an individual's perception of the physical distance between himself or herself and the object they interact with within the virtual environment . Objects perceived as being in the individual's vicinity [i.e., having low spatial distance] are construed at a lower construal level. Social distance is defined by the closeness of the relationship between the individual and the object . Social connections help individuals develop close relationships with objects. A socially close object is construed at a lower level, whereas a socially distant object is construed at a higher level . Hypothetical distance refers to an individual's belief in the reality of the object . An object considered hypothetically near has a high probability of occurring, while a hypothetically distant object has a lower probability . Objects that are hypothetically near are construed at a lower level, whereas those that are hypothetically distant are construed at a higher level. Given that different media contexts have different affordances that shape users' perceptions of psychological distance, extant research has called for additional dimensions of psychological distance . In response, recent research has proposed additional dimensions, most notably technical and emotional distance . A crucial factor in the use of technology is the individual's efficacy in using the technology effectively . When individual efficacy is high, the use of technological devices becomes more accessible, allowing users to derive maximum benefits . In VR, technical distance pertains to the ease with which users can navigate and use VR technology . High technical distance occurs when an individual struggles to use VR technology comfortably, while low technical distance indicates that the individual can use VR technology with ease and efficiency. A low technical distance implies that the information communicated via the VR device can be processed at a low construal level because minimal cognitive effort is needed for device usage . The emotional connection between the individual and the object of communication is also crucial in determining the success of communication. When individuals feel connected to a communication object, they are more likely to internalize the message effectively . Recognizing the centrality of emotion in communication, emotional distance emerges as another dimension of psychological distance. Emotional distance refers to an individual's emotional affinity toward the subject being communicated through VR technology . When an individual feels emotionally close to the object of communication, the emotional distance is low, and the message is processed with a low-level construal . Conversely, if the individual feels emotionally disconnected from the object of communication, the emotional distance is high, and the message is processed with a high-level construal. Engagement in VR Engagement in VR is a multi-faceted concept that encompasses immersion, flow and presence . These dimensions contribute to the overall user experience and effectiveness of VR communication. Immersion is the extent to which the VR system can deliver an inclusive, extensive, surrounding, and vivid illusion of reality .
It involves both the technical aspects of VR (such as graphical fidelity, audio quality, and haptic feedback) and the user's psychological involvement . Immersion enables users to be deeply absorbed in the virtual environment and reduces their awareness of the real world . It also enhances the believability of the virtual experience . As a component of engagement, immersion enhances the intensity and quality of the VR experience . Flow refers to the state of optimal experience characterized by complete absorption, enjoyment, and intrinsic motivation during VR use . In the context of VR, flow is achieved when users are fully engaged in the virtual experience to the extent of losing track of time and external distractions . The balance between challenge and skill is essential for achieving flow as it ensures that users are neither bored nor overwhelmed . Flow in VR has been linked to increased user satisfaction, prolonged engagement, and enhanced task performance . Presence refers to the sense of being physically and spatially situated within a virtual environment . It is the psychological state where users feel they are "inside" the VR world rather than merely observing it from an external perspective. Presence is crucial for enhancing the realism of VR experiences and fostering a deeper connection between users and the virtual environment . Prior research has identified several antecedents of engagement in VR, including usability, interactivity, and content quality. Usability refers to the ease with which users can navigate and interact with the VR system, impacting their overall experience and satisfaction . Interactivity, or the degree to which users can influence the virtual environment, has been shown to enhance engagement by providing a more dynamic and responsive experience . Content quality, which encompasses the narrative, visuals, and overall coherence of the VR experience, also plays a crucial role in sustaining user interest and immersion . However, there is a notable gap in the literature regarding the role of psychological distance as an antecedent of engagement in VR. Psychological distance has been established as a stimulant of active user behavior in contexts such as traditional media and social media . We apply the concept in the context of VR to examine its effects on engagement. Furthermore, engagement in VR has been linked to various positive outcomes, including increased enjoyment, enhanced learning, and improved task performance. Users who are highly engaged in VR experiences tend to exhibit greater satisfaction and prolonged interaction with the system . Engagement in VR can also lead to improved learning outcomes, as the immersive and interactive nature of VR facilitates deeper cognitive processing and retention of information . This research examines perceived learning effectiveness as an outcome of engagement in VR health communication. Thus, the research provides insights into the efficacy of VR as a health communication tool and identifies the best practices for designing educational VR content that maximizes user engagement and learning. VR, psychological distance, immersion, flow, and presence VR technologies are engineered to teleport individuals from physical environments to virtual realms, delivering vivid and interactive immersive experiences . Immersive media offer viewers a rich sensory environment .
VR, in particular, stimulates various senses at high resolution, intensifying the individual's perception and absorption into the virtual environment . Through vividness and interactivity, VR can alter individuals' perceptions of time, space, and social interactions, helping users feel present in a specific place at a certain time and interact with seemingly close but remote elements . Moreover, VR enhances interactivity by allowing users to actively modify the form and content of the mediated environment in real time . This is facilitated through hardware, software, and user elements . VR systems incorporate display and audio systems for participants to engage with displayed content and tracking technologies for real-time monitoring of user movements and positioning, enabling accurate system responses . These systems provide instant feedback to users, stimulating further user actions . Thus, VR systems are meticulously designed to ensure interactivity, enhancing individuals' sense of proximity to the virtual environment and, consequently, reducing their psychological distance from it. Immersion encompasses physical, mental, and emotional involvement, significantly contributing to the overall quality of the VR experience . Unlike spatial distance, which measures the user's perceived physical distance from the virtual space, immersion focuses on the individual's mental and emotional perception of the virtual environment . When the gap between the virtual and real worlds is minimized, immersion is enhanced. Users who perceive the virtual environment as realistic are more likely to experience it as an alternate reality, fostering familiarity and recognition that encourage active engagement and deeper immersion . Low psychological distance, which implies increased realism, reduces users' disbelief in the virtual environment, thereby enhancing their sense of immersion . Additionally, low psychological distance allows users to emotionally connect with both the VR device and the communicated health message, further enhancing immersion . Therefore, it is expected that psychological distance has a negative relationship with immersion. When psychological distance is high, users may struggle to perceive the virtual environment as realistic and relevant, which may reduce their overall sense of immersion. Conversely, reducing psychological distance can create a more engaging and immersive VR experience. Hence, the following hypothesis is proposed: H1: Psychological distance has a negative relationship with immersion. Flow is an optimal psychological experience whereby individuals become fully absorbed in an activity. For flow to occur, several preconditions must be met . First, the individual must possess adequate skills to meet the demands of the activity. Second, the activity must have clear goals and provide timely, unambiguous feedback. Third, the individual must be fully focused on the task, with a sense of control over it. During flow, the individual loses a degree of self-consciousness and has an autotelic experience and a distorted sense of time. Flow implies fun and interest while engaging in an activity, as the individual is focused entirely on the task . Ultimately, this leads to positive behavioral outcomes such as continued usage intention and attachment to the activity . In the context of VR, the conceptualization of flow as an affordance plays a pivotal role in enhancing the user experience and achieving a state of reduced psychological distance between the user and the virtual environment.
This aligns with the concept of affordances, which emphasizes the relationship between the characteristics of an environment and the actions individuals can perform within that environment . Flow, characterized by deep engagement and an optimal challenge–skill balance, aligns closely with VR, which provides affordances that facilitate immersive and meaningful interactions . Recent research substantiates that flow significantly contributes to positive outcomes in VR experiences . A key goal in VR design is reducing psychological distance, aiming to minimize the perceived gap between the user and the virtual world . When experiencing flow, users become fully absorbed in the VR environment, thereby blurring the boundary between reality and the virtual space. This reduced psychological distance is crucial for achieving a sense of presence and immersion, which is fundamental to successful VR experiences . The feeling of being present in the virtual world is closely tied to flow, which contributes to the overall sense of user satisfaction and engagement in VR applications . When health messages in virtual environments feel distant, users must expend additional cognitive effort to bridge these gaps . This can increase cognitive load and distract users from the immersive experience . High psychological distance can also dampen emotional engagement, which can make health messages feel less relevant or urgent and reduce users' emotional connection to the VR content . This reduced emotional engagement can prevent users from becoming fully immersed, which is otherwise a necessary condition for achieving flow . Furthermore, psychological distance can undermine presence by making the virtual experience feel less real or immediate. Without a strong sense of presence, users may struggle to suspend disbelief and fully engage with the VR scenario . Additionally, psychological distance can interfere with intrinsic motivation, as health messages perceived as hypothetical or not immediately applicable may reduce users' motivation to deeply engage with the VR content . As a result, psychological distance is likely to negatively affect flow in VR health communication by increasing cognitive load, reducing emotional engagement, undermining presence, and interfering with intrinsic motivation. The following hypothesis is proposed: H2: Psychological distance negatively affects flow. Presence in VR refers to the feeling of being psychologically immersed in a virtual environment, even while physically situated elsewhere . This immersive experience makes users perceive the virtual environment as genuine rather than a mere collection of computer images . Presence in VR encompasses physical presence (feeling physically present in the virtual environment) and psychological presence (mental and emotional presence in the virtual environment). Achieving presence is a two-step process: first, individuals must perceive the virtual environment as a plausible and recognizable space, and second, they must perceive themselves as actively engaged within this environment . Spatial presence is contingent upon the individual's active engagement and attention to the virtual environment . When health messages or virtual environments feel temporally distant, such as health risks that seem far off in the future, users may find it challenging to perceive the information as immediate or urgent, thereby reducing their sense of presence in the VR health communication process .
Similarly, spatial distance, where the virtual environment feels geographically remote, can make it harder for users to relate to the scenario as they may perceive it as less applicable to themselves . This can, in turn, diminish their immersive experience and sense of presence in the VR communication process . In addition, cases of high social distance, where the characters or scenarios in VR are perceived as socially or culturally distant, can hinder users' emotional and cognitive connection to the virtual environment and make it less engaging and realistic . Finally, hypothetical distance, where the scenarios are perceived as unlikely or abstract, can further reduce the believability and immediacy of the VR experience . Thus, when psychological distance is high, users struggle to suspend disbelief and fully immerse themselves in the virtual environment, which negatively affects their sense of presence. The following hypothesis is proposed: H3: Psychological distance has a negative relationship with presence. Immersion, flow, presence and perceived learning effectiveness Prior studies have focused on assessing communication effectiveness by measuring objective communication outcomes from the perspective of defined and targeted senders of information . However, recipients in the communication process form their own beliefs and judgments about the effectiveness of the learning process. These personal evaluations of communication effectiveness are crucial predictors of the behavioral outcomes related to communication . Given that the primary goal of communication is often to persuade the audience to adopt or execute behaviors supportive of the communicated subject, it is crucial to understand the determinants of perceived learning effectiveness in communication. Immersion entails profound engagement with the virtual environment, minimizing distractions from stimuli in the physical environment during the communication process . The absence of external distractions enhances engagement and focus, improving information processing and retention . Immersion is also linked to affective information processing, contributing to positive learning outcomes . Therefore, immersion is expected to influence perceived learning effectiveness positively. Effective learning requires individuals to be absorbed in the learning process, maintain focus, establish clear learning goals, sustain interest, and possess sufficient skills to facilitate learning . These characteristics align with the flow experience. Clear learning goals enable individuals to focus on achievement and make corrections when necessary. Adequate interest ensures sustained attention to the learning process, contributing to effective learning . Additionally, possessing sufficient skills increases the likelihood of achieving learning goals . In this study, presence refers to perceiving the virtual environment as interactive, emotional, and authentic . Interactivity allows VR users to provide input during interactions, potentially aiding the internalization of communicated messages . In addition, emotional engagement contributes to the consolidation and retrieval of learned content . Furthermore, the realism of the learning process enhances the credibility of the learning output, attracting attention and consequently contributing to the internalization of communicated content .
Therefore, it is expected that immersion, flow, and presence will positively impact perceived learning effectiveness, leading to the formulation of the following hypothesis: H4: [a] Immersion, [b] Flow, and [c] Presence positively affect perceived learning effectiveness. Immersion, flow, and presence as mediators Engagement is critical to learning in immersive contexts such as VR. Immersion, flow, and presence are key constructs that characterize engagement in VR , playing a vital role in the learning process within VR contexts. Specifically, immersion involves the individual being absorbed into the virtual environment with minimal external distractions, enabling undistracted focus on VR content . With explicit learning goals, personal interest, and skill adequacy, flow enhances attention and makes the learning experience interesting, potentially boosting learning effectiveness . Additionally, presence entails perceiving the VR environment as authentic, making information from the VR context more believable . This perception increases interest in the learning process, contributing to effective learning. The theoretical rationale presented in this paper posits that psychological distance negatively influences immersion, flow, and presence. A smaller disconnect between the individual and the virtual environment leads to individuals being more fully absorbed into the virtual environment . Consequently, psychological distance is expected to negatively affect immersion, flow, and presence, ultimately influencing the perceived learning experience. Given the critical roles of immersion, flow, and presence in VR communication and their impact on the effectiveness of the communication process , this study proposes that immersion, flow, and presence mediate the effect of psychological distance on perceived learning effectiveness. Thus, hypothesis 5 is proposed. H5: (a) Immersion, (b) flow, and (c) presence mediate the relationship between psychological distance and perceived learning effectiveness. Figure illustrates the research framework for Study 2, which employs the SEM framework. This framework will guide the investigation of how psychological distance influences key engagement factors (immersion, flow, and presence) and how these factors subsequently impact the perceived effectiveness of learning in VR.
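To make the hypothesized structure concrete, the sketch below illustrates the mediation logic of H1–H5 with simulated data. It is a simplified observed-variable approximation (ordinary least squares on composite scores with a percentile bootstrap for the indirect effect), not the latent-variable SEM reported for Study 2; the variable names, coefficients, and sample size are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated composite scores (illustrative only; not the study's data):
# PD = psychological distance, PRES = presence, PLE = perceived learning effectiveness.
n = 300
PD = rng.normal(0.0, 1.0, n)
PRES = -0.5 * PD + rng.normal(0.0, 1.0, n)              # H3: negative path a (PD -> presence)
PLE = 0.6 * PRES - 0.1 * PD + rng.normal(0.0, 1.0, n)   # H4c: positive path b (presence -> PLE)

def ols(X, y):
    """Ordinary least squares coefficients (intercept first) of y on the columns of X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Percentile bootstrap of the indirect effect a*b for H5c (PD -> presence -> PLE).
indirect = np.empty(2000)
for i in range(indirect.size):
    idx = rng.integers(0, n, n)
    a = ols(PD[idx].reshape(-1, 1), PRES[idx])[1]                # PD -> presence
    b = ols(np.column_stack([PD[idx], PRES[idx]]), PLE[idx])[2]  # presence -> PLE, controlling for PD
    indirect[i] = a * b

low, high = np.percentile(indirect, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{low:.3f}, {high:.3f}]")
# A confidence interval that excludes zero is the usual criterion for supporting mediation (H5c).
```

The same logic extends to immersion and flow as parallel mediators (H5a and H5b) by adding the corresponding paths.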
A higher temporal distance suggests an event will likely occur much later . Spatial distance involves an individual's perception of the physical distance between themselves and the object they interact with within the virtual environment . Objects perceived as being in the individual's vicinity [i.e., having low spatial distance] are construed at a lower construal level. Social distance is defined by the closeness of the relationship between the individual and the object . Social connections help individuals develop close relationships with objects. A socially close object is construed at a lower level, whereas a socially distant object is construed at a higher level . Hypothetical distance refers to an individual's belief in the reality of the object . An object considered hypothetically near has a high probability of occurring, while a hypothetically distant object has a lower probability . Accordingly, objects that are hypothetically near are construed at a lower level, whereas those that are hypothetically distant are construed at a higher level. Given that different media contexts have different affordances that shape users' perceptions of psychological distance, extant research has called for additional dimensions of psychological distance . Recent research has proposed additional dimensions, most notably technical and emotional distance . A crucial factor in the use of technology is the user's efficacy in operating the technology effectively . When individual efficacy is high, the use of technological devices becomes more accessible, allowing users to derive maximum benefits . In VR, technical distance pertains to the ease with which users can navigate and use VR technology . High technical distance occurs when an individual struggles to use VR technology comfortably, while low technical distance indicates that the individual can use VR technology with ease and efficiency. Low technical distance implies that the information communicated via the VR device can be processed at a lower construal level because minimal cognitive effort is needed to operate the device . The emotional connection between the individual and the object of communication is also crucial in determining the success of communication. When individuals feel connected to a communication object, they are more likely to internalize the message effectively . Recognizing the centrality of emotion in communication, emotional distance emerges as another dimension of psychological distance. Emotional distance refers to an individual's emotional affinity toward the subject being communicated through VR technology . When an individual feels emotionally close to the object of communication, the emotional distance is low, and the message is processed with a low-level construal . Conversely, if the individual feels emotionally disconnected from the object of communication, the emotional distance is high, and the message is processed with a high-level construal. Engagement in VR is a multi-faceted concept that encompasses immersion, flow, and presence . These dimensions contribute to the overall user experience and effectiveness of VR communication. Immersion is the extent to which the VR system can deliver an inclusive, extensive, surrounding, and vivid illusion of reality . It involves both the technical aspects of VR (such as graphical fidelity, audio quality, and haptic feedback) and the user's psychological involvement . Immersion enables users to be deeply absorbed in the virtual environment and reduces their awareness of the real world .
It also enhances the believability of the virtual experience . As a component of engagement, immersion enhances the intensity and quality of the VR experience . Flow refers to the state of optimal experience characterized by complete absorption, enjoyment, and intrinsic motivation during VR use . In the context of VR, flow is achieved when users are fully engaged in the virtual experience to the extent of losing track of time and external distractions . The balance between challenge and skill is essential for achieving flow, as it ensures that users are neither bored nor overwhelmed . Flow in VR has been linked to increased user satisfaction, prolonged engagement, and enhanced task performance . Presence refers to the sense of being physically and spatially situated within a virtual environment . It is the psychological state where users feel they are "inside" the VR world rather than merely observing it from an external perspective. Presence is crucial for enhancing the realism of VR experiences and fostering a deeper connection between users and the virtual environment . Prior research has identified several antecedents of engagement in VR, including usability, interactivity, and content quality. Usability refers to the ease with which users can navigate and interact with the VR system, impacting their overall experience and satisfaction . Interactivity, or the degree to which users can influence the virtual environment, has been shown to enhance engagement by providing a more dynamic and responsive experience . Content quality, which encompasses the narrative, visuals, and overall coherence of the VR experience, also plays a crucial role in sustaining user interest and immersion . However, there is a notable gap in the literature regarding the role of psychological distance as an antecedent of engagement in VR. Psychological distance has been established as a stimulant of active user behavior in contexts such as traditional media and social media . We apply the concept in the context of VR to examine its effects on engagement. Furthermore, engagement in VR has been linked to various positive outcomes, including increased enjoyment, enhanced learning, and improved task performance. Users who are highly engaged in VR experiences tend to exhibit greater satisfaction and prolonged interaction with the system . Engagement in VR can also lead to improved learning outcomes, as the immersive and interactive nature of VR facilitates deeper cognitive processing and retention of information . This research examines perceived learning effectiveness as an outcome of engagement in VR health communication. Thus, the research provides insights into the efficacy of VR as a health communication tool and identifies best practices for designing educational VR content that maximizes user engagement and learning. VR technologies are engineered to teleport individuals from physical environments to virtual realms, delivering vivid and interactive immersive experiences . Immersive media offer viewers a rich sensory environment . VR, in particular, stimulates various senses at high resolution, intensifying the individual's perception of, and absorption into, the virtual environment . Through vividness and interactivity, VR can alter individuals' perceptions of time, space, and social interactions, helping users feel present in a specific place at a certain time and interacting with seemingly close but remote elements .
Moreover, VR enhances interactivity by allowing users to actively modify the form and content of the mediated environment in real time . This is facilitated through hardware, software, and user elements . VR systems incorporate display and audio systems for participants to engage with displayed content and tracking technologies for real-time monitoring of user movements and positioning, enabling accurate system responses . These systems provide instant feedback to users, stimulating further user actions . Thus, VR systems are meticulously designed to ensure interactivity, enhancing individuals' sense of proximity to the virtual environment and, consequently, reducing their psychological distance from it. Immersion encompasses physical, mental, and emotional involvement, significantly contributing to the overall quality of the VR experience . Unlike spatial distance, which measures the user's perceived physical distance from the virtual space, immersion focuses on the individual's mental and emotional perception of the virtual environment . When the gap between the virtual and real worlds is minimized, immersion is enhanced. Users who perceive the virtual environment as realistic are more likely to experience it as an alternate reality, fostering familiarity and recognition that encourage active engagement and deeper immersion . Low psychological distance, which implies increased realism, reduces users' disbelief in the virtual environment, thereby enhancing their sense of immersion . Additionally, low psychological distance allows users to emotionally connect with both the VR device and the communicated health message, further enhancing immersion . Therefore, it is expected that psychological distance has a negative relationship with immersion. When psychological distance is high, users may struggle to perceive the virtual environment as realistic and relevant, which may reduce their overall sense of immersion. Conversely, reducing psychological distance can create a more engaging and immersive VR experience. Hence, the following hypothesis is proposed: H1: Psychological distance has a negative relationship with immersion. Flow is an optimal psychological experience whereby individuals become fully absorbed in an activity. For flow to occur, several preconditions must be met . First, the individual must possess adequate skills to meet the demands of the activity. Second, the activity must have clear goals and provide timely, unambiguous feedback. Third, the individual must be considerably focused on the task, with a sense of control over it. During flow, the individual loses a degree of self-consciousness, experiences the activity as autotelic, and has a distorted sense of time. Flow implies fun and interest while engaging in an activity, as the individual is focused entirely on the task . Ultimately, this leads to positive behavioral outcomes such as continued usage intention and attachment to the activity . In the context of VR, the conceptualization of flow as an affordance plays a pivotal role in enhancing the user experience and achieving a state of reduced psychological distance between the user and the virtual environment. This aligns with the concept of affordances, which emphasizes the relationship between the characteristics of an environment and the actions individuals can perform within that environment . Flow, characterized by deep engagement and optimal challenge–skill balance, closely aligns with VR by providing affordances that facilitate immersive and meaningful interactions .
Recent research substantiates that flow significantly contributes to positive outcomes in VR experiences . A key goal in VR design is reducing psychological distance, aiming to minimize the perceived gap between the user and the virtual world . When experiencing flow, users become fully absorbed in the VR environment, thereby blurring the boundary between reality and the virtual space. This reduced psychological distance is crucial for achieving a sense of presence and immersion, which are fundamental to successful VR experiences . The feeling of being present in the virtual world is closely tied to flow, which contributes to the overall sense of user satisfaction and engagement in VR applications . When health messages in virtual environments feel distant, users must expend additional cognitive effort to bridge these gaps . This can increase cognitive load and distract them from the immersive experience . High psychological distance can also dampen emotional engagement, which can make health messages feel less relevant or urgent and reduce users' emotional connection to the VR content . This reduced emotional engagement can prevent users from becoming fully immersed, which is otherwise a necessary condition for achieving flow . Furthermore, psychological distance can undermine presence by making the virtual environment feel less real or immediate. Without a strong sense of presence, users may struggle to suspend disbelief and fully engage with the VR scenario . Additionally, psychological distance can interfere with intrinsic motivation, as health messages perceived as hypothetical or not immediately applicable may reduce users' motivation to deeply engage with the VR content . As a result, psychological distance is likely to negatively affect flow in VR health communication by increasing cognitive load, reducing emotional engagement, undermining presence, and interfering with intrinsic motivation. The following hypothesis is proposed: H2: Psychological distance negatively affects flow. Presence in VR refers to the feeling of being psychologically immersed in a virtual environment, even while physically situated elsewhere . This immersive experience makes users perceive the virtual environment as genuine rather than a mere collection of computer images . Presence in VR encompasses physical presence (feeling physically present in the virtual environment) and psychological presence (mental and emotional presence in the virtual environment). Achieving presence is a two-step process: first, individuals must perceive the virtual environment as a plausible and recognizable space, and second, they must perceive themselves as actively engaged within this environment . Spatial presence is contingent upon the individual's active engagement and attention to the virtual environment . When health messages or virtual environments feel temporally distant, such as future health risks that seem far off from the current period, users may find it challenging to perceive the information as immediate or urgent, thereby reducing their sense of presence in the VR health communication process . Similarly, spatial distance, where the virtual environment feels geographically remote, can make it harder for users to relate to the scenario as they may perceive it as less applicable to themselves . This can, in turn, diminish their immersive experience and sense of presence in the VR communication process .
In addition, cases of high social distance, where the characters or scenarios in VR are perceived as socially or culturally distant, can hinder users' emotional and cognitive connection to the virtual environment and make it less engaging and realistic . Finally, hypothetical distance, where the scenarios are perceived as unlikely or abstract, can further reduce the believability and immediacy of the VR experience . Thus, when psychological distance is high, users struggle to suspend disbelief and fully immerse themselves in the virtual environment, thereby negatively affecting their sense of presence. The following hypothesis is proposed: H3: Psychological distance has a negative relationship with presence. Immersion, flow, presence and perceived learning effectiveness Prior studies have focused on assessing communication effectiveness by measuring objective communication outcomes from the perspective of defined and targeted senders of information . However, recipients in the communication process form their own beliefs and judgments about the effectiveness of the learning process. These personal evaluations of communication effectiveness are crucial predictors of the behavioral outcomes related to communication . Given that the primary goal of communication is often to persuade the audience to adopt or execute behaviors supportive of the communicated subject, it is crucial to understand the determinants of perceived learning effectiveness in communication. Immersion entails profound engagement with the virtual environment, minimizing distractions from stimuli in the physical environment during the communication process . The absence of external distractions enhances engagement and focus, improving information processing and retention . Immersion is also linked to affective information processing, contributing to positive learning outcomes . Therefore, immersion is expected to influence perceived learning positively. Effective learning requires individuals to be absorbed in the learning process, maintain focus, establish clear learning goals, maintain interest in the learning process, and possess sufficient skills to facilitate learning . These characteristics align with the flow experience. Clear learning goals enable individuals to focus on achievement and make corrections when necessary. Adequate interest ensures sustained attention to the learning process, contributing to effective learning . Additionally, possessing sufficient skills increases the likelihood of achieving learning goals . In this study, presence refers to perceiving the virtual environment as interactive, emotional, and authentic . Interactivity allows VR users to provide input during interactions, potentially aiding the internalization of communicated messages . In addition, emotional engagement contributes to the consolidation and retrieval of learned content . Furthermore, the realism of the learning process enhances the credibility of the learning output, attracting attention and consequently contributing to the internalization of communicated content . Therefore, it is expected that immersion, flow, and presence will positively impact perceived learning effectiveness, leading to the formulation of the following hypothesis: H4: [a] Immersion, [b] Flow, and [c] Presence positively affect perceived learning effectiveness. Immersion, flow, and presence as mediators Engagement is critical to learning in immersive contexts such as VR. Immersion, flow, and presence are key constructs that characterize engagement in VR , playing a vital role in the learning process within VR contexts.
Specifically, immersion involves the individual being absorbed into the virtual environment with minimal external distractions, enabling undistracted focus on VR content . With explicit learning goals, personal interest, and skill adequacy, flow enhances attention and makes the learning experience interesting, potentially boosting learning effectiveness . Additionally, presence entails perceiving the VR environment as authentic, making information from the VR context more believable . This perception increases interest in the learning process, contributing to effective learning. The theoretical rationale presented in this paper posits that psychological distance negatively influences immersion, flow, and presence: a smaller disconnect between the individual and the virtual environment allows the individual to become more fully absorbed in it . Consequently, psychological distance is expected to negatively affect immersion, flow, and presence, ultimately influencing the perceived learning experience. Given the critical roles of immersion, flow, and presence in VR communication and their impact on the effectiveness of the communication process , this study proposes that immersion, flow, and presence mediate the effect of psychological distance on perceived learning effectiveness. Thus, hypothesis 5 is proposed. H5: (a) Immersion, (b) flow, and (c) presence mediate the relationship between psychological distance and perceived learning effectiveness. Figure illustrates the research framework for Study 2, which employs the SEM framework. This framework will guide the investigation of how psychological distance influences key engagement factors (immersion, flow, and presence) and how these factors subsequently impact the perceived effectiveness of learning in VR. This study applied the F-DEMATEL method and SEM to examine the relationships among the variables. The hypotheses could be validated through SEM; however, the F-DEMATEL method was also employed to provide a more comprehensive analysis. While SEM is effective for testing the relationships between constructs and validating the proposed model, F-DEMATEL offers additional insights into the causal relationships among variables. Specifically, F-DEMATEL helps to identify the direct and indirect effects and the strength of these relationships in a complex system. This dual-method approach allows for a deeper understanding of the dynamics within the model, ensuring a more robust and nuanced analysis of the data. Furthermore, the F-DEMATEL method was used to investigate the interrelationships among the predictor variables, whereas SEM was used to examine the research hypotheses proposed by the study. The two methods were employed to triangulate the sources of information in the communication process, which involves both senders and receivers of information. Thus, information was collected from experts in VR communication (i.e., senders) and analyzed using the F-DEMATEL method, and from end users of VR systems (i.e., receivers), whose responses were analyzed via SEM. Study 1: The F-DEMATEL method Fuzzy DEMATEL, an integration of the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique with fuzzy logic, is an advanced method for decision analysis that considers the inherent uncertainty and imprecision in real-world decision-making contexts . The DEMATEL method examines cause-and-effect relationships among decision elements, offering insights into the interdependencies between these factors .
Fuzzy logic enhances this approach by accommodating qualitative data and imprecise information through the use of fuzzy sets . The process involves constructing a fuzzy pairwise comparison matrix, where decision makers use fuzzy numbers to express the strength and direction of influence between elements . This information is subsequently aggregated into a fuzzy total influence matrix and normalized for consistency and interpretation . The outcome provides quantitative measures of influence and visual representations, empowering decision makers to prioritize factors and comprehend the intricate interactions within the decision model. Therefore, F-DEMATEL is a potent tool for helping decision makers navigate complex decision landscapes and offers a means to consider and integrate the uncertainties inherent in real-world scenarios . The uncertainties inherent in communication, which are subject to influence by various external factors, make this a challenging endeavor. Applying fuzzy theory to DEMATEL helps mitigate subjectivity and ensures representative reliability . Various studies have demonstrated the applicability of the F-DEMATEL model in diverse contexts, such as supply chain management and health promotion [72]. Figure illustrates the F-DEMATEL process framework, a systematic approach to analyzing and modeling complex causal relationships within a system. The computational steps involved in F-DEMATEL are described below.

Step 1: Determination of the influencing factors in the system. A literature review was conducted to determine the factors affecting the outcome variables. Table below indicates the factors used in the study after the literature survey and the three M-Delphi rounds.

Step 2: Designing the fuzzy linguistic scale. The degrees of influence applied in the F-DEMATEL method typically consist of five levels: No Influence (N), Very Low Influence (VL), Low Influence (L), High Influence (H), and Very High Influence (VH). Participants used this semantic scale to rate causal relationships among factors within the system. The fuzzy linguistic scale in Table was used to collect feedback from the experts.

Step 3: Computing the initial direct relation fuzzy matrix. Every respondent's initial direct relation matrix $Z^{k}$ comprises ratings denoted by $Z_{ij}^{k}$. Because each rating is a triangular fuzzy number, the direct relation matrix can be decomposed into three submatrices, L, M, and U:

$$Z^{k}=\begin{bmatrix}0 & Z_{12}^{k} & \cdots & Z_{1n}^{k}\\ Z_{21}^{k} & 0 & \cdots & Z_{2n}^{k}\\ \vdots & \vdots & \ddots & \vdots\\ Z_{n1}^{k} & Z_{n2}^{k} & \cdots & 0\end{bmatrix},\qquad k = 1, 2, \ldots, p, \qquad (1)$$

where $Z_{ij}^{k}=(L_{ij}^{k}, M_{ij}^{k}, U_{ij}^{k})$. The combined average direct relation matrix for all respondents is obtained as

$$A=\frac{1}{p}\sum_{k=1}^{p}Z^{k},\qquad \text{where}\quad \sum_{k=1}^{p}Z^{k}=\Big(\sum_{k=1}^{p}L^{k},\ \sum_{k=1}^{p}M^{k},\ \sum_{k=1}^{p}U^{k}\Big). \qquad (2)$$
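To make Steps 2 and 3 concrete, the following minimal Python sketch converts linguistic ratings into triangular fuzzy numbers and averages them across experts, as in equation (2). The triangular values assigned to the five linguistic levels and the simulated ratings are illustrative assumptions rather than the scale and data reported in the paper's tables; only the factor labels follow the six psychological-distance dimensions examined in the study.

```python
import numpy as np

# Hypothetical triangular fuzzy numbers (L, M, U) for the five linguistic levels.
# The paper's own table defines the exact scale; these values are a common choice
# in the F-DEMATEL literature and are used here purely for illustration.
TFN = {
    "N":  (0.00, 0.00, 0.25),   # No influence
    "VL": (0.00, 0.25, 0.50),   # Very low influence
    "L":  (0.25, 0.50, 0.75),   # Low influence
    "H":  (0.50, 0.75, 1.00),   # High influence
    "VH": (0.75, 1.00, 1.00),   # Very high influence
}

FACTORS = ["temporal", "spatial", "social", "hypothetical", "technical", "emotional"]
n = len(FACTORS)

def direct_relation_matrix(ratings):
    """Convert one expert's n x n linguistic ratings into an (n, n, 3) fuzzy matrix Z^k."""
    Z = np.zeros((n, n, 3))
    for i in range(n):
        for j in range(n):
            if i != j:                      # the diagonal stays (0, 0, 0)
                Z[i, j] = TFN[ratings[i][j]]
    return Z

# Twenty simulated expert questionnaires stand in for the real responses.
rng = np.random.default_rng(0)
labels = list(TFN)
ratings_per_expert = [
    [[rng.choice(labels) for _ in range(n)] for _ in range(n)] for _ in range(20)
]
Z_list = [direct_relation_matrix(r) for r in ratings_per_expert]

# Equation (2): element-wise average of all respondents' fuzzy matrices.
A = np.mean(Z_list, axis=0)
print(A.shape)   # (6, 6, 3): lower, middle, upper value for every factor pair
```

The resulting array A corresponds to the average fuzzy direct-relation matrix that the normalization in Step 4 operates on.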
Step 4: Normalize the direct-relation fuzzy matrix. To normalize the direct relation matrix, the largest row sum of the upper values in the direct relation matrix is first obtained as

$$r^{k}=\max_{1\le i\le n}\Big(\sum_{j=1}^{n}U_{ij}^{k}\Big),\qquad k = 1, 2, \ldots, p. \qquad (3)$$

Thereafter, the normalized direct relation fuzzy matrix $X^{k}$ is obtained by dividing each element of the direct relation matrix by $r^{k}$:

$$X^{k}=\begin{bmatrix}X_{11}^{k} & X_{12}^{k} & \cdots & X_{1n}^{k}\\ X_{21}^{k} & X_{22}^{k} & \cdots & X_{2n}^{k}\\ \vdots & \vdots & \ddots & \vdots\\ X_{n1}^{k} & X_{n2}^{k} & \cdots & X_{nn}^{k}\end{bmatrix},\qquad k = 1, 2, \ldots, p, \qquad (4)$$

where $X_{ij}^{k}=(L_{ij}^{k}, M_{ij}^{k}, U_{ij}^{k})/r^{k}=(L_{ij}^{k}/r^{k},\ M_{ij}^{k}/r^{k},\ U_{ij}^{k}/r^{k})$.

Step 5: Obtaining the fuzzy total relation matrix. To compute the fuzzy total relation matrix, $\lim_{w\to\infty}X^{w}$ must first be obtained, where $X^{w}$ denotes the $w$-th power of the normalized triangular fuzzy matrix

$$X=\begin{bmatrix}0 & x_{12} & \cdots & x_{1n}\\ x_{21} & 0 & \cdots & x_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ x_{n1} & x_{n2} & \cdots & 0\end{bmatrix},\qquad x_{ij}=(L_{ij}, M_{ij}, U_{ij}).$$

The lower, middle, and upper values form three crisp submatrices with the same zero-diagonal structure, which can be written as $X_{L}$, $X_{M}$, and $X_{U}$. If $\lim_{w\to\infty}X^{w}=0$ and $\lim_{w\to\infty}(I+X+X^{2}+\cdots+X^{w})=(I-X)^{-1}$, where $0$ is the zero matrix and $I$ is the identity matrix, the total relation matrix can be expressed as $T=\lim_{w\to\infty}(X+X^{2}+\cdots+X^{w})=X(I-X)^{-1}$.
Because the matrix $X$ comprises the submatrices $X_{L}$, $X_{M}$, and $X_{U}$, the fuzzy total relation matrix for the three submatrices can be obtained as follows:

$$T^{L}=\lim_{w\to\infty}(X_{L}+X_{L}^{2}+\cdots+X_{L}^{w})=X_{L}(I-X_{L})^{-1}, \qquad (5)$$

$$T^{M}=\lim_{w\to\infty}(X_{M}+X_{M}^{2}+\cdots+X_{M}^{w})=X_{M}(I-X_{M})^{-1}, \qquad (6)$$

$$T^{U}=\lim_{w\to\infty}(X_{U}+X_{U}^{2}+\cdots+X_{U}^{w})=X_{U}(I-X_{U})^{-1}. \qquad (7)$$

Step 6: Obtaining the sums of rows and columns. The row sums and column sums of the total relation matrix are collected in the vectors $D_{i}$ and $R_{i}$. Prominence, the horizontal axis value ($D_{i}+R_{i}$), is obtained by adding the row and column sums for each factor at the three levels; relation, the vertical axis value ($D_{i}-R_{i}$), is obtained by subtracting the column sum from the row sum for each factor. The $D_{i}$ and $R_{i}$ values have three levels ($D_{i}^{L}, D_{i}^{M}, D_{i}^{U}$ and $R_{i}^{L}, R_{i}^{M}, R_{i}^{U}$):

$$D_{i}^{*}=\sum_{x=1}^{n}T_{ix}=[D_{i}^{L}, D_{i}^{M}, D_{i}^{U}], \qquad (8)$$

$$R_{i}^{*}=\sum_{y=1}^{n}T_{yi}=[R_{i}^{L}, R_{i}^{M}, R_{i}^{U}]. \qquad (9)$$

To obtain single (crisp) values from the triangular values, the mean of the three levels is taken:

$$D_{i}+R_{i}=\mathrm{Mean}(D_{i}^{*}+R_{i}^{*}), \qquad (10)$$

$$D_{i}-R_{i}=\mathrm{Mean}(D_{i}^{*}-R_{i}^{*}). \qquad (11)$$

The criteria are then classified into cause-and-effect groups: factors with positive $D_{i}-R_{i}$ values are categorized as causal factors, and those with negative $D_{i}-R_{i}$ values are categorized as effect factors. The causal model is obtained by graphing the values of $D_{i}+R_{i}$ and $D_{i}-R_{i}$.

Sampling. This study recruited experts in VR communication from the Tzu Chi Foundation, an international humanitarian organization founded in 1966 by Dharma Master Cheng Yen. The foundation, dedicated to charitable services, humanitarian values, and community well-being, operates under principles of compassion, relief, and respect for all life. Its activities include disaster relief, medical aid, environmental conservation, and educational initiatives. With a global network of volunteers, Tzu Chi provides aid regardless of ethnicity, nationality, or religion, aiming to alleviate suffering and promote sustainable living. To participate in the study, individuals needed over ten years of experience in VR communication and a managerial position. Twenty participants met these criteria. All data elements were present, and ratings were within the provided scale, so all collected samples were included in the final analysis.
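Continuing the sketch above, the snippet below walks through Steps 4–6 on a synthetic average fuzzy matrix (again an illustrative stand-in for the experts' aggregated ratings): it normalizes the three submatrices, computes the total relation matrices, and defuzzifies the row and column sums into the prominence (D+R) and relation (D−R) values used to separate cause and effect factors.

```python
import numpy as np

# Synthetic average fuzzy matrix of shape (n, n, 3); in the study this would come
# from the twenty experts' aggregated ratings.
rng = np.random.default_rng(1)
n = 6                                    # six psychological-distance factors
lower = rng.uniform(0.0, 0.5, (n, n))
middle = lower + rng.uniform(0.0, 0.3, (n, n))
upper = middle + rng.uniform(0.0, 0.2, (n, n))
for m in (lower, middle, upper):
    np.fill_diagonal(m, 0.0)

# Step 4: normalize every submatrix by the largest row sum of the upper values, r.
r = upper.sum(axis=1).max()              # equation (3)
X_L, X_M, X_U = lower / r, middle / r, upper / r

# Step 5: total relation matrix T = X (I - X)^{-1} for each submatrix (equations 5-7).
def total_relation(X):
    return X @ np.linalg.inv(np.eye(n) - X)

T_L, T_M, T_U = total_relation(X_L), total_relation(X_M), total_relation(X_U)

# Step 6: row sums D and column sums R at the three levels, then defuzzify by averaging.
D = np.stack([T.sum(axis=1) for T in (T_L, T_M, T_U)])   # equation (8)
R = np.stack([T.sum(axis=0) for T in (T_L, T_M, T_U)])   # equation (9)
prominence = (D + R).mean(axis=0)        # D_i + R_i, equation (10)
relation = (D - R).mean(axis=0)          # D_i - R_i, equation (11)

factors = ["temporal", "spatial", "social", "hypothetical", "technical", "emotional"]
for name, prom, rel in zip(factors, prominence, relation):
    group = "cause" if rel > 0 else "effect"
    print(f"{name:12s}  D+R = {prom:6.3f}   D-R = {rel:+6.3f}   ({group})")
```

A factor with a positive defuzzified D−R value lands in the cause group of the prominence–relation diagram, while a negative value places it in the effect group, mirroring the classification reported in the results below.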
F-DEMATEL results The detailed computational procedure of the F-DEMATEL study is shown in the supplementary file. Table indicates the causal effects established by the study. The results indicate that the factors have a similar degree of importance. Emotional distance was the most significant causal factor ($D_{i}-R_{i}$ = 0.9007), followed by spatial distance ($D_{i}-R_{i}$ = 0.2359) and social distance ($D_{i}-R_{i}$ = 0.0265). Among the effect factors, temporal distance has the highest $D_{i}-R_{i}$ value (0.7675), followed by technical distance ($D_{i}-R_{i}$ = -0.3587) and hypothetical distance ($D_{i}-R_{i}$ = -0.7675). Figure is the scatter plot of the causal relationship diagram of each factor. Causal relationship diagrams are essential for visually representing and analyzing interactions within a system, offering several key benefits: they clarify complex systems by illustrating how different factors influence each other, identify key drivers for targeted interventions, and provide insights for prioritizing actions based on causal impacts. Discussion of the F-DEMATEL results The results showed that emotional distance is a significant causal factor, aligning with the conclusions drawn by prior research regarding the critical role of emotional distance in VR communication . Messages designed to reduce emotional distance exhibit increased retention by recipients, consistent with findings from previous research emphasizing the internalization of emotionally resonant information . Moreover, the study revealed that maintaining low emotional distance contributes to reducing temporal, hypothetical, and technical distance. The results further indicate that addressing spatial distance is crucial for learning effectiveness. This finding concurs with prior research suggesting that individuals are more likely to retain stimuli perceived as likely to occur, emphasizing the importance of ensuring that the message remains connected to the individual's immediate environment .
The findings also show that spatial distance has causal effects on temporal, social, hypothetical, and technical distance, indicating that reducing spatial distance can enhance learning effectiveness by helping to lower these other distances. The findings also confirm that social distance is another influential causal factor in the model, indicating that messages fostering a sense of closeness between the individual and the subject of communication enhance learning. This observation aligns with earlier research underscoring the role of social distance in promoting information retention . In addition, the results indicate that reducing social distance enhances learning effectiveness by helping to lower temporal, spatial, hypothetical, and technical distance. Furthermore, the results underscore temporal distance as the most significant effect factor. When health communication successfully reduces emotional, spatial, and social distance, the audience perceives the issue as requiring urgent attention, enhancing learning effectiveness. Additionally, technical distance and hypothetical distance are identified as noteworthy effect factors. This shows that the technical barriers associated with VR system usage become more navigable when messages effectively minimize emotional, social, and spatial distance. Moreover, when VR health communication messages attend to emotional, spatial, and social distance, individuals are more inclined to believe that the communicated scenario could actually occur, reducing hypothetical distance. Study 2: Structural equation modeling study Measurement and sampling The measurement instruments for the constructs in this study were adapted from existing studies. Specifically, items for measuring temporal distance, spatial distance, and technical distance were sourced from Kim and Lee . Social distance was assessed using items adapted from Cui et al. . Items used to measure hypothetical distance were drawn from Blauza et al. , and emotional distance was measured using a scale adapted from Wu et al. . Immersion, flow, and presence were measured utilizing a scale adopted from Shin , while perceived learning effectiveness was assessed with items adapted from Kirk-Johnson et al. . The study collected data from individuals in Taiwan who had experience using VR devices for health communication. Back-translation was employed to translate the items into Mandarin Chinese, and a pretest involving 50 participants in ten rounds of five participants each was conducted to validate the translation. The questionnaire underwent modifications based on participant feedback until no further issues were encountered. Subsequently, a pilot study involving 120 participants was carried out to refine the specified model, with 95 of the initial 120 participants included in the final analysis, indicating a 79.17% effectiveness rate. The collected data were subjected to SmartPLS analysis and passed all validity and reliability tests, confirming the readiness of the items for use in the formal study. The formal survey, conducted among 1104 Taiwanese individuals with experience learning about health issues through VR devices, resulted in 775 responses deemed suitable for final analysis, reflecting a 70.20% validity rate. The demographic distribution of the respondents included 405 males, 656 in the 18–25 age range, 710 with a bachelor's degree, and 106 employed individuals. The demographic data are shown in Table .
Data analysis Common Method Bias (CMB) To avoid CMB, the measurement items were randomized, and the identities of the respondents were concealed . The study also used Harman's single-factor test with exploratory factor analysis. The variance explained by the first factor was 22.875%, which is less than the recommended threshold of 50% . Furthermore, a multicollinearity test based on the variance inflation factors (VIFs) was conducted in SmartPLS 3.0. The VIFs were below the 3.3 threshold . Thus, CMB was not an issue in this study. Measurement model SmartPLS 3.0 was used to perform the confirmatory factor analysis. Convergent validity was assessed by checking the factor loadings, squared multiple correlations (SMCs), and average variance extracted (AVE). The loadings, SMC, and AVE values exceeded the thresholds of 0.5, 0.2, and 0.5, respectively, confirming convergent validity. The Cronbach's alpha for all the constructs was greater than 0.7, confirming reliability. The correlation between every pair of constructs was lower than the square root of the AVE of either of those constructs. Furthermore, all the constructs had HTMT ratios lower than 0.9. These results (see Tables and ) indicate that discriminant validity was achieved. In addition, all the constructs had composite reliability values higher than 0.7, indicating that the constructs had internal consistency. SEM results A two-step analytical approach was utilized because psychological distance was conceptualized as a second-order formative construct based on prior research . While the six distances are conventionally treated as reflective indicators, this study classifies them as formative indicators. This classification is justified because these distances collectively define and construct the latent variable of psychological distance within the context of VR health communication . Each distance contributes uniquely to the construct, and variations in any of the distances can affect the overall perception of psychological distance. Treating these indicators as formative allows for a more nuanced representation of how the distances collectively influence the construct of psychological distance in this specific context. First, a preliminary factor analysis was conducted to examine the reliability and validity of the model. Thereafter, the latent scores of the subconstructs for psychological distance were used to create the formative psychological distance construct, which was used for hypothesis testing. The coefficients of emotional distance, hypothetical distance, social distance, spatial distance, technical distance, and temporal distance were all significant (0.690, 0.234, 0.186, 0.316, 0.150, and 0.247, respectively; p < 0.001). The R² values for immersion, flow, and perceived learning effectiveness exceeded 0.1 (0.134, 0.258, and 0.785, respectively); however, the R² value for presence was only 0.002. The Q² values for the outcome variables were also higher than the threshold of 0 (immersion = 0.114, flow = 0.127, presence = 0.002, perceived learning effectiveness = 0.612). In addition, the model had a standardized root mean squared residual (SRMR) of 0.038, which is lower than the recommended threshold of 0.08.
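As a pointer to how the reliability and validity criteria reported above (AVE above 0.5, composite reliability above 0.7, and the Fornell–Larcker comparison) are typically computed, here is a minimal, self-contained Python sketch. The loadings and the inter-construct correlation are invented for illustration and are not the study's estimates, which appear in its measurement tables.

```python
import numpy as np

# Illustrative standardized loadings for two constructs; the study's actual
# loadings, AVE, and composite reliability values are reported in its tables.
loadings = {
    "immersion": np.array([0.78, 0.81, 0.74, 0.69]),
    "presence":  np.array([0.83, 0.77, 0.80]),
}
latent_corr = 0.52    # assumed correlation between the two constructs

def ave(lam):
    """Average variance extracted: mean of the squared standardized loadings."""
    return float(np.mean(lam ** 2))

def composite_reliability(lam):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = lam.sum() ** 2
    return float(s / (s + np.sum(1.0 - lam ** 2)))

for name, lam in loadings.items():
    print(f"{name:10s} AVE = {ave(lam):.3f}  CR = {composite_reliability(lam):.3f}")

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlation with every other construct.
for name, lam in loadings.items():
    ok = np.sqrt(ave(lam)) > latent_corr
    print(f"{name:10s} sqrt(AVE) = {np.sqrt(ave(lam)):.3f}  > corr {latent_corr:.2f}? {ok}")
```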
SEM results

A two-step analytical approach was utilized because psychological distance was conceptualized as a second-order formative construct based on prior research. While the six distances are conventionally treated as reflective indicators, this study classifies them as formative indicators. This classification is justified because these distances collectively define and construct the latent variable of psychological distance within the context of VR health communication. Each distance contributes uniquely to the construct, and variations in any of the distances can affect the overall perception of psychological distance. Treating these indicators as formative allows for a more nuanced representation of how the distances collectively influence the construct of psychological distance in this specific context. First, a preliminary factor analysis was conducted to examine the reliability and validity of the model. Thereafter, the latent scores of the subconstructs for psychological distance were used to create the formative psychological distance construct, which was used for hypothesis testing. The coefficients of emotional distance, hypothetical distance, social distance, spatial distance, technical distance, and temporal distance were significant (0.690, 0.234, 0.186, 0.316, 0.150, and 0.247, respectively; p < 0.001). The R² values for immersion, flow, and perceived learning effectiveness were greater than 0.1 (0.134, 0.258, and 0.785, respectively); the R² value for presence, however, was only 0.002. The Q² values for the outcome variables were also higher than the threshold of 0 (immersion = 0.114, flow = 0.127, presence = 0.002, perceived learning effectiveness = 0.612). In addition, the model had a standardized root mean squared residual (SRMR) of 0.038, which is lower than the recommended threshold of 0.08. The results (Fig. , Tables and ) showed that psychological distance has a negative effect on immersion (β = -0.367, p < 0.001, CI = (-0.430, -0.388)), supporting hypothesis 1. The relationship between psychological distance and flow was also significant (β = -0.511, p < 0.001, CI = (-0.592, -0.525)), supporting hypothesis 2. However, the relationship between psychological distance and telepresence was nonsignificant (β = -0.053, p = 0.370; CI = (-0.140, -0.044)), refuting hypothesis 3. Immersion had a significant and positive effect on perceived learning effectiveness (β = 0.163, p < 0.001; CI = (0.110, 0.225)), and presence had a positive and significant effect on perceived learning effectiveness (β = 0.813, p < 0.001; CI = (0.754, 0.863)). However, flow did not significantly affect perceived learning effectiveness (β = 0.013, p = 0.429, CI = (-0.019, 0.048)). Thus, hypothesis 4a and hypothesis 4c were supported, but hypothesis 4b was not supported. The mediating effect of immersion was also significant (β = -0.060, p < 0.001, CI = (-0.088, -0.077)), providing support for hypothesis 5a. However, the mediating effects of flow (β = -0.007, p = 0.441; CI = (-0.009, 0.025)) and presence (β = -0.043, p = 0.369; CI = (-0.061, 0.113)) were not significant. Thus, hypotheses 5b and 5c were not supported.

The detailed computational procedure of the F-DEMATEL study is shown in the supplementary file. Table indicates the causal effects established by the study. The results indicate that the factors have a similar degree of importance. Emotional distance was the most significant causal factor (D_i - R_i = 0.9007), followed by spatial distance (D_i - R_i = 0.2359) and social distance (D_i - R_i = 0.0265). Among the effect factors, temporal distance has the highest D_i - R_i value (0.7675), followed by technical distance (D_i - R_i = -0.3587) and hypothetical distance (D_i - R_i = -0.7675). Figure is the scatter plot of the causal relationship diagram of each factor. Causal relationship diagrams are essential for visually representing and analyzing interactions within a system, offering several key benefits: they clarify complex systems by illustrating how different factors influence each other, identify key drivers for targeted interventions, and provide insights for prioritizing actions based on causal impacts. The results showed that emotional distance is a significant causal factor, aligning with the conclusions drawn by prior research regarding the critical role of emotional distance in VR communication. Messages designed to reduce emotional distance exhibit increased retention by recipients, consistent with findings from previous research emphasizing the internalization of emotionally resonant information. Moreover, the study revealed that maintaining low emotional distance contributes to achieving temporal, virtual, and technical distance. The results further indicate that addressing spatial distance is crucial for learning effectiveness. This finding concurs with prior research suggesting that individuals are more likely to retain stimuli perceived as likely to occur, emphasizing the importance of ensuring that the message remains connected to the individual's immediate environment.
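For readers unfamiliar with how the D_i - R_i cause–effect scores above are obtained, the sketch below shows the core crisp DEMATEL computation on a small illustrative matrix. The fuzzy aggregation and defuzzification steps of the full F-DEMATEL procedure (and the study's actual expert ratings) are omitted, and the example influence matrix is entirely hypothetical.

```python
# Minimal crisp DEMATEL sketch: from a direct-influence matrix to the
# prominence (D+R) and cause-effect (D-R) scores discussed above.
# The 6x6 matrix is a hypothetical illustration, not the study's expert data;
# the fuzzy (F-DEMATEL) aggregation step is omitted.
import numpy as np

factors = ["temporal", "spatial", "social", "hypothetical", "emotional", "technical"]

# Hypothetical averaged expert ratings (0 = no influence ... 4 = very high).
A = np.array([
    [0, 1, 1, 2, 1, 1],
    [3, 0, 2, 2, 1, 2],
    [2, 1, 0, 2, 1, 1],
    [1, 1, 1, 0, 1, 1],
    [3, 2, 2, 2, 0, 2],
    [1, 1, 1, 1, 1, 0],
], dtype=float)

# 1) Normalize the direct-influence matrix.
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
X = A / s

# 2) Total-relation matrix T = X (I - X)^(-1).
T = X @ np.linalg.inv(np.eye(len(A)) - X)

# 3) Row sums (D) = influence given; column sums (R) = influence received.
D, R = T.sum(axis=1), T.sum(axis=0)

for name, d, r in zip(factors, D, R):
    role = "cause" if d - r > 0 else "effect"
    print(f"{name:>12}: D+R = {d + r:.3f}, D-R = {d - r:+.3f} ({role})")

# D-R sums to zero across factors by construction.
assert abs((D - R).sum()) < 1e-9
```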
The findings of Study 2 align with our theoretical framework, suggesting that psychological distance has adverse effects on immersion and flow in VR communication, consistent with previous research emphasizing the need to minimize psychological distance in this context. Low psychological distance enhances the realism of virtual environments, immersing users more effectively. Conversely, high psychological distance impedes the attainment and maintenance of flow experiences, as disconnection from the virtual environment leads to disinterest and a lack of cognitive or behavioral engagement. This underscores the importance of designing VR experiences that reduce psychological distance to foster deeper immersion and sustained flow, as this can enhance user engagement and the overall effectiveness of VR communication. Interestingly, the study revealed a nonsignificant effect of psychological distance on telepresence, inconsistent with the findings of prior research, which suggested an effect of psychological distance on presence. This could be attributed to VR technology's ability to create a strong sense of presence and realism. The immersive nature of VR involves multiple sensory modalities and interactivity. This can make distant or hypothetical scenarios feel immediate and engaging, potentially mitigating the effects of psychological distance. Additionally, well-crafted narratives and contextual framing in VR experiences can make users feel connected to the content regardless of its perceived distance. As users become more familiar with VR technology, their ability to immerse themselves in virtual environments improves. This can further reduce the effect of psychological distance, as individuals become more engrossed in the experience regardless of their perceived distance from the virtual world. Consistent with earlier studies, this research affirms the positive effect of immersion on perceived learning effectiveness. Immersing users in lifelike scenarios enhances comprehension by minimizing external distractions and fostering user engagement and focus during learning. By creating a realistic and engaging environment, immersion helps individuals concentrate better on the content. This can lead to improved understanding and retention of information, ensuring effective learning outcomes. The study also establishes a positive impact of presence on perceived learning effectiveness, echoing previous findings that link presence to positive learning outcomes. When users feel a strong sense of "being there" in the virtual environment, their behavioral engagement facilitates effective learning. Nevertheless, the results did not confirm the expected positive effect of flow on perceived learning effectiveness, challenging prior research suggesting a beneficial role of flow in achieving learning outcomes in immersive environments. This finding could be due to the engaging nature of flow itself.
The intense focus and enjoyment associated with flow may lead users to become so absorbed in the experience that they fail to attend to the educational content. As such, instead of actively processing and retaining information, users might prioritize the entertaining aspects of the VR experience. Furthermore, VR's novelty and entertainment value can sometimes overshadow educational objectives. When users are highly engaged and enjoy the VR environment, they may focus more on the immersive experience rather than on learning. In addition, the study confirms the mediating role of immersion in the relationship between psychological distance and perceived learning effectiveness. Reduced psychological distance leads to effective learning outcomes by enhancing immersion, as individuals who perceive a low psychological distance become fully engaged in the virtual environment and pay undistracted attention to the content. This engagement and focused attention foster better comprehension and retention of information, further highlighting the critical importance of minimizing psychological distance to optimize learning effectiveness in VR environments. Conversely, the mediating effects of flow and telepresence were nonsignificant, highlighting the intricate nature of the VR learning process. Flow and presence may not always enhance learning, especially if the challenge posed by flow exceeds the individual's skill level or if the VR environment closely mirrors the real environment, potentially triggering suspicions of uncanniness among users. This suggests that while immersion plays a crucial role in enhancing perceived learning effectiveness, the impact of flow and telepresence can vary based on the user's experience and the design of the VR environment. This indicates the need for careful calibration of VR experiences to balance engagement and realism without overwhelming users.

Theoretical implications

This study contributes significantly to the literature on VR communication in several key aspects. First, this study advances our understanding of the psychological factors influencing the effectiveness of VR communication. While prior studies have focused primarily on the adoption of VR by users, the current research addresses the critical need to comprehend how VR devices can be effectively utilized for communication initiatives, particularly in health communication. The findings highlight the pivotal role of psychological distance, demonstrating that emotional, spatial, and social distances are causal factors leading to the effectiveness of VR communication. Moreover, addressing these factors ensures the achievement of effect factors, such as temporal, technical, and hypothetical distances, ultimately contributing to the effectiveness of VR-mediated learning. The structural equation modeling results underscore the importance of low psychological distance, with immersion and presence emerging as key factors in facilitating effective VR-mediated learning. Second, this study underscores the essential role of user engagement with immersive technologies in facilitating effective learning. Extending findings from other technological contexts, the research establishes that immersion and presence, as outcomes of low psychological distance, are instrumental in fostering effective learning in VR. The mediating effect of immersion further emphasizes the centrality of engagement in the VR learning context.
However, the findings of this study did not confirm the positive effects of flow on perceived learning effectiveness or the mediating effects of flow and presence. This finding suggests that not all dimensions of engagement necessarily lead to effective learning in VR, in contrast to studies in other technological contexts that suggest that all three dimensions have a positive influence on communication effectiveness. Furthermore, this study demonstrated the suitability of the F-DEMATEL method for communication research. Given that communication is shaped by many uncertainties, this method is adaptable and capable of providing insights. The method's strength lies in handling imprecision and subjectivity, which are common in communication. By combining the structured approach of DEMATEL with fuzzy logic, researchers can use this methodology to quantify the strength and direction of relationships, prioritize influencing factors, and provide a quantitative measure of their impact on communication outcomes. The application of F-DEMATEL in this study reveals the causal effects of psychological distance elements on various dimensions, demonstrating the role of psychological distance in shaping learning effectiveness and how its dimensions interact in shaping VR learning outcomes. Researchers exploring the complexities of communication dynamics and prioritizing influential factors may find F-DEMATEL a valuable tool.

Practical implications

The findings of this study hold several implications for VR communication practitioners. First, the results underscore the imperative of diminishing psychological distance in virtual communication to enhance effectiveness. Users' perception of the immersive environment as realistic and user-friendly is pivotal. Communicators should strive to employ scenarios that are not only realistic but also emotionally compelling, fostering relevance for individuals. This approach, together with user-friendly VR technologies, can effectively reduce psychological distance, optimizing VR-mediated learning. Another managerial insight is the important role of engagement, particularly immersion and presence, in shaping learning effectiveness. Immersion and presence positively affect perceived learning effectiveness. Managers must consider strategies that enhance immersion and presence to enhance the learning experience. Although this study demonstrated that reducing psychological distance is one way of achieving engagement, other means for boosting engagement, such as focused interactivity of VR systems and individuals' perceived control of their actions during the learning process, could be used to increase immersion and presence. Increasing immersion and presence through these mechanisms supports effective learning of health communication messages. The nonsignificant effect of flow on perceived learning effectiveness also has important managerial implications. Managers must ensure that the VR learning experience is not overly complex for audiences, as excessive complexity can hinder learning effectiveness. Complex learning processes may not match the skills and abilities of individual users. As a result, users may become frustrated with the learning process, as they may perceive the process to be beyond their ability to keep up with and the content beyond their ability to comprehend. Thus, VR learning practitioners must ensure that the learning process is as simple as possible so that users go through it smoothly with fewer learning hiccups.
This study has several limitations that future research could address. First, data were collected at a single time point, so the learning process of users was not monitored over time. Investigating the effects of psychological distance and engagement longitudinally could offer deeper insights into how individuals retain information from VR over extended periods. Future studies should explore how these factors influence VR learning over time. Second, the study relied on data from Taiwan, where VR usage dynamics may differ due to cultural factors. Future research could apply the study's model in different cultural contexts to better understand cross-cultural differences. Examining possible moderators of the relationships among the variables could also help provide a deeper understanding of the antecedents of health communication in VR. As extant research indicates that technology readiness and social influence are among the determinants of technology usage outcomes, future research could examine these and other similar constructs as boundary conditions that shape the VR learning process. In addition, given the dynamic nature of VR technologies, learning outcomes may also vary as the technologies improve over time. Changes in the technology media used for learning tend to shape users' internalization of information in the usage process. As such, future research could examine the dynamics of VR-learning experiences vis-à-vis continuous changes in VR technologies. Finally, examining moderating variables such as cognitive load, task relevance, and cultural factors could further elucidate the relationships among the constructs in this model.

Supplementary Material 1.
Bile acid synthesis, modulation, and dementia: A metabolomic, transcriptomic, and pharmacoepidemiologic study
Data used in our analyses were derived from the Baltimore Longitudinal Study of Aging (BLSA), the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Alzheimer's Disease Metabolomics Consortium (ADMC), the Religious Orders Study and Memory and Aging Project (ROSMAP), and the Clinical Practice Research Datalink (CPRD). BLSA, ADNI, and ROSMAP are long-running, longitudinal cohorts established and prospectively followed to help address broad questions related to aging and disease. CPRD includes anonymized electronic medical record (EMR) data gathered from general practitioners in the United Kingdom. Specific analyses addressing focused hypotheses described herein were not included in prospective analysis plans in the original study protocols for these cohorts. BLSA, ADNI, and ROSMAP participants included in our analyses were a convenience sample available to researchers; power calculations to determine study size were not performed. Details on the analytic plan, including when specific plans were developed, are included in the Statistical methods section below. This study is reported as per the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) guidelines.

Ethics approval

The BLSA study protocol has ongoing approval from the Institutional Review Board (IRB) of the National Institute of Environmental Health Science, National Institutes of Health ("Early Markers of Alzheimer's Disease (BLSA)", IRB No. 2009–074). Informed written consent was obtained at each visit from all participants. The ADNI study protocol was approved by the IRBs of all the participating institutions/study sites. Informed written consent was obtained from all participants at each site. All ADNI studies are conducted according to Good Clinical Practice guidelines, the Declaration of Helsinki, and United States of America 21 CFR Part 50 (Protection of Human Subjects) and Part 56 (IRBs). Additional details can be found at adni.loni.usc.edu . The ROSMAP study, including the parent study and substudies, was approved by the IRB of Rush University Medical Center. Informed written consent was obtained for all participants as well as an Anatomical Gift Act and a repository consent to share data and biospecimens. CPRD data are anonymized, general practitioners do not need to seek patient consent when sharing data with CPRD, and patients have the option of opting out. Additional details can be found at https://www.cprd.com/public . This study was approved by the CPRD Independent Scientific Advisory Committee (ISAC; Protocol # 18_173) and exempted from full IRB review by the National Institutes of Health Office of Human Subject Research.

Step 1: Test associations between cholesterol catabolism (i.e., BA synthesis) and neuroimaging markers of dementia

We performed targeted metabolomics assays measuring the principal cholesterol breakdown products (i.e., CA and CDCA) as well as their principal biosynthetic precursor (7α-hydroxycholesterol; 7α-OHC) in serum samples from participants in the Baltimore Longitudinal Study of Aging Neuroimaging (BLSA-NI) substudy who also underwent in vivo brain amyloid positron emission tomography (PET) and longitudinal structural magnetic resonance imaging (MRI). In order to validate index results from BLSA, we used the ADNI sample to test associations between CA and CDCA and neuroimaging outcomes (note: 7α-OHC was not assayed in the ADNI serum samples).
Study participants

The BLSA is a prospective cohort study that began in 1958 and is administered by the National Institute on Aging (NIA). BLSA-NI substudy imaging and visit schedules have varied over time and have been described in detail previously and included in . ADNI is an ongoing longitudinal study launched in 2003. The primary goal of ADNI has been to test whether longitudinal MRI, PET, and other biological markers can measure the progression of mild cognitive impairment (MCI) and early AD. ADMC performed serum BA assays in participants enrolled in ADNI, and data are publicly available. Study design details have been published previously and are available at www.adni-info.org .

Quantitative serum metabolomics assays

Blood serum samples were collected from BLSA participants at each visit; details on collection and processing have been published previously and included in . Blood serum samples were collected from ADNI participants at baseline; details on collection and processing have been published previously and are described in detail in the Biospecimen Results section (link: AD Metabolomics Consortium Bile Acids Methods (PDF)–Version: January 21, 2016) at http://adni.loni.usc.edu . Data used in this study are available in the Biospecimen Results section (link: AD Metabolomics Consortium Bile Acids–Post Processed Data [ADNI1, GO, 2]–Version: June 28, 2018).

In vivo brain amyloid imaging, WMLs, and brain volumes

BLSA-NI participants underwent 11C-Pittsburgh compound-B (PiB) PET scans to assess brain amyloid-β burden. A detailed description of acquisition and preprocessing procedures has been published previously. Individuals were characterized as amyloid +ve or amyloid −ve based on a mean cortical distribution volume ratio (cDVR) threshold of 1.066. Among amyloid +ve individuals, we examined mean cDVR, a weighted global average of brain amyloid deposition, and regional DVR in the precuneus, a region vulnerable to early amyloid deposition in AD. The total sample included 141 individuals (66 male; 75 female) of whom 36 were amyloid +ve (21 male; 15 female). BLSA brain MRI was performed on a 3T Philips Achieva scanner (Philips Healthcare, Netherlands) to quantify both global and regional brain volumes and WMLs. A detailed description is included in . We a priori defined a set of brain regions to examine brain atrophy over time based on prior work using BLSA-NI data suggesting that these regions were sensitive to age-related change. These regions included global brain volumes: total brain, ventricular cerebrospinal fluid (CSF), total gray matter, and white matter; lobar volumes: temporal, parietal, and occipital white matter and gray matter; and additional regions sensitive to early neurodegeneration: hippocampus, entorhinal cortex, amygdala, parahippocampal gyrus, fusiform gyrus, and precuneus. All BLSA MRI data, including brain volumes and WMLs, after onset of clinical symptoms among individuals who developed MCI or AD were excluded (21 visits). The total sample included 134 individuals (62 male; 72 female) with an average of 2.5 longitudinal MRI visits (male: 2.3; female: 2.6). ADNI brain MRI was used to quantify both global and regional brain volumes. A detailed description of acquisition and preprocessing is included in .
As our analyses in the ADNI sample were performed to confirm index results from BLSA, we restricted these analyses to the gray matter and subcortical brain regions described above, excluding all white matter regions based on the lack of associations in BLSA analyses. ADNI is enriched for individuals with MCI and AD at baseline, and all data across baseline diagnoses (control (CON), MCI, and AD) were included in analyses. Similar to our primary analyses in BLSA, MRI data from all CON individuals in ADNI, after onset of clinical symptoms among individuals who subsequently developed MCI or AD, were excluded (200 visits). The total sample included 1,666 individuals (918 male; 748 female) with an average of 5.2 longitudinal MRI visits (male: 5.3; female: 5.1).

Statistical methods

Step 1 of the analytic plan using BLSA data was developed in January 2018 prior to starting analyses in June 2019. The inclusion of ADNI BA and neuroimaging data to validate significant BLSA findings, as well as sensitivity analyses (i.e., adding statin as a covariate), occurred in June 2020 in response to reviewer recommendations. Similar to prior work in BLSA, metabolite concentrations above the upper limit of quantification (ULOQ) were excluded, concentrations below the limit of detection (LOD) were imputed as the threshold LOD/2, and resulting concentrations were natural log transformed. Outliers ± 3 × interquartile range (IQR) were excluded. For ADNI, data processing steps for serum BA concentrations have been described in detail previously. Metabolite concentrations below the LOD were imputed as LOD/2, and all samples were log2 transformed. Outliers ± 3 × IQR were excluded. To test for group differences between amyloid +ve and amyloid −ve individuals, we examined associations between serum concentrations of metabolites (i.e., 7α-OHC, CA, and CDCA) and brain amyloid deposition, in overall and sex-stratified linear regression models with metabolites as the dependent variable and the binary brain amyloid variable (i.e., amyloid +ve/amyloid −ve) as the main predictor. Covariates included mean-centered age and sex in the overall model and mean-centered age only in the sex-stratified model. We next tested the association between metabolite concentrations and mean cDVR (BLSA) and precuneus DVR (BLSA) in amyloid +ve individuals only. We used similar linear regression models for the continuous DVR predictors. The significance threshold was uncorrected and set at p = 0.05 to accommodate the limited sample size. To test the association between serum concentrations of metabolites (i.e., 7α-OHC, CA, and CDCA) and longitudinal changes in (1) regional brain volumes and (2) WMLs, we used total and sex-stratified linear mixed models with brain region of interest (ROI) volumes and WMLs as the dependent variable (i.e., outcome) and metabolite concentration as the predictor. We first performed analyses in BLSA and then validated results in ADNI. The statistical significance threshold for both BLSA and ADNI was set at a false discovery rate (FDR)-corrected p = 0.05. Additional details on model specifications are included in . In sensitivity analyses, we explored the effect on associations of adding statin drug use as a covariate in BLSA models.
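As a concrete illustration of the preprocessing and longitudinal modeling steps described above, the sketch below imputes sub-LOD metabolite values as LOD/2, log-transforms, drops ±3 × IQR outliers, fits a linear mixed model of regional volume on metabolite concentration and its interaction with time, and applies FDR correction across regions. The data frame, column names, covariate set, and LOD/ULOQ values are hypothetical placeholders, not the BLSA or ADNI analysis code.

```python
# Hedged sketch of the Step 1 metabolite preprocessing and longitudinal
# mixed-model analysis described above. All data, column names, and the
# covariate set are hypothetical; the actual BLSA/ADNI models differ in detail.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)

# --- metabolite preprocessing -------------------------------------------------
LOD, ULOQ = 0.05, 50.0
raw = pd.Series(rng.lognormal(mean=0.0, sigma=1.0, size=300))
conc = raw[raw <= ULOQ].copy()                 # drop values above the ULOQ
conc[conc < LOD] = LOD / 2                     # impute sub-LOD values as LOD/2
log_conc = np.log(conc)                        # natural-log transform
q1, q3 = log_conc.quantile([0.25, 0.75])
iqr = q3 - q1
log_conc = log_conc[log_conc.between(q1 - 3 * iqr, q3 + 3 * iqr)]  # +/- 3 x IQR

# --- longitudinal mixed models, one per brain region --------------------------
n_subj, n_visits = 60, 3
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "time": np.tile(np.arange(n_visits), n_subj),             # years from baseline
    "conc": np.repeat(rng.normal(0, 1, n_subj), n_visits),     # baseline metabolite
    "age": np.repeat(rng.normal(75, 6, n_subj), n_visits),
    "sex": np.repeat(rng.integers(0, 2, n_subj), n_visits),
})
regions = ["hippocampus", "entorhinal", "precuneus"]
for r in regions:
    df[r] = 5.0 - 0.05 * df["time"] + 0.02 * df["conc"] * df["time"] \
            + rng.normal(0, 0.2, len(df))

pvals = []
for r in regions:
    # Random intercept per subject; the metabolite-by-time interaction tests
    # whether baseline concentration is associated with the rate of change.
    m = smf.mixedlm(f"{r} ~ conc * time + age + sex", df, groups=df["subject"]).fit()
    pvals.append(m.pvalues["conc:time"])

reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for r, p, sig in zip(regions, p_fdr, reject):
    print(f"{r}: FDR-corrected p = {p:.3f}, significant: {sig}")
```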
Step 2: Test whether pharmacological modulation of BAs alters dementia risk in a large, real-world clinical dataset

In order to extend findings from our Step 1 analysis relating BA levels with brain amyloid accumulation, rates of brain atrophy, and progression of WMLs, we next tested whether pharmacological modulation of de novo BA synthesis influences dementia risk. As BAS are lipid-modifying treatments that are known to decrease the circulating pool of BAs and promote the breakdown of cholesterol, we hypothesized that exposure to these drugs would alter the risk of dementia. We therefore tested associations between exposure to BAS and dementia risk using data from the UK's CPRD, an anonymized electronic health record (EHR) covering more than 4 million active registrants from UK general practice clinics. Our results from Step 1 suggested a plausible sex difference in the effect of BAS on dementia risk, a hypothesis that we tested using the CPRD.

Data source and study population

The CPRD is a primary care database covering >4 million active registrants from >650 general practice clinics and is representative of the broader UK population in terms of age and sex. From the August 2018 CPRD data release, we identified patients ≥18 years old who had a first prescription record (i.e., new users) for BAS (colestipol, colesevelam, and cholestyramine) or non-statin lipid-modifying therapies (LMT; fibrate, cholesterol absorption inhibitor, nicotinic acid derivative, and probucol) between January 1, 1995 and August 1, 2018. BAS are often used as a second-line treatment independently or in combination with statins, and therefore, we selected non-statin LMT users as an active comparator group. In both groups (BAS or non-statin LMTs), we allowed for prior statin use in combination with either BAS or LMTs. Individuals who only had a prescription record of statin use were excluded from this study. The index date was defined as the date of the first BAS or LMT prescription. For each BAS user, we selected up to five LMT users matched on sex, year of birth (±5 years), region, year of clinic registration (±2 years), and year of first prescription (±2 years). Analysis was restricted to those with at least 12 months of clinical registration prior to the index date (to allow for covariate evaluation). We restricted BAS users to those aged ≥50 years and to those with two or more BAS/LMT prescriptions. The final analysis included 3,208 (1,083 male; 2,125 female) new BAS users and 23,483 (8,977 male; 14,506 female) new LMT users. The outcomes of interest were all-cause dementia and its subtypes: AD, VaD, and other dementia not otherwise specified (NOS). We used the last reported dementia diagnosis to identify the disease subtype (read codes are available upon request). The significance threshold was set at p = 0.05 considering that each outcome of interest was a priori specified.

Statistical methods

Step 2 of the analytic plan using CPRD data was developed in January 2018 prior to starting data analyses in June 2019. We added a comparison of patient characteristics across outcome categories (i.e., dementia subtypes) based on reviewer recommendations. We compared patient characteristics and their comorbidity profiles across dementia subtypes (AD, VaD, and NOS) as well as drug use (BAS and LMT) using the chi-squared test for categorical variables and Wilcoxon rank-sum tests for continuous variables. For multivariable analyses, we used Cox proportional hazard models to calculate hazard ratios (HRs) and 95% confidence intervals (CIs) comparing dementia risk (all-cause and subtypes) in BAS versus LMT users in the overall and sex-stratified samples. Our sex-specific analyses were a priori specified and based on findings from Step 1. We also tested the dose–effect relationship between dementia risk and drugs of interest using the number of prescriptions. Models were adjusted for factors that were significantly different between BAS and LMT groups to account for potential confounding by indication (since patient comorbidity profiles can lead to a BAS versus LMT prescription decision). Models were also adjusted for statin use during follow-up (until one year prior to exit date) using a time-varying covariate to account for its impact on dementia (26% versus 80% of BAS and LMT users, respectively, were prescribed statins in the 12 months before the index date and the majority (63%) continued its use for all or part of the follow-up). See for details on model specifications.
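The sketch below illustrates, schematically, the type of Cox proportional hazards comparison described above, using the lifelines library on a hypothetical analysis-ready data frame (one row per patient, with follow-up time, a dementia indicator, exposure group, and baseline covariates). The cohort construction, matching, dose–effect analysis, and time-varying statin adjustment of the actual CPRD analysis are not reproduced here.

```python
# Hedged sketch of a BAS-vs-LMT Cox proportional hazards comparison.
# The data frame is synthetic and schematic; it is not the CPRD analysis
# (matching and the time-varying statin covariate are omitted).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 5000

df = pd.DataFrame({
    "bas_user": rng.integers(0, 2, n),        # 1 = bile acid sequestrant, 0 = other LMT
    "age_at_index": rng.normal(68, 8, n),
    "sex_male": rng.integers(0, 2, n),
    "baseline_statin": rng.integers(0, 2, n),
})
# Synthetic survival times with a modest covariate effect, administratively
# censored at 15 years of follow-up.
linpred = 0.03 * (df["age_at_index"] - 68) - 0.1 * df["bas_user"]
time_to_event = rng.exponential(scale=20 * np.exp(-linpred))
df["followup_years"] = np.minimum(time_to_event, 15)
df["dementia"] = (time_to_event <= 15).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="dementia")

# Hazard ratio and 95% CI for BAS exposure, adjusted for the other covariates.
hr = np.exp(cph.params_["bas_user"])
ci = np.exp(cph.confidence_intervals_.loc["bas_user"])
print(f"Adjusted HR for BAS vs LMT: {hr:.2f} (95% CI {ci.iloc[0]:.2f}-{ci.iloc[1]:.2f})")

# Sex-stratified estimates, mirroring the a priori sex-specific analyses.
for sex, sub in df.groupby("sex_male"):
    m = CoxPHFitter().fit(sub.drop(columns="sex_male"),
                          duration_col="followup_years", event_col="dementia")
    print("male" if sex else "female", round(np.exp(m.params_["bas_user"]), 2))
```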
Step 3: Test plausible molecular mechanisms relating BA signaling in the brain to dementia pathogenesis using targeted metabolomics and transcriptomics

Given our findings that peripheral levels of the primary BAs, CA, and CDCA are associated with neuroimaging markers of dementia (Step 1) and that their pharmacological manipulation influences dementia risk (Step 2), we hypothesized that alterations in brain BA-mediated signaling may be a plausible biological mechanism underlying these findings. We first tested whether concentrations of CA and CDCA were detectable in the brain and whether they were altered in AD in participants from the BLSA autopsy program. We then tested whether gene expression of BA receptors was altered in AD using single-cell RNA sequencing (scRNA-Seq) data from the ROSMAP autopsy program.

Study participants

The autopsy program of the BLSA was initiated in 1986 and has been described previously. See for additional details. Tissue samples from AD (n = 16) and CON (n = 13) from the inferior temporal gyrus (ITG) and middle frontal gyrus (MFG), regions representing areas of early neurofibrillary (i.e., tau) and neuritic plaque (i.e., amyloid) accumulation, respectively, as well as the cerebellum (CB) were included in these analyses. scRNA-Seq gene expression data from ROSMAP were downloaded from Synapse ( https://www.synapse.org/#!Synapse:syn18485175 ) under the doi 10.7303/syn18485175; code used to run analyses presented in Mathys and colleagues was requested from coauthors. Data came from postmortem participants in ROSMAP including 46 individuals: 32 individuals (18 male and 14 female) in the AD category and 14 individuals (5 male and 9 female) in the CON category. The AD category included individuals with a clinical diagnosis of AD, including individuals with AD and no other condition contributing to cognitive impairment and AD and another condition contributing to cognitive impairment, as well as individuals with a clinical diagnosis of MCI and no other condition contributing to cognitive impairment. The CON category included individuals with a clinical diagnosis of no cognitive impairment. Tissue was profiled from the prefrontal cortex (Brodmann area 10) across eight major cell types in the aged dorsolateral prefrontal cortex including inhibitory neurons, excitatory neurons, astrocytes, oligodendrocytes, microglia, oligodendrocyte progenitor cells, endothelial cells, and pericytes.
Additional details are provided in the index paper. We identified BA receptor genes (including receptors involved in BA homeostasis) using a literature search and include the full list in . There were 21 BA receptor genes that had available data in the ROSMAP dataset: Nuclear Receptor Subfamily 1 Group I Member 3 (NR1I3); Retinoid X Receptor Gamma (RXRG); Nuclear Receptor Subfamily 5 Group A Member 2 (NR5A2); Cholinergic Receptor Muscarinic 3 (CHRM3); G Protein-Coupled Bile Acid Receptor 1 (GPBAR1); Peroxisome Proliferator Activated Receptor Gamma (PPARG); Nuclear Receptor Subfamily 1 Group I Member 2 (NR1I2); Kinase Insert Domain Receptor (KDR); Nuclear Receptor Subfamily 3 Group C Member 1 (NR3C1); Retinoid X Receptor Beta (RXRB); Peroxisome Proliferator Activated Receptor Delta (PPARD); Cholinergic Receptor Muscarinic 2 (CHRM2); Retinoid X Receptor Alpha (RXRA); Nuclear Receptor Subfamily 1 Group H Member 3 (NR1H3); Vitamin D Receptor (VDR); Nuclear Receptor Subfamily 1 Group H Member 4 (NR1H4); Retinoic Acid Receptor Alpha (RARA); Hepatocyte Nuclear Factor 4 Alpha (HNF4A); Nuclear Receptor Subfamily 1 Group H Member 2 (NR1H2); Formyl Peptide Receptor 1 (FPR1); Peroxisome Proliferator Activated Receptor Alpha (PPARA).

Quantitative brain metabolomics assays

Quantitative metabolomics assays were performed on brain tissue samples to measure concentrations of the primary BAs, including CA and CDCA, using the Biocrates Bile Acids kit (Biocrates Life Sciences AG, Austria). Details on both assay kits, as well as calibration steps, have been published previously. Additional details regarding the use of internal standards are included in .

Statistical methods

Step 3 of the analytic plan using BLSA data was developed in January 2020 in order to address a plausible molecular mechanism explaining findings from Step 1 and Step 2. The inclusion of scRNA-Seq data from ROSMAP occurred in June 2020 in response to reviewer recommendations to use non-array, non-bulk tissue-based gene expression data. In order to assess whether primary BAs were present in the brain, we visualized CA and CDCA concentrations in AD and CON samples in the ITG, MFG, and CB using dot plots. Concentrations above and below LOD were indicated. We used tobit regression models to determine whether mean metabolite concentrations were significantly different between AD and CON samples. We set the lower limit as the metabolite-specific LOD threshold and included covariates age and sex (mean centered). In brain regions where metabolite concentrations were all below LOD (i.e., CB), we used chi-squared tests to determine whether the percentage of samples below LOD was significantly different between AD and CON samples. Due to a small number of individuals with BA metabolite values above LOD, we were not able to sex-stratify these analyses. Additionally, the statistical significance threshold was set at p = 0.05 to accommodate the limited sample size. For gene expression data, we scaled each sample to have the same total read count. To test differences between AD and CON, we used the Wilcoxon rank-sum test in the total and sex-stratified samples. Similar to the index paper, each single-cell-specific sample from a participant was treated as an independent sample. We summarized age- and sex-corrected fold changes (total sample) as well as sex-specific fold changes indicating whether genes were differentially expressed in AD versus CON samples. We additionally visualized results for significant associations using a heatmap.
The significance threshold was set at an FDR-corrected p = 0.05.
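Because standard linear regression is biased when many observations sit at the detection limit, the tobit models described above maximize a censored-Gaussian likelihood. The sketch below is a minimal left-censored (tobit type I) regression written directly against that likelihood with scipy; the brain BA data, LOD value, and covariates are hypothetical placeholders rather than the study's actual model.

```python
# Minimal left-censored (tobit type I) regression sketch, mirroring the
# approach described above for brain bile acid concentrations with values
# below the limit of detection (LOD). Data, LOD, and covariates are synthetic.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
n, LOD = 29, 0.10                      # e.g., 16 AD + 13 control tissue samples

# Hypothetical design: intercept, AD indicator, mean-centered age, sex.
X = np.column_stack([
    np.ones(n),
    rng.integers(0, 2, n),             # diagnosis (1 = AD, 0 = CON)
    rng.normal(0, 8, n),               # age, mean-centered
    rng.integers(0, 2, n),             # sex
])
latent = X @ np.array([0.12, 0.08, 0.001, -0.02]) + rng.normal(0, 0.08, n)
y = np.maximum(latent, LOD)            # observed concentrations, censored at LOD
censored = latent <= LOD

def neg_loglik(params):
    """Negative log-likelihood of the left-censored Gaussian model."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)          # keep sigma positive
    mu = X @ beta
    ll_obs = stats.norm.logpdf(y[~censored], loc=mu[~censored], scale=sigma)
    ll_cens = stats.norm.logcdf((LOD - mu[censored]) / sigma)
    return -(ll_obs.sum() + ll_cens.sum())

start = np.zeros(X.shape[1] + 1)
fit = optimize.minimize(neg_loglik, start, method="BFGS")
beta_hat = fit.x[:-1]
print("Estimated AD vs CON difference in mean concentration:", round(beta_hat[1], 3))
```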
The BLSA study protocol has ongoing approval from the Institutional Review Board (IRB) of the National Institute of Environmental Health Science, National Institutes of Health (“Early Markers of Alzheimer’s Disease (BLSA)”, IRB No. 2009–074). Informed written consent was obtained at each visit from all participants. The ADNI study protocol was approved by the IRBs of all the participating institutions/study sites . Informed written consent was obtained from all participants at each site. All ADNI studies are conducted according to Good Clinical Practice guidelines, the Declaration of Helsinki, and United States of America 21 CFR Part 50 (Protection of Human Subjects) and Part 56 (IRBs). Additional details can be found at adni.loni.usc.edu . The ROSMAP study, including the parent study and substudies, was approved by the IRB of Rush University Medical Center. Informed written consent was obtained for all participants as well as an Anatomical Gift Act and a repository consent to share data and biospecimens. CPRD data are anonymized, general practitioners do not need to seek patient consent when sharing data with CPRD, and patients have the option of opting out. Additional details can be found at https://www.cprd.com/public . This study was approved by the CPRD Independent Scientific Advisory Committee (ISAC; Protocol # Protocol 18_173) and exempted from full IRB review by the National Institutes of Health Office of Human Subject Research.
We performed targeted metabolomics assays measuring the principal cholesterol breakdown products (i.e., CA and CDCA) as well as their principal biosynthetic precursor, (7α-hydroxycholesterol; 7α-OHC) in serum samples from participants in the Baltimore Longitudinal Study of Aging Neuroimaging (BLSA-NI) substudy who also underwent in vivo brain amyloid positron emission tomography (PET) and longitudinal structural magnetic resonance imaging (MRI). In order to validate index results from BLSA, we used the ADNI sample to test associations between CA and CDCA and neuroimaging outcomes (note: 7α-OHC was not assayed in the ADNI serum samples).
The BLSA is a prospective cohort study that began in 1958 and is administered by the National Institute on Aging (NIA) . BLSA-NI substudy imaging and visit schedules have varied over time and have been described in detail previously and included in . ADNI is an ongoing longitudinal study launched in 2003. The primary goal of ADNI has been to test whether longitudinal MRI, PET, and other biological markers can measure the progression of mild cognitive impairment (MCI) and early AD. ADMC performed serum BA assays in participants enrolled in ADNI, and data are publicly available. Study design details have been published previously and are available at www.adni-info.org .
Blood serum samples were collected from BLSA participants at each visit; details on collection and processing have been published previously and included in . Blood serum samples were collected from ADNI participants at baseline; details on collection and processing have been published previously and are described in detail in the Biospecimen Results section (link: AD Metabolomics Consortium Bile Acids Methods (PDF)–Version: January 21, 2016) at http://adni.loni.usc.edu . Data used in this study are available in the Biospecimen Results section (link: AD Metabolomics Consortium Bile Acids–Post Processed Data [ADNI1, GO, 2]–Version: June 28, 2018).
BLSA-NI participants underwent 11 C-Pittsburgh compound-B (PiB) PET scans to assess brain amyloid-β burden. A detailed description of acquisition and preprocessing procedures has been published previously . Individuals were characterized as amyloid +ve or amyloid −ve based on a mean cortical distribution volume ratio (cDVR) threshold of 1.066 . Among amyloid +ve individuals, we examined mean cDVR, a weighted global average of brain amyloid deposition, and regional DVR in the precuneus, a region vulnerable to early amyloid deposition in AD . The total sample included 141 individuals (66 male; 75 female) of whom 36 were amyloid +ve (21 male; 15 female). BLSA brain MRI was performed on a 3T Philips Achieva scanner (Philips Healthcare, Netherlands) to quantify both global and regional brain volumes and WMLs. A detailed description is included in . We a priori defined a set of brain regions to examine brain atrophy over time based on prior work using BLSA-NI data suggesting that these regions were sensitive to age-related change . These regions included global brain volumes: total brain, ventricular cerebrospinal fluid (CSF), total gray matter, and white matter; lobar volumes: temporal, parietal, and occipital white matter and gray matter; and additional regions sensitive to early neurodegeneration: hippocampus, entorhinal cortex, amygdala, parahippocampal gyrus, fusiform gyrus, and precuneus. All BLSA MRI data, including brain volumes and WMLs, after onset of clinical symptoms among individuals who developed MCI or AD were excluded (21 visits). The total sample included 134 individuals (62 male; 72 female) with an average of 2.5 longitudinal MRI visits (male: 2.3; female: 2.6). ADNI brain MRI was used to quantify both global and regional brain volumes. A detailed description of acquisition and preprocessing is included in . As our analyses in the ADNI sample were performed to confirm index results from BLSA, we restricted these analyses to gray matter and subcortical brain regions described above, excluding all white matter regions based on the lack of associations in BLSA analyses. ADNI is enriched for individuals with MCI and AD at baseline, and all data across baseline diagnoses (control (CON), MCI, and AD) were included in analyses. Similar to our primary analyses in BLSA, MRI data from all CON individuals in ADNI, after onset of clinical symptoms among individuals who subsequently developed MCI or AD, were excluded (200 visits). The total sample included 1,666 individuals (918 male; 748 female) with an average of 5.2 longitudinal MRI visits (male: 5.3; female: 5.1).
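To give a concrete sense of how the imaging-derived variables described above might be assembled for analysis, the brief sketch below dichotomizes amyloid status at the published mean cortical DVR cutoff of 1.066 and drops MRI visits acquired after symptom onset. The table, its column names, and the handling of the cutoff boundary are assumptions for illustration only; they are not taken from the study.

```python
import pandas as pd

visits = pd.read_csv("blsa_ni_visits.csv")   # hypothetical per-visit table

# Dichotomize amyloid status at the published mean cortical DVR threshold
# (whether the boundary value itself counts as +ve is assumed here).
visits["amyloid_pos"] = visits["cdvr"] >= 1.066

# Exclude MRI visits acquired after clinical symptom onset in MCI/AD converters
onset = visits["symptom_onset_age"].fillna(float("inf"))
pre_onset = visits[visits["age_at_visit"] < onset]

# Participant counts by sex and amyloid status (cf. 141 participants, 36 amyloid +ve);
# the first visit per participant is used here purely for counting.
print(pre_onset.drop_duplicates("participant_id")
               .groupby(["sex", "amyloid_pos"])["participant_id"].count())
```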
Step 1 of the analytic plan using BLSA data was developed in January 2018 prior to starting analyses in June 2019. The inclusion of ADNI BA and neuroimaging data to validate significant BLSA findings and sensitivity analyses (i.e., adding statin as a covariate) were performed in June 2020 in response to reviewer recommendations. Similar to prior work in BLSA , metabolite concentrations above the upper limit of quantification (ULOQ) were excluded, concentrations below the limit of detection (LOD) were imputed as the threshold LOD/2, and resulting concentrations were natural log transformed. Outliers ± 3 × interquartile range (IQR) were excluded. For ADNI, data processing steps for serum BA concentrations have been described in detail previously . Metabolite concentrations below the LOD were imputed as LOD/2, and all samples were log 2 transformed. Outliers ± 3 × IQR were excluded. To test for group differences between amyloid +ve and amyloid −ve individuals, we examined associations between serum concentrations of metabolites (i.e., 7α-OHC, CA, and CDCA) and brain amyloid deposition in overall and sex-stratified linear regression models with metabolites as the dependent variable and the binary brain amyloid variable (i.e., amyloid +ve/amyloid −ve) as the main predictor. Covariates included mean-centered age and sex in the overall model and mean-centered age only in the sex-stratified model. We next tested the association between metabolite concentrations and mean cDVR (BLSA) and precuneus DVR (BLSA) in amyloid +ve individuals only. We used similar linear regression models for the continuous DVR predictors. The significance threshold was uncorrected and set at p = 0.05 to accommodate the limited sample size. To test the association between serum concentrations of metabolites (i.e., 7α-OHC, CA, and CDCA) and longitudinal changes in (1) regional brain volumes and (2) WMLs, we used total and sex-stratified linear mixed models with brain region of interest (ROI) volumes and WMLs as the dependent variables (i.e., outcomes) and metabolite concentration as the predictor. We first performed analyses in BLSA and then validated results in ADNI. The statistical significance threshold for both BLSA and ADNI was set at a false discovery rate (FDR)-corrected p = 0.05. Additional details on model specifications are included in . In sensitivity analyses, we explored the effect on associations of adding statin drug use as a covariate in BLSA models.
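As a rough illustration of this preprocessing and modeling pipeline, the sketch below strings together the LOD/ULOQ handling, log transformation, outlier exclusion, a per-ROI linear mixed model, and Benjamini-Hochberg FDR correction. All column names, the LOD/ULOQ values, and the interpretation of the ±3 × IQR rule (here, beyond 3 IQRs from the quartiles) are assumptions made for the example, not values or specifications taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def preprocess_metabolite(conc, lod, uloq):
    """LOD/ULOQ handling and natural log transform for a serum metabolite."""
    conc = conc.where(conc <= uloq)              # exclude concentrations above ULOQ
    conc = conc.mask(conc < lod, lod / 2.0)      # impute concentrations below LOD as LOD/2
    logc = np.log(conc)                          # natural log transform
    q1, q3 = logc.quantile([0.25, 0.75])
    iqr = q3 - q1
    return logc.where((logc >= q1 - 3 * iqr) & (logc <= q3 + 3 * iqr))  # drop extreme outliers

df = pd.read_csv("blsa_long.csv")                # hypothetical long-format table, one row per MRI visit
df["log_cdca"] = preprocess_metabolite(df["cdca"], lod=0.01, uloq=50.0)  # placeholder limits

# Linear mixed model per ROI: the metabolite-by-time interaction captures the
# association between metabolite level and the rate of volume change.
pvals = {}
for roi in ["hippocampus", "precuneus", "parietal_gm", "total_gm"]:
    d = df.dropna(subset=[roi, "log_cdca"])
    m = smf.mixedlm(f"{roi} ~ interval * log_cdca + baseline_age_c + sex",
                    data=d, groups=d["participant_id"],
                    re_formula="~interval").fit()
    pvals[roi] = m.pvalues["interval:log_cdca"]

# Benjamini-Hochberg FDR correction across the set of ROIs
_, p_fdr, _, _ = multipletests(list(pvals.values()), alpha=0.05, method="fdr_bh")
print(dict(zip(pvals, p_fdr)))
```

The random-slope specification (`re_formula="~interval"`) is one common choice for longitudinal volume trajectories; the models actually fitted are defined in the paper's supporting information rather than in this sketch.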
In order to extend findings from our Step 1 analysis relating BA levels with brain amyloid accumulation, rates of brain atrophy, and progression of WMLs, we next tested whether pharmacological modulation of de novo BA synthesis influences dementia risk. As BAS are lipid-modifying treatments that are known to decrease the circulating pool of BAs and promote the breakdown of cholesterol, we hypothesized that exposure to these drugs would alter the risk of dementia. We therefore tested associations between exposure to BAS and dementia risk using data from the UK’s CPRD, an anonymized electronic health record (EHR) covering more than 4 million active registrants from the UK general practice clinics. Our results from Step 1 suggested a plausible sex difference in the effect of BAS on dementia risk, a hypothesis that we tested using the CPRD.
The CPRD is a primary care database covering more than four million active registrants from more than 650 general practice clinics and is representative of the broader UK population in terms of age and sex . From the August 2018 CPRD data release, we identified patients ≥18 years old who had a first prescription record (i.e., new users) for BAS (colestipol, colesevelam, and cholestyramine) or non-statin lipid-modifying therapies (LMT; fibrate, cholesterol absorption inhibitor, nicotinic acid derivative, and probucol) between January 1, 1995 and August 1, 2018. BAS are often used as a second-line treatment independently or in combination with statins, and therefore, we selected non-statin LMT users as an active comparator group. In both groups (BAS or non-statin LMTs), we allowed for prior statin use in combination with either BAS or LMTs. Individuals who only had a prescription record of statin use were excluded from this study. The index date was defined as the date of the first BAS or LMT prescription. For each BAS user, we selected up to five LMT users matched on sex, year of birth (±5 years), region, year of clinic registration (±2 years), and year of first prescription (±2 years). Analysis was restricted to those with at least 12 months of clinical registration prior to the index date (to allow for covariate evaluation). We restricted BAS users to those aged ≥50 years and to those with two or more BAS/LMT prescriptions. The final analysis included 3,208 (1,083 male; 2,125 female) new BAS users and 23,483 (8,977 male; 14,506 female) new LMT users . The outcomes of interest were all-cause dementia and its subtypes: AD, VaD, and other dementia not otherwise specified (NOS). We used the last reported dementia diagnosis to identify the disease subtype (read codes are available upon request). The significance threshold was set at p = 0.05, considering that each outcome of interest was a priori specified.
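For readers who want a concrete picture of the matching step, the following is a minimal, greedy sketch of selecting up to five LMT users per BAS user on the criteria listed above. The column names (`patid`, `reg_year`, `first_rx_year`, and so on) are illustrative rather than actual CPRD field names, and the authors' matching procedure may differ in details such as tie-breaking or whether controls could be reused.

```python
import pandas as pd

def match_lmt_controls(bas, lmt, k=5):
    """Greedy, without-replacement sketch of 1:5 matching of LMT users to BAS users.
    bas / lmt: one row per new user with columns patid, sex, region,
    birth_year, reg_year (year of clinic registration), first_rx_year.
    These column names are placeholders, not CPRD field names."""
    available = lmt.copy()
    matched = []
    for case in bas.itertuples():
        pool = available[
            (available["sex"] == case.sex)
            & (available["region"] == case.region)
            & ((available["birth_year"] - case.birth_year).abs() <= 5)
            & ((available["reg_year"] - case.reg_year).abs() <= 2)
            & ((available["first_rx_year"] - case.first_rx_year).abs() <= 2)
        ]
        chosen = pool.head(k)                      # up to 5 matched LMT users
        matched.append({"bas_patid": case.patid,
                        "lmt_patids": chosen["patid"].tolist()})
        available = available.drop(chosen.index)   # match without replacement
    return pd.DataFrame(matched)
```

Dropping used controls, as done here, prevents the same LMT user from serving as a comparator for multiple BAS users; whether the study allowed reuse is not stated in this excerpt.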
Step 2 of the analytic plan using CPRD data was developed in January 2018 prior to starting data analyses in June 2019. We added a comparison of patient characteristics across outcome categories (i.e., dementia subtypes) based on reviewer recommendations. We compared patient characteristics and their comorbidity profiles across dementia subtypes (AD, VaD, and NOS) as well as drug use (BAS and LMT) using the chi-squared test for categorical variables and Wilcoxon rank-sum tests for continuous variables. For multivariable analyses, we used Cox proportional hazard models to calculate hazard ratios (HRs) and 95% confidence intervals (CIs) comparing dementia risk (all-cause and subtypes) in BAS versus LMT users in the overall and sex-stratified samples. Our sex-specific analyses were a priori specified and based on findings from Step 1. We also tested the dose–effect relationship between dementia risk and drugs of interest using the number of prescriptions. Models were adjusted for factors that were significantly different between BAS and LMT groups to account for potential confounding by indication (since patient comorbidity profiles can lead to a BAS versus LMT prescription decision). Models were also adjusted for statin use during follow-up (until one year prior to exit date) using a time-varying covariate to account for its impact on dementia (26% versus 80% of BAS and LMT users, respectively, were prescribed statins in the 12 months before the index date and the majority (63%) continued its use for all or part of the follow-up). See for details on model specifications.
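A skeleton of the survival analysis, using the lifelines package, is shown below. The covariate list, column names, and file name are placeholders; the published models additionally handled statin exposure during follow-up as a time-varying covariate and modeled prescription counts for the dose-response (p-trend) analysis, which this sketch only gestures at in comments.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical one-row-per-patient analysis table:
#   follow_up_years: time from index date to dementia diagnosis or censoring
#   dementia: 1 = incident all-cause dementia, 0 = censored
#   bas_user: 1 = BAS new user, 0 = matched LMT new user (active comparator)
df = pd.read_csv("cprd_cohort.csv")

covariates = ["bas_user", "overweight_obese", "prior_statin", "prior_metformin",
              "cad", "t2dm", "dyslipidemia", "cancer"]   # factors differing between drug groups

cph = CoxPHFitter()
cph.fit(df[covariates + ["follow_up_years", "dementia"]],
        duration_col="follow_up_years", event_col="dementia")
cph.print_summary()                      # hazard ratio for bas_user = exp(coef)

# Sex-stratified models, motivated by the sex-specific findings in Step 1
for sex, sub in df.groupby("sex"):
    m = CoxPHFitter().fit(sub[covariates + ["follow_up_years", "dementia"]],
                          duration_col="follow_up_years", event_col="dementia")
    print(sex, m.hazard_ratios_["bas_user"])

# Dose-response: replace bas_user with an ordinal count of BAS prescriptions to
# approximate the p-trend analysis. Statin use during follow-up would be added as a
# time-varying covariate (e.g., lifelines' CoxTimeVaryingFitter on a start/stop table).
# Subtype-specific outcomes (AD, VaD, NOS) would each use their own event indicator.
```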
Given our findings that peripheral levels of the primary BAs, CA, and CDCA are associated with neuroimaging markers of dementia (Step 1) and that their pharmacological manipulation influences dementia risk (Step 2), we hypothesized that alterations in brain BA-mediated signaling may be a plausible biological mechanism underlying these findings. We first tested whether concentrations of CA and CDCA were detectable in the brain and whether they were altered in AD in participants from the BLSA autopsy program. We then tested whether gene expression of BA receptors was altered in AD using single-cell RNA sequencing (scRNA-Seq) data from the ROSMAP autopsy program.
The autopsy program of the BLSA was initiated in 1986 and has been described previously . See for additional details. Tissue samples from AD ( n = 16) and CON ( n = 13) from the inferior temporal gyrus (ITG) and middle frontal gyrus (MFG), regions representing areas of early neurofibrillary (i.e., tau) and neuritic plaque (i.e., amyloid) accumulation, respectively , as well as the cerebellum (CB) were included in these analyses. scRNA-Seq gene expression data from ROSMAP were downloaded from Synapse ( https://www.synapse.org/#!Synapse:syn18485175 ) under the doi 10.7303/syn18485175 ; code used to run analyses presented in Mathys and colleagues was requested from coauthors. Data came from postmortem participants in ROSMAP including 46 individuals: 32 individuals (18 male and 14 female) in the AD category and 14 individuals (5 male and 9 female) in the CON category. The AD category included individuals with a clinical diagnosis of AD, including individuals with AD and no other condition contributing to cognitive impairment and AD and another condition contributing to cognitive impairment, as well as individuals with a clinical diagnosis of MCI and no other condition contributing to cognitive impairment. The CON category included individuals with a clinical diagnosis of no cognitive impairment. Tissue was profiled from the prefrontal cortex (Brodmann area 10) across eight major cell types in the aged dorsolateral prefrontal cortex including inhibitory neurons, excitatory neurons, astrocytes, oligodendrocytes, microglia, oligodendrocyte progenitor cells, endothelial cells, and pericytes. Additional details are provided in the index paper . We identified BA receptor genes (including receptors involved in BA homeostasis) using a literature search and include the full list in . There were 21 BA receptor genes that had available data in the ROSMAP dataset: Nuclear Receptor Subfamily 1 Group I Member 3 (NR1I3); Retinoid X Receptor Gamma (RXRG); Nuclear Receptor Subfamily 5 Group A Member 2 (NR5A2); Cholinergic Receptor Muscarinic 3 (CHRM3); G Protein-Coupled Bile Acid Receptor 1 (GPBAR1); Peroxisome Proliferator Activated Receptor Gamma (PPARG); Nuclear Receptor Subfamily 1 Group I Member 2 (NR1I2); Kinase Insert Domain Receptor (KDR); Nuclear Receptor Subfamily 3 Group C Member 1 (NR3C1); Retinoid X Receptor Beta (RXRB); Peroxisome Proliferator Activated Receptor Delta (PPARD); Cholinergic Receptor Muscarinic 2 (CHRM2); Retinoid X Receptor Alpha (RXRA); Nuclear Receptor Subfamily 1 Group H Member 3 (NR1H3); Vitamin D Receptor (VDR); Nuclear Receptor Subfamily 1 Group H Member 4 (NR1H4); Retinoic Acid Receptor Alpha (RARA); Hepatocyte Nuclear Factor 4 Alpha (HNF4A); Nuclear Receptor Subfamily 1 Group H Member 2 (NR1H2); Formyl Peptide Receptor 1 (FPR1); Peroxisome Proliferator Activated Receptor Alpha (PPARA).
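As a small, purely illustrative data-handling sketch, the receptor genes listed above can be collected into a list and used to slice a cell-annotated expression table derived from the Synapse download. The file name, column names, and cell-type labels below are placeholders and do not reflect the actual ROSMAP file structure.

```python
import pandas as pd

# The 21 BA receptor genes with available ROSMAP data (named in the text; see also S2 Table)
BA_RECEPTOR_GENES = ["NR1I3", "RXRG", "NR5A2", "CHRM3", "GPBAR1", "PPARG", "NR1I2",
                     "KDR", "NR3C1", "RXRB", "PPARD", "CHRM2", "RXRA", "NR1H3",
                     "VDR", "NR1H4", "RARA", "HNF4A", "NR1H2", "FPR1", "PPARA"]

# Placeholder labels for the eight major cell types profiled
CELL_TYPES = ["inhibitory_neurons", "excitatory_neurons", "astrocytes", "oligodendrocytes",
              "microglia", "opcs", "endothelial_cells", "pericytes"]

# Hypothetical long-format table: one row per cell, with participant, diagnosis (AD/CON),
# sex, and cell_type annotations plus one column per gene
expr = pd.read_parquet("rosmap_scrnaseq_annotated.parquet")

receptor_expr = expr.loc[expr["cell_type"].isin(CELL_TYPES),
                         ["participant", "diagnosis", "sex", "cell_type"] + BA_RECEPTOR_GENES]
```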
Quantitative metabolomics assays were performed on brain tissue samples to measure concentrations of the primary BAs, including CA and CDCA, using the Biocrates Bile Acids kit (Biocrates Life Sciences AG, Austria). Details on both assay kits, as well as calibration steps, have been published previously . Additional details regarding the use of internal standards are included in .
Step 3 of the analytic plan using BLSA data was developed in January 2020 in order to address a plausible molecular mechanism explaining findings from Step 1 and Step 2. The inclusion of scRNA-Seq data from ROSMAP occurred in June 2020 in response to reviewer recommendations to use non-array, non-bulk tissue-based gene expression data. In order to assess whether primary BAs were present in the brain, we visualized CA and CDCA concentrations in AD and CON samples in the ITG, MFG, and CB using dot plots. Concentrations above and below LOD were indicated. We used tobit regression models to determine whether mean metabolite concentrations were significantly different between AD and CON samples. We set the lower limit as the metabolite-specific LOD threshold and included covariates age and sex (mean centered). In brain regions where metabolite concentrations were all below LOD (i.e., CB), we used chi-squared tests to determine whether percentage of samples below LOD was significantly different between AD and CON samples. Due to a small number of individuals with BA metabolite values above LOD, we were not able to sex-stratify these analyses. Additionally, the statistical significance threshold was set at p = 0.05 to accommodate the limited sample size. For gene expression data, we scaled each sample to have the same total read count. To test differences between AD and CON, we used the Wilcoxon rank-sum test in the total and sex-stratified samples. Similar to the index paper , each single-cell–specific sample from a participant was treated as an independent sample. We summarized age- and sex-corrected fold changes (total sample) as well as sex-specific fold changes indicating whether genes were differentially expressed in AD versus CON samples. We additionally visualized results for significant associations using a heatmap. The significance threshold was set at an FDR-corrected p = 0.05.
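Because common Python statistics libraries do not include a ready-made tobit model, the censored-regression step described above can be sketched as a small maximum-likelihood fit. Everything below the function definition (the file name, column names, and LOD value) is a placeholder for illustration, and the authors' actual implementation may differ.

```python
import numpy as np
import pandas as pd
from scipy import optimize, stats

def fit_tobit(y, X, lod):
    """Left-censored (tobit) regression: observations with y <= lod contribute the
    normal CDF evaluated at the LOD; detected observations contribute the normal PDF."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    censored = y <= lod

    def negloglik(params):
        beta, log_sigma = params[:-1], params[-1]
        sigma = np.exp(log_sigma)
        mu = X @ beta
        ll = np.where(censored,
                      stats.norm.logcdf((lod - mu) / sigma),
                      stats.norm.logpdf((y - mu) / sigma) - np.log(sigma))
        return -np.sum(ll)

    start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], np.log(y.std() + 1e-6)]
    res = optimize.minimize(negloglik, start, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])             # coefficient estimates, sigma

# Hypothetical per-sample table of brain BA concentrations in one region (e.g., ITG)
d = pd.read_csv("blsa_autopsy_bile_acids.csv")
X = np.column_stack([np.ones(len(d)),                # intercept
                     d["AD"],                         # 1 = AD, 0 = CON
                     d["age"] - d["age"].mean(),      # mean-centered age
                     d["male"]])
beta, sigma = fit_tobit(d["ca_conc"], X, lod=0.05)    # LOD value is illustrative only

# Cerebellum, where all CON values fell below LOD: compare the proportion of samples
# above LOD between groups with a chi-squared test
chi2, p, _, _ = stats.chi2_contingency(pd.crosstab(d["AD"], d["ca_conc"] > 0.05))

# For the scRNA-Seq comparison, each sample would first be scaled to a common total read
# count, then AD vs. CON tested per gene with scipy.stats.ranksums and the resulting
# p-values corrected with statsmodels.stats.multitest.multipletests(..., method="fdr_bh").
```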
Step 1: Test associations between cholesterol catabolism (i.e., BA synthesis) and neuroimaging markers of dementia

Participant demographic details are included in . Results of cross-sectional analyses testing associations between serum metabolite concentrations and amyloid status in BLSA are included in , and associations between serum metabolite concentrations and brain amyloid-β burden among amyloid +ve individuals are included in . Brain amyloid +/−ve status was not significantly associated with serum concentrations of 7α-OHC, CDCA, or CA in the total or sex-stratified samples. In the total sample and in males only, serum 7α-OHC concentration, representing the rate-limiting biosynthetic precursor of the primary BAs , was significantly, negatively associated with mean cDVR ( p = 0.034 and p = 0.041, respectively) and precuneus DVR ( p = 0.033 and p = 0.022, respectively), indicating that lower serum concentration of 7α-OHC was associated with higher levels of global and precuneus brain amyloid-β deposition. We observed no significant associations in the female-only sample. Results of longitudinal analyses in BLSA testing associations between serum metabolite concentrations and brain atrophy are shown in . In males, lower serum CDCA and CA concentrations were associated with faster rates of atrophy in the parietal gray matter (CDCA: FDR p = 0.003; CA: FDR p = 0.013) and precuneus (CDCA: FDR p < 0.001; CA: FDR p < 0.001). In females, lower serum CA concentration was associated with slower total gray matter atrophy (FDR p = 0.038). Sensitivity analyses including statin drug use as a covariate are included in ; results were not substantially altered. Results of longitudinal analyses in ADNI testing associations between serum metabolite concentrations and brain atrophy are shown in . In the total sample, lower serum CDCA and CA concentrations were associated with faster rates of atrophy in the entorhinal cortex (CDCA: FDR p = 0.032; CA: FDR p = 0.009), frontal gray matter (CDCA: FDR p = 0.045; CA: FDR p = 0.005), fusiform gyrus (CDCA: FDR p = 0.012; CA: FDR p = 0.001), total gray matter (CDCA: FDR p = 0.030; CA: FDR p = 0.003), hippocampus (CDCA: FDR p = 0.030; CA: FDR p = 0.012), parahippocampal gyrus (CDCA: FDR p = 0.012; CA: FDR p = 0.009), temporal gray matter (CDCA: FDR p = 0.016; CA: FDR p = 0.002), and ventricles (CDCA: FDR p = 0.030; CA: FDR p = 0.008). Lower CA was also associated with faster rates of atrophy in the amygdala (FDR p = 0.030), occipital gray matter (FDR p = 0.012), parietal gray matter (FDR p = 0.016), and precuneus (FDR p = 0.030). In males, lower serum CDCA and CA concentrations were associated with faster rates of atrophy in the parahippocampal gyrus (CDCA: FDR p = 0.049; CA: FDR p = 0.049). Lower serum CDCA was associated with faster rates of atrophy in the ventricles (CDCA: FDR p = 0.049), and lower serum CA was associated with faster rates of atrophy in the entorhinal cortex, frontal gray matter, fusiform gyrus, total gray matter, hippocampus, and temporal gray matter (FDR p = 0.049). In females, CDCA was not significantly associated with rates of brain atrophy; lower CA was associated with faster rates of atrophy in only the fusiform gyrus and temporal gray matter (FDR p = 0.039). Results of longitudinal analyses in BLSA testing associations between serum metabolite concentrations and WML are shown in .
In males, lower serum CDCA concentration was associated with faster accumulation of WML ( p = 0.050), and in females, lower serum 7α-OHC was associated with slower accumulation of WML ( p = 0.010). Results of longitudinal analyses in ADNI testing associations between serum metabolite concentrations and WML are shown in . In ADNI, we did not observe significant associations between serum metabolite concentrations of BAs (i.e., CA and CDCA) and WML in the total male or female samples.

Step 2: Test whether pharmacological modulation of BAs alters dementia risk in a large, real-world clinical dataset

summarizes characteristics of BAS and LMT users. LMT users were more likely to be overweight or obese compared with BAS users (73% versus 57%, respectively). In the 12 months prior to index date, LMT users were more likely to have used statins (80% versus 26%, respectively) or metformin (15% versus 7%) and had a record of coronary artery disease (7% versus 3%), type 2 diabetes (7% versus 3%), or dyslipidemia (25% versus 5%). BAS users were more likely to have a prior record of cancer (16% versus 8%). summarizes results from Cox proportional hazard models. During the median follow-up of 4.9 years, 809 incident dementia cases occurred ( N = 72 for BAS versus 737 for LMT) corresponding to crude incidence rates of 4.8 (95% CI = 3.8 to 6.1) and 5.5 per 1,000 person-years (95% CI = 5.1 to 5.9) among BAS and LMT users, respectively. In multivariable adjusted models including all patients, and compared to LMT use (at least two prescriptions), BAS use was not statistically significantly associated with risk of all-cause dementia or with its subtypes (any dementia: HR = 1.03, 95% CI = 0.72 to 1.46, p = 0.88; AD: HR = 1.24, 95% CI = 0.72 to 2.14, p = 0.43; VaD: HR = 1.27, 95% CI = 0.70 to 2.31, p = 0.43; other dementia: HR = 0.50, 95% CI = 0.22 to 1.15, p = 0.10). In analyses stratified by sex, we observed a significant ( p = 0.040) difference between the HR of VaD in males compared to females, indicating a sex difference in the relationship between BAS and risk of VaD. BAS use was associated with nonsignificantly elevated risk of VaD in males (HR = 2.89, 95% CI = 0.96 to 8.68, p = 0.06). We identified a statistically significant dose–response relationship between BAS and risk of VaD in males. Specifically, risk of VaD was higher with the increased number of BAS prescriptions ( p -trend = 0.045). There was no statistically significant association with VaD in females (overall or by number of prescriptions). Differences in patient characteristics across outcome categories are included in .

Step 3: Test plausible molecular mechanisms relating BA signaling in the brain to dementia pathogenesis using targeted metabolomics and transcriptomics

Participant demographic details for the BLSA autopsy study are included in . Demographic details of ROSMAP participants included in scRNA-Seq analyses have been published previously . The primary BAs, CDCA and CA, were detectable in postmortem brain tissue samples, although the majority were below the LOD (i.e., <LOD) . Tobit regression models indicated marginally higher (nonsignificant) concentrations of the primary BAs in AD samples compared to CON samples in the ITG and MFG. Chi-squared models indicated significantly more participants with metabolite concentrations above LOD in AD compared to CON in the CB. Due to a small number of individuals with BA metabolite values above LOD, we were not able to sex-stratify these analyses.
We observed that gene expression of several BA receptors was different in AD versus CON in the total and male samples, mainly within neurons (i.e., both inhibitory and excitatory neurons). The majority of differentially expressed genes showed lower expression in AD relative to CON. Results across all 8 major brain cell types are included in . As indicated below in a heatmap visualizing sex-stratified differences in AD versus CON samples , for inhibitory neurons, in males, 10 out of 21 genes were significantly altered (FDR pval < 0.05); six had lower gene expression in AD compared to CON (AD<CON); and four had higher gene expression in AD compared to CON (AD>CON). In females, there were no differentially expressed BA receptor genes within inhibitory neurons. Within excitatory neurons, in males, 16 out of 21 genes were significantly altered (FDR pval < 0.05); 10 had lower gene expression in AD compared to CON (AD<CON); and 6 had higher gene expression in AD compared to CON (AD>CON). For females, four genes showed lower gene expression in AD compared to CON (AD<CON).
We found that lower serum concentrations of the rate-limiting biosynthetic precursor of BA synthesis, i.e., 7α-OHC, as well as the primary BAs mainly in males, were associated with neuroimaging measures of dementia progression and that pharmacological lowering of BA levels was associated with higher risk of VaD in males. We hypothesize that disruption of BA signaling in the brain as reflected in altered levels of primary BAs and reduced neuronal gene expression of BA receptors may represent a plausible biological mechanism underlying these results. Together, our observations suggest a novel mechanism relating abnormalities in cholesterol catabolism to risk of dementia. The role of hypercholesterolemia in the pathogenesis of dementia is well recognized but poorly understood. While the BBB ensures that brain concentrations of cholesterol are largely independent of peripheral tissues , the oxidative catabolism of cholesterol results in the generation of oxysterols that are permeable to the BBB and can both access the brain from the peripheral circulation and efflux into the periphery from the brain . Oxysterols, including 7α-OHC, are key biosynthetic precursors of the primary BAs, CA, and CDCA, which, in turn, represent the primary catabolic products of cholesterol. We observed an association between serum concentration of 7α-OHC, representing the rate-limiting reaction in primary BA synthesis , and global brain amyloid burden as well as that in the precuneus, an early site of amyloid deposition in AD, suggesting that impaired synthesis of primary BAs may be an important mediator of pathologic changes in AD. This relationship appears to be driven primarily by males, suggesting a novel sex-specific association between BA synthesis and brain amyloid accumulation. It is important to note, however, that these cross-sectional analyses are not able to determine whether pathology, brain atrophy, or other dementia-associated endophenotypes may modify cholesterol catabolism. We then examined the relationships between BA synthesis and both regional rates of brain atrophy as well as the accumulation of WMLs that are key vascular contributors to dementia . Our results indicate that in males, lower serum CDCA and CA are associated with faster rates of brain atrophy and faster accumulation of brain WMLs in the BLSA. These findings were partially confirmed in ADNI, where lower BA concentrations were associated with faster brain atrophy rates across several brain regions in males, with far fewer associations in females. It is important to note, however, that the lack of sex-specific associations compared to the total sample in ADNI may be partially driven by sample size. Female participants in BLSA showed an opposite effect compared to males: lower serum concentrations of 7α-OHC and CA were associated with slower accumulation of brain WMLs and slower rates of brain atrophy. Our sex-specific WML associations in BLSA were not replicated in ADNI. This may be due, in part, to demographic differences: ADNI participants were younger at baseline and had a larger percentage of participants who were white. Additionally, ADNI participants represent later stages of disease progression compared to the BLSA sample, with approximately 50% of participants at baseline being diagnosed as either MCI or AD.
To the best of our knowledge, these findings are among the first to demonstrate sex-specific associations between the rate-limiting step in primary BA synthesis and brain amyloid deposition as well as longitudinal changes in brain atrophy and accumulation of WML burden. A previous cross-sectional study by Nho and colleagues in ADNI reported that lower plasma CA was associated with reduced hippocampal volume in a combined sample of AD, MCI, and CON participants and reported lower plasma CA levels in AD as well as associations with increased risk of conversion from MCI to AD. These results, together with our current findings, which included longitudinal markers of disease progression, suggest that the oxidative catabolism of cholesterol to BAs may impact both pathological changes in the brain preceding a diagnosis of dementia and the progression of clinical symptoms . Given that our neuroimaging results revealed sex-specific associations between primary BA synthesis and measures of dementia-related pathology, we next tested whether the modulation of peripheral BA levels would alter the risk of incident dementia in a sex-specific manner. To test this hypothesis, we leveraged one of the world’s largest databases of primary care records, i.e., the UK CPRD, to examine whether exposure to BAS, a commonly used class of medicines to treat hyperlipidemia, would alter the risk of incident dementia. BAS are nonsystemic pharmacological agents that bind to BAs in the gastrointestinal tract, reducing their entry into the enterohepatic circulation. A lower pool of circulating BAs reduces feedback inhibition of the rate-limiting step in BA synthesis catalyzed by CYP7A1 , resulting in greater oxidative catabolism of cholesterol. We observed a significant positive association between the number of BAS prescriptions and risk of VaD in males and no association in females. We additionally observed a statistically significant sex difference in the association between BAS and VaD. These results, while suggestive, are consistent with our neuroimaging findings indicating that a lower circulating pool of BA is associated with neuroimaging markers of dementia progression mainly in males. Together, these results suggest that cholesterol catabolism through its enzymatic conversion to primary BAs is a biological mechanism associated with increased risk of VaD in males. These findings may provide novel insights into sex-specific interventions targeting this biochemical pathway in at-risk older individuals. Further exploration of the association between pharmacologic manipulation of BA levels and dementia outcomes in complementary population-based databases with distinct demographic and clinical characteristics is essential to validate our findings and assess their generalizability. One plausible mechanism that may explain the association between dysregulated cholesterol catabolism and dementia pathogenesis is through altered BA signaling in the brain. Our findings are among the first to identify primary BAs (i.e., CA and CDCA) in the brain and report significant sex differences in neuronal gene expression of BA receptors in AD. A recent report by Mahmoudiandehkordi and colleagues described a significant association of a higher ratio of the secondary BA deoxycholic acid (DCA) to CA (DCA:CA), in both serum and brain tissue, with severity of cognitive impairment in a combined sample of AD, MCI, and CON participants from the ROSMAP study .
Our scRNA-Seq results are also broadly consistent with a recent multi-cohort transcriptomic analysis in AD and CON brain tissue samples by Baloni and colleagues that reported gene expression of several BA receptors. These included transcripts for RARA, RXRA, PPARA, and PPARG receptors that we find to be differentially expressed in AD brains in a sex-specific manner . Important differences between our current report and that by Baloni and colleagues include our use of scRNA-Seq compared to bulk tissue RNA-Seq as well as our sex-stratified analyses to probe differences in BA receptor transcript levels in AD. While the influx of BAs across the BBB from systemic circulation has been demonstrated , it is unclear whether de novo synthesis contributes substantially to the BA pool in the human brain . Few previous studies have reported the existence of BAs in the human brain . Pan and colleagues reported the presence of CDCA and CA in postmortem AD and CON brains but did not observe differences in their concentrations . While BA receptors play a critical role in regulating hepatic BA synthesis by mediating feedback inhibition of CYP7A1, accumulating evidence also points to their importance in signaling pathways in the brain . These include regulation of vascular risk factors including glucose, lipid, and energy homeostasis as well as modulation of GABAergic and NMDA receptor–mediated neurotransmission . Our results raise the possibility that dysregulation of cholesterol catabolism and BA synthesis in the periphery may impact early features of dementia pathogenesis through their effects on neuronal signaling pathways in the brain. This hypothesis merits evaluation in future experimental studies and may pave the way toward testing novel disease-modifying treatments in dementia targeting BA receptor–mediated signaling in the brain. While we have not addressed the precise mechanisms underlying sex-specific associations between BA metabolism and dementia pathogenesis, prior evidence suggests important sex differences in lipid metabolism that impact risk of cardiovascular disease . It is important to consider these findings together with animal studies that have also shown sex–specific differences in BA homeostasis during aging and suggest that these may be mediated by differences in expression of BA transporters as well as CYP7A1, the rate-limiting enzyme in BA synthesis . It is likely that such differences are relevant in other biological pathways as well. Our own prior work has uncovered striking sex differences in the systemic inflammatory response in preclinical AD that is related to neurodegeneration as well as differences in glucose metabolism that are associated with AD pathology . These findings may have implications for testing targeted treatment interventions that take into consideration sex-specific differences in molecular mechanisms underlying AD pathogenesis. Understanding sex differences in biological pathways related to AD risk and progression may also have important implications in our understanding of descriptive epidemiological estimates of dementia that reveal sex-specific longitudinal differences in both prevalence and incidence of dementia in diverse cohorts . It is also worth noting in this context that sex as a biological variable (SABV) has been largely ignored in neuroscience and dementia research . 
Our study design represents an approach to identify biological mechanisms of risk associated with dementia as well as to discover potential targets for disease-modifying treatments. First, the use of targeted metabolomics and transcriptomics within longitudinal observational studies in combination with established neuroimaging markers of disease progression (e.g., amyloid accumulation, brain atrophy, and WMLs) enables the identification of specific biochemical pathways that may present plausible drug targets. Second, the use of large, real-world clinical datasets with dementia outcomes enables testing drugs that may impact such targets. The strengths of our study include the use of a well-characterized population of older individuals with serial neuroimaging in the BLSA-NI and ADNI and testing the clinical implications of our findings in a large real-world clinical dataset. Limitations of our study include the relatively small sample sizes in the BLSA-NI and autopsy samples. Additionally, we were unable to sex-stratify analyses of brain tissue BA concentration due to a limited number of individuals with BA metabolite concentration values above LOD. However, our inclusion of sc-RNASeq data comparing AD and CON samples from ROSMAP did allow us to sex-stratify gene expression analyses and correct for multiple comparisons. Additional limitations include a likely inaccuracy in clinical diagnoses of dementia subtypes in primary care settings. We have previously analyzed data from more than 20 million Medicare fee-for-service beneficiaries in the USA and reported that accurate subtyping of dementia in such datasets may be challenging . While our matched cohort design with an active drug comparator group and adjustment for common comorbidities may have addressed some of the limitations associated with pharmacoepidemiologic analyses, our findings merit confirmation in other independent studies. It is also important to note that large longitudinal studies have consistently reported that mixed brain pathologies account for the majority of dementia cases with considerable overlap between AD neuropathology and vascular brain injury including macroscopic, lacunar, and microscopic infarcts . Additionally, particularly in the oldest old, “single neuropathological entities” may be less relevant compared to mixed pathologies including AD and vascular disease . In summary, we have combined targeted metabolomic assays of serum with in vivo amyloid PET and MRI of the brain to identify cholesterol catabolism through BA synthesis as a biological pathway involved in neuropathological changes prior to dementia onset. We then extended these findings by analyzing a large real-world clinical dataset to show that BA modulation alters the trajectory of VaD in males. Our transcriptomics results suggest that alterations in BA signaling through their neuronal receptors may mediate some of these associations. Our findings suggest that future experimental studies may provide insight into modulation of BA levels as a plausible therapeutic target in dementia.
S1 Fig Catabolism of cholesterol into primary BAs. The oxidative catabolism of cholesterol occurs through 3 enzymatically catalyzed biochemical pathways: the classic/neutral pathway in the liver accounts for the majority of BA synthesis in humans and begins with the oxidation of cholesterol to 7α-OHC by microsomal CYP7A1, the rate-limiting enzyme of the pathway. The alternative or acidic pathway is responsible for synthesis of a smaller proportion of the BA pool; cholesterol is oxidized to 27-OHC, catalyzed by mitochondrial CYP27A1 in both liver and extra-hepatic tissues. Both the classic/neutral and acidic pathways of BA synthesis ultimately generate the primary BAs, CA, and CDCA which are the principal catabolic products of cholesterol. A third, neuron-specific pathway of cholesterol breakdown in the brain is catalyzed by CYP46A1-mediated conversion of cholesterol to 24S-OHC which effluxes into the peripheral circulation for further conversion into the primary BAs in the liver . 7α-OHC, 7α-hydroxycholesterol; 24S-OHC, 24S-hydroxycholesterol; 27-OHC, 27-hydroxycholesterol; BA, bile acid; CA, cholic acid; CDCA, chenodeoxycholic acid. (TIFF) Click here for additional data file. S1 Text Supporting information text. (DOCX) Click here for additional data file. S1 Table STROBE checklist. STROBE, Strengthening the Reporting of Observational studies in Epidemiology. (DOCX) Click here for additional data file. S2 Table ROSMAP scRNA-Seq BA receptor gene expression data availability. Indicates data availability in the scRNA-Seq ROSMAP dataset. BA receptor genes that are indicated as “Not Available” either did not have sufficient counts or did not have any data available in the ROSMAP scRNA-Seq dataset. BA, bile acid; CHRM2, Cholinergic Receptor Muscarinic 2; CHRM3, Cholinergic Receptor Muscarinic 3; FGF19, Fibroblast Growth Factor 19; FPR1, Formyl Peptide Receptor 1; GPBAR1, G Protein-Coupled Bile Acid Receptor 1; HNF4A, Hepatocyte Nuclear Factor 4 Alpha; KDR, Kinase Insert Domain Receptor; NR0B2, Nuclear Receptor Subfamily 0 Group B Member 2; NR1H2, Nuclear Receptor Subfamily 1 Group H Member 2; NR1H3, Nuclear Receptor Subfamily 1 Group H Member 3; NR1H4, Nuclear Receptor Subfamily 1 Group H Member 4; NR1I2, Nuclear Receptor Subfamily 1 Group I Member 2; NR1I3, Nuclear Receptor Subfamily 1 Group I Member 3; NR3C1, Nuclear Receptor Subfamily 3 Group C Member 1; NR5A2, Nuclear Receptor Subfamily 5 Group A Member 2; PPARA, Peroxisome Proliferator Activated Receptor Alpha; PPARD, Peroxisome Proliferator Activated Receptor Delta; PPARG, Peroxisome Proliferator Activated Receptor Gamma; RARA, Retinoic Acid Receptor Alpha; ROSMAP, Religious Orders Study and Memory and Aging Project; RXRA, Retinoid X Receptor Alpha; RXRB, Retinoid X Receptor Beta; RXRG, Retinoid X Receptor Gamma; S1PR2, Sphingosine-1-Phosphate Receptor 2; scRNA-Seq, single-cell RNA sequencing; VDR, Vitamin D Receptor. (DOCX) Click here for additional data file. S3 Table (A) Demographic characteristics of BLSA-NI sample. APOE4, e4 allele of the Apolipoprotein E gene; BLSA, Baltimore Longitudinal Study of Aging; MRI, magnetic resonance imaging; NI, neuroimaging; PiB, Pittsburgh compound B; SD, standard deviation; WML, white matter lesion. (B) Demographic characteristics of ADNI sample. ADNI, Alzheimer’s Disease Neuroimaging Initiative; MRI, magnetic resonance imaging; NI, neuroimaging; SD, standard deviation; WML, white matter lesion (DOCX) Click here for additional data file. 
S4 Table Associations between serum metabolite concentrations and PiB/amyloid status. coef, coefficient from linear regression model; PiB, Pittsburgh compound B; pval, p -value. (DOCX) Click here for additional data file. S5 Table Sensitivity analyses: Associations between serum metabolite concentrations and brain amyloid-β deposition, longitudinal changes in global brain WML burden, and rates of brain atrophy–BLSA. Sensitivity analyses after including statin use as a covariate. BLSA, Baltimore Longitudinal Study of Aging; coef, coefficient from linear regression model or mixed effects model; FDR, false discovery rate (Benjamini–Hochberg) corrected p -value; pval, p -value; WML, white matter lesion. (DOCX) Click here for additional data file. S6 Table Characteristics of participants who received at least 2 BAS or LMT prescriptions with at least 1 year of follow-up after second prescription. Wilcoxon rank-sum test. 1 Chi-squared test. 2 1 year prior to index date. 3 During study follow-up. BAS, bile acid sequestrants; LMT, lipid-modifying therapies. (DOCX) Click here for additional data file. S7 Table Characteristics of participants with incident dementia events during follow-up. 1 1 year prior to index date. (DOCX) Click here for additional data file. S8 Table Demographic characteristics of BLSA autopsy sample. AD, Alzheimer disease; APOE4, apolipoprotein E allele epsilon 4; BLSA, Baltimore Longitudinal Study of Aging; CON, control; PMI, postmortem interval (hours). (DOCX) Click here for additional data file. S9 Table Differences in brain primary BA concentrations between AD and CON. * In the CON sample in the CB, all concentrations were below LOD; we therefore tested for differences in the number of concentrations below LOD comparing AD to CON using the chi-squared test and present the associated p -value. AD, Alzheimer disease; BA, bile acid; coef, coefficient for disease (AD vs. CON) from the tobit model including mean-centered age and sex where the lower limit is set as the metabolite specific LOD; CON, control; LOD, limit of detection; pval, p -value. (DOCX) Click here for additional data file. S10 Table Differences in brain BA receptor gene expression (including receptors involved in BA homeostasis) in AD compared to CON. AD, Alzheimer disease; BA, bile acid; CHRM2, Cholinergic Receptor Muscarinic 2; CHRM3, Cholinergic Receptor Muscarinic 3; CON, control; FDR, false discovery rate (Benjamini–Hochberg) corrected p -value; FPR1, Formyl Peptide Receptor 1; GPBAR1, G Protein-Coupled Bile Acid Receptor 1; HNF4A, Hepatocyte Nuclear Factor 4 Alpha; KDR, Kinase Insert Domain Receptor; NR1H2, Nuclear Receptor Subfamily 1 Group H Member 2; NR1H3, Nuclear Receptor Subfamily 1 Group H Member 3; NR1H4, Nuclear Receptor Subfamily 1 Group H Member 4; NR1I2, Nuclear Receptor Subfamily 1 Group I Member 2; NR1I3, Nuclear Receptor Subfamily 1 Group I Member 3; NR3C1, Nuclear Receptor Subfamily 3 Group C Member 1; NR5A2, Nuclear Receptor Subfamily 5 Group A Member 2; PPARA, Peroxisome Proliferator Activated Receptor Alpha; PPARD, Peroxisome Proliferator Activated Receptor Delta; PPARG, Peroxisome Proliferator Activated Receptor Gamma; pval: p -value; RARA, Retinoic Acid Receptor Alpha; RXRA, Retinoid X Receptor Alpha; RXRB, Retinoid X Receptor Beta; RXRG, Retinoid X Receptor Gamma; VDR, Vitamin D Receptor. (XLSX) Click here for additional data file.
Synaptotagmin 1 oligomerization via the juxtamembrane linker regulates spontaneous and evoked neurotransmitter release | 5ab23de9-2ec8-4973-a631-d4618c120b36 | 8694047 | Physiology[mh] | The Syt1 Cytoplasmic Domain Forms Multimeric Structures in Solution upon Binding Anionic Phospholipids. The structure of syt1 is depicted in . The N-terminal region comprises a short luminal domain, a single TMD, and a juxtamembrane linker that is followed by the tandem C2 domains . Within this juxtamembrane linker lies a sequence containing 10 positively charged residues directly after the TMD . A key feature of the experiments reported here is that we used the intact cytoplasmic domain of syt1, residues 80 to 421, which includes this cationic juxtamembrane segment. Again, this contrasts with the majority of published work describing syt1 biochemistry and oligomerization because those reports were based on shorter fragments (residues 96 to 421 or 143 to 421, both lacking the polybasic region) . We first assessed the self-association properties of syt1 by performing DLS in aqueous media . Each of the two C2 domains of syt1 are ∼2.5 × 5 nm ( SI Appendix , Fig. S1 A ). We reasoned that if syt1 assembled into a multimeric structure, the average hydrodynamic diameter of the pure protein should exceed these monomeric dimensions. When suspended in physiological, aqueous media, pure syt1(80-421) was found to have an average diameter of ∼5 nm, suggesting it is indeed monomeric . Syt1 is known to function by binding anionic lipids, namely phosphatidylserine (PS) and PIP 2 . We therefore examined how anionic lipids would influence multimerization. Remarkably, we found that the addition of a soluble, short-chain anionic lipid, 6:0 PS, caused syt1 to assemble into structures with a diameter of ∼250 nm . In contrast, syt1 remained monomeric in the presence of a nonacylated variant (phosphoserine) ( SI Appendix , Fig. S1 B ), and 6:0 PS alone failed to generate a detectable DLS signal. This demonstrates that syt1 self-association is triggered upon binding anionic phospholipids, mediated by electrostatic and hydrophobic interactions. We then proceeded to image the DLS samples by negative-stain EM. In line with the DLS, EM imaging found large clusters in the WT syt1(80-421) + 6:0 PS sample, whereas protein alone and 6:0 PS alone samples were devoid of these large structures . Since SVs have, on average, 15 copies of syt1 , we do not expect these large structures to exist in vivo. However, these findings show that the cytoplasmic domain of syt1 has the ability to self-associate in the presence of anionic lipids. As such, we reasoned that analyzing the size of these supraphysiological multimeric structures, in conjunction with site-directed mutagenesis, would enable us to map the determinants that mediate lipid-dependent homomeric interactions under aqueous conditions. Syt1 Self-Association Persists after Substitution of Residues in the C2B Domain. To gain insight into the structural elements of syt1(80-421) that mediate self-association, we mutated a number of residues that have previously been reported to regulate syt1 oligomerization through the C2B domain and performed DLS analysis. These mutant forms of syt1 encompass K326,327A , a positively charged region that is also responsible for Ca 2+ -independent PIP 2 -binding activity ; F349A, which was reported to disrupt the ring structures observed by negative-stain EM ; and R398,399Q, implicated in binding SNAREs and C2B self-association via back-to-back dimerization . 
All three sets of mutations failed to disrupt multimerization under our experimental conditions ( and ). Lysine Residues in the Juxtamembrane Linker of Syt1 Regulate Self-Association. As outlined in the introduction, syt1 was first thought to self-associate via determinants in the N terminus of the protein . Within this putative oligomerization region, the juxtamembrane linker was shown to mediate Ca 2+ -independent interactions with membranes . The juxtamembrane linker contains a segment (between residues 80 to 95) in which 10 of the first 16 residues after the transmembrane domain are lysines . This cationic region is poorly characterized, as residues 80 to 95 are commonly excluded from recombinant preparations of the syt1 soluble domain, perhaps because of the increased difficulty in purification (see Methods ). We hypothesized that these charged residues mediate interactions with anionic phospholipids to promote oligomerization . To assess this possibility, we neutralized the juxtamembrane lysines via mutagenesis (Juxta K) ( SI Appendix , Fig. S2 ) and, again, performed DLS on syt1(80-421) with and without 6:0 PS. In contrast to WT protein, we found that 6:0 PS failed to trigger self-association of the Juxta K variant; Juxta K syt1 remained monomeric in solution ( and ). This DLS result was further validated by EM imaging, which found that the Juxta K + 6:0 PS sample completely lacked large protein–lipid clusters . The Syt1 Cytoplasmic Domain Forms Multimeric Structures on SLBs. Although DLS and EM analysis served as an efficient screen that enabled us to uncover Juxta K–mediated syt1 self-association, this system is accompanied by nonphysiological caveats. To validate the DLS and EM results, we developed a second in vitro assay to examine if the syt1 cytoplasmic domain self-associates on the surface of phospholipid bilayers. For this, we generated SLBs and performed AFM imaging while aiming to mimic native, physiological conditions . This AFM strategy combines the distinct advantages of both DLS and EM by enabling high sensitivity experiments, with single particle resolution, in aqueous media. As a preliminary test of our AFM approach, we incubated 1 µM syt1(80-421) with an SLB (DOPC/DOPS/PIP 2 , 72:25:3) for 6 h in the absence of Ca 2+ . Under these aqueous conditions, we observed the formation of large numbers of ring-like structures formed by syt1 on the lipid bilayer surface ( SI Appendix , Fig. S3 A ). A zoomed-in three-dimensional view of a representative structure is also shown ( SI Appendix , Fig. S3 B ). Overall, the diameter of these structures (i.e., the distance between the two highest points on both sides in a cross-section) was commonly greater than 100 nm, suggesting that they are formed by a high copy number of syt1(80-421). Importantly, the syt1 rings that we observed by AFM are filled with lipids in the center as determined by lateral height profiles in conjunction with an analysis of surface roughness inside and outside the rings ( SI Appendix , Fig. S3 B and C ). These findings indicate that these structures assemble on the surface of intact bilayers. We also occasionally observed structures that formed around defects (holes) in the SLB ( SI Appendix , Fig. S4 A ). These structures resemble protein-decorated holes that form on SLBs after treatment with pore-forming proteins such as Bax . However, since syt1(80-421) does not form large pores in bilayers ( SI Appendix , Fig. 
S4 B ), we believe the effects of pore-forming proteins like Bax on SLBs are distinct from the ring-like structures that we observed. Indeed, upon further examination, we found that defects in the SLB are stable over time ( SI Appendix , Fig. S4 C ), while the number of ring-like structures increases dramatically with time ( SI Appendix , Fig. S4 D ). Hence, syt1 ring formation does not require membrane defects, and these ring-like structures represent bona fide syt1 multimers on the SLB surface. Notably, since the properties of the structures that associate around membrane defects are dominated by the size and shape of the defect itself rather than the multimerization properties of syt1, we excluded structures with an interior hole or defect in the bilayer from all analyses. Syt1 Self-Association Requires Anionic Phospholipids and Is Enhanced by Ca 2+ . After establishing that circular multimerization (i.e., ring formation) on the bilayer surface was robust and reproducible by AFM imaging, we set the incubation time of syt1(80-421) with the SLB to 20 min in all subsequent trials. In the absence of Ca 2+ [0.5 mM ethylene glycol-bis(β-aminoethyl ether)- N , N , N ′, N ′-tetraacetic acid (EGTA)], at increasing protein concentration, protein structures on the SLB transitioned from particles (50 nM) to rings (1 µM) and from rings to patches (3 µM) ( SI Appendix , Figs. S5 and S6 and Table S1 ). The sensitivity of the multimeric structures to protein concentration in our AFM analysis and the variation in the morphology of these multimers suggest structural plasticity in the multimerization process. In comparison, in 1 mM free Ca 2+ , rings and patches still formed, but both classes of multimers formed at lower protein concentrations as compared to the Ca 2+ -free condition ( SI Appendix , Fig. S6 and Table S1 ). The effect of Ca 2+ was also consistent across a range of Ca 2+ concentrations ( SI Appendix , Fig. S7 A ). These findings show that all forms of syt1 self-assembly (particles, rings, and patches) can occur in the absence of Ca 2+ and that the addition of Ca 2+ facilitates multimerization. To further confirm this Ca 2+ -dependent enhancement, we tested a syt1 Ca 2+ -binding mutant, syt1 4N (80-421), in which two acidic Ca 2+ ligands in each C2 domain were mutated to neutral residues, thus abolishing Ca 2+ -binding activity . Ring-like structures were still observed, but Ca 2+ failed to enhance further assembly ( SI Appendix , Fig. S7 B ). The precise mechanism by which Ca 2+ promotes syt1 self-association is not yet known but likely involves conformational changes that alter the relative orientation of its tandem C2 domains within the multimer . In addition, Ca 2+ might also facilitate self-assembly by increasing the local syt1 concentration on the bilayer. To examine the influence of the SLB phospholipid composition on syt1 self-association, we omitted PS and PIP 2 in our protein–lipid interaction tests of syt1(80-421). We found that anionic phospholipids were required for syt1 multimers to assemble on the bilayer ( SI Appendix , Fig. S8 A ). Next, to validate that phospholipid binding promotes syt1 self-association, we deposited syt1(80-421) onto a bare mica surface (lipid free) and again studied its morphology under aqueous conditions ( SI Appendix , Fig. S8 B ). When no lipid was present, syt1(80-421) molecules appeared as dispersed particles with similar dimensions in both EGTA and Ca 2+ conditions.
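The ring measurements described above and in Methods (the diameter taken as the distance between the two highest points of a cross-section, and surface roughness compared inside versus outside a ring) were performed in PicoImage. As a rough illustration of that logic only, the sketch below runs the same two computations on a synthetic one-dimensional height profile; the profile, sampling step, and helper names are hypothetical and are not the authors' code or data.

```python
import numpy as np

def ring_diameter_nm(profile_nm: np.ndarray, x_step_nm: float) -> float:
    """Peak-to-peak diameter: the distance between the highest point on each
    side of the cross-section midpoint (the definition used for ring size)."""
    mid = len(profile_nm) // 2
    left_peak = int(np.argmax(profile_nm[:mid]))
    right_peak = mid + int(np.argmax(profile_nm[mid:]))
    return (right_peak - left_peak) * x_step_nm

def rms_roughness_nm(heights_nm: np.ndarray) -> float:
    """Root-mean-square deviation of heights from their local mean."""
    return float(np.sqrt(np.mean((heights_nm - heights_nm.mean()) ** 2)))

# Synthetic 1-D cross-section through a ring-like structure, sampled every 2 nm:
# flat bilayer, two protein rims ~2.5 nm tall, and a lipid-filled interior.
rng = np.random.default_rng(1)
x_step = 2.0
profile = np.concatenate([
    rng.normal(0.0, 0.05, 40),   # bilayer outside the ring
    rng.normal(2.5, 0.10, 5),    # left rim of the ring
    rng.normal(0.0, 0.05, 50),   # interior (roughness similar to bare bilayer)
    rng.normal(2.5, 0.10, 5),    # right rim of the ring
    rng.normal(0.0, 0.05, 40),   # bilayer outside the ring
])

print(f"ring diameter: ~{ring_diameter_nm(profile, x_step):.0f} nm")
print(f"RMS roughness inside : {rms_roughness_nm(profile[50:90]):.3f} nm")
print(f"RMS roughness outside: {rms_roughness_nm(profile[:40]):.3f} nm")
```

For a profile like this, the interior roughness matches the surrounding bilayer, which is the signature used above to argue that the rings sit on an intact, lipid-filled membrane.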
Lysine Residues in the Juxtamembrane Linker of Syt1 Are Essential for Large Ring Formation on SLBs. Our DLS and EM analyses revealed that syt1 multimerization, in response to lipid binding, is governed by the Juxta K region. We revisited all the syt1 mutants characterized by DLS to assess their respective impact on multimerization on phospholipid bilayers. The WT and mutant syt1(80-421) structures that were analyzed by AFM and the associated lateral height profiles are shown in SI Appendix , Figs. S9–S13 . Our AFM analysis found the K326,327A, F349A, and R398,399Q mutations have no effect on the formation of large multimeric structures, similar to the DLS result . However, also in agreement with our DLS and EM results, we found that neutralizing the juxtamembrane lysine residues (Juxta K) dramatically disrupted the formation of large rings on the SLBs . Interestingly, although no large multimeric structures were present with 1 µM Juxta K in 0.5 mM EGTA, smaller ∼30-nm ring structures were formed in the presence of 1 mM free Ca 2+ ( and SI Appendix , Fig. S13 ). The distinct diameters of the small and large rings suggest that the two populations of syt1(80-421) multimers form by different mechanisms. Notably, the smaller rings have dimensions that are comparable to the rings reported by Rothman, Krishnakumar, Volynski, and colleagues . In stark contrast, however, the small rings observed by AFM in aqueous buffer only appeared in the presence of Ca 2+ , whereas the rings observed by negative stain EM were dispersed by Ca 2+ . Concurrent Mutations in Two Distinct Regions of Syt1 Abolish Self-Association. In an effort to further disrupt syt1 self-association (small rings), we analyzed all of the above C2B mutations in a Juxta K mutant background. Remarkably, only featureless particles were observed in the AFM images of Juxta K + K326,327A, Juxta K + F349A, and Juxta K + R398,399Q . These findings reveal that syt1 multimerization is regulated by a complex interplay between the juxtamembrane region and the C2B domain of syt1, with the juxtamembrane region serving as the primary determinant. These data suggest that the C2B mutations may have had subtle effects on self-association that escaped detection when analyzed in an otherwise WT background . Syt1 Self-Association Mutants Are Targeted to Synapses with the Correct Topology. Having established that the complete cytoplasmic domain of syt1 forms homo-multimers under aqueous conditions on lipid bilayers, we sought to determine the functional relevance of this interaction by conducting cell-based experiments. We first determined whether each of the mutants described in our in vitro experiments were properly targeted to SVs in cultured syt1 knockout (KO) mouse hippocampal neurons. Floxed syt1 was disrupted using Cre recombinase followed by re-expression of WT or each mutant form of the protein using lentiviral transduction; the expression was monitored via immunoblot ( and SI Appendix , Fig. S14 ; we note that juxtamembrane lysine mutations [Juxta K] reduced the mobility of both recombinant and neuronally expressed syt1 on sodium dodecyl sulfate–polyacrylamide gel electrophoresis [SDS-PAGE] gels). We observed that WT and each mutant form of syt1 were highly colocalized with the SV marker synaptophysin . Notably, the syt1 Juxta K mutant lentiviral expression construct preserved the WT K80 and K81 residues directly following the TMD to ensure proper syt1 topology ( SI Appendix , Fig. S2 ) . 
To further confirm targeting to SVs, we conducted pHluorin experiments and found that upon stimulation and exocytosis, all constructs rescued the reduction in the time to peak that is characteristic of syt1 KO neurons . Moreover, all but the F349A mutant rescued the kinetic defect in endocytosis that occurs in the KO . The unexpected inability of F349A to rescue SV recycling will be addressed in a future study. In summary, these findings demonstrate that all constructs are targeted to SVs with the same topology as the WT protein. However, the low time resolution of the pHluorin measurements sharply limits what can be learned about excitation–secretion coupling, so we next turned to high-speed glutamate imaging and electrophysiology experiments to address the impact of the self-association mutants on evoked and spontaneous neurotransmitter release. Syt1 Self-Association Is Essential for Driving and Synchronizing Evoked SV Exocytosis. In the next series of experiments, we used iGluSnFR to monitor the evoked release of glutamate from SVs, triggered by single action potentials, in WT, syt1 KO, and syt1 KO neurons rescued with each of the constructs detailed in . From the raw traces and from histograms that were created by binning the peak iGluSnFR signal (ΔF/F 0 ) versus time , it was evident that loss of syt1 abolished rapid synchronous release; only slow asynchronous release was detected. Fast release was completely rescued by WT syt1 and the F349A mutant. In contrast, the Juxta K mutant, alone and combined with the F349A mutant, only partially rescued fast release . Examination of the average traces showed that expression of the Juxta K mutant resulted in a 63 ± 1.7% reduction in the peak amplitude of the iGluSnFR signal, and this was exacerbated by adding the F349A mutation, resulting in a 79 ± 1.7% reduction in the signal ( and SI Appendix , Table S2 ). To visualize the influence that these mutations have on the balance of synchronous versus asynchronous release, we plotted the normalized cumulative frequency distributions for WT and each mutant as a function of time. From this analysis, the ability of syt1 to synchronize release is readily apparent ; the synchronous fraction of total release observed over the image series was then extracted and is plotted in . Substitution of F349 slightly reduced synchronization, but this did not reach significance in our experiments (but see ref. ). However, the Juxta K mutations, which strongly affected homo-multimerization, resulted in a marked reduction in the ability of syt1 to synchronize release. Moreover, this effect was even greater when the Juxta K mutant also included the F349A mutation . Hence, there is a correlation between the ability of mutations to impair syt1 self-association and to reduce and desynchronize evoked release. Syt1 Self-Association Is Required for Clamping Spontaneous Release. Careful inspection of the iGluSnFR signals, before an action potential was delivered, indicated that some of the syt1 mutations also affected basal fusion rates . Indeed, it is well documented that under resting conditions, syt1 serves as a fusion clamp that inhibits spontaneous release (minis) . Monitoring spontaneous activity with iGluSnFR will reveal frequency and spatial information . However, it is unknown if this method will accurately report event amplitude. Because these properties may be affected by our syt1 mutants, we turned to electrophysiological characterization of spontaneous release . 
We focused on inhibitory GABAergic minis (mIPSCs), as they are more reliant on syt1 than are glutamatergic minis . These experiments revealed yet another correlation: mutations that impair syt1 self-association also result in concomitant increases in spontaneous fusion rates . More specifically, F349A partially disrupted the ability of syt1 to clamp minis as previously reported , while the Juxta K mutant had an even stronger effect; mutating both regions almost completely abolished the clamping activity of syt1 . The mini amplitudes were unchanged across all conditions . Together, the physiology experiments described in this study support a model in which syt1 must multimerize in order to clamp minis and to drive and synchronize evoked SV exocytosis.
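The synchronization analysis described in the preceding section reduces to a simple computation: pool event times relative to the stimulus, build a normalized cumulative frequency distribution, and read off the fraction of total release that falls within a fast window after the action potential. The sketch below illustrates that idea only; the event times, the 10-ms binning, and the 10-ms "synchronous" cutoff are assumptions chosen for the example and are not the values or code used in the study.

```python
import numpy as np

def synchronous_fraction(event_times_ms: np.ndarray, stim_time_ms: float,
                         sync_window_ms: float = 10.0) -> float:
    """Fraction of post-stimulus release events falling within an assumed
    'synchronous' window; the remainder is counted as asynchronous."""
    latencies = event_times_ms[event_times_ms >= stim_time_ms] - stim_time_ms
    if latencies.size == 0:
        return 0.0
    return float(np.mean(latencies <= sync_window_ms))

def cumulative_release(latencies_ms: np.ndarray, frame_ms: float = 10.0):
    """Normalized cumulative frequency of events, binned at the frame interval."""
    edges = np.arange(0.0, latencies_ms.max() + frame_ms, frame_ms)
    counts, _ = np.histogram(latencies_ms, bins=edges)
    cum = np.cumsum(counts).astype(float)
    return edges[1:], cum / cum[-1]

# Illustrative event times (ms): a tight cluster just after a stimulus at 500 ms
# (synchronized component) plus a slow tail (asynchronous component).
rng = np.random.default_rng(0)
stim = 500.0
events = np.concatenate([stim + rng.exponential(3.0, 80),
                         stim + rng.exponential(150.0, 20)])

times, cum = cumulative_release(events - stim)
print(f"synchronous fraction (<= 10 ms): {synchronous_fraction(events, stim):.2f}")
print(f"cumulative fraction released by {times[1]:.0f} ms: {cum[1]:.2f}")
```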
Cell-based experiments suggest that syt1 functions in the SV cycle as an oligomer . Indeed, density gradient fractionation and coimmunoprecipitation experiments support the notion that syt1 oligomerizes in the presence of detergent. However, the stoichiometry and structure of these oligomers, as well as the effect of Ca 2+ ions and phospholipids on self-association, remain unresolved issues. Hence, whether oligomerization impacts syt1 function remains unclear. It is therefore crucial to address the ability of syt1 to homo-multimerize on lipid bilayers, in the absence of detergent, under physiologically relevant conditions. In the current study, we first addressed this by conducting DLS, EM, and AFM measurements in a reconstituted syt1–membrane system under detergent-free aqueous conditions. We observed that the complete cytoplasmic domain of syt1 does in fact form multimers on membranes under relatively native conditions. We emphasize that our in vitro experiments utilized the complete cytoplasmic domain of syt1, residues 80 through 421, which includes the entire juxtamembrane linker between the TMD and the C2A domain. Residues 80 through 95 have largely been overlooked in previous studies examining syt1 function, but two studies suggested that this region might be important for function. In one report, a peptide corresponding to residues 80 through 98 of syt1 reduced neurotransmitter release, potentially by inhibiting syt1–membrane interactions . In another study, the juxtamembrane linker was directly shown to promote membrane binding and to mediate syt1 glycine zipper interactions . Strikingly, we did not observe rings or other multimeric structures when using a shorter fragment that began at residue 96; this fragment lacks the juxtamembrane lysine patch. These observations might seem to be at odds with the fact that rings on lipid monolayers were initially observed by EM with a syt1 C2AB domain that lacked the entire juxtamembrane linker (143 to 421) , but in that study, ring formation required low ionic strength buffers (5 to 15 mM KCl) and 40% PS. Indeed, syt1 self-association is highly dependent on the salt concentration of the buffer , and supraphysiological PS promotes calcium-independent membrane binding . Subsequent EM studies continued to include a low salt step during sample processing; however, it was later determined that the juxtamembrane linker could stabilize ring formation after buffer exchange from 5 mM to 100 mM KCl prior to imaging. In short, both mutagenesis and truncation experiments demonstrate that the juxtamembrane linker is required for the cytoplasmic domain of syt1 to form homo-multimers at physiological ionic strength. It is presently unknown whether syt1 residues 1 to 79 (the N-terminal region and the TMD) influence self-association. The in vitro approaches used in this study are well suited to examine the self-association properties of the cytosolic domain of syt1. However, alternative strategies would be required to address the multimerization of full-length syt1. For example, DLS analysis of syt1 proteoliposomes would be overwhelmed by the light scattered from the vesicles themselves, and attempts to visualize the oligomeric status of membrane-embedded full-length syt1 via AFM might be confounded by N-terminal interactions with the mica surface. We also note that full-length syt1 contains five cysteine residues within the TMD region that are palmitoylated in vivo .
It has been demonstrated that recombinant full-length syt1 is prone to aberrant disulfide bonding when these residues are buried within the hydrophobic core of a phospholipid bilayer . Therefore, future in vitro studies of syt1 self-association would require examination of a cysteine-free variant or the use of fully palmitoylated protein. Although we did not examine the full-length recombinant protein in the current study, the strong correlation between the in vitro results and physiology data suggests that syt1(80-421) is a reasonable proxy for full-length syt1 in vivo. Under our aqueous conditions, syt1(80-421) formed large ring-like structures with diameters ∼180 nm on lipid bilayers; in contrast, previous reports of negative-stain EM analysis revealed rings that were 17 to 45 nm in diameter . Another striking difference was that in our AFM experiments, Ca 2+ promoted self-assembly, while in the EM studies, Ca 2+ dissolved the rings. Interestingly, Wang et al. also found that Ca 2+ stabilized syt1 rings that had assembled in solution after binding a variety of polyanionic compounds. Differences in experimental conditions and sample handling steps in EM versus AFM experiments are likely to underlie the observed differences in multimeric structure and regulation. Specifically, AFM facilitates imaging of syt1 on lipid bilayers in an aqueous buffer; EM yields greater resolution but involves extensive sample preparation, and images are obtained on monolayers under vacuum. At present, we believe that the multimeric structures observed in our system are distinct from the previous reports of syt1 rings. Interestingly, after neutralizing the juxtamembrane lysines, we observed small rings by AFM that were comparable to the size of the rings that were observed via EM. This may reflect the fact that the majority of the EM studies were, again, conducted using a truncated form of syt1 (143 to 421) that lacked the crucial juxtamembrane lysine-rich segment . We went on to examine all three mutations [K326,327A , F349A , and R398,399Q ] that have been implicated in oligomerization activity mediated by the C2B domain. None of these substitutions abolished self-association by themselves, but our AFM imaging found that each set of mutations completely disrupted the self-association in the Juxta K mutant background . We therefore conclude that the juxtamembrane linker and the C2B domain of syt1 both contribute to a complex multimerization mechanism. The formation of large and small rings thus appears to involve somewhat distinct structural elements; the precise experimental conditions might determine which element dominates. Because WT syt1 self-associates in the absence of Ca 2+ , it is likely to form multimers under resting conditions in neurons. Our AFM imaging of syt1 on phospholipid bilayers further revealed that these structures not only persisted but were enhanced by Ca 2+ . Moreover, Juxta K neutralization abolished self-association in EGTA and revealed that Ca 2+ promoted a second mode of multimerization that was mediated by C2B domain interactions. Taken together, our data suggest that Juxta K–mediated self-association is likely constitutive, while C2B-mediated multimerization would be regulated by Ca 2+ during the exocytic limb of the SV cycle in neurons. Importantly, anionic phospholipids are an essential cofactor for self-association ( and SI Appendix , Fig. S8 ). 
Given the close apposition of the Juxta K region to the SV membrane and the lack of rampant vesicle clustering by reconstituted full-length syt1 (i.e., a lack of trans interactions), we believe the Juxta K region likely associates with anionic lipids in a cis configuration on the surface of SVs. In this model, the Juxta K region serves to organize syt1 on SVs, which then guides C2B-mediated self-association on the presynaptic plasma membrane. We took advantage of the mutations that impaired syt1 self-association in vitro under relatively physiological aqueous conditions and conducted functional assays in neurons. Deletion of the entire juxtamembrane linker has been shown to disrupt syt1 function, perhaps by altering the ability of the C2 domains to engage effectors . However, the role of the Juxta K region in SV exocytosis had not been previously explored via an amino acid substitution approach. We first established that each mutant was efficiently targeted to SVs with proper topology and then conducted physiology experiments that revealed a clear correlation between the ability of the mutations to disrupt multimerization activity and their ability to disrupt the clamping activity of syt1, resulting in higher rates of spontaneous SV release. Moreover, impairment of syt1 self-association was also correlated with reductions in peak glutamate release and the desynchronization of this release in response to single action potentials. Together with our in vitro studies, these cell-based findings strongly suggest that syt1 must assemble into multimers, mainly via its juxtamembrane lysine-rich patch, but also with contributions from the C2B domain, in order to inhibit spontaneous release and to drive rapid and efficient evoked release. Interestingly, a recent study using a reconstituted vesicle fusion assay proposed that syt1 must first clamp fusion in order to become subsequently responsive to Ca 2+ . We reiterate that large clusters observed by DLS and the ring-like structures that we observed by AFM are unlikely to form on SVs in vivo, as there are only ∼15 copies of syt1 per vesicle . In contrast, the syt1 copy number in our in vitro systems was largely unrestricted, and this is expected to exaggerate the oligomeric state. Still, these supraphysiological structures served as a platform to assay for regulatory factors (lipids, Ca 2+ ) and structural elements that govern syt1 self-association on lipid bilayers. Although the precise structural arrangement and oligomeric status of syt1 on SVs is still unknown, our data suggest that the ∼15 molecules of syt1 per SV would likely multimerize with each other in vivo. We also note that ∼20 to 25% of neuronal syt1 resides in the plasma membrane of presynaptic boutons , and this pool could also potentially play a role in exocytosis by forming large (>15 copies) oligomers. Regardless, the in vitro experiments reported here demonstrate that highly purified syt1 self-associates under native conditions, and these assays provided a means to monitor the disruption of this activity via mutagenesis to, in turn, guide functional experiments. Again, our pHluorin experiments largely rule out a role for juxtamembrane-mediated oligomerization in endocytosis , but we demonstrate that syt1 self-association clearly plays a role in clamping spontaneous release and in determining the extent and synchronization of evoked SV exocytosis ( and ). At present, it is unclear how the oligomerization of syt1 contributes to its ability to regulate SV fusion.
It seems likely that Juxta K–mediated multimerization of syt1 on the surface of SVs would, in addition to facilitating C2B–C2B interactions, influence how the C2 domains engage with binding partners on the presynaptic plasma membrane. For example, syt1 oligomerization might serve to “order” SNARE proteins around the fusion pore via direct physical interactions with t-SNAREs . Oligomerization could also play additional roles in the regulation of release by adding mass to the fusion complex to drive pore dilation, orienting the C2 domains of syt1 to mediate its distinct effects on spontaneous and evoked release , allowing copies of syt1 to functionally cooperate with one another via direct physical interactions , or allowing groups of syt1 molecules to penetrate bilayers and rearrange the phospholipids, as an ensemble, to drive fusion pore transitions. These issues will be addressed in future studies and will be facilitated by the robust AFM approach that allows the study of syt1 self-association under physiological conditions.
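A back-of-envelope estimate helps put the observed ring sizes and copy numbers in context. Assuming each syt1 cytoplasmic domain occupies roughly 5 nm of circumference (about the long axis of a single C2 domain, as noted in the Results), the ∼30-nm and ∼180-nm rings would require on the order of twenty versus more than a hundred copies, compared with the ∼15 copies on a single SV. The 5-nm footprint is a rough geometric assumption, not a measured value, and the sketch below is only this arithmetic.

```python
import math

def copies_per_ring(diameter_nm: float, footprint_nm: float = 5.0) -> int:
    """Rough number of syt1 molecules needed to tile a ring of the given
    diameter, assuming each molecule occupies ~footprint_nm of circumference."""
    return math.ceil(math.pi * diameter_nm / footprint_nm)

# Small rings (Juxta K mutant + Ca2+, AFM) vs. large rings (WT, AFM); diameters from the text
for diameter in (30, 180):
    print(f"{diameter:>4} nm ring: ~{copies_per_ring(diameter)} copies")

print("copies per synaptic vesicle: ~15")  # reported average copy number on an SV
```

By this estimate a 30-nm ring needs a copy number in the range of a single vesicle's complement of syt1, whereas the large rings are clearly supraphysiological, consistent with the interpretation given above.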
Recombinant Proteins. Recombinant rat syt1 was purified from Escherichia coli (BL21) as an N-terminally tagged his6-SUMO fusion protein. Protein expression was induced by the addition of 200 µM isopropyl β-d-1-thiogalactopyranoside when the optical density of the culture at 600 nm reached 0.6 to 0.8. Bacterial pellets were lysed by sonication in 50 mM Tris, 300 mM NaCl, 5% glycerol, 5 mM 2-mercaptoethanol, 1% Triton X-100 (pH 7.4) plus a protease inhibitor mixture (Roche). The samples were also incubated with RNase and DNase (10 µg/mL) to prevent nucleic acid–mediated aggregation . Insoluble material was removed by centrifugation at 4,000 rpm for 15 min, and the supernatant was incubated with nickel–nitrilotriacetic acid (Ni-NTA) agarose followed by washing the beads with 50 mM Tris, 1 M NaCl, and 5% glycerol (pH 7.4). Protein was liberated from the Ni-NTA agarose by overnight incubation with 0.5 µM recombinant SENP2 protease. A final fast protein liquid chromatography purification was performed by running the samples through a Superdex 200 Increase 10/300 GL column in 25 mM Hepes, 300 mM NaCl, 5% glycerol, and 5 mM 2-mercaptoethanol (pH 7.4). Samples were subjected to SDS-PAGE, and protein concentration was determined by staining with Coomassie blue using bovine serum albumin as a standard. DLS Analysis. DLS of syt1(80-421) was performed using a DynaPro NanoStar Dynamic Light Scattering instrument (Wyatt Technology). The syt1(80-421) protein (2 µM) was suspended in 25 mM Hepes, 100 mM KCl, and 0.5 mM EGTA, with and without 100 µM 1,2-dihexanoyl-sn-glycero-3-phospho-L-serine (6:0 PS), and average diameter distributions were determined. Each of the WT and mutant syt1(80-421) samples were analyzed in triplicate with consistent results. The DLS samples were then blinded and imaged by EM as previously described . AFM Imaging. A total of 30 µl liposomes (1mM stock solution; PC/PS/PIP 2 , 72:25:3 or PC/PS, 80:20, extruded with a 100 nm filter) were suspended in 270 µl imaging buffer (25 mM Hepes, pH 7.4, 150 mM potassium gluconate, and 0.5 mM EGTA) with or without 1.5 mM CaCl 2 and deposited onto freshly cleaved mica (20-mm diameter discs). After incubating for 30 min, the sample was rinsed with 150 µl of the same buffer six times, using a pipette, to remove unabsorbed material; the substrate was covered by the buffer at all times. Purified syt1(80-421) was suspended in the same buffer and added to the sample. After a 20-min or 6-h incubation period, the sample was again rinsed repeatedly to remove unbound protein, and the final sample volume was adjusted to 300 µl. AFM imaging was carried out with an Agilent 5500 Scanning Probe Microscope in acoustic alternating current mode with silicon nitride probes (FastScan-D, Bruker or BL-AC40TS, Oxford Instruments). Images were captured with minimum imaging force at a scan rate of 2 Hz with 512 lines per area. Data analysis was performed with PicoImage 5.1.1. Particle volume was determined using the peak/dip volume tool (within the PicoImage software) line by line. The ring structure diameter was defined as the distance between the two highest points on each side of a cross-section across the imaged structure. Protein coverage on lipid bilayers was determined by setting the height threshold to anything 1 nm or more above the bilayer surface. iGluSnFR Imaging and Quantification. iGluSnFR imaging and quantification were performed as previously described with the following modifications. 
For single-stimulus imaging, 150 frames were collected at 10-ms exposure (1.5 s total), and a single field stimulus was triggered at half a second after the initial frame. pHluorin Imaging. Live-cell fluorescent imaging of pHluorin-expressing neurons was carried out under the same conditions as iGluSnFR imaging. Briefly, images were acquired on an Olympus IX83 inverted microscope equipped with a cellTIRF 4Line excitation system using an Olympus 60×/1.49 Apo N objective and an Orca Flash4.0 CMOS camera (Hamamatsu Photonics). This microscope runs Metamorph software with Olympus 7.8.6.0 acquisition software from Molecular Devices. The imaging medium was extracellular fluid with 2 mM CaCl 2 . Single image planes were acquired with 500-ms exposure using a white organic light-emitting diode with standard green fluorescent protein filters. Images were collected once a second for 3 min. A stimulation train was started 9 s into imaging. The trains (200 stimuli in 10 s [20 Hz]) were triggered by a Grass SD9 stimulator through platinum parallel wires attached to a field stimulation chamber (Warner Instruments; RC-49MFSH). All biosensor imaging experiments were performed at 32 to 34 °C. The environment was controlled by a Tokai incubation controller and chamber. Colocalization Quantification. Colocalization was measured as described previously , using Fiji for ImageJ and Just Another Colocalization Plugin . Electrophysiology. mIPSCs were recorded using a Multiclamp 700B amplifier (Molecular Devices) and analyzed as previously described . Briefly, syt1 KO hippocampal neurons expressing WT, Juxta K, F349A, or Juxta K + F349A at day in vitro (DIV) 14 through 19 were transferred to a recording chamber with a bath solution containing the following (in mM): 128 NaCl, 5 KCl, 2 CaCl2, 1 MgCl2, 30 D-glucose, 25 Hepes, and 1 μM tetrodotoxin, pH 7.3 (305 mOsm). Borosilicate glass pipettes (Sutter Instruments) were pulled by a dual-stage glass micropipette puller (Narishige) and filled with an internal solution containing (in mM) 130 KCl, 1 EGTA, 10 Hepes, 2 ATP, 0.3 GTP, 5 QX-314 (Abcam), and 5 sodium phosphocreatine, pH 7.35 (295 mOsm). mIPSCs were pharmacologically isolated by bath applying D-AP5 (50 µM, Abcam) and cyanquixaline (20 µM, Abcam) and acquired using a Digidata 1440B analog-to-digital converter (Molecular Devices) and Clampex 10 software (Molecular Devices) at 10 kHz. Neurons were held at −70 mV. All cells were equilibrated for ∼1 min after break-in before recordings started. Series resistance was compensated, and traces were discarded if the access resistance exceeded 15 MΩ for the entire duration. The collected miniature events were detected in Clampfit 11.1 (Molecular Devices) using a template matching search.
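One arithmetical point in the AFM imaging buffer above is worth making explicit: adding 1.5 mM CaCl2 on top of 0.5 mM EGTA leaves roughly 1 mM free Ca2+, because EGTA chelates Ca2+ nearly stoichiometrically at pH 7.4; this is the "1 mM free Ca2+" quoted in the Results. The sketch below shows that first-order estimate for a single 1:1 Ca2+-EGTA equilibrium; the apparent dissociation constant is an assumed value, and a dedicated chelator calculator would normally be used for exact figures.

```python
import math

def free_calcium_mM(ca_total_mM: float, egta_total_mM: float, kd_mM: float = 1e-4) -> float:
    """Free [Ca2+] for a single 1:1 Ca2+-EGTA binding equilibrium.
    kd_mM is an assumed apparent dissociation constant (~100 nM near pH 7.4)."""
    b = kd_mM + egta_total_mM - ca_total_mM
    return (-b + math.sqrt(b * b + 4.0 * kd_mM * ca_total_mM)) / 2.0

# AFM imaging buffer: 1.5 mM CaCl2 added on top of 0.5 mM EGTA -> ~1 mM free Ca2+
print(f"plus-Ca2+ condition: ~{free_calcium_mM(1.5, 0.5):.2f} mM free Ca2+")
# EGTA-only condition (no added Ca2+): free Ca2+ is effectively zero
print(f"EGTA-only condition: ~{free_calcium_mM(0.0, 0.5):.6f} mM free Ca2+")
```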
Vaccine practices, literacy, and hesitancy among parents in the United Arab Emirates | 5934a35a-071d-4f61-bc87-a48b47f8a566 | 11349212 | Health Literacy[mh] | Immunisation, one of public health’s greatest success stories, is responsible for saving millions of lives every year. Yet nearly 20 million infants every year have insufficient access to vaccines with progress being reversed in some countries . In 2012, the World Health Organization (WHO) established the Strategic Advisory Group of Experts on Immunization (SAGE) who began working towards addressing vaccine hesitancy. SAGE defines vaccine hesitancy as “a delay in acceptance or refusal of vaccines despite availability of vaccination services,” proposing the 3C framework for vaccine hesitancy: confidence, complacency, and convenience . The 3C model was updated and expanded in 2018, yielding the 5C model: confidence, complacency, constraints, calculation and collective responsibility . The COVID-19 pandemic itself also disrupted vaccine efforts, as well as exposed weaknesses and exacerbated strains in healthcare systems in 2020 and 2021, leading to setbacks globally . However, continuous efforts have helped the immunisation coverage to begin recovering, with the World Health Assembly endorsing a global strategy (the Immunization Agenda 2030) that envisions a world where everyone at every age can fully benefit from vaccines . In fact, while the number of zero-dose children (children missing out on any vaccination) dropped 3.9 million to reach 14.3 million in 2022, it is still above the pre-pandemic level of 12.9 million . As recognized by the WHO previously, vaccine hesitancy has emerged as one of the most serious threats to global health; unvaccinated individuals lead to failure of herd immunity and can act as virus reservoirs, cause outbreaks, delay spread control . Vaccination attitudes, contrary to previous beliefs, are multifactorial and complex, influenced by a number of individual, contextual, and vaccine-specific factors. For example, studies have shown that individual factors such as personal beliefs, past experiences with vaccines, and perceived risks of vaccination play significant roles in shaping vaccination attitudes . Contextual factors, including socio-economic status, cultural norms, and access to healthcare services, also influence vaccination decisions within communities . Moreover, vaccine-specific factors such as the perceived effectiveness and safety of vaccines, as well as the presence of misinformation or vaccine controversies, further contribute to the complexity of vaccination attitudes . As such, more recent literature has moved to recognizing that vaccine hesitancy cannot be resolved with a “one-size fits all” approach but requires a multimodal and tailored solution targeted to specific populations . Moreover, the challenges posed by vaccine hesitancy were further compounded by the emergence of the COVID-19 pandemic. Tackling vaccine hesitancy is possible and ensuring widespread vaccine acceptance is achievable but requires immediate responsiveness towards emerging concerns . According to the United Nations Children’s Fund (UNICEF), in the Middle East and North Africa, 3.8 million children have missed out entirely or partially on routine immunisation between 2019 and 2021. 
There are significant differences between countries within the MENA region; data from the WHO indicates that some countries in the Gulf Cooperation Council (GCC), such as the UAE and Qatar, have relatively high immunization rates compared to other countries in the region, such as Yemen and Syria, where conflict and humanitarian crises have severely disrupted healthcare systems and access to immunization services . Still, the UAE seems to lag behind other countries in the region according to national immunisation estimates, with 4% of children under the age of 1 not having received any vaccines . However, there is a paucity of local evidence; research in the UAE highlights a mix of positive attitudes and hesitancy towards vaccination among parents, underscoring the need for further investigation . Yet, more recent studies have examined factors influencing vaccine acceptance among university students and the general population, shedding light on misinformation, digital literacy, and healthcare professional guidance . Given the diverse cultural landscape of the United Arab Emirates (UAE), it is hypothesized that parental attitudes towards childhood vaccination and vaccine hesitancy will vary significantly across demographic factors such as age, educational level, and nationality, as well as other variables such as knowledge source and vaccine literacy. As such, this study aims to thoroughly evaluate UAE parents’ general attitudes, vaccination practices, and digital vaccine literacy, as well as to estimate the prevalence of vaccine hesitancy and its determinants.
Study population and data collection

This UAE cross-sectional study collected data from parents across the country from 18th March 2024 to 9th April 2024 through convenience sampling. Participants were approached through WhatsApp groups and other social media platforms such as X and Instagram. The UAE is one of the world's largest consumers of social media and has a nearly 100% internet penetration ratio; as such, the majority of the population can be reached through social media. Only parents who have or have had children were included. A minimum sample size of 385 participants was calculated using Cochran's sample size formula, assuming a confidence level of 95%, a sampling error of 5%, and a critical z-value of 1.96. A total of 550 responses were retained after removing those not meeting the inclusion criteria. The inclusion criteria were English-speaking and/or Arabic-speaking parents living in the UAE, with at least one child. A participant information sheet (PIS) was presented before starting the study, and completing the questionnaire indicated consent to participate in the study. Finally, the collected data, which does not contain any identifying information, was stored securely and accessible only by the investigators to ensure confidentiality. The raw, unprocessed data can be found in the supporting information (S1 File).

Questionnaire development

The tool used in this study was developed by adapting the questionnaire used by Voo et al. as well as including the following: the Parental Attitudes towards Childhood Vaccines scale (PACV) by Opel et al. , the WHO's Vaccine Hesitancy Scale (VHS) by Larson et al. , and the Digital Vaccine Literacy (DVL) scale by Montagni et al. . The questionnaire was originally developed in English and then translated to Arabic. For the PACV, the already translated and validated Arabic version by Alsuwaidi et al. was used. The overall Arabic questionnaire was reviewed multiple times to ensure consistency with the original. Both versions were pilot tested twice; all provided feedback was evaluated and incorporated if appropriate. Required edits included expanding the options for some of the demographic questions; the VHS, PACV, and DVL were unchanged. The 60-item self-administered questionnaire consisted of three different sections: demographics, childhood vaccination attitudes and knowledge sources, and childhood vaccinations and practices. It included a mixture of yes/no questions, 5-point and 4-point Likert scales, as well as single- and multi-select questions . This research was reviewed and approved by the Research Ethics Committee of the University of Sharjah (Reference Number: REC-24-02-26-01-F). It was conducted in accordance with all relevant guidelines and regulations. No identifying information was collected from the participants.

Statistical analysis

Only the researchers had access to the raw data. Data was exported from Google Forms to CSV format and processed in Python 3 using the Matplotlib-v3.3.4, pandas-v1.2.4, and statsmodels-v0.12.2 packages for analysis and interpretation. Missing values were dealt with through pairwise deletion.
Frequency distributions were calculated for each categorical variable, and 5-point Likert scales were collapsed into three-level variables as per the scoring guidelines for the PACV and VHS scales (responses that agreed or strongly agreed with vaccine-hesitant statements were coded as hesitant, responses that disagreed or strongly disagreed with vaccine-hesitant statements were coded as non-hesitant, and neutral or "I do not know" responses were coded as unsure). Vaccine hesitancy status was determined using the overall PACV scores. All baseline and demographic characteristics were used to outline determinants of vaccine hesitancy. Chi-squared tests were used for bivariate analyses and logistic regression was used for multivariate modelling. P-values less than 0.05 were considered significant.
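The following is a minimal sketch of this scoring and modelling pipeline, written with the same packages (pandas, statsmodels); scipy is assumed here for the chi-squared test. The column names, the synthetic data, and the simplified PACV scoring are illustrative assumptions only; the published PACV has item-specific scoring rules that a real analysis must follow.

```python
# Minimal sketch of the Likert collapsing, PACV scoring, and modelling steps.
# Column names, synthetic data, and the simplified scoring are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(0)
likert = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]
df = pd.DataFrame({f"pacv_q{i}": rng.choice(likert, 550) for i in range(1, 16)})
df["nationality_other_arab"] = rng.integers(0, 2, 550)  # hypothetical covariate

# Collapse 5-point Likert responses into hesitant / unsure / non-hesitant points
def collapse(resp):
    if resp in ("Strongly agree", "Agree"):
        return 2          # hesitant
    if resp in ("Strongly disagree", "Disagree"):
        return 0          # non-hesitant
    return 1              # unsure / "I do not know"

items = [c for c in df.columns if c.startswith("pacv_q")]
points = df[items].applymap(collapse)

# Simplified PACV total rescaled to 0-100; scores >= 50 label a parent as hesitant
df["pacv_score"] = 100 * points.sum(axis=1) / (2 * len(items))
df["hesitant"] = (df["pacv_score"] >= 50).astype(int)

# Bivariate screening (chi-squared), then multivariate logistic regression
chi2, p, _, _ = chi2_contingency(pd.crosstab(df["nationality_other_arab"], df["hesitant"]))
print(f"nationality: p = {p:.4f}")
X = sm.add_constant(df[["nationality_other_arab"]].astype(float))
print(sm.Logit(df["hesitant"], X).fit(disp=0).summary())
```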
Demographics

Of the 550 participants, 84.55% (n = 465) were female. Nearly half were middle-aged (31–45 years old), with only 7.85% being young adults (18–30 years). Almost 95% were married and three-quarters of participants were Other (non-Emirati) Arabs. Only 21.09% (n = 116/550) were healthcare workers and 62.73% (n = 345/550) had 3 or more children in the household. The children's age distribution varied, with more parents having older children and nearly half having children aged 12–18 years. The majority of parents (75.64%, n = 416/550) had medically insured all of their children, with only 13.45% (n = 74/550) reporting none of their children being insured. presents all the participants' demographic data.

Vaccination practices and knowledge

75.27% (n = 414/550) of participants reported receiving the influenza vaccine last year, compared to nearly 95% who reported receiving the COVID-19 vaccine (41.18% (n = 232/550) receiving 1 or 2 doses and 52.00% (n = 286/550) receiving 3 doses or more). More than a third of parents did not regularly visit their children's doctor for checkups. While 94.36% (n = 519/550) had their child/children receive all vaccines mandated by the Ministry of Health, only 31.99% (n = 175/547) had their child/children receive vaccines other than those mandated. contains additional information regarding the parents' practices. Only 39.82% (n = 219/550) found their level of knowledge about childhood vaccinations to be good/excellent, with a fifth reporting poor/inadequate knowledge. Additionally, only 64.00% (n = 352/550) believed they had enough sources of information on immunisation. The most commonly utilised source of information on childhood vaccination was the general practitioner/primary care paediatrician at 55.64% (n = 306/550). It was followed by governmental websites (36.55%, n = 201/550), specialist doctors (31.18%, n = 177/550), and social media (24.73%, n = 136/550). lists the various knowledge sources and the number of participants who utilised them. Digital Vaccine Literacy (DVL) scores were high (μ = 75.01, σ = 10.85), with the overwhelming majority of participants (70.11%, n = 386/550) showing high digital vaccine literacy (score ≥ 20/28). displays the distribution of the DVL scores as well as the results of each individual response; most evident is the strong trust in vaccine information provided by governmental websites (94.00%, n = 517/550). Notably, however, the DVL scale showed poor internal consistency (Cronbach's α = 0.53; 95% CI: 0.47–0.59).

General vaccination attitudes

presents the results regarding participants' general vaccination attitudes. 48.36% (n = 266/550) of parents had no concerns regarding childhood vaccines, while 31.64% (n = 174/550) were concerned about possible pain and fever post-vaccination. No single concern dominated: 11.27% (n = 62/550) worried that ingredients in vaccines were unsafe, 10.00% (n = 55/550) believed that vaccines may cause learning disabilities such as autism, and the remaining concerns were infrequent. Overall, 71.82% (n = 395/550) did not delay any vaccines (for reasons other than allergy) and 77.09% (n = 424/550) did not refuse any vaccines (for reasons other than allergy). As with concerns, no predominant reason emerged for delaying or refusing vaccines. Concerns about side-effects were reported by 5.29% (n = 29/550) of those delaying vaccines and 6.18% (n = 34/550) of those refusing vaccines. Participants were also asked about the importance of vaccination against a number of well-known diseases.
Results were overwhelmingly positive for measles, meningitis, and pertussis, with more than 95% regarding vaccination against those illnesses as fairly/very important. Perceived vaccine importance dropped for rotavirus, with 80.18% (n = 441/550) finding it to be fairly/very important. Finally, COVID-19 and influenza vaccines were viewed as less important by the participants: 31.82% (n = 175/550) and 35.81% (n = 197/550) found the vaccines to be not at all/not very important, respectively.

Vaccine hesitancy

Two scales were used to evaluate vaccine hesitancy: the WHO's Vaccine Hesitancy Scale (VHS) and the Parental Attitudes towards Childhood Vaccines (PACV) scale, shown in Figs and , respectively. VHS scores showed low hesitancy overall (μ = 21.35%, σ = 14.92%). The scale measures two underlying factors: lack of trust and perceived vaccine risk. highlights how participants overwhelmingly scored high on trust (less than 3% displayed hesitant attitudes across items #1–4, 6–8) but displayed hesitancy when it came to perceived risk (around two-fifths had hesitant attitudes for items #7, 9, 10). Unlike the PACV, there is no well-established cut-off for VHS scores to categorize parents into vaccine hesitant and non-hesitant. PACV scores tended to be higher than VHS scores (μ = 27.59%, σ = 18.52%), with only 14.00% (n = 77/550) meeting the traditional 50% cut-off for being hesitant (and only 67.53% (n = 52/77) of those recognizing themselves as vaccine-hesitant). The highest level of vaccine trust was seen in PACV item #2, with 86.55% showing non-hesitant attitudes towards refusing vaccines. Additionally, parents had strong positive attitudes towards their child/children's doctor, with nearly three-quarters finding them trustworthy and reporting being able to discuss all their concerns with the doctors. The PACV was used to determine the overall vaccine hesitancy status of the parents. Only four variables were significant at the bivariate level (chi-squared tests): using a general practitioner/paediatrician as a knowledge source, digital vaccine literacy, perceived children's vaccine knowledge, and nationality. These four determinants were fed into a multivariate logistic regression model, the results of which can be seen in . Overall, having a general practitioner/paediatrician knowledge source (OR: 0.288, 95% CI: 0.167–0.495), adequate (OR: 0.536, 95% CI: 0.289–0.995) or good/excellent (OR: 0.374, 95% CI: 0.193–0.724) perceived children's vaccine knowledge, being an Other (non-Emirati) Arab (OR: 0.318, 95% CI: 0.145–0.695), and having high digital vaccine literacy (OR: 0.436, 95% CI: 0.257–0.739) were associated with lower vaccine hesitancy status.
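For reference, odds ratios and 95% confidence intervals such as those reported above are typically obtained from a fitted logistic regression by exponentiating the coefficients and the bounds of their confidence intervals. The sketch below demonstrates the mechanics with statsmodels on synthetic data; the variable names and effect sizes are assumptions and do not reproduce the study's results.

```python
# How odds ratios and 95% CIs are derived from a logistic regression fit:
# exponentiate the coefficients and their confidence-interval bounds.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 550
X = pd.DataFrame({
    "gp_knowledge_source": rng.integers(0, 2, n),   # hypothetical binary covariates
    "high_dvl": rng.integers(0, 2, n),
})
logits = -1.0 - 1.2 * X["gp_knowledge_source"] - 0.8 * X["high_dvl"]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))      # synthetic hesitancy outcome

model = sm.Logit(y, sm.add_constant(X.astype(float))).fit(disp=0)
table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
table.columns = ["OR", "2.5%", "97.5%"]
print(table.round(3))
```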
Demographic influences on vaccine hesitancy

This study found 14% of the participants to be vaccine hesitant, the majority of whom self-reported as such. Moreover, even though the majority reported high trust in the local healthcare systems and physicians, parents still tended to have non-negligible concerns and worry regarding vaccine side-effects, ingredients, and effectiveness. Classical and well-established vaccines were viewed favourably by nearly everyone, while newer vaccines were subject to more scepticism. The majority of participants were also found to have high digital vaccine literacy, even though less than half rated their knowledge of childhood vaccines as good/excellent. Finally, the study identified four factors associated with lower rates of vaccine hesitancy: using a general practitioner/paediatrician as a knowledge source, high digital vaccine literacy, high perceived children's vaccine knowledge, and being a non-Emirati Arab. Surprisingly, a number of demographics that were hypothesized to influence vaccine hesitancy were found to be unassociated with the measure; yet, as expected, more knowledgeable parents with higher digital vaccine literacy did have more positive parental attitudes toward childhood vaccination.

Vaccines are essential. Between 2010 and 2018, the measles vaccine alone prevented an estimated 23 million deaths. However, progress in some countries has stalled or even reversed, undermining past achievements and increasing the risk of outbreaks . Given how large a threat vaccine hesitancy has become, evaluating and monitoring hesitancy is essential, as spikes in vaccine hesitancy have become much more extreme, in part due to the decline in the public's trust in experts, the rise of political polarisation, belief-based extremism, and a preference for alternative health practices . The Gulf Cooperation Council (GCC) countries, including the UAE, are no exception, and it will be vitally important to understand the population's concerns in order to tackle vaccine hesitancy, as no single approach can address and eliminate hesitancy . The results of this study show no clear factor explaining vaccine delaying or refusal, again highlighting the multifaceted nature of vaccine hesitancy and the need for tailored and targeted approaches. This study addresses a gap highlighted by a recent systematic review, which noted that very little research has been conducted on vaccine hesitancy in the region. The review found that most published work focused on Saudi Arabia and that vaccine hesitancy was prevalent in the GCC, both among the public and healthcare workers . Rabei et al. had initially reported widespread positive attitudes towards vaccines in the UAE coupled with poor knowledge, culminating in a hesitancy rate of 10% . More recently, Alsuwaidi et al. found 12% (n = 36/300) of participants to be vaccine hesitant (according to the PACV). Interestingly, at the bivariate level, they found that being male and being divorced were associated with increased vaccine hesitancy. Similar to this study, age and educational level had no significant association with hesitancy status . Additionally, and similar to a Saudi study, nearly 95% of participants agreed/strongly agreed that vaccines are important .
Yet, parents in general tend to be most concerned about the negative consequences of vaccines, specifically about vaccines causing illnesses, children receiving too many vaccines, and/or vaccine ingredients being harmful .

Trust in healthcare

This study also showed a high level of trust in the local healthcare system and national immunisation program. This is not surprising, given that a high level of trust in the UAE's healthcare system and physicians has been consistently shown . Healthcare providers have been and continue to be the most trusted source about vaccines for the public . Knowledge is an important protective factor against vaccine hesitancy. Voo et al. showed that parents with good vaccine knowledge and awareness were less hesitant, and their children were more likely to have their immunizations up to date . In fact, a technical report by the European Centre for Disease Prevention and Control (ECDC) found that 27 out of 40 interventions to tackle vaccine hesitancy used dialogue and communication to enhance vaccine trust and health literacy among parents . Moving forward, however, healthcare workers need to adopt a more proactive role in discussing vaccines and their importance, effectiveness, and safety with patients .

The role of social media

The relationship between social media and vaccine hesitancy is a novel area of research; a scoping review by the ECDC highlighted several ethical issues underpinning this topic as well as the lack of a standardised method for monitoring and analysing current social media vaccine information . The Centre for Countering Digital Hate (CCDH) reports that anti-vaccination campaigners' social media accounts have over 62 million followers across the various platforms, with annual revenues of $36 million, and are worth over $1.1 billion in ad revenue to Big Tech . Additionally, the role of social media in fueling vaccine hesitancy grew substantially during the COVID-19 pandemic. Online social media is rampant with misinformation regarding vaccines, and the situation is expected only to get worse with the rise of artificial intelligence systems that can rapidly disseminate false information online that social media bots can amplify . In a systematic review that aimed to identify health misinformation on various social media platforms, vaccines were the most common topic, accounting for nearly two-fifths of all misinformation .
Finally, it is important to discuss some of the limitations of this study. The results depended on what the parents reported, without independent verification. Moreover, convenience and snowball sampling were used (leading to a female-heavy sample). Additionally, several biases, such as social desirability, recall, and response bias, may affect the results. Additional variables, such as religious views, could also be explored in relation to vaccine hesitancy. However, the survey was distributed across all Emirates and was completely anonymous to ensure authentic and genuine responses. Moreover, the results are in line with what has been reported both regionally and globally.
Immunisation, one of public health's greatest success stories, is being threatened by the rise of vaccine hesitancy, both globally and regionally. In the UAE, 14% of the participants were found to be vaccine hesitant. This highlights that, even given the massive investments the UAE government has already poured into tackling hesitancy, a non-negligible proportion of hesitant parents remains. However, the majority of participants reported high trust in vaccines as well as in the local healthcare systems and physicians, all while having non-negligible concerns and worries regarding vaccine side-effects, ingredients, and effectiveness. This necessitates adopting a public health campaign that continually attempts to address and minimize vaccine hesitancy in the country as well as build on the solid trust in the local health systems. Social media has been a massive force in amplifying and promoting vaccine misinformation. Vaccine hesitancy can be tackled but will require tailored and innovative solutions specific to the UAE population, and healthcare workers need to adopt a more proactive role in discussing vaccines and their importance.
S1 File. Raw data. (PDF)
S2 File. English and Arabic questionnaires. (XLSX)
Private detection of relatives in forensic genomics using homomorphic encryption

The identification of unknown individuals using their DNA sample can be done either directly, through DNA matching with target candidates, or indirectly, via familial tracing . Typically, in the absence of direct evidence for DNA matching, the latter method is used to approach the identification of the DNA sample. DNA matching is particularly relevant for finding unknown perpetrators of crime who are unidentifiable with standard DNA profiling. The method is known as Forensic Genetic Genealogy (FGG) . A typical application is a forensic search on DNA collected from a crime scene, where the DNA helps law enforcement find close relatives of an unknown suspect in a genetic database. Even if the unknown suspect never had his/her DNA collected, law enforcement will be able to close in on his/her family circle and from there orient an investigation in the right direction. FGG shall not be confused with Familial DNA Searching (FDS). In FDS, collected DNA evidence is compared against the FBI's CODIS database, which contains DNA profiles of known convicted offenders. This process aims to find partial matches that closely resemble the target DNA profile, primarily focusing on immediate relatives like parents and children . Conversely, FGG is employed when FDS is unsuccessful, utilizing non-criminal genetic genealogy databases. FDS and FGG also differ in their data types: FDS relies on Short Tandem Repeat (STR) DNA typing, while FGG uses Single Nucleotide Polymorphism (SNP) high-density markers. Consequently, their DNA matching algorithms use different analysis approaches: SNP array DNA matching algorithms commonly rely on probabilistic and heuristic methods, while STR DNA profiling algorithms compare the number of shared alleles at specific loci to determine genetic matches. In summary, FGG leverages genealogy and SNP analysis, whereas FDS focuses on CODIS and STR markers. As of 2018, several non-criminal genealogy databases could be used by law enforcement to resolve violent crimes and missing person cases . In this process, law enforcement uploads the raw DNA evidence to different genetic genealogy databases; matches with several distant relatives are found and used to build family trees to trace back to the identity of the DNA sample source. There could be many more genetic genealogy databases to search for matches of relatives. For this reason, the search can be time consuming and unduly computationally expensive, especially if no matches are found after comparing a query DNA sample with all entries in a database. A method that could perform a swift screening across all the different databases would alleviate this computational issue. This is the subject of this work. In addition, because the unknown DNA sample leaves the custody of law enforcement, this practice could arguably violate the principles of privacy for handling and processing genetic data, with potentially unpredictable negative consequences for both investigation integrity and unwanted discoveries for the related matches. Enforcing genetic privacy could also bring positive gains, such as breaking geographical barriers concerning access to genetic databases spread worldwide, which are protected by international privacy laws and regulations.
Its value goes beyond prudent accessibility of genetic databases; more generally, it extends to the proactive prevention of ethical and privacy issues involving the general public, which can be sidelined or overlooked and cause wrongful convictions . The yearly iDASH competition proposes the challenge of protecting genetic privacy using Homomorphic Encryption (HE). The goal of the 2023 edition of iDASH is determining whether a DNA sample (query) shares any genetic information with genomes comprising a target genetic genealogy database. Aiming to address the iDASH 2023 Track 1 challenge, i.e., "Secure Relative Detection in (Forensic) Databases", we devised three methods that utilize HE-based approaches to confirm the presence of a person's relatives' genetic data within a genomic database. During this procedure, the query site initiates the request, and the database site provides the response. Both sites would like to keep their data confidential. The output of the method is a score that indicates, for each query, how likely it is that relatives of the query are present in the genomic database. Our methods enable a secure search for the target individual without compromising the privacy of the query individual or the genomic database. They also make consent management more modular, as individuals can consent to secure searches but not to searches in clear text. This is particularly relevant in the forensic domain, where using genetic genealogy databases (e.g., GEDmatch) to rapidly identify suspects and their relatives raises complex ethical issues, such as using genomic data without consent for forensic purposes. In the use case we consider, there are three entities (see also Fig. ):

1. A law enforcement querying entity (QE) that holds the genome of a target suspect individual collected at a crime scene.
2. A database owner (DE), who manages a genetic genealogy database.
3. A non-colluding trusted computing entity (CE) that performs genome detection using the encrypted data from QE and DE.

QE wants to find out if the genome of the target individual (or family relatives) is in the database. Neither QE nor DE is allowed to reveal the genomic information to the other party. The main challenge is to perform this search in a secure manner using an HE-based query system such that information exchanged between the entities remains encrypted at all times. The use case involves two steps:

1. One-to-many DNA comparison: a way to compare a genetic profile to all other database members. In this case, a single real-valued score is computed to determine how likely it is that a query individual has a familial relationship with any other individual in the database. This can also be accomplished by directly comparing the query to every member in the database and then selecting the maximum of all the real-valued comparison scores, which directly pinpoints which member is most likely to be related to the queried genome (see the plaintext sketch below).
2. One-to-one autosomal DNA comparison, allowing one to confirm how much DNA an individual shares with someone before contacting them.

Contributions

The solution to step 1, the focus of this work, can serve as a filtering system for forensic analysis of DNA samples collected at crime scenes. In this case, the problem does not require that the relative in the database be identified exactly; instead, it requires determining whether there exists at least one relative of the query individual in the database with a certain probability. There could be many databases to search from.
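To make the screening idea in step 1 concrete, the following plaintext sketch compares a query genome against every database member using a pairwise relatedness score, takes the maximum, and returns a yes/no decision. It is a conceptual illustration only: in the actual protocol these computations are carried out on encrypted data, the placeholder score and threshold are assumptions, and the kinship estimator used in practice is introduced later in the paper.

```python
# Plaintext illustration of the one-to-many screening (step 1): compare the query
# against every database member, take the maximum score, and decide whether any
# relative is likely present. Score function and threshold are placeholders.
import numpy as np

def pairwise_score(query: np.ndarray, member: np.ndarray) -> float:
    """Placeholder for any pairwise relatedness score, e.g. the kinship estimator of Eq. (1)."""
    return float(np.corrcoef(query, member)[0, 1])

def has_relative(query: np.ndarray, database: np.ndarray, threshold: float = 0.2) -> bool:
    """Single yes/no answer for the whole database, without identifying the match."""
    scores = np.array([pairwise_score(query, row) for row in database])
    return bool(scores.max() >= threshold)

rng = np.random.default_rng(0)
database = rng.integers(0, 3, size=(100, 5000)).astype(float)  # genotypes in {0, 1, 2}
query = database[7].copy()                                     # a relative of member 7
print(has_relative(query, database))                           # -> True
```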
Since comparing a suspect with each individual in every database is computationally expensive, it pays off to reduce the number of databases to search from and to reduce the number of suspect candidates for each target database. In this regard, the first step would be to determine if a suspect has any relative in the target database. The key contributions of this work are three-fold. Firstly, we propose an HE-friendly mathematical simplification of the equation proposed in to detect contributing trace amounts of DNA to highly complex mixtures using homomorphically encrypted high-density SNP genotypes. Secondly, we introduce two novel algorithms to predict evaluation scores rating whether a DNA sample query shares genetic data with any other DNA sample in a genomic database; one of the two is heuristically inspired by z-test hypothesis testing and assumes no prior knowledge of the reference populations, while the other uses a machine learning approach with a linear regression model trained on a known reference population mixture inherited from the genealogy database. Finally, we demonstrate through several experiments that our methods produce highly accurate predictions in less than 37.5 milliseconds per query using encrypted genetic data in a privacy-preserving approach with provable 128-bit security.

The remainder of this paper is organized as follows. We first define the problem scope in the context of FGG and then discuss relevant related work. Next, we present the methods in detail, including data analysis and design considerations that address the problem statement effectively in aspects such as security, computing, and resource optimizations. We then report detailed performance results, including a description of the characteristics of the challenge, data, evaluation criteria, and computing resource constraints. Finally, we conclude with a discussion.

Forensic Genetic Genealogy (FGG)

Forensic Genetic Genealogy (FGG) is an investigative tool that combines traditional genealogy research with advanced SNP DNA analysis to solve crimes and identify unknown individuals. It consists of the following steps:

1. DNA sample collection: a DNA sample is collected from a crime scene or an unidentified individual.
2. SNP testing: the process in which DNA is analyzed to identify the SNP variations, which are then compiled into an array format (the input data of this paper).
3. Profile upload: the genetic profile acquired from step 2, the SNP array (or genome), is then uploaded to a public genetic genealogy database, such as GEDmatch or FamilyTreeDNA.
4. Database matching: matching algorithms are used to compare the uploaded profile with other genetic profiles in the database to identify potential relatives by measuring the amount of shared DNA segments. The scope of our work and the iDASH 2023 competition intersects with this step, since it concerns identifying whether there are any potential relatives in the database .
5. Relationship estimation: an algorithm takes two genomes and estimates the degree of relatedness between them, which can range from close relatives (e.g., parents, siblings) to distant cousins. The methods proposed here can be used to perform relationship estimation, but this is out of the scope of this work.
6. Genealogical research: genealogists use the matches found in step 5 to reconstruct family trees, tracking common ancestors and descendants to reduce the number of potential suspects.
7. Identifying the suspect: once a potential match is identified, law enforcement collects a DNA sample from the suspect to confirm the match through traditional forensic methods.

Kinship estimation

The kinship score determines the degree of relatedness between two individuals based on their genetic data (see ). The database matching step, described in step 4 above, relies on predicting the kinship between the uploaded genetic profile and the genetic profiles in the database. It is a measure of the probability that a randomly chosen allele from one individual is identical by descent (IBD) to a randomly chosen allele from another individual. It can be mathematically described (see ) as

$$\phi_{ij}=\sum_{l=1}^{L}\frac{(x_{il}-2p_l)(x_{jl}-2p_l)}{2p_l(1-p_l)}, \qquad (1)$$

where $L$ is the number of loci (genetic markers or SNP variants), $x_{il}$ and $x_{jl}$ are the SNP variants of individuals $i$ and $j$ at locus $l$, and $p_l$ is the allele frequency at locus $l$.

Scope of this work in the FGG context

Step 4 is the subject matter of this work and of the iDASH competition task. It concerns kinship prediction. The input data of this work comes from step 2: a genome sequence formed of SNP variants represented with elements in the set $\{0, 1, 2\}$. This genome encoding is a sequence of bi-allelic SNP data. In the GDS (Genomic Data Structure) data format, which is derived from a VCF (Variant Call Format) data file, the genotype encodings 2, 1, and 0 refer, respectively, to homozygous for the reference allele (both alleles match the reference allele), heterozygous (one allele matches the reference allele and the other matches the alternate allele), and homozygous alternate (both alleles match the alternate allele). The encoding basically counts how many alleles match the reference allele at a specific position (gene locus) of the reference genome (see a similar explanation in ). For simplification, the database matching task in step 4 is a search problem cast as a decision problem. The matching task is reduced to finding out whether or not the uploaded profile matches any of the profiles in the database, while not requiring that any potential matches be exactly identified or retrieved. This means that the uploaded profile may not need to be compared with all, or any, of the database profiles to deliver the answer. In this case, step 4 of the FGG task can be split into two parts. The first part regards screening each database to find out whether there exists any potential match. All that is needed is to identify the nature of the relationship between the uploaded profile and the genetic database, i.e. answering the question "Is there any relative of the query individual in the probed genetic database?". Once the databases that contain relatives are identified, the second part starts, which consists of searching for the actual candidate matches in each of the databases where the uploaded profile was screened and found to share DNA segments with other database profiles. We concentrate our efforts on part 1 of step 4 as just described, since it was the required task in the iDASH competition. Steps 1, 2, 3, 5, 6, and 7 fall outside the scope of this work. We simplify the problem to obtain the kinship score between the individual query and the genomic database.
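As a plaintext reference point, the following sketch implements the kinship estimator of Eq. (1) for a query genome against every member of a genotype database. The allele frequencies, the clipping guard against division by zero, and the synthetic data are illustrative assumptions; in our methods this computation is carried out under homomorphic encryption.

```python
# Plaintext, vectorized implementation of the kinship estimator in Eq. (1).
# Genotypes take values in {0, 1, 2}; p holds per-locus allele frequencies.
import numpy as np

def kinship_scores(query: np.ndarray, database: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Return phi_{ij} between the query genome i and every database genome j."""
    p = np.clip(p, 0.01, 0.99)                  # illustrative guard against division by zero
    denom = 2.0 * p * (1.0 - p)                 # 2 p_l (1 - p_l), one value per locus
    centered_q = query - 2.0 * p                # x_{il} - 2 p_l
    centered_db = database - 2.0 * p            # x_{jl} - 2 p_l, broadcast over database rows
    return (centered_db * (centered_q / denom)).sum(axis=1)

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=5000)          # synthetic allele frequencies
db = rng.binomial(2, p, size=(100, 5000)).astype(float)
query = db[3].copy()
phi = kinship_scores(query, db, p)
print(int(np.argmax(phi)))                      # typically 3, the self-match
```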
We use homomorphic encryption to devise privacy-preserving methods that perform the relatedness matching while securing the computation with genetic data. The output of our methods can also be used as kinship predictions between pairs of genetic profiles and then used to estimate relationship types (step 5), but this is not the subject of study here. We use the predicted kinship scores to estimate the relationship of the uploaded profile directly with the genetic genealogy database. Other privacy-preserving genetic relatedness testing methods have been proposed and are discussed in .

Related work

Current security and privacy protection practices in genomic data sharing

Genomic data sharing is particularly useful for precision medicine . There are a myriad of unified genomic database knowledge projects (see for a list) that provide researchers with genetic data sharing and analysis capabilities for this purpose. Along with that, concerns regarding genomic data security and privacy are raised . These projects implement different strategies to offer security and privacy protection guarantees, for instance, access control through administrative processes, laws and regulations, data anonymization, and encryption.

Administrative processes

To obtain access to controlled data from the NCI (National Cancer Institute) Genomic Data Commons (GDC) knowledge database, it is required to file a dbGaP (Database of Genotypes and Phenotypes) authorization request that will be reviewed and approved or disapproved by the NIH (National Institutes of Health) Data Access Committee (DAC) on the basis of whether or not the usage will conform to the specification determined by the NIH Genomic Data Sharing Policy (see more details at ). Once access is granted, the recipient is entrusted with and accountable for the security, confidentiality, integrity and availability of the data, including when utilizing cloud computing services. Another example is the European Genome-phenome Archive (EGA)'s data access , which operates in a similar manner, i.e. through Data Access Agreement (DAA) and Data Processing Agreement (DPA) documents, but enhances data access security and confidentiality via authenticated encryption of data files using Crypt4GH . Many other public genomic datasets exist and implement similar security and privacy protection practices, as reviewed by .

Employing administrative processes only is not suitable for privacy-preserving FGG. This implementation of access control to sensitive data depends on the integrity and goodwill of the authorized individual to self-report any agreement violations and data breaches. Once data access is granted, there is a lack of oversight to enforce policies related to genomic privacy, re-identification, and data misuse.

Data anonymization

Data anonymization involves obscuring personal identifiers in genetic data to protect individuals' privacy. It can also come in the form of aggregated data that shows trends and patterns without revealing specific identities. Data masking is also a technique employed to alter sensitive parts of the data to prevent identification .

Employing data anonymization only is not suitable for privacy-preserving FGG. Genetic data is unique and inherently identifiable. Even when anonymized, it can often be re-identified through genealogical research and cross-referencing with other data sources. Anonymization of data also brings serious limitations due to the uniqueness of every individual's genome, which can be easily subject to proven re-identification attacks (see ).
Laws and regulations

Laws and regulations play a crucial role in protecting the privacy of genetic data and medical information. They legally protect individuals' medical records and other PII data, including genetic data, by setting standards for the use and disclosure of such information by covered entities. Their security rules depend on appropriate administrative and technical safeguards to ensure the confidentiality, integrity and security of protected health information. They set the foundation of genetic privacy but carry limitations that pose increased risk to individuals' privacy.

Employing laws and regulations only is not suitable for privacy-preserving FGG. There is a lack of standardized regulations and ethical guidelines governing the use of genetic data in forensic investigations. Legal acts such as HIPAA and GINA seem inadequate and leave gaps in protection, since they focus on who holds the data rather than on the data itself, because they only apply to covered entities. For example, they do not regulate consumer-generated medical and health information or recreational genetic sequencing generated by commercial entities such as 23andMe and Ancestry.com. Therefore, we can argue that these commonly practiced solutions fall short in securing genomic data privacy. In all the aforementioned genomic data sharing database cases, privacy protection is traded for confidentiality agreements, which do not offer the same layer of protection to sensitive data, since their compliance is subject to the actions of fallible human beings. The adequate solution shall enforce privacy protection policies on the data regardless of the creator or of who has access to it. Cryptographic techniques appear to be the most suitable to address this (e.g. ), where the most advanced of them allow making inferences and analytics while the data is encrypted, never revealing the contents to the user.

Encryption in genomic data sharing

Privacy risks associated with accessing and storing genetic data can be mitigated by enabling confidentiality through cryptography. Whether at rest or in transit, genetic data can be guarded from unwarranted access using state-of-the-art encryption schemes (e.g. ). This way, only authorized personnel holding the decryption key can reveal the contents of the encrypted genetic data. Crypt4GH is an industry-standard genomic data file format designed to keep genomic data secure at rest, in transit, and through random access, thus allowing secure genomic data sharing between separate parties. A solution, the so-called SECRAM data format, has been proposed for secure storage and retrieval of encrypted and compressed aligned genomic data. To perform data analytics with machine learning algorithms in such cases, the data needs to be decrypted, at which point it becomes vulnerable to cybersecurity attacks. This is the major protection limitation of conventional cryptographic encryption schemes, i.e. requiring decryption before computing. On the other hand, encrypting genomic data with Homomorphic Encryption (HE) schemes allows computation over encrypted data without ever decrypting it, thus not revealing any sensitive content since the data remains encrypted, ensuring true private computation. This additional layer of security can potentially help reduce the time and cost spent on reviewing and approving data accesses.
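To illustrate the conventional (non-homomorphic) protection of genomic files at rest and in transit discussed above, the sketch below combines X25519 key agreement with ChaCha20-Poly1305 authenticated encryption using the Python cryptography package. It mirrors the primitives behind Crypt4GH but is not the Crypt4GH container format; the header layout and key handling are deliberately simplified assumptions.

```python
# Conceptual sketch of hybrid (envelope) encryption for a genomic file, in the
# spirit of Crypt4GH: X25519 key agreement + ChaCha20-Poly1305 AEAD.
# NOT the Crypt4GH container format; simplified for illustration only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Recipient (e.g., the database owner) publishes a long-term public key
recipient_sk = X25519PrivateKey.generate()
recipient_pk = recipient_sk.public_key()

# Sender uses an ephemeral key pair to derive a shared file-encryption key
ephemeral_sk = X25519PrivateKey.generate()
shared_secret = ephemeral_sk.exchange(recipient_pk)
file_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"genomic-file-key").derive(shared_secret)

# Authenticated encryption of a toy genomic payload
aead = ChaCha20Poly1305(file_key)
nonce = os.urandom(12)
payload = b"rs12345\t1\nrs67890\t2\n"          # e.g., SNP id and {0,1,2} genotype
ciphertext = aead.encrypt(nonce, payload, b"vcf-header")

# Recipient recomputes the key from the ephemeral public key and decrypts
recipient_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                     info=b"genomic-file-key").derive(
    recipient_sk.exchange(ephemeral_sk.public_key()))
assert ChaCha20Poly1305(recipient_key).decrypt(nonce, ciphertext, b"vcf-header") == payload
```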
When computationally demanding data analysis is desired, more often than not, processing needs to occur at (public) untrusted cloud service providers due to limited local computing resources and/or access to a restricted number of analytic model IPs. In this context, modern cryptography introduces homomorphic encryption methods (e.g., BGV , and CKKS ), which bring the capability of protecting data privacy during computation in a semi-honest security model.

Genetic privacy protection with homomorphic encryption

Fully Homomorphic Encryption (FHE) allows computation of arbitrary functions on encrypted data without decryption . This means the data is also protected during computation (processing), since it remains encrypted. Its security guarantee stems from the hardness of Ring Learning with Errors (RLWE) assumptions . There are two aspects to this assumption, namely decisional and computational. The decisional RLWE assumption states that it is infeasible to distinguish pairs $(a, b)$ picked at random from a distribution over a ring $\mathcal{R}_Q^2$ from pairs constructed as $(a, a\cdot s+e)$ with $a$ sampled from $\mathcal{R}_Q$, where $e$ and $s$ are randomly sampled from a noise distribution $\mathcal{X}$ over the ring $\mathcal{R}$. The computational assumption states that it is hard to discover the secret key $s$ from many different samples $(a, a\cdot s+e)$. This homomorphic encryption construct is built on a polynomial ring $\mathcal{R}_Q=\mathbb{Z}_Q[x]/(X^N+1)$, where $\mathbb{Z}_Q$ denotes the ring of integers modulo $Q$ that populate the polynomial coefficients, $X^N+1$ is the $M$-th cyclotomic polynomial $\phi_M(x)$, and $N=M/2$. The choice of $N$, where $N$ is typically a power-of-2 integer, is determined by the value of the coefficient modulus $Q$ and the security parameter $\lambda$, such that $M=M(\lambda,Q)$ is a function of $\lambda$ and $Q$. Various homomorphic encryption schemes built on RLWE constructs that work naturally with integers have emerged in the literature (e.g. ). Although the genetic data in this work takes values in the set $\mathcal{G}=\{0,1,2\}$, the expected output and the model parameters used to perform the data analysis and predictions operate on numbers in floating-point representation. This is especially true when training machine learning models to make predictions from genotypes. For this reason, it is natural to opt for a homomorphic encryption scheme intrinsically designed to accommodate floating-point arithmetic. Cheon et al. put forward the first homomorphic encryption scheme for arithmetic of approximate numbers, also commonly known as the CKKS (short for Cheon-Kim-Kim-Song) scheme, which is most suitable for operating on real numbers. The CKKS scheme is a levelled homomorphic encryption (LHE) public-key encryption scheme based on the RLWE problem . It allows computations on encrypted complex numbers, and thus on real numbers too. The ability of the CKKS method to handle floating-point numbers, approximated with fixed-point representation, makes it particularly attractive for confidential machine learning (ML) and data analysis. In the following, we briefly describe the CKKS scheme that we will use throughout this paper.
The same noise $e$ that is added during encryption to strengthen security also limits the number of consecutive multiplications that can be performed: the noise grows as a consequence of each multiplication and may eventually cause decryption errors. CKKS controls this error-causing noise growth with the concepts of levels and rescaling. Initially, a fresh CKKS ciphertext $ct$ is assumed to encrypt numbers with a certain initial precision, masked by the added noise of smaller precision. The initial noise budget of a CKKS ciphertext (see Fig. ) is determined by the parameter $L$ (the multiplicative depth). The integer $L$ corresponds to the largest ciphertext modulus level permitted by the security parameter $\lambda$. Let the ring dimension $N$ be a power of 2, the modulus $Q = q_L = \Delta^L$, and $q_l := \Delta^l$ for $1 \le l \le L$, for some integer scaling factor $\Delta = 2^p$, where $p$ is the number of bits of the desired (initial) precision.

Before encryption, the message needs to be encoded into a plaintext space. A genetic data vector $q \in \mathcal{G}^n$ is seen as a single CKKS message $z \in \mathbb{C}^{N/2}$, assuming $n \le N/2$, mapped to a plaintext object $\vec{m} \in \mathcal{R}$. This plaintext space supports element-wise vector-vector addition, subtraction, and Hadamard multiplication. For the encoding and decoding procedures, CKKS relies on a field isomorphism called the canonical embedding, $\tau : \mathbb{R}[x]/(X^N+1) \to \mathbb{C}^{N/2}$. Hence, we have

(2) $\mathrm{Encode}(z, \Delta) = \lfloor \Delta \cdot \tau^{-1}(z) \rceil$

(3) $\mathrm{Decode}(\vec{m}, \Delta) = \tau\left(\tfrac{1}{\Delta} \cdot \vec{m}\right)$

Equipped with the aforementioned concepts, we now define the following CKKS operators:

$\mathrm{KeyGen}(\mathcal{R}_{q_L}, \chi_{key}, \chi_{err}, 1^{\lambda})$: Sample $s \leftarrow \chi_{key}$ and set the secret key $sk = (1, s)$. Sample $a \leftarrow U(\mathcal{R}_{q_L})$ (where $U$ denotes the uniform distribution) and $e \leftarrow \chi_{err}$. Set the public key $pk = (b, a) \in \mathcal{R}_{q_L}^2$, where $b = [-a \cdot s + e]_{q_L}$.

$\mathrm{Enc}_{pk}(\vec{m})$: Given a plaintext message $\vec{m} \in \mathcal{R}$, sample $v \leftarrow \chi_{enc}$ and $e_0, e_1 \leftarrow \chi_{err}$. Output the ciphertext $ct = [v \cdot pk + (\vec{m} + e_0, e_1)]_{q_L}$.

$\mathrm{Dec}_{sk}(ct)$: Given a ciphertext $ct \in \mathcal{R}_{q_L}^2$, where $ct$ as an encryption of $\vec{m}$ satisfies $\langle ct, sk \rangle = \vec{m} + e \pmod{q_L}$ for some small $e$, the decryption output is $\vec{m}' = \langle ct, sk \rangle \pmod{q_L}$, where $\vec{m}'$ is slightly different from the originally encoded message $\vec{m}$; indeed, it is a good approximation whenever $\|e\|_\infty \ll \|\vec{m}\|_\infty$ holds.
$\mathrm{Add/Sub}(ct_1, ct_2)$: Given two ciphertexts $ct_1, ct_2$, output the ciphertext $ct_{add}/ct_{sub} = [ct_1 \pm ct_2]_{q_L}$ encrypting a plaintext vector $\vec{m}_1 \pm \vec{m}_2$.

$\mathrm{Mult}_{evk}(ct_1, ct_2)$: Given two ciphertexts $ct_1, ct_2 \in \mathcal{R}_{q_L}^2$, output a level-reduced ciphertext $ct_{mult} \in \mathcal{R}_{q_{L-1}}^2$ encrypting a plaintext vector $\vec{m}_1 \odot \vec{m}_2$.

$\mathrm{Relin}_{evk}(ct)$: When two ciphertexts $ct_1$ and $ct_2$ are multiplied, the result is a larger ciphertext $ct_{Mult} = \mathrm{Mult}(ct_1, ct_2) = (d_0, d_1, d_2)$, where $d_0$, $d_1$, and $d_2$ are the components of the resulting ciphertext. To reduce the ciphertext back to the original size, a relinearization key $evk$ is used to transform the ciphertext from the three-component form back to a two-component form, such that $ct_{relin} = \mathrm{Relin}_{evk}((d_0, d_1, d_2)) = (d_0', d_1')$, where the result of applying $\mathrm{Relin}_{evk}$ is defined by the expression $(d_0 + \sum_{i=1}^{2} evk_i \cdot d_i,\ d_1 + \sum_{i=1}^{2} evk_{i+2} \cdot d_i)$.

$\mathrm{Rotate}_{rk}(ct, r)$: This operator is also called an automorphism. For a ciphertext $ct$ encrypting a plaintext vector $\vec{m} = (m_1, \ldots, m_n)$, output a ciphertext $ct'$ encrypting a plaintext vector $\vec{m}' = (m_{r+1}, \ldots, m_n, m_1, \ldots, m_r)$, which is the plaintext vector of $ct$ rotated to the left by $r$ positions.

$\mathrm{Rescale}(ct)$: When two ciphertexts $ct_1$ and $ct_2$ are multiplied, the resulting ciphertext $ct_{Mult} = \mathrm{Mult}_{evk}(ct_1, ct_2)$ has a scale that is the product of the scales of $ct_1$ and $ct_2$, i.e. $\Delta_{Mult} = \Delta_1 \cdot \Delta_2$. Rescaling brings the scale back to a manageable level. It involves dividing the ciphertext by a factor $\Delta$, i.e. $ct_{rs} = \lfloor ct_{Mult} / \Delta \rfloor$.

$\mathrm{ModSwitch}(ct, q')$: Modulus switching in CKKS is used to reduce the modulus of the ciphertext, both to help manage the noise (and plaintext) growth and to match the levels of ciphertexts operating together. To switch to a smaller modulus $q' < q$, the ciphertext components $ct_0$ and $ct_1$ are scaled down and rounded according to $ct' = (\lfloor \tfrac{q'}{q} \cdot ct_0 \rfloor, \lfloor \tfrac{q'}{q} \cdot ct_1 \rfloor) \bmod q'$.

The distributions $\chi_{enc}$ and $\chi_{err}$ denote discrete Gaussian distributions for some fixed standard deviation $\sigma$. The distribution $\chi_{key}$ outputs a polynomial with coefficients in $\{-1, 0, 1\}$. We denote the rounding function by $\lfloor \cdot \rceil$ and the modulo-$q$ operation by $[\cdot]_q$. The encoding technique allows parallel computation over encryption in a Single-Instruction-Multiple-Data (SIMD) way, making it efficient once the computation is amortized over the vector size.
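To make the operators above concrete, the following minimal sketch performs a CKKS round trip on a packed genotype vector. It uses the open-source TenSEAL library purely for illustration; the library choice, the parameter values, and the toy weight vector are our assumptions and not part of the scheme description.

```python
# Minimal CKKS round trip with TenSEAL (illustrative; parameters are toy values).
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,              # ring dimension N
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # modulus chain
)
context.global_scale = 2 ** 40             # scaling factor Delta = 2^40
context.generate_galois_keys()             # rotation keys, needed by dot()

genotypes = [0.0, 1.0, 2.0, 1.0]           # a few genotype slots with values in {0, 1, 2}
weights = [0.5, -0.25, 1.0, 0.75]          # hypothetical per-SNP weights

enc_g = ts.ckks_vector(context, genotypes)  # encode + encrypt (SIMD packing)
enc_w = ts.ckks_vector(context, weights)

enc_sum = enc_g + enc_w                    # slot-wise Add
enc_prod = enc_g * enc_w                   # slot-wise Mult (consumes one level)
enc_dot = enc_g.dot(enc_w)                 # Mult + Rotate + Add -> encrypted inner product

print(enc_prod.decrypt())                  # approx. [0.0, -0.25, 2.0, 0.75]
print(enc_dot.decrypt())                   # approx. [2.5]
```

Each ciphertext-ciphertext multiplication (including the one inside the inner product) consumes one rescaling level, which is why the modulus chain above reserves two intermediate 40-bit primes.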
DNA matching methods

There are two lines of work relevant to our topic: first, database queries on cleartext data that could be adapted to homomorphic encryption, and second, encrypted genomic database queries. Not all popular methods in the unencrypted domain are good candidates to run in the encrypted domain. Depending on the homomorphic encryption scheme, mathematical functions such as max, min, greater-than, less-than and equality tests, as well as algorithmic constructs such as loops and sorts, are not easily implementable on encrypted data. We are looking for methods that enable swift kinship searches for relatives up to the third degree, while observing the aforementioned constraints imposed by the difficulty of transforming a method into a homomorphic encryption arithmetic circuit.

Cleartext protocols for DNA matching

Genetic relatedness or kinship between two individuals can be described as the likelihood that, at a randomly chosen genomic location, the alleles in their genomes are inherited from a common ancestor. This phenomenon is known as Identity-by-Descent (IBD). This concept of relatedness should not be confused with the kinship coefficient and metrics closely connected to other genetic measures, including the inbreeding coefficient and probabilities associated with sharing IBD segments. To identify biological relationships beyond immediate family, the segment approach and extended IBD segments are effective but require high-density markers, typically not available in forensic samples. Forensic samples typically rely on STR (Short Tandem Repeat) DNA typing, the preferred data format of forensic searches in criminal databases (i.e., familial DNA searching) to obtain partial matches with immediate relatives. Finding matches beyond immediate relatives is better suited to the single nucleotide polymorphism (SNP) data format. The main challenge in DNA kinship matching is choosing the right method for the computations. Most methods rely on observed allele sharing, Identity-by-State (IBS), to estimate probabilities of shared ancestry (IBD) or kinship coefficients, and many of these are too complex to run on encrypted data. Methods available for DNA kinship matching up to the third degree (e.g., siblings, half-siblings, or first cousins) differ in complexity, accuracy and latency. Four categories of kinship methods are distinguished in . The first category entails moment estimators such as KING , REAP , plink , GCTA , GRAF and PC-Relate , which use Identity-by-State (IBS) markers and genotype distances to estimate expected kinship statistics. The second category is represented by the maximum-likelihood methods RelateAdmix and ERSA , which use expectation-maximization (EM) to jointly estimate the kinship statistics. The third and fourth families of methods use IBD-matching on phased genotypes (e.g. ) and kinship estimation from low-coverage next-generation sequencing data .
All these methods use one or more of three types of analysis. Identity-by-Descent (IBD) analysis considers shared alleles across the entire genome and provides insight into relatedness at different temporal scales and levels of relatedness. Dou et al. use the mutual information between the relatives' degree of relatedness and a tuple of their kinship coefficients to build a Bayes classifier that predicts first- through sixth-degree relationships. Smith et al. developed IBIS, an IBD detector that locates long regions of allele sharing between unphased individuals. Morimoto et al. use Identity-by-State (IBS) analysis to identify regions of the genome where two individuals share the same alleles; the proportion of the genome that is IBS indicates the level of relatedness. Ramstetter et al. use haplotype sharing analysis to look at shared haplotypes within particular genomic regions to uncover recent common ancestry. Nonetheless, these methods can be too complex to yield the required low latency, since they demand elaborate polynomial approximations of non-linear functions in order to be transformed into a homomorphic encryption arithmetic circuit. Moreover, while the competition challenge is well-suited for search methods that calculate kinship scores between each query and every entry in the database, it can also be re-framed as a decision problem, making it more amenable to resolution through decision algorithms. Specifically, the challenge involves establishing kinship scores that quantify the degree of genetic relatedness between a given query and any sample within a genomic database. Homer et al. suggest an algorithm working on cleartext data that uses clustering of admixed populations. They demonstrate experimentally the identification of the presence of genomic DNA of specific individuals within a series of highly complex genomic mixtures. This is significant for two reasons: first, it brings SNPs back to the forefront for identifying individual trace contributors within a forensic mixture, where STRs used to be the preferred method. Second, we will show (see “ ” section) that this method is low-latency, accurate, and amenable to homomorphic encryption. The choice of method depends on the quality and quantity of genetic data available, as well as on the specific relationships being investigated and the population structure. The fastest methods to compute kinship are IBD methods. These methods, however, are not homomorphic-encryption friendly and may require large computing resources and long latencies when run in the encrypted domain. Table shows the fastest available methods to compute kinship on unencrypted data .
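For intuition, the sketch below computes, in cleartext, one common form of the Homer et al. detection statistic over a toy database. It only illustrates the flavor of the test; the HE-friendly simplification used in this work, as well as the variable names, toy data, and any decision threshold shown here, are illustrative assumptions rather than the exact method of this paper.

```python
# Plaintext sketch of a Homer-style detection statistic (illustrative only).
import numpy as np

def homer_score(query, db, ref_freq):
    """query: genotypes in {0,1,2}, length L; db: (n_samples, L) genotype matrix;
    ref_freq: reference-population allele frequencies p_l in (0, 1)."""
    y = np.asarray(query, dtype=float) / 2.0      # individual's allele dosage in {0, 0.5, 1}
    mix = db.mean(axis=0) / 2.0                   # allele frequency of the database "mixture"
    d = np.abs(y - ref_freq) - np.abs(y - mix)    # per-SNP distance difference D(Y_l)
    # One-sample z-statistic over SNPs: large positive values suggest the query
    # (or a close relative) contributes to the database mixture.
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=1000)                    # toy allele frequencies
db = rng.binomial(2, p, size=(200, 1000)).astype(float)   # toy database of 200 profiles
unrelated = rng.binomial(2, p)                            # query with no relative in db
print(homer_score(unrelated, db, p))                      # expected to be close to 0
```

Under the null hypothesis that no relative of the query is present in the database, the score is approximately standard normal, so a simple one-sided threshold can be used for screening.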
Private queries on encrypted data

Over the last 10 years, a number of papers have demonstrated private queries on encrypted data. Ramstetter et al. suggest a secure biometric authentication method that employs the TFHE fully homomorphic encryption scheme . They match biometric data from a local device against an encrypted biometric template in a remote-server encrypted database. Pradel and Mitchell introduce Private Collection Matching (PCM) problems, in which a client aims to determine whether a collection of sets owned by a database server matches their interests. EdalatNejad et al. propose a string-matching protocol for querying the presence of particular mutations in a genome database. They combine the homomorphic encryption scheme BGV and private set intersection to search for similar string segments. Chen et al. compute private queries on encrypted data in a multi-user setting. Bao et al. compute conjunctive queries on encrypted data. Saha and Koshiba execute comparison queries, while compute range queries on encrypted data. Boneh and Waters compute relatedness scores within the protected confines of the SGX Trusted Execution Environment, a hardware approach. Chen et al. proposed "sketching", worked on "fingerprinting", while implemented a differential privacy scheme. Wang et al. proposed a method to compute relatedness in the encrypted domain using homomorphic encryption while taking admixed populations into account. This projection-based approach utilizes existing reference genotype datasets to estimate admixture rates for each individual and uses these to estimate kinship in admixed populations. Dervishi et al. implement a k-means algorithm on encrypted data using CKKS. This algorithm shows the feasibility of our clustering scheme should we need to implement it fully encrypted, as proposed by .

The solution to step 1, the focus of this work, can serve as a filtering system for forensic analysis of DNA samples collected at crime scenes. In this case, the problem does not require that the relative in the database be identified exactly; instead, it requires determining whether there exists at least one relative of the query individual in the database with a certain probability. There could be many databases to search. Since comparing a suspect with each individual in every database is computationally expensive, it pays off to reduce the number of databases to search and to reduce the number of suspect candidates for each target database. In this regard, the first step is to determine whether a suspect has any relative in the target database.

The key contributions of this work are three-fold. Firstly, we propose an HE-friendly mathematical simplification of the equation proposed in to detect contributing trace amounts of DNA in highly complex mixtures using homomorphically encrypted high-density SNP genotypes. Secondly, we introduce two novel algorithms that predict evaluation scores rating whether a DNA sample query shares genetic data with any other DNA sample in a genomic database; one is heuristically inspired by z-test hypothesis testing and assumes no prior knowledge of the reference populations, while the other uses a machine learning approach with a linear regression model trained on a known reference population mixture inherited from the genealogy database. Finally, we demonstrate through several experiments that our methods deliver highly accurate predictions in less than 37.5 milliseconds per query on encrypted genetic data in a privacy-preserving approach with provable 128-bit security.

As follows, in “ ” section, we define the problem scope in the context of FGG and present a discussion on relevant related work in “ ” section. In “ ” section, we present the methods in detail, including some data analysis and design considerations to address the problem statement effectively in aspects such as security, computing and resource optimizations. We present performance results of the methods in detail in “ ” section, including a description of the characteristics of the challenge, data, evaluation criteria, and computing resource constraints in “ ” section. Finally, we conclude our discussion in the “ ” and “ ” sections.

Forensic Genetic Genealogy (FGG) is an investigative tool that combines traditional genealogy research with advanced SNP DNA analysis to solve crimes and identify unknown individuals.
It consists of the following steps:

1. DNA sample collection: a DNA sample is collected from a crime scene or an unidentified individual.
2. SNP testing: this is the process in which DNA is analyzed to identify the SNP variations, which are then compiled into an array format (the input data of this paper).
3. Profile upload: the genetic profile acquired from step 2, the SNP array (or genome), is then uploaded to a public genetic genealogy database, such as GEDmatch or FamilyTreeDNA.
4. Database matching: matching algorithms are used to compare the uploaded profile with other genetic profiles in the database to identify potential relatives by measuring the amount of shared DNA segments. The scope of our work and the iDASH 2023 competition intersects with this since it concerns identifying whether there are any potential relatives in the database .
5. Relationship estimation: an algorithm takes two genomes and estimates the degree of relatedness between them, which can range from close relatives (e.g., parents, siblings) to distant cousins. The methods proposed here can be used to perform relationship estimation, but this is out of scope of this work.
6. Genealogical research: genealogists use the matches found in step 5 to reconstruct family trees, tracking common ancestors and descendants to reduce the number of potential suspects.
7. Identifying the suspect: once a potential match is identified, law enforcement collects a DNA sample from the suspect to confirm the match through traditional forensic methods.

Kinship estimation

The kinship score determines the degree of relatedness between two individuals based on their genetic data (see ). The database matching step, described in step 4 above, relies on predicting the kinship between the uploaded genetic profile and the genetic profiles in the database. It is a measure of the probability that a randomly chosen allele from one individual is identical by descent (IBD) to a randomly chosen allele from another individual. It can be mathematically described (see ) as

(1) $\phi_{ij} = \sum_{l=1}^{L} \frac{(x_{il} - 2p_l)(x_{jl} - 2p_l)}{2p_l(1 - p_l)},$

where $L$ is the number of loci (genetic markers or SNP variants), $x_{il}$ and $x_{jl}$ are the SNP variants of individuals $i$ and $j$ at locus $l$, and $p_l$ is the allele frequency at locus $l$.
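As a concrete reference point, the following NumPy sketch evaluates the estimator in Eq. (1) in plaintext. It is for illustration only: normalization conventions (e.g. dividing by the number of loci) vary between tools, and the naive screening helper and its threshold are hypothetical rather than the approach developed in this paper.

```python
# Plaintext sketch of the kinship estimator in Eq. (1); illustrative only.
import numpy as np

def kinship(x_i, x_j, p):
    """x_i, x_j: genotype vectors with entries in {0,1,2}; p: per-locus allele frequencies."""
    x_i, x_j, p = (np.asarray(v, dtype=float) for v in (x_i, x_j, p))
    return np.sum((x_i - 2 * p) * (x_j - 2 * p) / (2 * p * (1 - p)))

def any_relative(query, db, p, threshold):
    """Naive screening baseline: score the query against every profile in the
    database and flag it if any score exceeds a (hypothetical) threshold."""
    scores = np.array([kinship(query, row, p) for row in db])
    return bool((scores > threshold).any()), scores
```

As discussed in the next subsection, the matching task in this work is instead framed as a decision problem, so the query need not be compared with every database profile individually.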
Scope of this work in the FGG context

Step 4 is the subject matter of this work and of the iDASH competition task. It concerns kinship prediction. The input data of this work comes from step 2: a genome sequence formed of SNP variants represented with elements in the set {0, 1, 2}. This genome encoding is a sequence of bi-allelic SNP data. In the GDS (Genomic Data Structure) data format, which is derived from a VCF (Variant Call Format) data file, the genotype encodings 2, 1, and 0 refer, respectively, to homozygous for the reference allele (both alleles match the reference allele), heterozygous (one allele matches the reference allele and the other matches the alternate allele), and homozygous alternate (both alleles match the alternate allele). It essentially counts how many alleles match the reference allele at a specific position (gene locus) of the reference genome (see a similar explanation in ). For simplification, the database matching task in step 4 is a search problem cast as a decision problem. The matching task is reduced to finding out whether or not the uploaded profile matches any of the profiles in the database, while not requiring that any potential matches be exactly identified or retrieved. This means that the uploaded profile may not need to be compared with all, or any, of the database profiles to deliver the answer. In this case, step 4 of the FGG task can be split into two parts. The first part involves screening each database to find out whether there exist any potential matches. All that is needed is to identify the nature of the relationship between the uploaded profile and the genetic database, i.e. to answer the question "Is there any relative of the query individual in the probed genetic database?". Once the databases that contain relatives are identified, the second part starts, which consists of searching for the actual candidate matches in each of the databases where the uploaded profile was screened and found to share DNA segments with other database profiles. We concentrate our efforts on part 1 of step 4 as just described, since it was the required task in the iDASH competition. Steps 1, 2, 3, 5, 6, and 7 fall outside the scope of this work. We simplify the problem to obtaining a kinship score between the individual query and the genomic database. We use homomorphic encryption to devise privacy-preserving methods that perform the relatedness matching while securing the computation over genetic data. The output of our methods can also be used as kinship predictions between pairs of genetic profiles and then used to estimate relationship types (step 5), but that is not the subject of study here. We use the predicted kinship scores to estimate the relationship of the uploaded profile directly with the genetic genealogy database. Other privacy-preserving genetic relatedness testing methods have been proposed and are discussed in .
Current security and privacy protection practices in genomic data sharing

Genomic data sharing is particularly useful for precision medicine . There are a myriad of unified genomic database knowledge projects (see for a list) that provide researchers with genetic data sharing and analysis capabilities for this purpose. Along with that, concerns regarding genomic data security and privacy have been raised . These projects implement different strategies to offer security and privacy protection guarantees, for instance access control through administrative processes, laws and regulations, data anonymization, and encryption.
Administrative processes

To obtain access to controlled data from the NCI (National Cancer Institute) Genomic Data Commons (GDC) knowledge database, it is required to file a dbGaP (Database of Genotypes and Phenotypes) authorization request, which is reviewed and approved or disapproved by the NIH (National Institutes of Health) Data Access Committee (DAC) on the basis of whether or not the intended usage conforms to the specification determined by the NIH Genomic Data Sharing Policy (see more details at ). Once access is granted, the recipient is entrusted with and accountable for the security, confidentiality, integrity and availability of the data, including when utilizing cloud computing services. Another example is the European Genome-phenome Archive (EGA)'s data access , which operates in a similar manner, i.e. through Data Access Agreement (DAA) and Data Processing Agreement (DPA) documents, but enhances data access security and confidentiality via authenticated encryption of data files using Crypt4GH . Many other public genomic datasets exist and implement similar security and privacy protection practices, as reviewed by .

Employing administrative processes only is not suitable for privacy-preserving FGG. This implementation of access control to sensitive data depends on the integrity and goodwill of the authorized individual to self-report any agreement violations and data breaches. Once data access is granted, there is a lack of oversight to enforce policies related to genomic privacy, re-identification, and data misuse.

Data anonymization

Data anonymization involves obscuring personal identifiers in genetic data to protect individuals' privacy. It can also come in the form of aggregated data that shows trends and patterns without revealing specific identities. Data masking is also a technique employed to alter sensitive parts of the data to prevent identification .

Employing data anonymization only is not suitable for privacy-preserving FGG. Genetic data is unique and inherently identifiable. Even when anonymized, it can often be re-identified through genealogical research and cross-referencing with other data sources. Anonymization of data also brings serious limitations due to the uniqueness of every individual's genome, which can easily be subject to proven re-identification attacks (see ).

Laws and regulations

Laws and regulations play a crucial role in protecting the privacy of genetic data and medical information. They legally protect an individual's medical records and other personally identifiable information (PII), including genetic data, by setting standards for the use and disclosure of such information by covered entities. Their security rules depend on appropriate administrative and technical safeguards to ensure the confidentiality, integrity and security of protected health information. They set the foundation of genetic privacy but carry limitations that pose an increased risk to individuals' privacy.

Employing laws and regulations only is not suitable for privacy-preserving FGG. There is a lack of standardized regulations and ethical guidelines governing the use of genetic data in forensic investigations. Legal acts such as HIPAA and GINA seem inadequate and leave gaps in protection: they focus on who holds the data rather than on the data itself, because they apply only to covered entities. For example, they do not regulate consumer-generated medical and health information or recreational genetic sequencing generated by commercial entities such as 23andMe and Ancestry.com.
Therefore, we can argue that these commonly practiced solutions fall short in securing genomic data privacy. In all the aforementioned genomic data sharing cases, privacy protection is traded for confidentiality agreements, which do not offer the same level of protection for sensitive data, since compliance is subject to the actions of fallible human beings. An adequate solution should enforce privacy protection policies on the data itself, regardless of who created it or who has access to it. Cryptographic techniques appear to be the most suitable way to achieve this (e.g. ); the most advanced of them allow inferences and analytics to be made while the data remains encrypted, never revealing its contents to the user.

Encryption in genomic data sharing

Privacy risks associated with accessing and storing genetic data can be mitigated by enabling confidentiality through cryptography. Whether at rest or in transit, genetic data can be guarded from unwarranted access using state-of-the-art encryption schemes (e.g. ). This way, only authorized personnel holding the decryption key can reveal the contents of the encrypted genetic data. Crypt4GH is an industry-standard genomic data file format that keeps genomic data secure at rest, in transit, and under random access, thus allowing secure genomic data sharing between separate parties. A solution called the SECRAM data format has been proposed for secure storage and retrieval of encrypted and compressed aligned genomic data. To perform data analytics with machine learning algorithms in such cases, the data must first be decrypted, at which point it becomes vulnerable to cybersecurity attacks. This is the major protection limitation of conventional cryptographic encryption schemes, i.e. requiring decryption before computing. On the other hand, encrypting genomic data with Homomorphic Encryption (HE) schemes allows computation over encrypted data without ever decrypting it, thus not revealing any sensitive content since the data remains encrypted, ensuring truly private computation. This additional layer of security can potentially help reduce the time and cost spent on reviewing and approving data accesses.
The computational assumption states that it is hard to discover the secret key s from many different samples [12pt]{minimal} $$(a,a s+e)$$ ( a , a · s + e ) . This homomorphic encryption construct is built on a polynomial ring [12pt]{minimal} $$ {R}_Q= {Z}_Q[x]/(X^N+1)$$ R Q = Z Q [ x ] / ( X N + 1 ) , where [12pt]{minimal} $$ {Z}_Q$$ Z Q denotes the ring of integers modulo Q that populate the polynomial coefficients, [12pt]{minimal} $$X^N+1$$ X N + 1 is the [12pt]{minimal} $$M^{th}$$ M th cyclotomic polynomial [12pt]{minimal} $$ _M(x)$$ ϕ M ( x ) , and [12pt]{minimal} $$N=M/2$$ N = M / 2 . The choice of N , where N is typically a power-of-2 integer, is determined by the value of the coefficient modulus Q and the security parameter [12pt]{minimal} $$$$ λ , such that [12pt]{minimal} $$M=M( ,Q)$$ M = M ( λ , Q ) is a function of [12pt]{minimal} $$$$ λ and Q . Various homomorphic encryption schemes built on RLWE constructs that work naturally with integers emerged in the literature (e.g. ). Although the genetic data in this work takes values in the set [12pt]{minimal} $$ {G}=\{0,1,2\}$$ G = { 0 , 1 , 2 } , the expected output and model parameters to perform the data analysis and predictions operate on numbers in floating-point representation. This is especially true when training machine learning models to make predictions from genotypes. For this reason, it is natural to opt for a homomorphic encryption scheme intrinsically designed to accommodate floating-point arithmetic. Cheon et al. put forward the first homomorphic encryption for arithmetic of approximate numbers, also commonly known as the CKKS (short for Cheon-Kim-Kim-Song) scheme, that is most suitable to operate on real numbers. The CKKS scheme is a levelled homomorphic encryption (LHE) public key encryption scheme based on the RLWE problem . It allows to perform computations on encrypted complex numbers; thus, real numbers too. The ability of the CKKS method to handle floating-point numbers, approximated with fixed-point representation, makes it particularly attractive for confidential machine learning (ML) and data analysis. In the following, we briefly describe the CKKS scheme that we will use throughout this paper. The same noise e added during the encryption to strengthen the security also contributes to limiting the number of consecutive multiplications as the noise grows as consequence of that, possibly causing decryption error. CKKS controls this error-causing noise growth with the concept of levels and rescaling. Initially, a fresh CKKS ciphertext ct is assumed to encrypt numbers with certain initial precision masked by the added noise of smaller precision. The initial noise budget of a CKKS ciphertext (see Fig. ) is determined by the parameter L (multiplicative depth). The integer L corresponds to the largest ciphertext modulus level permitted by the security parameter [12pt]{minimal} $$$$ λ . Let the ring dimension N be a power-of-2, a modulus [12pt]{minimal} $$Q=q_L= ^L$$ Q = q L = Δ L , and [12pt]{minimal} $$q_l:= ^{l}$$ q l : = Δ l for [12pt]{minimal} $$1 l L$$ 1 ≤ l ≤ L , and some integer scaling factor [12pt]{minimal} $$ =2^p$$ Δ = 2 p , where p is the number of bits for the desired (initial) precision. Before encryption, the message needs to be encoded in a plaintext space. 
Genetic data vector [12pt]{minimal} $$q {G}^n$$ q ∈ G n is seen as a single CKKS message [12pt]{minimal} $$z {C}^{N/2}$$ z ∈ C N / 2 , assuming [12pt]{minimal} $$n N/2$$ n ≤ N / 2 , mapped to a plaintext object [12pt]{minimal} $$ {m} {R}$$ m → ∈ R . This plaintext space supports element-wise vector-vector addition, subtraction, and Hadamard multiplication. For encoding and decoding procedures, CKKS relies on a field isomorphism called canonical embedding, i.e. [12pt]{minimal} $$ : {R}[x]/(X^N+1) { {C}^{N/2}}$$ τ : R [ x ] / ( X N + 1 ) → C N / 2 . Hence, we have 2 [12pt]{minimal} $$ {Encode}(z, )= ^{-1}(z) $$ Encode ( z , Δ ) = ⌊ Δ · τ - 1 ( z ) ⌉ 3 [12pt]{minimal} $$ {Decode}( {m}, )= ( m) $$ Decode ( m → , Δ ) = τ ( 1 Δ · m ) Equipped with the aforementioned concepts, we now define the following CKKS operators: [12pt]{minimal} $$KeyGen( {R}_{q_L}, _{key}, _{err},1^{ })$$ K e y G e n ( R q L , χ key , χ err , 1 λ ) Sample [12pt]{minimal} $$s { _{key}}$$ s ← χ key and set the secret key as [12pt]{minimal} $$sk=(1,s)$$ s k = ( 1 , s ) . Sample [12pt]{minimal} $$a {U( {R}_{q_L})}$$ a ← U ( R q L ) (where U denotes the Uniform distribution) and [12pt]{minimal} $$e { _{err}}$$ e ← χ err . Set the public key as [12pt]{minimal} $$pk=(b,a) {R}_{q_L}^2$$ p k = ( b , a ) ∈ R q L 2 where [12pt]{minimal} $$b=[-a {s} + e]_{q_L}$$ b = [ - a · s + e ] q L [12pt]{minimal} $$Enc_{pk}( {m})$$ E n c pk ( m → ) Given a plaintext message [12pt]{minimal} $$ {m} {R}$$ m → ∈ R , sample [12pt]{minimal} $$v { }_{enc}$$ v ← χ enc and [12pt]{minimal} $$e_0,e_1 { _{err}}$$ e 0 , e 1 ← χ err . Output the ciphertext [12pt]{minimal} $$ct = [v {pk} + ( {m} + e_0,e_1)]_{q_L}$$ c t = [ v · pk + ( m → + e 0 , e 1 ) ] q L . [12pt]{minimal} $$Dec_{sk}(ct)$$ D e c sk ( c t ) Given a ciphertext [12pt]{minimal} $$ct {R^{2}_{q_L}}$$ c t ∈ R L 2 , where ct as encryption of [12pt]{minimal} $$ {m}$$ m → satisfies [12pt]{minimal} $$ ct,sk = {m} + e ( q_L)$$ ⟨ c t , s k ⟩ = m → + e ( mod q L ) for some small e , then the decryption output results in [12pt]{minimal} $$ {m}^{ }= ct,sk ( q_L)$$ m → ′ = ⟨ c t , s k ⟩ ( mod q L ) , where [12pt]{minimal} $$ {m}^{ }$$ m → ′ is slightly different from the original encoded message [12pt]{minimal} $$ {m}$$ m → ; indeed, an approximated value when [12pt]{minimal} $$||e||_{ }<< || {m}||_{ }$$ | | e | | ∞ < < | | m → | | ∞ holds true. [12pt]{minimal} $$Add/Sub(ct_1,ct_2)$$ A d d / S u b ( c t 1 , c t 2 ) Given two ciphertexts [12pt]{minimal} $$ct_1,ct_2$$ c t 1 , c t 2 , output the ciphertext [12pt]{minimal} $$ct_{add}/ ct_{sub} = [ct_1 ct_2]_{q_L}$$ c t add / c t sub = [ c t 1 ± c t 2 ] q L encrypting a plaintext vector [12pt]{minimal} $$ {m}_1 {m}_2$$ m → 1 ± m → 2 . [12pt]{minimal} $$Mult_{evk}(ct_1,ct_2)$$ M u l t evk ( c t 1 , c t 2 ) Given two ciphertexts [12pt]{minimal} $$ct_1,ct_2 {R^{2}_{q_L}}$$ c t 1 , c t 2 ∈ R L 2 , output a level-downed ciphertext [12pt]{minimal} $$ct_{mult} {R^{2}_{q_{L-1}}}$$ c t mult ∈ R L - 1 2 encrypting a plaintext vector [12pt]{minimal} $$ {m}_1 {m}_2$$ m → 1 ⊙ m → 2 [12pt]{minimal} $$Relin_{evk}(ct)$$ R e l i n evk ( c t ) When two ciphertexts [12pt]{minimal} $$ct_1$$ c t 1 and [12pt]{minimal} $$ct_2$$ c t 2 are multiplied, the results if a larger ciphertext [12pt]{minimal} $$ct_{Mult}=Mu(ct_1,ct_2)=(d_0, d_1, d_2)$$ c t Mult = M u ( c t 1 , c t 2 ) = ( d 0 , d 1 , d 2 ) , where [12pt]{minimal} $$d_0$$ d 0 , [12pt]{minimal} $$d_1$$ d 1 , and [12pt]{minimal} $$d_2$$ d 2 are the components of the resulting ciphertext. 
To reduce the ciphertext back to the original size, a relinearization key evk is used to transform the ciphertext from three-component form back to a two-component form, such that [12pt]{minimal} $$ct_{relin}=Relin_{evk}((d_0,d_1,d_2))=(d_0^ ,d_1^ )$$ c t relin = R e l i n evk ( ( d 0 , d 1 , d 2 ) ) = ( d 0 ′ , d 1 ′ ) , where the results of applying [12pt]{minimal} $$Relin_{evk}$$ R e l i n evk is defined by the expression [12pt]{minimal} $$(d_0 + _{i=1}^{2}evk_i {d_i}, d_1 + _{i=1}^{2}evk_{i+2} {d_i})$$ ( d 0 + ∑ i = 1 2 e v k i · d i , d 1 + ∑ i = 1 2 e v k i + 2 · d i ) . [12pt]{minimal} $$Rotate_{rk}(ct,r)$$ R o t a t e rk ( c t , r ) This operator is also called automorphism. For a ciphertext ct encrypting a plaintext vector [12pt]{minimal} $$ {m} = (m_1, , m_n)$$ m → = ( m 1 , … , m n ) , output a ciphertext [12pt]{minimal} $$ct^{ }$$ c t ′ encrypting a plaintext vector [12pt]{minimal} $$ {m}^{ } = (m_{r+1}, ,m_n, m_1, ,m_r)$$ m → ′ = ( m r + 1 , … , m n , m 1 , … , m r ) , which is the (left) rotated plaintext vector of ct by r positions. Rescale ( ct ) When two ciphertexts [12pt]{minimal} $$ct_1$$ c t 1 and [12pt]{minimal} $$ct_2$$ c t 2 are multiplied, the resulting ciphertext [12pt]{minimal} $$ct_{Mult}=Mult_{evk}(ct_1,ct_2)$$ c t Mult = M u l t evk ( c t 1 , c t 2 ) has a scale that is the product of the scales of [12pt]{minimal} $$ct_1$$ c t 1 and [12pt]{minimal} $$ct_2$$ c t 2 , i.e. [12pt]{minimal} $$ _{Mult}= _1 { _2}$$ Δ Mult = Δ 1 · Δ 2 . Rescaling brings the scale back to a manageable level. It involves dividing the ciphertext by a factor [12pt]{minimal} $$$$ Δ , i.e. [12pt]{minimal} $$ct_{rs}= }{ } $$ c t rs = ⌊ c t Mult Δ ⌋ . [12pt]{minimal} $$ModSwitch(ct,q^ )$$ M o d S w i t c h ( c t , q ′ ) Modulus switching in CKKS is used to reduce the modulus of the ciphertext to help manage the noise (and plaintext) growth and to match levels of ciphertexts operating together. To switch to a smaller modulus [12pt]{minimal} $$q^ < q$$ q ′ < q , the ciphertext components [12pt]{minimal} $$ct_0$$ c t 0 and [12pt]{minimal} $$ct_1$$ c t 1 are scaled down and rounded according to [12pt]{minimal} $$ct^ =( {ct_0} , {ct_1} ) q^$$ c t ′ = ( ⌊ q ′ q · c t 0 ⌋ , ⌊ q ′ q · c t 1 ⌋ ) mod q ′ . The distribution [12pt]{minimal} $$ _{enc}$$ χ enc and [12pt]{minimal} $$ _{err}$$ χ err denote the discrete Gaussian distributions for some fixed standard deviation [12pt]{minimal} $$$$ σ . The distribution [12pt]{minimal} $$ _{key}$$ χ key outputs a polynomial of [12pt]{minimal} $$\{-{1} ,0 ,1 \}$$ { - 1 , 0 , 1 } coefficients. We denote the rounding function [12pt]{minimal} $$ $$ ⌊ · ⌉ and modulo q operation [12pt]{minimal} $$[ ]_q$$ [ · ] q . The encoding technique allows parallel computation over encryption in a Single-Instruction-Multiple-Data (SIMD) way making it efficient once the computation is amortized on the vector size. DNA matching methods There are two lines of work relevant to our topic: first, database queries on cleartext data that could be adapted to Homomorphic Encryption and second, encrypted genomic database queries. Not all popular methods in the unencrypted domain are good candidates to run in the encrypted domain. Depending upon the Homomorphic Encryption scheme, mathematical functions like max, min, greater than, less than, is equal to and algorithm like loops and sorts are not easily implementable on encrypted data. 
We are looking for methods that enable swift kinship searches for relatives up to the third degree, while observing the aforementioned constraints imposed by the difficulties to transform it into a homomorphic encryption arithmetic circuit. Cleartext protocols for DNA matching Genetic relatedness or kinship between two individuals can be described as the likelihood that, at a randomly chosen genomic location, the alleles in their genomes are inherited from a common ancestor. This phenomenon is known as Identical-by-Descent (IBD). This concept of relatedness should not be confused with Kinship coefficient and metrics closely connected to other genetic measures, including the inbreeding coefficient and probabilities associated with sharing IBD segments. To identify biological relationships beyond immediate family, the segment approach and extended IBD segments are effective but require high density markers, typically not available in forensic samples. Forensic samples typically rely on STR (Short Term Repeat) DNA typing, the preferred data format of forensic searches in criminal databases (i.e., Familial DNA Searching) to obtain partial matches with immediate relatives. Finding matches beyond immediate relatives is more suitable using single nucleotide polymorphism (SNP) data format. The main challenge in DNA kinship matching is choosing the right method for the computations. Most methods rely on observed allele sharing, Identity-By-State (IBS), to estimate probabilities of shared ancestry (IBD) or kinship coefficients and many of these are too complex to run on encrypted data. Methods available for DNA kinship matching up to the third degree (e.g., siblings, half-siblings, or first cousins) differ in complexity, accuracy and latency. distinguish four categories of kinship methods. The first category entails moment estimators such as KING , REAP , plink , GCTA , GRAF and PC-Relate that use Identical-by-State (IBS) markers and genotype distances to estimate expected kinship statistics. The second category is represented by the maximum-likelihood methods RelateAdmix and ERSA , which use expectation- maximization (EM) to jointly estimate the kinship statistics. The third and fourth families of methods use IBD-matching on phased genotypes (e.g. ), and kinship estimation from low-coverage next-generation sequencing data . All these methods use one or more of three types of analysis, namely: Identity by Descent (IBD) Analysis by considering shared alleles across the entire genome, provides insights into relatedness at different temporal scales and levels of relatedness. Dou et al. use mutual information between the relatives’ degree of relatedness and a tuple of their kinship coefficient to build a Bayes classifier to predict first through sixth-degree relationships. Smith et al. developed IBIS, an IBD detector that locates long regions of allele sharing between unphased individuals. Morimoto et al. use Identity by State (IBS) Analysis to identify regions of the genome where two individuals share the same alleles. The proportion of the genome that is IBS will indicate the level of relatedness. Ramstetter et al. use Haplotype Sharing Analysis to look at shared haplotypes within particular genomic regions to uncover recent common ancestry. Nonetheless, these methods can be too complex to yield the low latency required for demanding elaborate polynomial approximations of non-linear functions to transform them into a homomorphic encryption arithmetic circuit. 
Moreover, while the competition challenge is well-suited for search methods that calculate kinship scores between each query and every entry in the database, it can also be re-framed as a decision problem to become more amenable to resolution through decision algorithms. Specifically, the challenge involves the task of establishing kinship scores that quantify the degree of genetic relatedness between a given query and any sample within a genomic database. Homer et al. suggest an algorithm working on clear text using clustering of admixed population. They demonstrate experimentally the identification of the presence of genomic DNA of specific individuals within a series of highly complex genomic mixtures. This is significant for two reasons: first, it brings back to the forefront SNPs for identifying individual trace contributors within a forensics mixture, when STRs were the preferred method. Second, we will show (see “ ” section) that this method is low latency, accurate and amenable to Homomorphic Encryption. The choice of method depends on the quality and quantity of genetic data available, as well as the specific relationships being investigated and the population structure. The fastest methods to compute kinship are IBD methods. These methods, however, are not Homomorphic Encryption friendly and may require large computing resources and long latency running in the encrypted domain. Table shows the fastest available methods to compute kinship on unencrypted data . Private queries on encrypted data Over the last 10 years there has been a number of papers demonstrating private queries on encrypted data. Ramstetter et al. suggest a secure biometric authentication method that employs fully homomorphic encryption TFHE scheme. They match biometric data from a local device, to an encrypted biometric template on a remote-server encrypted database. Pradel and Mitchell introduce Private Collection Matching (PCM) problems, in which a client aims to determine whether a collection of sets owned by a database server matches their interests. EdalatNejad et al. propose a string matching protocol for querying the presence of particular mutations in a genome database. They combine Homomorphic Encrytion scheme BGV and private set intersection to search for similar string segments. Chen et al. compute private queries on encrypted data in a multi-user setting. Bao et al. compute conjunctive queries on encrypted data. Saha and Koshiba execute comparison queries while compute range queries on encrypted data. Boneh and Waters compute relatedness scores within the protective confined Trusted Execution Environment of SGX, a hardware approach. Chen et al. proposed “sketching”, worked on “fingerprinting”, while implemented a differential privacy scheme. Wang et al. proposed a method to compute relatedness in the encrypted domain using Homomorphic Encryption taking into account admixed populations. This projection-based approach utilizes existing reference genotype datasets for estimating admixture rates for each individual and use these to estimate kinship in admixed populations. Dervishi et al. implements a k -means algorithm on encrypted data using CKKS. This algorithm shows the feasibility of our clustering scheme should we require to implement it fully encrypted as proposed by . Genomic data sharing is particularly useful for precise medicine . 
There are a myriad of unified genomic database knowledge projects (see for a list) that provide researchers with genetic data sharing and analysis capabilities for this purpose. Along with that, concerns regarding genomic data security and privacy are raised . They implement different strategies to offer security and privacy protection guarantees. For instance, control access through administrative processes, laws and regulations, data anonymization, and encryption. Administrative processes To obtain access to controlled data from the NCI (National Cancer Institute) Genomic Data Commons (GDC) knowledge database, it is required to file a dbGaP (Database of Genotypes and Phenotypes) authorization request that will be reviewed, approved or disapproved by the NIH (National Institutes of Health) Data Access Committee (DAC) on the basis of whether or not the usage will conform to the specification determined by the NIH Genomic Data Sharing Policy (see more details at ). Once access is granted, the recipient is entrusted with and accountable for the security, confidentiality, integrity and availability of the data, including when utilizing Cloud computing services. Another example is the European Genome-phenome Archive (EGA)’s data access , which operates in a similar manner, i.e. through Data Access Agreement (DAA) and Data Processing Agreement (DPA) documents, but enhancing data access security and confidentiality via authenticated encryption of data files using Crypt4GH . Many other public genomic datasets exist and implement similar security and privacy protection practices, as reviewed by . Employing administrative processes only is not suitable for privacy-preserving FGG . This implementation of access control to sensitive data depends on the integrity and goodwill of the authorized individual to self-report any agreement violations and data breaches. Once data access is granted, there is a lack of oversight to enforce policies related to genomic privacy, re-identification, and data misuse. Data anonymization Data anonymization involves obscuring personal identifiers in genetic data to protect individual’s privacy. It can also come in the form of aggregated data that shows trends and patterns without revealing specific identities. Data masking is also a technique employed to alter sensitive parts of the data to prevent identification . Employing data anonymization only is not suitable for privacy-preserving FGG . Genetic data is unique and inherently identifiable. Even when anonymized, it can often be re-identified through genealogical research and cross-referencing with other data sources. Anonymization of data also bring serious limitations due to the uniqueness of every individual’s genome, which can be easily subject to proven re-identification attacks (see ). Laws and regulations Laws and regulations play a crucial role in protecting the privacy of genetic data and medical information. They legally protect individual’s medical record and other PII data, including genetic data, by setting standards for the use and disclosure of such information by covered entities. Their security rules depends on appropriate administrative and technical safeguards to ensure confidentiality, integrity and security of protected health information. They set the foundation of genetic privacy but carry limitations that pose increased risk to individuals’ privacy. Employing laws and regulations only is not suitable for privacy-preserving FGG . 
There is a lack of standardized regulations and ethical guidelines governing the use of genetic data in forensic investigations. Legal acts such as HIPAA and GINA seem inadequate and leave gaps in protection since they focus on who holds the data rather than the data itself because it only applies to covered entities. For example, they do not regulate consumer-generated medical and health information or recreational genetic sequencing generated by commercial entities such as 23andMe and Ancestry.com. Therefore, we can argue that these commonly practiced solutions fall short in securing genomic data privacy. In all the aforementioned genomic data sharing database cases, privacy protection is traded by confidentiality agreements, which do not offer the same layer of protection to sensitive data since their compliance is subject to the actions of fallible human beings. The adequate solution shall enforce privacy protection policies on the data regardless of the creator or who has access to it. Cryptographic techniques appear to be the most suitable to address it in this manner (e.g. ), where the most advanced of them allows making inferences and analytics while the data is encrypted, while never revealing the contents to the user. Encryption in genomic data sharing Privacy risks associated to accessing and storing genetic data can be mitigated by enabling confidentiality through cryptography. If either at rest or in transit, genetic data can be guarded from unwarranted access using state-of-the-art encryption schemes (e.g. ). This way, only authorized personnel holding the decryption key can reveal the contents of the encrypted genetic data. Crypt4GH is an industry standard for genomic data file format to keep genomic data secure while at rest, in transit, and through random access; thus, allowing secure genomic data sharing between separate parties. A solution so-called SECRAM data format has been proposed for secure storage and retrieval of encrypted and compressed aligned genomic data. To perform data analytics with machine learning algorithms in such case, the data is required to be decrypted and it becomes vulnerable to cybersecurity attacks. This is the major protection limitation of conventional cryptographic encryption schemes, i.e. requiring decryption before computing. On the other hand, encrypting genomic data with Homomorphic Encryption (HE) schemes allows computation over encrypted data without ever decrypting it; thus, not revealing any sensitive content since the data remains encrypted, ensuring true private computation. This additional layer of security can potentially help reduce the time and cost spent on reviewing and approving data accesses. When computationally demanding data analysis is desired, more often than not, processing needs to occur in (public) untrusted cloud service providers due to limited local computing resources and/or access to a restricted number of analytic model IPs. In this context, modern cryptography introduces homomorphic encryption methods (e.g., BGV , and CKKS ), which bring the capability of protecting data privacy during computation in a semi-honest security model. 
Fully Homomorphic Encryption (FHE) allows computation of arbitrary functions on encrypted data without decryption . This means the data is also protected during computation (processing), since it remains encrypted. Its security guarantee stems from the hardness of the Ring Learning with Errors (RLWE) assumptions . There are two aspects to this assumption, namely decisional and computational. The decisional RLWE assumption states that it is infeasible to distinguish pairs $(a, b)$ picked at random from a distribution over the ring $\mathcal{R}_Q^2$ from pairs constructed as $(a, a \cdot s + e)$ with $a$ sampled from $\mathcal{R}_Q$, where $e$ and $s$ are randomly sampled from a noise distribution $\chi$ over the ring $\mathcal{R}$.
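To make the RLWE construction concrete, the following minimal NumPy sketch builds a toy sample $(a, a \cdot s + e)$ over $\mathbb{Z}_q[X]/(X^N+1)$ using schoolbook negacyclic multiplication. The parameters n and q are illustrative only and far too small to be secure, and the sampling distributions are simplified stand-ins for $\chi_{key}$ and $\chi_{err}$.

```python
import numpy as np

def polymul_negacyclic(a, b, q):
    """Schoolbook multiplication in Z_q[X]/(X^N + 1): wrapped-around terms pick up a minus sign."""
    n = len(a)
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += a[i] * b[j]
            else:
                res[i + j - n] -= a[i] * b[j]   # X^N = -1
    return res % q

n, q = 8, 2**13                                       # toy parameters, not secure
rng = np.random.default_rng(0)
s = rng.integers(-1, 2, n)                            # ternary secret (stand-in for chi_key)
e = np.rint(rng.normal(0, 3.2, n)).astype(np.int64)   # small Gaussian error (stand-in for chi_err)
a = rng.integers(0, q, n)                             # uniform ring element
b = (polymul_negacyclic(a, s, q) + e) % q             # second component of the RLWE pair
print("RLWE-style sample (a, b):", a, b, sep="\n")
```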
The computational assumption states that it is hard to recover the secret key $s$ from many different samples $(a, a \cdot s + e)$. This homomorphic encryption construct is built on a polynomial ring $\mathcal{R}_Q = \mathbb{Z}_Q[x]/(X^N+1)$, where $\mathbb{Z}_Q$ denotes the ring of integers modulo $Q$ that populate the polynomial coefficients, $X^N+1$ is the $M$-th cyclotomic polynomial $\Phi_M(x)$, and $N = M/2$. The choice of $N$, where $N$ is typically a power-of-2 integer, is determined by the value of the coefficient modulus $Q$ and the security parameter $\lambda$, such that $M = M(\lambda, Q)$ is a function of $\lambda$ and $Q$. Various homomorphic encryption schemes built on RLWE constructs that work naturally with integers have emerged in the literature (e.g. ). Although the genetic data in this work takes values in the set $\mathcal{G} = \{0, 1, 2\}$, the expected output and the model parameters needed to perform the data analysis and predictions operate on numbers in floating-point representation. This is especially true when training machine learning models to make predictions from genotypes. For this reason, it is natural to opt for a homomorphic encryption scheme intrinsically designed to accommodate floating-point arithmetic. Cheon et al. put forward the first homomorphic encryption scheme for arithmetic of approximate numbers, commonly known as the CKKS (short for Cheon-Kim-Kim-Song) scheme, which is the most suitable to operate on real numbers. The CKKS scheme is a levelled homomorphic encryption (LHE) public key encryption scheme based on the RLWE problem . It allows computations to be performed on encrypted complex numbers, and thus on real numbers too. The ability of the CKKS method to handle floating-point numbers, approximated with a fixed-point representation, makes it particularly attractive for confidential machine learning (ML) and data analysis. In the following, we briefly describe the CKKS scheme that we will use throughout this paper. The same noise $e$ added during encryption to strengthen security also limits the number of consecutive multiplications, as the noise grows with each multiplication and may eventually cause decryption errors. CKKS controls this error-causing noise growth with the concepts of levels and rescaling. Initially, a fresh CKKS ciphertext $ct$ is assumed to encrypt numbers with a certain initial precision, masked by added noise of smaller precision. The initial noise budget of a CKKS ciphertext (see Fig. ) is determined by the parameter $L$ (multiplicative depth). The integer $L$ corresponds to the largest ciphertext modulus level permitted by the security parameter $\lambda$. Let the ring dimension $N$ be a power of 2, the modulus $Q = q_L = \Delta^L$, and $q_l := \Delta^l$ for $1 \le l \le L$, for some integer scaling factor $\Delta = 2^p$, where $p$ is the number of bits of the desired (initial) precision. Before encryption, the message needs to be encoded in a plaintext space.
Genetic data vector $q \in \mathcal{G}^n$ is seen as a single CKKS message $z \in \mathbb{C}^{N/2}$, assuming $n \le N/2$, mapped to a plaintext object $\vec{m} \in \mathcal{R}$. This plaintext space supports element-wise vector-vector addition, subtraction, and Hadamard multiplication. For the encoding and decoding procedures, CKKS relies on a field isomorphism called the canonical embedding, i.e. $\tau: \mathbb{R}[x]/(X^N+1) \to \mathbb{C}^{N/2}$. Hence, we have

$$\mathrm{Encode}(z, \Delta) = \lfloor \Delta \cdot \tau^{-1}(z) \rceil \quad (2)$$

$$\mathrm{Decode}(\vec{m}, \Delta) = \tau\!\left(\tfrac{1}{\Delta} \cdot \vec{m}\right) \quad (3)$$

Equipped with the aforementioned concepts, we now define the following CKKS operators:

$KeyGen(\mathcal{R}_{q_L}, \chi_{key}, \chi_{err}, 1^{\lambda})$: Sample $s \leftarrow \chi_{key}$ and set the secret key as $sk = (1, s)$. Sample $a \leftarrow U(\mathcal{R}_{q_L})$ (where $U$ denotes the uniform distribution) and $e \leftarrow \chi_{err}$. Set the public key as $pk = (b, a) \in \mathcal{R}_{q_L}^2$, where $b = [-a \cdot s + e]_{q_L}$.

$Enc_{pk}(\vec{m})$: Given a plaintext message $\vec{m} \in \mathcal{R}$, sample $v \leftarrow \chi_{enc}$ and $e_0, e_1 \leftarrow \chi_{err}$. Output the ciphertext $ct = [v \cdot pk + (\vec{m} + e_0, e_1)]_{q_L}$.

$Dec_{sk}(ct)$: Given a ciphertext $ct \in \mathcal{R}_{q_L}^2$, where $ct$ as an encryption of $\vec{m}$ satisfies $\langle ct, sk \rangle = \vec{m} + e \ (\mathrm{mod}\ q_L)$ for some small $e$, the decryption output is $\vec{m}' = \langle ct, sk \rangle \ (\mathrm{mod}\ q_L)$, where $\vec{m}'$ is slightly different from the original encoded message $\vec{m}$; indeed, an approximated value when $\|e\|_{\infty} \ll \|\vec{m}\|_{\infty}$ holds true.

$Add/Sub(ct_1, ct_2)$: Given two ciphertexts $ct_1, ct_2$, output the ciphertext $ct_{add}/ct_{sub} = [ct_1 \pm ct_2]_{q_L}$ encrypting the plaintext vector $\vec{m}_1 \pm \vec{m}_2$.

$Mult_{evk}(ct_1, ct_2)$: Given two ciphertexts $ct_1, ct_2 \in \mathcal{R}_{q_L}^2$, output a level-reduced ciphertext $ct_{mult} \in \mathcal{R}_{q_{L-1}}^2$ encrypting the plaintext vector $\vec{m}_1 \odot \vec{m}_2$.

$Relin_{evk}(ct)$: When two ciphertexts $ct_1$ and $ct_2$ are multiplied, the result is a larger ciphertext $ct_{Mult} = Mult(ct_1, ct_2) = (d_0, d_1, d_2)$, where $d_0$, $d_1$, and $d_2$ are the components of the resulting ciphertext.
To reduce the ciphertext back to the original size, a relinearization key $evk$ is used to transform the ciphertext from the three-component form back to a two-component form, such that $ct_{relin} = Relin_{evk}((d_0, d_1, d_2)) = (d_0', d_1')$, where the result of applying $Relin_{evk}$ is defined by the expression $\left(d_0 + \sum_{i=1}^{2} evk_i \cdot d_i,\; d_1 + \sum_{i=1}^{2} evk_{i+2} \cdot d_i\right)$.

$Rotate_{rk}(ct, r)$: This operator is also called automorphism. For a ciphertext $ct$ encrypting a plaintext vector $\vec{m} = (m_1, \ldots, m_n)$, output a ciphertext $ct'$ encrypting the plaintext vector $\vec{m}' = (m_{r+1}, \ldots, m_n, m_1, \ldots, m_r)$, which is the plaintext vector of $ct$ rotated to the left by $r$ positions.

$Rescale(ct)$: When two ciphertexts $ct_1$ and $ct_2$ are multiplied, the resulting ciphertext $ct_{Mult} = Mult_{evk}(ct_1, ct_2)$ has a scale that is the product of the scales of $ct_1$ and $ct_2$, i.e. $\Delta_{Mult} = \Delta_1 \cdot \Delta_2$. Rescaling brings the scale back to a manageable level. It involves dividing the ciphertext by a factor $\Delta$, i.e. $ct_{rs} = \lfloor ct_{Mult} / \Delta \rceil$.

$ModSwitch(ct, q')$: Modulus switching in CKKS is used to reduce the modulus of the ciphertext, to help manage the noise (and plaintext) growth and to match the levels of ciphertexts operating together. To switch to a smaller modulus $q' < q$, the ciphertext components $ct_0$ and $ct_1$ are scaled down and rounded according to $ct' = \left(\lfloor \tfrac{q'}{q} \cdot ct_0 \rceil, \lfloor \tfrac{q'}{q} \cdot ct_1 \rceil\right) \bmod q'$.

The distributions $\chi_{enc}$ and $\chi_{err}$ denote discrete Gaussian distributions with some fixed standard deviation $\sigma$. The distribution $\chi_{key}$ outputs a polynomial with coefficients in $\{-1, 0, 1\}$. We denote the rounding function by $\lfloor \cdot \rceil$ and the modulo-$q$ operation by $[\cdot]_q$. The encoding technique allows parallel computation over encryption in a Single-Instruction-Multiple-Data (SIMD) way, making it efficient once the computation is amortized over the vector size.

There are two lines of work relevant to our topic: first, database queries on cleartext data that could be adapted to homomorphic encryption, and second, encrypted genomic database queries. Not all popular methods in the unencrypted domain are good candidates to run in the encrypted domain. Depending upon the homomorphic encryption scheme, mathematical functions like max, min, greater than, less than, and equality tests, and algorithmic constructs like loops and sorts, are not easily implementable on encrypted data.
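To illustrate the level and scale bookkeeping behind the operators defined above, the sketch below is a deliberately non-cryptographic mock: it tracks only slot values, scale and level, showing how a Hadamard multiplication multiplies the scales and how rescaling restores the scale while consuming one modulus level. It is not an implementation of CKKS.

```python
import numpy as np

class MockCKKS:
    """Plaintext mock of a CKKS ciphertext: slots plus (scale, level) metadata only."""
    def __init__(self, values, scale=2**30, level=4):
        self.slots = np.asarray(values, dtype=float)
        self.scale, self.level = scale, level

    def add(self, other):                       # slot-wise addition, same scale and level
        return MockCKKS(self.slots + other.slots, self.scale, self.level)

    def mult(self, other):                      # Hadamard product; scales multiply
        return MockCKKS(self.slots * other.slots, self.scale * other.scale,
                        min(self.level, other.level))

    def rescale(self, delta=2**30):             # divide the scale by delta, drop one level
        return MockCKKS(self.slots, self.scale / delta, self.level - 1)

ct1 = MockCKKS([0.0, 1.0, 2.0])                 # e.g. genotype values packed in slots
ct2 = MockCKKS([2.0, 2.0, 2.0])
ct3 = ct1.mult(ct2).rescale()                   # values [0, 2, 4], scale back to 2^30, one level consumed
print(ct3.slots, np.log2(ct3.scale), ct3.level)
```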
We are looking for methods that enable swift kinship searches for relatives up to the third degree, while observing the aforementioned constraints imposed by the difficulty of transforming them into a homomorphic encryption arithmetic circuit.

Cleartext protocols for DNA matching

Genetic relatedness or kinship between two individuals can be described as the likelihood that, at a randomly chosen genomic location, the alleles in their genomes are inherited from a common ancestor. This phenomenon is known as Identical-by-Descent (IBD). This concept of relatedness should not be confused with the kinship coefficient and metrics closely connected to other genetic measures, including the inbreeding coefficient and probabilities associated with sharing IBD segments. To identify biological relationships beyond the immediate family, the segment approach and extended IBD segments are effective but require high-density markers, typically not available in forensic samples. Forensic samples typically rely on STR (Short Tandem Repeat) DNA typing, the preferred data format of forensic searches in criminal databases (i.e., Familial DNA Searching), to obtain partial matches with immediate relatives. Finding matches beyond immediate relatives is better suited to the single nucleotide polymorphism (SNP) data format. The main challenge in DNA kinship matching is choosing the right method for the computations. Most methods rely on observed allele sharing, Identity-By-State (IBS), to estimate probabilities of shared ancestry (IBD) or kinship coefficients, and many of these are too complex to run on encrypted data. Methods available for DNA kinship matching up to the third degree (e.g., siblings, half-siblings, or first cousins) differ in complexity, accuracy and latency. distinguish four categories of kinship methods. The first category entails moment estimators such as KING , REAP , plink , GCTA , GRAF and PC-Relate , which use Identical-by-State (IBS) markers and genotype distances to estimate expected kinship statistics. The second category is represented by the maximum-likelihood methods RelateAdmix and ERSA , which use expectation-maximization (EM) to jointly estimate the kinship statistics. The third and fourth families of methods use IBD-matching on phased genotypes (e.g. ), and kinship estimation from low-coverage next-generation sequencing data . All these methods use one or more of three types of analysis. Identity-by-Descent (IBD) analysis considers shared alleles across the entire genome and provides insights into relatedness at different temporal scales and levels of relatedness. Dou et al. use the mutual information between the relatives' degree of relatedness and a tuple of their kinship coefficients to build a Bayes classifier that predicts first- through sixth-degree relationships. Smith et al. developed IBIS, an IBD detector that locates long regions of allele sharing between unphased individuals. Morimoto et al. use Identity-by-State (IBS) analysis to identify regions of the genome where two individuals share the same alleles; the proportion of the genome that is IBS indicates the level of relatedness. Ramstetter et al. use haplotype sharing analysis to look at shared haplotypes within particular genomic regions and uncover recent common ancestry. Nonetheless, these methods can be too complex to yield the required low latency, since they demand elaborate polynomial approximations of non-linear functions in order to be transformed into a homomorphic encryption arithmetic circuit.
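As a point of reference for the IBS-based estimators mentioned above, the sketch below computes a plain IBS sharing profile on toy genotype vectors coded as allele counts {0, 1, 2}. It is not one of the cited estimators (KING, IBIS, etc.), and the simulated "relative" is a crude perturbation rather than a genetic simulation; it only illustrates how observed allele sharing separates close relatives from strangers.

```python
import numpy as np

def ibs_profile(g1, g2):
    """|g1 - g2| = 0 -> IBS2 (identical genotype), 1 -> IBS1, 2 -> IBS0 (opposite homozygotes)."""
    diff = np.abs(np.asarray(g1) - np.asarray(g2))
    return {"IBS0": float(np.mean(diff == 2)),
            "IBS1": float(np.mean(diff == 1)),
            "IBS2": float(np.mean(diff == 0))}

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.5, 16344)        # toy allele frequencies
proband = rng.binomial(2, p)             # toy genome
relative = proband.copy()                # crude "relative": resample 25% of the loci
flip = rng.random(p.size) < 0.25
relative[flip] = rng.binomial(2, p[flip])
stranger = rng.binomial(2, p)            # unrelated toy genome
print(ibs_profile(proband, relative), ibs_profile(proband, stranger))
```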
Moreover, while the competition challenge is well-suited for search methods that calculate kinship scores between each query and every entry in the database, it can also be re-framed as a decision problem to become more amenable to resolution through decision algorithms. Specifically, the challenge involves the task of establishing kinship scores that quantify the degree of genetic relatedness between a given query and any sample within a genomic database. Homer et al. suggest an algorithm working on cleartext that uses clustering of admixed populations. They demonstrate experimentally the identification of the presence of genomic DNA of specific individuals within a series of highly complex genomic mixtures. This is significant for two reasons: first, it brings SNPs back to the forefront for identifying individual trace contributors within a forensic mixture, where STRs had been the preferred method. Second, we will show (see the “ ” section) that this method is low-latency, accurate and amenable to Homomorphic Encryption. The choice of method depends on the quality and quantity of genetic data available, as well as the specific relationships being investigated and the population structure. The fastest methods to compute kinship are IBD methods. These methods, however, are not Homomorphic Encryption friendly and may require large computing resources and long latency when running in the encrypted domain. Table shows the fastest available methods to compute kinship on unencrypted data .
Over the last 10 years there have been a number of papers demonstrating private queries on encrypted data. Ramstetter et al. suggest a secure biometric authentication method that employs the fully homomorphic encryption TFHE scheme. They match biometric data from a local device to an encrypted biometric template on a remote-server encrypted database. Pradel and Mitchell introduce Private Collection Matching (PCM) problems, in which a client aims to determine whether a collection of sets owned by a database server matches their interests. EdalatNejad et al. propose a string matching protocol for querying the presence of particular mutations in a genome database. They combine the BGV homomorphic encryption scheme and private set intersection to search for similar string segments. Chen et al.
compute private queries on encrypted data in a multi-user setting. Bao et al. compute conjunctive queries on encrypted data. Saha and Koshiba execute comparison queries, while compute range queries on encrypted data. Boneh and Waters compute relatedness scores within the protected confines of the SGX Trusted Execution Environment, a hardware approach. Chen et al. proposed “sketching”, worked on “fingerprinting”, while implemented a differential privacy scheme. Wang et al. proposed a method to compute relatedness in the encrypted domain using Homomorphic Encryption that takes admixed populations into account. This projection-based approach utilizes existing reference genotype datasets to estimate admixture rates for each individual and uses these to estimate kinship in admixed populations. Dervishi et al. implement a k-means algorithm on encrypted data using CKKS. This algorithm shows the feasibility of our clustering scheme should we need to implement it fully encrypted, as proposed by . The relatedness measurement of a genetic sample query against a population of individuals comprising a genetic genealogy database can be framed as a decision algorithm: its purpose is to ascertain whether a given forensic genomic sample has a relative (match) in the database, extending up to the 3rd degree of kinship. For each individual query, a score is calculated, and this score is designed to be high when a relative is found and low when there is no relative in the database. Data discovery and analysis reveal the necessity of having a reference frame for mapping the query. Interestingly, any genome can act as this reference frame, particularly because the competition genomic database is derived from the same statistical data as the genomes in the challenge database, resulting in identical second-order statistics. Consequently, for practicality, we have opted to utilize the mean genome (allele average across all genome samples) from the challenge database, which is calculated offline and encrypted at runtime, as our reference. To assess genetic relatedness, we design a metric built on one-sample paired z-test hypothesis testing. This becomes our unsupervised method discussed in the “ ” section. In this approach, we assign weights to each coordinate when mapping the query onto the aforementioned reference frame. In order to improve accuracy and latency performance, we propose two supervised approaches. In the “ ” section, we present the one that uses the k-means algorithm on the challenge database to discover k data points that represent the underlying population mixture. This method uses the distance between a query and these k data points to gauge whether it relates to any of the k reference populations comprising the probed genetic database. The second method, discussed in the “ ” section, transforms the query by correlating it with the database mean. This transformation is an attempt to unveil an underlying pattern that could discern a query whose relative genetic data is present in the database from a query whose genetic data is absent. The outputs of these transformations are then used as features to learn a linear regression model trained to predict 1 if a query has a relative in the database and 0 otherwise. Concerning the data privacy protection of the methods, a summarized description of the security parameters used for encryption is shown in Table , while a more detailed discussion is carried out in the respective security subsections.
Unsupervised method

Unsupervised algorithms present a natural choice for addressing the relatedness problem. They require minimal assumptions about the dataset and contribute to more robust generalization. We proceed with the assumption that the genomic database primarily comprises genomes from individuals who are unrelated to the query individual. This assumption is grounded in the fact that the average fertility rate in the world population is 2.27 ; an individual is therefore, on average, likely to have fewer than 97 relatives up to the third degree (see Table ). In addition, it is unlikely that all relatives' genomes have found their way into the database. However, if we use the historically highest average fertility rate of 6.8 , the number of relatives up to the third degree could reach 2339 (see Table ), which is still much lower than a typical genomic database size but greater than our challenge dataset, in which case our method could not be used. That is, if the database characteristics followed a fertility rate of 6.8, implying 2339 relatives up to the 3rd degree, then the assumption that the individuals in the database are mostly unrelated, on which our method relies, would not hold, since the challenge database has only 2000 samples. In a real-world scenario, where databases hold tens to hundreds of thousands of samples, the assumption that the samples in the database are mostly unrelated might still hold, and our method could still be functionally suitable. We precomputed offline the correlations (dot products) between known queries that are confirmed to have a relative within the challenge database and every entry in the database. Our analysis reveals that 367 entries in the challenge database are related to at least one of the 200 positive query individuals (see Fig. ). Note that, by carefully observing Fig. , we may infer that correlation values around 13250 could indicate relatives of 1st degree, values around 12000 probably indicate relatives of 2nd degree, values around 11500 correspond to relatives of 3rd degree, and values below that correspond to distant relatives or unrelated individuals, i.e. individuals from different populations, with respect to query i (marked along the x-axis). This implies that, on average, each positive query is associated with just 1.83 relatives within the challenge database, out of a potential total of 97 existing relatives. It is worth noting again that only a small minority of these potential relatives have their genome data present in the challenge database. These findings validate the robustness of our unsupervised approach. Within this framework, Eq. serves the purpose of quantifying the distance between a query $q$ and the mean $\mu$ of the database considering all genotype variants, from $i=1$ to $i=16344$. Clearly, the database mean aligns closely with the centroid of the unrelated genomes, given their substantial presence compared to the related ones. In fact, the database mean is the average of the genotype values across all genomes in the database. In this manner, the database mean essentially characterizes “unrelatedness to any individual in the database”. This can be confirmed by observing that the correlation of any entry in the database with the database mean (see the blue x marks in the lower right corner of Fig. ) is lower than the correlation between a query and its relative in the database (see the green solid circles plotted in Fig. ).
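The precomputed correlations described above are plain dot products; a hedged sketch follows, in which the degree bands are rough midpoints read off the figure discussion (they are specific to the challenge data, not general constants) and the array shapes are hypothetical.

```python
import numpy as np

def correlate(query, database):
    """One dot product per database entry; rows of `database` are genotype vectors."""
    return database @ query

def degree_band(score):
    """Illustrative bands around the values quoted in the text (~13250 / 12000 / 11500)."""
    if score > 12600:
        return "about 1st degree"
    if score > 11750:
        return "about 2nd degree"
    if score > 11250:
        return "about 3rd degree"
    return "distant or unrelated"

# scores = correlate(query, D)        # D: (2000, 16344) matrix, query: (16344,) vector
# best = int(np.argmax(scores))
# print(best, float(scores[best]), degree_band(scores[best]))
```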
Another way to support this interpretation is by observing the scatter plot of the correlations between queries and the database mean in Fig. . They appear entangled and too poorly separated to judge whether positive or negative queries correlate more or less with the database mean. Superficially, it appears that the mean correlates more with the negative queries. This observation suggests that a higher correlation with the mean signifies a higher likelihood of being unrelated to any specific individual in the database, since the mean approximates the average of the populations. We extend this observation to interpret and explain the clustering-based formulation proposed in the “ ” section.

$$f(q_{\ell}, D) = \sum_{i=1}^{n} \frac{(q_{\ell,i} - \mu_i)^2}{\sigma_i^2}. \quad (4)$$

In Eq. (4), $q_{\ell}$ is an encrypted genome sample (query) $\ell$, $D$ is the encrypted genomic database, $q_{\ell,i}$ is the encrypted value of the genotype variant at gene locus $i$ in query $q_{\ell}$, and $\mu_i$ is the average value of the genotype variants at gene locus $i$ across all individuals in the admixture population making up $D$ (2000 database samples). Similarly, $\sigma_i^2$ is the variance of the genotype variants at gene locus $i$. Inspired by the one-sample paired z-test, we first assume that the means $\mu_i$ are continuous and form a simple random sample from the population of interest. Second, we assume that the data in the population is approximately normally distributed and, third, that we can compute the population standard deviation from the genomic database. From that, we proceed with hypothesis testing, making Eq. (4) an approximation of the distance from the query to the group of unrelated individuals. When this distance is small, the query yields an “unfound” result, whereas a larger distance results in a “found” outcome. Notably, genotype variant observations from related queries exhibit more significant deviations from the mean than those from unrelated queries. Experimentally, we verify that the distance values (scores) derived from related queries using Eq. (4) tend to be higher, with respect to the reference population, than the scores from unrelated queries (see Fig. ). These scores allow for the projection of related and unrelated queries into a linearly separable space using a predefined threshold. Indeed, the choice of a threshold renders a linear decision boundary that realizes the final classification/detection of whether the query has a relative in the database or not. Additionally, examining the classification performance (False Positive Rate, Precision and Recall) at varying thresholds allows us to plot the receiver operating characteristic (ROC) curve and select an optimal threshold value for final predictions on unseen queries (see Fig. ).
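As a cleartext reference for Eq. (4), the sketch below computes the per-query score and the resulting auROC over labeled queries; the array names and shapes (D, queries, labels) are hypothetical placeholders for the challenge data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def zscore_distance(query, mu, sigma2):
    """Cleartext Eq. (4): sum_i (q_i - mu_i)^2 / sigma_i^2."""
    return float(np.sum((query - mu) ** 2 / sigma2))

# Hypothetical inputs: D is the (2000 x 16344) genotype matrix, queries is (400 x 16344),
# labels[i] = 1 if query i has a relative in D, else 0.
# mu, sigma2 = D.mean(axis=0), D.var(axis=0)
# scores = np.array([zscore_distance(q, mu, sigma2) for q in queries])
# print("auROC:", roc_auc_score(labels, scores))
```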
Optimization for performance

In the following, we make adjustments to Eq. (4) to ensure its compatibility with Homomorphic Encryption. In Eq. (5), we add a small constant $e$ to the denominator to avoid division by zero. In Eq. (6), we replace the variance $\sigma_i^2$ by the mean $\mu_i$ to avoid computations that would not change the ranking – this was verified experimentally.

$$\sum_{i=1}^{n} \frac{(q_{\ell,i} - \mu_i)^2}{\sigma_i^2} \approx \sum_{i=1}^{n} \frac{(q_{\ell,i} - \mu_i)^2}{\sigma_i^2 + e} \quad (5)$$

$$\sum_{i=1}^{n} \frac{(q_{\ell,i} - \mu_i)^2}{\sigma_i^2 + e} \;\rightarrow\; \sum_{i=1}^{n} \frac{(q_{\ell,i} - \mu_i)^2}{\mu_i + e} \quad (6)$$

In Eq. (7), we approximate the division by $\mu_i + e$ with a linear function.

$$\sum_{i=1}^{n} \frac{(q_{\ell,i} - \mu_i)^2}{\mu_i + e} \approx \sum_{i=1}^{n} (q_{\ell,i} - \mu_i)^2 \left(a(\mu_i + e) + b\right)^2, \quad \text{where } a(\mu_i + e) + b \approx \frac{1}{\sqrt{\mu_i + e}},\; a = -10.51,\; b = 12.49 \quad (7)$$

Initially, we considered utilizing Goldschmidt's algorithm for the division by the variance. However, this approach calls for a staggering 26 multiplicative levels (in our implementation, without the need for bootstrapping), rendering it unsuitable for achieving low latency. In lieu of Goldschmidt's algorithm, we opted for a linear approximation of the division by the mean $\mu$, even though it introduces a degree of inaccuracy. The trade-off, however, is a substantial reduction in latency. Equation (7) requires multiplicative depth $L=4$ (i.e. 4 multiplication levels) with the CKKS scheme. We achieve this by choosing $\log Q = 218$ with scaling factor $\Delta = 2^{30}$, for which the smallest polynomial degree that reaches 128-bit security is $N = 2^{13}$. Note that $Q$ here denotes the coefficient modulus value and $\log Q$ is the number of bits required to represent it in binary. The scaling factor $\Delta = 2^{30}$, even though small, proved to offer a sufficient noise budget to avoid arithmetic precision loss, such that the results obtained homomorphically are equal to the outputs in clear text. Table shows how we heuristically find the optimal threshold for post-prediction decision making and the small constant $e$ used in Eq. (7). Figure shows how the auROC varies with the value of $e$, where each ROC curve is plotted by varying the prediction decision threshold. The optimal threshold value in each ROC curve is located at the point of the curve that satisfies $\min_{x,y} |\mathrm{TPR} - (1 - \mathrm{FPR})|$, where FPR is the false positive rate on the x-axis and TPR is the true positive rate on the y-axis. Additionally, we employed OpenMP to parallelize the addition operations involved in the homomorphic computation of the mean $\mu$. Note that the homomorphic computation of the mean is only necessary in the fully unsupervised case, where the order statistics of the population are unknown; otherwise, the mean can be precomputed ahead of time and made available for inference in encrypted form to further reduce inference latency. Furthermore, we reorder the sequence of operations to delay the homomorphic rotations so that they are always applied to a reduced number of ciphertexts, effectively reducing the number of rotations since they are performed only when strictly necessary. We call this “lazy” rotation, typically happening for outer sums of ciphertexts. We also perform multiple query predictions in parallel using OpenMP.
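A cleartext sketch of the HE-friendly score of Eq. (7) and of the threshold choice described above. The coefficients a and b are the values quoted in the text, while E is a placeholder for the heuristically chosen constant e; the ROC-based picker selects the point where TPR is closest to 1 − FPR, as we read the criterion above.

```python
import numpy as np
from sklearn.metrics import roc_curve

A, B, E = -10.51, 12.49, 1e-2        # a, b from Eq. (7); E is a placeholder for the constant e

def he_friendly_score(query, mu):
    """Cleartext analogue of Eq. (7): the division by (mu_i + e) is replaced by
    the squared linear surrogate (a*(mu_i + e) + b)^2, which is depth-friendly."""
    inv_approx = (A * (mu + E) + B) ** 2
    return float(np.sum((query - mu) ** 2 * inv_approx))

def pick_threshold(scores, labels):
    """Operating point where TPR is closest to 1 - FPR on the ROC curve."""
    fpr, tpr, thr = roc_curve(labels, scores)
    return thr[np.argmin(np.abs(tpr - (1 - fpr)))]
```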
This algorithm runs in two steps: it first evaluates the database mean, and it then evaluates Eq. (7) to obtain the prediction score. Step one can be done offline with the challenge database or online using the competition database during inference. If computed offline, the database mean will be encrypted and become part of the input to the homomorphic evaluation of Eq. (7). Precomputing the mean allows us to reduce the required multiplicative depth of the homomorphic circuit, in which case the encryption parameters are set to $L=3$ and $\log Q = 188$, which in turn also helps reduce latency.

Security level and parameters selection

A security level of 128 bits is enforced by using a polynomial modulus degree of $N = 2^{13}$ and a coefficient modulus of size $\log Q = 218$. We follow the BKZ.sieve model discussed in to determine the values of those parameters, namely $\log Q$ and $N$, that achieve a 128-bit security level. We set the sequence of co-primes to have bit lengths {49, 30, 30, 30, 30, 49}, whose product approximates $Q$; for the case $L=3$ with $\log Q = 188$, the sequence has one less inner co-prime and becomes {49, 30, 30, 30, 49}.

Packing

In order to reduce computational cost, we streamline the data packing into as few CKKS ciphertexts as possible. Figure illustrates how, by selecting a polynomial ring degree of $N = 2^{13}$, there are 4096 slots within a single ciphertext, where we can effectively store up to 4096 genotype variants out of the 16344. Consequently, merely 4 ciphertexts are needed to encrypt a genome feature vector encompassing 16344 genotype variants. This means that if the polynomials of the ciphertexts have degree $N$, Microsoft SEAL's implementation of CKKS offers enough slots to store $N/2$ fixed-point numbers. Henceforth, this same data packing strategy is used across all solutions presented in this work.

Encrypted algorithm

The encrypted algorithm is described in Algorithm 4. Lines 5 through 12 compute the database mean $\hat{\mu}$ in the encrypted domain. Lines 13 through 14 add the small constant that avoids division by zero in the cleartext domain, where line 14 (see Algorithm 1) ensures that the plaintext $e$ has the same scale and level as $\hat{\mu}$. Line 15 sums the encrypted mean $\hat{\mu}$ with the small constant $e$. From line 16 (Algorithm 2) to line 19, we compute the encrypted approximation of $\frac{1}{\hat{\mu} + e}$, i.e. $(a(\mu + e) + b)^2$. Lines 20 through 25 compute $(q_{\ell} - \mu)^2$. Lines 26 through 28 multiply the two terms $(q_{\ell} - \mu)^2$ and $(a(\mu + e) + b)^2$. Finally, the final score for query $q_{\ell}$ is computed as the sum of all the elements in the slots of ciphertext $\hat{\chi}'[\ell]$, i.e. performing the sum $\sum_{i=1}^{n} (q_{\ell,i} - \mu_i)^2 (a(\mu_i + e) + b)^2$ (see Algorithm 3 for details on the rotation-sum operation).
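A small helper illustrating the slot-packing arithmetic described above: with N/2 slots per ciphertext, a 16344-variant genome needs ⌈16344/4096⌉ = 4 packed vectors. This is plain bookkeeping in NumPy, not Microsoft SEAL code.

```python
import math
import numpy as np

def pack_genome(genotypes, poly_degree=2**13):
    """Split a genotype vector into ceil(n / (N/2)) slot vectors of length N/2,
    zero-padding the last chunk, mirroring the packing layout described above."""
    slots = poly_degree // 2
    n_chunks = math.ceil(len(genotypes) / slots)
    padded = np.zeros(n_chunks * slots)
    padded[:len(genotypes)] = genotypes
    return padded.reshape(n_chunks, slots)

print(pack_genome(np.zeros(16344)).shape)         # (4, 4096)
print(pack_genome(np.zeros(16344), 2**12).shape)  # (8, 2048), as used later for the L = 1 circuit
```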
Clustering-based supervised method

The clustering-based approach was derived from the framework put forward in and is similar in spirit to the approach by , which takes sub-populations into account. The database $D$ is represented by a set of cluster centroids $c_j$ and the database mean $\mu$. The first term of Eq. (8) measures the absolute distance between a query $q_{\ell}$ and the database population mean $\mu$. The smaller this distance, the greater the uncertainty in determining whether a query has a relative in the underlying population mixture. The second term measures the absolute distance between a query $q_{\ell}$ and a centroid $c_j$, where $j$ denotes the $j$-th centroid. The smaller this second distance, the greater the likelihood that a query has a relative in the mixture. The maximum difference between these two terms across all $k$ centroids results in the final predicted kinship score. The numerator represents a measurement of the relationship of a query $q$ (point) with respect to the cluster representing the underlying mixture. The denominator is a normalization factor for the value computed in the numerator and is constant for each individual query prediction; therefore, it can be disregarded in the actual computation to save on latency. Initially, in the proposed procedure, the genomic database undergoes cleartext-domain clustering (on the database owner's premises). This clustering is solved using Lloyd's k-means algorithm to determine a centroid $j$ for each underlying sub-population (see ). The average complexity is given by $O(k \eta T)$, where $\eta$ is the number of samples and $T$ is the number of iterations. Subsequently, Eq. (8) finds its application in the encrypted domain, leveraging the $k$ encrypted centroids established during the k-means algorithm's operation. The selection of the parameter $k$ is determined by the k-means algorithm's assessment of the reference database. To minimize latency, a prudent choice is made to employ a smaller value of $k$. More specifically, we set $k=5$, as it does not compromise accuracy. This cluster-point relationship solution is mathematically described in Eqs. , and . Let a centroid $c_j$ represent a sub-population $j$ in the genomic database. When the difference between the query $q_{\ell}$ and the mean $\mu$ is larger than the difference between the query $q_{\ell}$ and the centroid $c_j$, the value is positive, indicating that the query has a relative in the database. Conversely, if the difference between the query and the mean is smaller than the difference between the query and the centroids, the value is negative, indicating that it does not have a relative in the database. These calculated scores pave the way for projecting both related and unrelated queries onto a linearly separable space (see Fig. a).

$$f(q_{\ell}, D) = \max_{j} \frac{\sum_{i=1}^{n} \left( |q_{\ell,i} - \mu_i| - |q_{\ell,i} - c_{j,i}| \right)}{\sqrt{\frac{1}{n} \sum_{i=1}^{n} (q_{\ell,i} - \mu_i)^2}}$$

$$f(q_{\ell}, D) = \frac{1}{\sqrt{\frac{1}{n} \sum_{i=1}^{n} (q_{\ell,i} - \mu_i)^2}} \; \max_{j} \sum_{i=1}^{n} \left( |q_{\ell,i} - \mu_i| - |q_{\ell,i} - c_{j,i}| \right)$$

$$f(q_{\ell}, D) = \alpha \; \max_{j} \sum_{i=1}^{n} \left( |q_{\ell,i} - \mu_i| - |q_{\ell,i} - c_{j,i}| \right) \quad (8)$$

Optimization for performance

Since $\alpha$ is constant for all queries $q_{\ell}$, the denominator is a normalization factor and can be moved outside the max function.
Thus, we concentrate on the numerator of Eq. (8) to rank the predictions, and we establish $f'(q_{\ell}, D)$ in Eq. (9). By eliminating this normalization step, the algorithm becomes more efficient, at the expense of possibly not preserving the original ranking among the queries. This relaxation of the original equation is valuable for improving the computational efficiency in the encrypted domain, and it was empirically verified not to affect the accuracy.

$$f'(q_{\ell}, D) = \max_{j} \sum_{i=1}^{n} \left( |q_{\ell,i} - \mu_i| - |q_{\ell,i} - c_{j,i}| \right), \quad \text{where } f(q_{\ell}, D) = \alpha f'(q_{\ell}, D) \quad (9)$$

This effectively reduces the amount of required computation. Then, we further simplify the prediction function by replacing the operator $\max_j$ with $\sum_{j=1}^{k}$, the sum over all computations across the $k$ centroids. The final score is now the aggregated vote of the $k$ differences between the distance of the query to the mean and the distance of the query to each centroid. Empirically, we verify that this does not alter the final predictions, such that the final objective becomes

$$f''(q_{\ell}, D) = \sum_{j=1}^{k} \sum_{i=1}^{n} \left( (q_{\ell,i} - \mu_i)^2 - (q_{\ell,i} - c_{j,i})^2 \right), \quad (10)$$

where $c_{j,i}$ is the $i$-th genotype variant of the $j$-th cluster centroid and $n$ is the total number of genotype variants, i.e. $n = 16{,}344$. In this framework, Eq. (10) takes on the role of quantifying the separation between the query and the database mean, while also subtracting the query's separation from each sub-population. In the competition, this method requires $400 \times k$ evaluations of Eq. (10), since the challenge consists of testing 400 queries. For a choice of $k=5$, 2,000 evaluations are required, which is three orders of magnitude fewer operations than the naïve solution that requires 800,000 cross-correlation evaluations, as depicted in Fig. . Since, in Eq. (10), the mean $\mu$ and the cluster centroids $c_j$ are precomputed offline and used during inference, we consider this a supervised approach. It assumes that the characteristics of the underlying population mixture from the challenge dataset are sufficient to generalize predictions to unknown query data. For this reason, the algorithm that computes inference as in Eq. (10) requires only multiplicative depth $L=1$, greatly optimizing the multiplicative depth complexity and the latency associated with it. This algorithm runs in two steps: first, it evaluates the database mean and computes the $k$ cluster centroids in the cleartext domain; secondly, it evaluates Eq. (10) to output the kinship prediction scores. Step one is done offline, with the genomic database still on the database owner's premises; then, the database mean and cluster centroids are encrypted and sent to the computing entity as part of the input to the encrypted evaluation of Eq. (10).
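A cleartext sketch of the clustering-based score of Eq. (10): the k-means centroids are mined offline (here with scikit-learn) and the score aggregates, over the k centroids, the gap between the query's squared distance to the database mean and its squared distance to each centroid. k = 5 follows the text; the data arrays are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_centroids(D, k=5, seed=0):
    """Offline, cleartext step on the database owner's premises."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(D).cluster_centers_

def cluster_score(query, mu, centroids):
    """Eq. (10): sum_j sum_i ((q_i - mu_i)^2 - (q_i - c_{j,i})^2); positive suggests a relative."""
    d_mu = np.sum((query - mu) ** 2)
    d_c = np.sum((query - centroids) ** 2, axis=1)
    return float(np.sum(d_mu - d_c))

# mu = D.mean(axis=0); centroids = fit_centroids(D)            # D: (2000, 16344)
# scores = np.array([cluster_score(q, mu, centroids) for q in queries])
```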
Security level and parameters selection

This time, the coefficient modulus size $\log Q$ does not have to comprise many bits, since the multiplicative depth equals 1. Even though a smaller $Q$ is possible, and accordingly the parameters $\log Q$ and $N$, the scaling factor size $\log \Delta$ must be carefully chosen. In this case, the scaling factor $\Delta = 2^{p}$ dictates how much arithmetic precision is available to compute the target workload without corrupting the decryption. Given those considerations, we select a scaling factor that allows us to minimize the polynomial degree $N$ as much as possible. We found that the scaling factor $\Delta = 2^{29}$ is sufficient to keep the arithmetic precision afloat during the computation of Eq. (10). To ensure a 128-bit security level, we use a polynomial ring of degree $N = 2^{12}$ and a coefficient modulus of size $\log Q = 109$. The coefficient modulus chain comprises co-primes with bit lengths {40, 29, 40}. This choice of parameters provides a compact and fast implementation.

Packing

In order to reduce computational cost, it is crucial to streamline the organization of the CKKS ciphertexts. We perform data packing and encryption similarly to what Fig. illustrates. With a polynomial ring degree of $N = 2^{12}$, there are 2048 slots available to pack data within a single ciphertext, effectively accommodating all 16,344 genotype variants in 8 ciphertexts.

Encrypted algorithm

The full instructions of the encrypted algorithm are described in Algorithm 5. Lines 5 through 8 compute $(q_{\ell} - \mu)^2$. Lines 15 through 18 compute $(q_{\ell} - c_j)^2$. Lines 19 through 20 compute $(q_{\ell} - \mu)^2 - (q_{\ell} - c_j)^2$. Lines 21 through 22 store and aggregate the relationship scores of query $q_{\ell}$ with respect to the mean $\mu$ and each centroid $c_j$ in a separate ciphertext $\hat{\gamma}[\ell]$. Line 26 concludes by performing the sum of all the scores for query $q_{\ell}$, as in $\sum_{j=1}^{k} \sum_{i=1}^{n}$.

Linear regression method

Our clustering-based supervised approach lifts accuracy to the highest possible value, auROC = 1, i.e. it achieves perfectly accurate predictions. Nonetheless, its limitations lie in knowing how to optimally choose $k$ when no knowledge about the reference population is available, and in hurting latency performance as the number of reference populations $k$ increases. The choice of $k$ is important because it directly impacts accuracy. This technique could also be regarded as less flexible than the unsupervised approach, since if the reference population expands or shrinks drastically, it could require re-mining the cluster centroids; it is therefore less adaptable to changes than the unsupervised approach, which handles them naturally. To mitigate these foreseen potential issues, we envision another supervised solution based on linear regression, which does not require tuning a hyperparameter such as $k$, even when the characteristics of the reference population mixture are unknown, and does not increase the amount of computation as $k$ increases.
It relies on extracting features from queries by applying a masking procedure with the database mean, and then optimizing the coefficients of a linear regression model to learn the underlying patterns, captured by these features, that discern between having or not having a relative in the database. As for adaptability, this approach could arguably be more robust to small changes in the reference mixture, given that its prediction power depends only on the pattern learned to differentiate whether a query has a relative in a genomic database given its mean, which can easily be recomputed to apply new feature transformations to the queries. Linear regression has been widely used for tackling secure genome problems (e.g. ). The reason for this popularity is linked to its arithmetic simplicity, robustness, and track record (e.g. ) in finding hyperplanes separating distinct patterns in high-dimensional spaces . We embrace these virtues to devise a more robust and efficient approach to the problem, nonetheless under the strong assumption that sufficient information characterizing the reference population mixture is available in the data, even if not specifically annotated. This emphasizes the supervised approaches' major limitation: robustness and adaptability to changes in the reference populations are constrained to small variations, unlike the proposed unsupervised approach described in the “ ” section.

Model training

The ground-truth is a collection of 200 annotated pairwise relationships between 200 query samples and 200 database samples. Eighty percent of those pairs are used for training and the remaining 20% are saved for testing. Hence, 160 queries known to have at least one relative in the database (i.e. positive queries) are set aside for feature selection and model training. From the challenge query set Q, containing 400 queries, the remaining 200 that do not appear in the ground-truth annotation are genomes known not to share their genetic data with any of the 2000 samples from the database; thus, we consider 160 of them (80%) as negative queries, i.e. examples of queries that do not have a relative in the database, for training, and the other 40 (20%) for testing. These samples are unique and provided as part of the challenge dataset. First, these 160 positive queries plus 160 negative queries are used for selecting the most relevant features (genotype variants) out of the 16344. Then, we create more positive and negative queries out of those 320 queries to increase the sample-to-feature ratio, i.e. we synthesize as many samples as needed to reach a ratio of about 10 samples per relevant feature. Training of the linear regression model follows suit, fed with the augmented sample set.

Feature selection

To help the linear regression optimizer find a more robust hyperplane and be less predisposed to overfitting, we perform dimensionality reduction using a variance-threshold technique. In this case, dimensions located at genotype variants whose variance is less than a certain threshold are disregarded when representing the genome sequence of a query. For feature selection, we use all 320 samples reserved for training. Depending on the value of the variance threshold, more or fewer features are deemed relevant. The goal is to have as few features as possible. We found that a threshold of 0.11, obtained by varying the threshold from 0.2 to 0.1 in steps of 0.01, yields robust performance with 3893 features out of the 16344 genotype variants.
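A scikit-learn sketch of the variance-based feature selection described above; the 0.11 threshold is the value quoted in the text, and X_train stands for a hypothetical (320 x 16344) matrix of training genotypes.

```python
from sklearn.feature_selection import VarianceThreshold

def select_features(X_train, threshold=0.11):
    """Drop genotype variants whose variance across the training samples is below
    the threshold; returns the fitted selector and the reduced matrix."""
    selector = VarianceThreshold(threshold=threshold)
    X_reduced = selector.fit_transform(X_train)
    return selector, X_reduced

# selector, X_red = select_features(X_train)   # X_red expected around (320, 3893)
# mask = selector.get_support()                # boolean mask over the 16344 variants
```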
Data augmentation
In addition, we increase the number of samples per feature to improve the generalization of the model. This consists of randomly resampling the 320 data points with replacement. Resampling is applied to increase the positive and negative samples by a factor of 120, such that we end up with about 10 samples per feature. We use the resample function from Python's sklearn package to accomplish this; that is, we perform oversampling, repeating some of the samples from the original collection.

Feature transformation
The original features of a query q are its genotype variants, a sequence of values in the set G = {0, 1, 2}. We apply a transformation to the genotype variants to create features derived from computing their relationship with the average of the genotype variants found in the target genomic database. That is, the transformation uses the genome mean μ of the database. This transformation is described algebraically in Eq. (11),

(11)  q′ = q · μ,

where · corresponds to element-wise multiplication between the components of q and μ (in cleartext, i.e. non-encrypted data). The training queries transformed into features q′ populate the matrix X in Eq. (13), where each row of X is either a positive or a negative sample, for training a linear regression model that separates queries that correlate with the mean from those that do not.

Training
The transformed queries q′ are samples indexed as rows of a sparse matrix X that is used to solve for the linear regression coefficients w. These samples become even sparser after the feature selection procedure, such that certain dimensions i are zeroed out. We use the conjugate gradient method to optimize the cost function, via ridge regression, shown in Eq. (12), which finds the coefficients that minimize the squared error of the predictions ŷ = Xw against the ground truth y. This objective function includes a regularization term weighted by α = 0.5 that helps reduce the risk of overfitting, in addition to the dimensionality reduction performed by the feature selection procedure. The ground-truth vector y holds the value y = 1 for positive queries and y = 0 for negative queries.

(12)  min_w ‖Xw − y‖² + α‖w‖²,

where

(13)  X = ( q′_{1,1} ⋯ q′_{1,16344} ; ⋮ ⋱ ⋮ ; q′_{38930,1} ⋯ q′_{38930,16344} ),   w = (w_1, …, w_{16344})^T,   y = (1, …, 1, 0, …, 0)^T.

We measure the training performance using different metrics. To assess the precision of the predicted values, we rely on both the R2-score and the root-mean-square error (RMSE). On the training set, the R2-score reaches 1.0, which means perfect accuracy, and the RMSE is 0.0000014. As for classification accuracy, we rely on the auROC, which summarizes the reliability of the recall and false positive rates in a single score. On the training set, it reports a 100% success rate with auROC = 1.0.
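A compact way to see the whole recipe (oversampling, the q · μ transformation, and ridge regression solved with a conjugate-gradient solver) is the scikit-learn sketch below. The dimensions and the oversampling factor are deliberately reduced so it runs quickly, and the random data stands in for the challenge genotypes; only the structure of the pipeline follows the text.

```python
# Sketch of the training pipeline: oversample with replacement, transform q' = q * mu,
# then fit ridge regression (alpha = 0.5) with a conjugate-gradient solver.
import numpy as np
from sklearn.utils import resample
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error, roc_auc_score

rng = np.random.default_rng(2)
n_variants = 1_000                      # reduced from 16,344 so the sketch stays lightweight
queries = rng.integers(0, 3, size=(320, n_variants)).astype(float)   # 160 positive + 160 negative
labels = np.array([1.0] * 160 + [0.0] * 160)
mu = rng.uniform(0.0, 2.0, size=n_variants)                          # stand-in for the database mean

# The paper oversamples by a factor of 120; a factor of 12 keeps this example small.
X_aug, y_aug = resample(queries, labels, replace=True, n_samples=320 * 12, random_state=0)

X_feat = X_aug * mu                                                  # feature transformation of Eq. (11)
model = Ridge(alpha=0.5, solver="sparse_cg").fit(X_feat, y_aug)      # ridge via conjugate gradient

y_hat = model.predict(X_feat)
rmse = float(np.sqrt(mean_squared_error(y_aug, y_hat)))
print(r2_score(y_aug, y_hat), rmse, roc_auc_score(y_aug, y_hat))
```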
Inference
The prediction phase occurs in the encrypted domain and consists of two steps. The first step applies the transformation shown in Eq. (11) to each of the 400 encrypted queries ct_{q_ℓ} = Enc_pk(m_{q_ℓ}), where m_{q_ℓ} = Encode(q_ℓ, Δ). The result is a collection of transformed input ciphertexts ct_{q′_ℓ} computed from the component-wise multiplication between ct_{q_ℓ} and ct_μ (see Eq. (14)), where ct_μ is the encrypted mean of the searchable genomic database D.

(14)  ct_{q′_ℓ} = Enc_pk(Encode(q_ℓ · μ, Δ)) ≈ ct_{q_ℓ} ⊙ ct_μ,

where ct_{q′_ℓ} corresponds to the encrypted feature vector of a query q_ℓ computed using the feature extraction procedure described in Eq. (11). This implies that the inference would consume one additional multiplicative depth to account for this preprocessing step, thus requiring an encryption configuration that allows for multiplicative depth L = 2 instead of L = 1, as explained in the parameter selection discussion. In practice, we bypass this preprocessing step for efficiency, i.e. to avoid an additional level, by directly using the encrypted queries ct_{q_ℓ} with their original values (see Eq. (15)) for inference. We empirically verified that this yields comparable results and does not affect the accuracy. Hence, we keep the multiplicative depth of the encrypted circuit of this linear regression-based approach down to L = 1.

(15)  ct_{q_ℓ} = Enc_pk(Encode(q_ℓ, Δ)).

Note that the training step uses q′_ℓ as features to learn the classification hyperplane. The linear regression inference function for a single query in cleartext is defined as ŷ = q_ℓ w + b. In the encrypted domain, this same inference function takes a different form and is defined as follows:

(16)  ct_{r_0} = Σ_{j=1}^{M} ct_{q_ℓ}[j] ⊙ ct_w[j],
      ct_{r_{k+1}} = ct_{r_k} + Rotate(ct_{r_k}, 2^k),  0 ≤ k < log_2(N/2),
      ct_ŷ = ct_b + ct_{r_{log_2(N/2)}},

where [j] denotes indexing of the j-th ciphertext in a collection of M ciphertexts encrypting the query q_ℓ and the weights w. ct_w and ct_b denote the encrypted linear regression coefficients and bias, respectively. ct_ŷ corresponds to the encrypted real-valued prediction that measures the likelihood of query q sharing genetic data with any of the database samples. M equals ⌈16344/(N/2)⌉, i.e. the number of ciphertexts used to encrypt all the features of a single query q.
Rotation is executed log_2(N/2) times to iteratively accumulate the sum of all the elements in the slots of the output ciphertext ct_{r_0}, where ct_{r_0} results from the homomorphic pointwise multiplication of the encrypted query and the encrypted weights (see Fig. for a toy illustration). At each iteration k, the rotation applies a circular shift of 2^k slots to the running ciphertext ct_{r_k}, which resulted from the previous rotation and accumulation step. In the end, the sum of all elements in the slots is stored in every slot of the ciphertext ct_{r_{log_2(N/2)}} (see Fig. for a toy illustration). At last, the encrypted linear regression bias term, denoted as ct_b, is added to ct_{r_{log_2(N/2)}} so as to complete the linear regression dot product as the encrypted prediction ct_ŷ; the prediction score for the single query q appears in all the slots of ciphertext ct_ŷ.
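The rotate-and-sum pattern can be simulated in cleartext with cyclic shifts. The NumPy sketch below uses toy slot contents and involves no encryption; it only mirrors the accumulation in Eq. (16).

```python
# Cleartext simulation of the rotate-and-sum accumulation in Eq. (16): after
# log2(N/2) cyclic rotations and additions, every slot holds the full dot product.
import numpy as np

N = 2 ** 12
slots = N // 2                                             # 2048 slots per ciphertext
rng = np.random.default_rng(6)
q_slots = rng.integers(0, 3, size=slots).astype(float)     # packed query values (cleartext stand-in)
w_slots = rng.normal(size=slots)                           # packed linear-regression weights

r = q_slots * w_slots                                      # analogue of ct_{r_0}: slot-wise product
for k in range(int(np.log2(slots))):                       # log2(N/2) rotations
    r = r + np.roll(r, -(2 ** k))                          # rotate by 2^k slots, then accumulate

assert np.allclose(r, np.dot(q_slots, w_slots))            # every slot now equals the dot product
print(float(r[0]))
```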
Optimization for performance
While the feature transformation in Eq. (14) is an easy-to-compute element-wise vector-vector multiplication, the inference in Eq. (16) is a matrix-vector multiplication that entails a matrix-row number of dot products. Even though two consecutive multiplications are involved in this sequence of operations, only 1 level is consumed, since the modulus switch operation is postponed until after the second multiplication is complete. We also optimize the number of rotations needed to accumulate the results of the element-wise multiplications involved in a dot product by first adding all the ciphertexts involved in a single query prediction (see Eq. (16)). That is, an encrypted query containing 16,344 features is split into ⌈16344/(N/2)⌉ ciphertexts; therefore, after multiplying them by the encrypted database mean, instead of applying log_2(N/2) rotations on each of the individual ciphertexts to sum their internal components first, we first sum the ciphertexts to obtain a single ciphertext, and only then are the log_2(N/2) rotations executed to perform the sum of the dot product.

Security level and parameters selection
Analogously to the clustering-based approach, we manage to maintain multiplicative depth L = 1 for the linear regression-based supervised method. Both providing a precomputed database mean and postponing the modulus switch operation until after the second multiplication help achieve that. Additionally, as briefly explained in the inference description, we do not apply the feature transformation to the query at inference time but instead use the original data values directly, since the transformation would demand setting L = 2. This way, the same parameter values are used, i.e. coefficient modulus size log Q = 109, polynomial ring size N = 2^12, scaling factor Δ = 2^29, and a modulus chain comprising a sequence of co-primes with bit lengths {40, 29, 40}.

Encrypted algorithm
The full set of instructions describing the encrypted linear regression algorithm is shown in Algorithm 6. Lines 5 to 9 perform the component-wise multiplication of the linear coefficients and the query data (see top row of Fig. ). Line 10 performs the sum of all the elements in the slots resulting from the product of the linear coefficients and the input data (see bottom row of Fig. and top row of Fig. for a toy illustration of the sequence of operations). Line 12 performs the addition of the linear regression dot product with the bias term (see bottom row of Fig. ). The prediction results are stored in ŷ and returned.

Unsupervised algorithms present a natural choice for addressing the relatedness problem. They require minimal assumptions about the dataset and contribute to more robust generalization. We proceed with the assumption that the genomic database primarily comprises genomes from individuals who are unrelated to the query individual. This assumption is grounded in the fact that the average fertility rate of the world population is 2.27; an individual is therefore likely, on average, to have fewer than 97 relatives up to the third degree (see Table ). In addition, it is unlikely that all relatives' genomes have found their way into the database. However, if we use the historically highest average fertility rate of 6.8, the number of relatives up to the third degree could reach 2,339 (see Table ), which is still much lower than a typical genomic database size but larger than our challenge dataset, in which case our method could not be used. That is, if the database characteristics followed a fertility rate of 6.8, implying up to 2,339 relatives up to the third degree, then the assumption that the individuals in the database are mostly unrelated, on which our method relies, would not be suitable, since the challenge database has only 2,000 samples. In a real-world scenario, where databases have tens to hundreds of thousands of samples, the assumption that the samples in the database are mostly unrelated might still hold, in which case our method remains functionally suitable. We precomputed offline the correlations (dot products) between known queries that are confirmed to have a relative within the challenge database and every entry in the database. Our analysis reveals that 367 entries in the challenge database are related to at least one of the 200 positive query individuals (see Fig. ). Note that, by carefully observing Fig. , we may infer that correlation values around 13,250 could indicate first-degree relatives, values around 12,000 probably correspond to second-degree relatives, and values around 11,500 to third-degree relatives; below and beyond lie distant relatives or unrelated individuals, i.e. individuals from different populations, with respect to query i (marked along the x-axis). This implies that, on average, each positive query is associated with just 1.83 relatives within the challenge database, out of a potential total of 97 existing relatives. It is worth noting again that only a small minority of these potential relatives have their genome data present in the challenge database. These findings validate the robustness of our unsupervised approach.
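In cleartext, that offline analysis is just a batch of dot products between the positive queries and the database rows. The sketch below is a loose illustration on synthetic genotypes, so the numeric bands quoted above (around 13,250, 12,000 and 11,500) will not reproduce; only the computation pattern is the same.

```python
# Offline correlation analysis in cleartext: dot products between queries known to
# have a relative and every database entry. Synthetic data; values are not meaningful.
import numpy as np

rng = np.random.default_rng(7)
database = rng.integers(0, 3, size=(2_000, 16_344)).astype(float)
positive_queries = database[:200]                 # stand-ins for the 200 annotated positive queries

correlations = positive_queries @ database.T      # shape (200, 2000): one dot product per pair
strongest = correlations.max(axis=1)              # strongest match per query
print(strongest[:5])
```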
Within this framework, Eq. (4) serves the purpose of quantifying the distance between a query q and the mean μ of the database, considering all genotype variants from i = 1 to i = 16,344. Clearly, the database mean aligns closely with the centroid of the unrelated genomes, given their substantial presence compared to the related ones. In fact, the database mean is the average of the genotype values across all genomes in the database. In this manner, the database mean essentially characterizes "unrelatedness to any individual in the database". This can be confirmed by observing that the correlation of any entry in the database with the database mean (see the blue x marks in the lower right corner of Fig. ) is lower than the correlation between a query and its relative in the database (see the green solid circles plotted in Fig. ). Another way to support this interpretation is to observe the scatter plot of the correlations between the queries and the database mean in Fig. . They appear entangled, making it hard to judge whether positive or negative queries correlate more with the database mean. Superficially, the negative queries appear to correlate more with the mean. We use this observation to argue that a higher correlation with the mean signals a higher likelihood of being unrelated to any specific individual in the database, since the mean approximates the average of the populations. We extend this observation to interpret and explain the clustering-based formulation proposed in this work.

(4)  f(q_ℓ, D) = Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² / σ_i².

In Eq. (4), q_ℓ is an encrypted genome sample (query) ℓ, D is the encrypted genomic database, q_{ℓ,i} is the encrypted value of the genotype variant at gene locus i in query q_ℓ, and μ_i is the average value of the genotype variants at gene locus i across all individuals in the admixture population making up D (2,000 database samples). Similarly, σ_i² is the variance of the genotype variants at gene locus i. Inspired by the one-sample paired z-test, we first assume that the means μ_i are continuous and form a simple random sample from the population of interest. Second, we assume that the data in the population is approximately normally distributed and, third, that we can compute the population standard deviation from the genomic database. From that, we proceed with hypothesis testing, making Eq. (4) an approximation of the distance from the query to the group of unrelated individuals. When this distance is small, the query yields an "unfound" result, whereas a larger distance results in a "found" outcome. Notably, genotype variants from related queries exhibit more significant deviations from the mean than those from unrelated queries. Experimentally, we verify that the distance values (scores) derived from related queries using Eq. (4) tend to be higher, relative to the reference population, than the scores from unrelated queries (see Fig. ). These scores allow for the projection of related and unrelated queries into a linearly separable space using a predefined threshold. Indeed, the choice of a threshold renders a linear decision boundary that realizes the final classification/detection of whether the query has a relative in the database or not. Additionally, examining the classification performance (false positive rate, precision and recall) at varying thresholds allows us to plot the receiver operating characteristic (ROC) curve and select an optimal threshold value for the final predictions on unseen queries (see Fig. ).

Optimization for performance
In what follows, we make adjustments to Eq. (4) to ensure its compatibility with homomorphic encryption. In Eq. (5), we add a small constant e to the denominator to avoid division by zero. In Eq. (6), we replace the variance σ² with the mean μ to avoid computations that would not change the ranking; this was verified experimentally.

(5)  Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² / σ_i²  ≈  Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² / (σ_i² + e)

(6)  Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² / (σ_i² + e)  →  Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² / (μ_i + e)

In Eq. (7), we approximate the division by μ with a linear equation.

(7)  Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² / (μ_i + e)  ≈  Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² (a(μ_i + e) + b)²,   where a(μ_i + e) + b ≈ 1/(μ_i + e), a = −10.51, b = 12.49.

Initially, we considered utilizing Goldschmidt's algorithm for the division by the variance. However, this approach calls for a staggering 26 multiplicative levels (in our implementation, without the need for bootstrapping), rendering it unsuitable for achieving low latency. In lieu of Goldschmidt's algorithm, we opted for a linear approximation of the division by the mean μ, even though it introduces a degree of inaccuracy. The trade-off, however, is a substantial reduction in latency. Equation (7) requires multiplicative depth L = 4 (i.e. 4 multiplication levels) with the CKKS scheme. We accommodate this by choosing log Q = 218 with scaling factor Δ = 2^30, for which the smallest polynomial degree reaching 128-bit security is N = 2^13. Note that Q here denotes the coefficient modulus value and log Q is the number of bits required to represent it in binary. The scaling factor Δ = 2^30, even though small, proved to offer a sufficient noise budget to avoid loss of arithmetic precision, such that the results obtained homomorphically are equal to the cleartext outputs. Table shows how we heuristically find the optimal threshold for post-prediction decision making and the small constant e used in Eqs. (5)-(7). Figure shows how the auROC varies with the value of e, where each ROC curve is plotted by varying the prediction decision threshold. The optimal threshold value in each ROC curve is located at the point of the curve that satisfies min_{x,y}(TPR − (1 − FPR)), where FPR is the false positive rate on the x-axis and TPR is the true positive rate on the y-axis. Additionally, we employed OpenMP to parallelize the addition operations involved in the homomorphic computation of the mean μ. Note that the homomorphic computation of the mean is only necessary in the fully unsupervised case, where the order statistics of the population are unknown; otherwise, the mean can be precomputed ahead of time and made available for inference in encrypted form to further reduce inference latency.
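A cleartext NumPy version of the HE-friendly score in Eq. (7) is shown below. The constant e and the decision threshold are placeholders chosen for the sketch; the paper tunes both heuristically as described above.

```python
# Cleartext sketch of the HE-friendly unsupervised score of Eq. (7):
# sum_i (q_i - mu_i)^2 * (a*(mu_i + e) + b)^2, with a = -10.51 and b = 12.49.
import numpy as np

rng = np.random.default_rng(3)
database = rng.integers(0, 3, size=(2_000, 16_344)).astype(float)
mu = database.mean(axis=0)

a, b = -10.51, 12.49
e = 0.5                                   # placeholder, not the heuristically tuned constant

def unsupervised_score(query: np.ndarray) -> float:
    approx_inv = a * (mu + e) + b         # linear approximation of 1 / (mu + e)
    return float(np.sum((query - mu) ** 2 * approx_inv ** 2))

queries = rng.integers(0, 3, size=(10, 16_344)).astype(float)
scores = np.array([unsupervised_score(q) for q in queries])
threshold = np.median(scores)             # placeholder for the ROC-selected threshold
print((scores > threshold).astype(int))   # 1 = "found" (likely related), 0 = "unfound"
```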
Furthermore, we reorder the sequence of operations to delay the homomorphic rotations so that they are always applied to a reduced number of ciphertexts, effectively reducing the number of rotations, since they are only performed when strictly necessary. We call this "lazy" rotation, which typically happens for outer sums of ciphertexts. We also perform multiple query predictions in parallel using OpenMP. This algorithm runs in two steps: it first evaluates the database mean, and secondly it evaluates Eq. (7) to obtain the prediction score. Step one can be done offline with the challenge database or online using the competition database during inference. If computed offline, the database mean is encrypted and becomes part of the input to the homomorphic evaluation of Eq. (7). Precomputing the mean allows us to reduce the required multiplicative depth of the homomorphic circuit, in which case the encryption parameters are set to L = 3 and log Q = 188, which in turn also helps reduce latency.

Security level and parameters selection
A security level of 128 bits is enforced by using a polynomial modulus degree of N = 2^13 and a coefficient modulus of size log Q = 218. We follow the BKZ.sieve model discussed in to determine the values of those parameters, namely log Q and N, that achieve the 128-bit security level. We set the sequence of co-primes to have bit lengths {49, 30, 30, 30, 30, 49}, whose product approximates Q, whereas for the case L = 3 with log Q = 188 the sequence has one fewer inner co-prime and becomes {49, 30, 30, 30, 49}.

Packing
In order to reduce computational cost, we streamline the data packing into as few CKKS ciphertexts as possible. Figure illustrates how, by selecting a polynomial ring degree of N = 2^13, there are 4,096 slots within a single ciphertext in which we can effectively store up to 4,096 of the 16,344 genotype variants. Consequently, merely 4 ciphertexts are needed for encrypting a genome feature vector encompassing 16,344 genotype variants. In general, if the polynomials of the ciphertexts have degree N, Microsoft SEAL's implementation of CKKS offers enough slots to store N/2 fixed-point numbers. Henceforth, this same data packing strategy is used across all solutions presented in this work.

Encrypted algorithm
The encrypted algorithm is described in Algorithm 4. Lines 5 through 12 compute the database mean μ̂ in the encrypted domain. Lines 13 through 14 add the small constant that avoids division by zero in the cleartext domain, where line 14 (see Algorithm 1) ensures that the plaintext e has the same scale and level as μ̂. Line 15 sums the encrypted mean μ̂ with the small constant e. From line 16 (Algorithm 2) to line 19, we compute the encrypted approximation of 1/(μ̂ + e), i.e. (a(μ + e) + b)². Lines 20 through 25 compute (q_ℓ − μ)². Lines 26 through 28 multiply the two terms (q_ℓ − μ)² and (a(μ + e) + b)². Finally, the score for query q_ℓ is computed as the sum of all the elements in the slots of ciphertext χ̂′[ℓ], i.e. the sum Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² (a(μ_i + e) + b)² (see Algorithm 3 for details on the rotation-sum operation).
The clustering-based approach was derived from the framework put forward in and it is similar in spirit to the approach by , which takes sub-populations into account. The database D is represented by a set of cluster centroids c_j and the database mean μ. The first term of Eq. (8) measures the absolute distance between a query q_ℓ and the database population mean μ. The smaller this distance, the more uncertainty there is in determining whether the query has a relative in the underlying population mixture. The second term measures the absolute distance between a query q_ℓ and a centroid c_j, where j denotes the j-th centroid. The smaller this second distance, the greater the likelihood that the query has a relative in the mixture. The maximum difference between these two terms across all k centroids yields the final predicted kinship score. The numerator represents a measurement of the relationship of a query q (point) with respect to the cluster representing the underlying mixture. The denominator is a normalization factor for the value computed in the numerator and is constant for each individual query prediction; it can therefore be disregarded in the actual computation to save on latency. Initially, within the unveiled procedure, the genomic database undergoes cleartext-domain clustering (on the database owner's premises). This clustering is solved using Lloyd's k-means algorithm to determine a centroid j for each underlying sub-population. The average complexity is given by O(kηT), where η is the number of samples and T is the number of iterations. Subsequently, the scoring function in Eq. (8) finds its application in the encrypted domain, leveraging the k encrypted centroids established during the k-means algorithm's operation. The selection of the parameter k is determined by the k-means algorithm's assessment of the reference database. To minimize latency, a prudent choice is to employ a smaller value of k; more specifically, we set k = 5, as it does not compromise accuracy. This cluster-point relationship solution is described mathematically in Eqs. (8), (9) and (10). Let a centroid c_j represent a sub-population j in the genomic database. When the difference between the query q_ℓ and the mean μ is larger than the difference between the query q_ℓ and the centroid c_j, the value is positive, indicating that the query has a relative in the database. Conversely, if the difference between the query and the mean is smaller than the difference between the query and the centroids, the value is negative, indicating that it does not have a relative in the database.
These calculated scores pave the way for projecting both related and unrelated queries onto a linearly separable space (see Fig. a).

(8)  f(q_ℓ, D) = max_j [ Σ_{i=1}^{n} (|q_{ℓ,i} − μ_i| − |q_{ℓ,i} − c_{j,i}|) ] / [ (1/n) Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² ]
            = ( 1 / [ (1/n) Σ_{i=1}^{n} (q_{ℓ,i} − μ_i)² ] ) max_j Σ_{i=1}^{n} (|q_{ℓ,i} − μ_i| − |q_{ℓ,i} − c_{j,i}|)
            = α max_j Σ_{i=1}^{n} (|q_{ℓ,i} − μ_i| − |q_{ℓ,i} − c_{j,i}|)

Optimization for performance
Since α is constant for all queries q_ℓ, the denominator is a normalization factor and can be moved outside the max function. Thus, we concentrate on the numerator of Eq. (8) to rank the predictions, and we establish f′(q_ℓ, D) in Eq. (9). By eliminating this normalization step, the algorithm becomes more efficient, at the cost of possibly not preserving the original ranking among the queries. This relaxation of the original equation is valuable for improving the computational efficiency in the encrypted domain, and it was empirically verified not to affect the accuracy.

(9)  f′(q_ℓ, D) = max_j Σ_{i=1}^{n} (|q_{ℓ,i} − μ_i| − |q_{ℓ,i} − c_{j,i}|),   where f(q_ℓ, D) = α f′(q_ℓ, D).

This effectively reduces the amount of required computation. Then, we further simplify the prediction function by replacing the operator max_j with Σ_{j=1}^{k}, the sum over all computations across the k centroids. The final score is then the aggregated voting of the k differences between the distance of the query to the mean and the distance of the query to each centroid. Empirically, we verify that this does not alter the final predictions, such that the final objective becomes

(10)  f″(q_ℓ, D) = Σ_{j=1}^{k} Σ_{i=1}^{n} ((q_{ℓ,i} − μ_i)² − (q_{ℓ,i} − c_{j,i})²),

where c_{j,i} is the i-th genotype variant of the j-th cluster centroid and n is the total number of genotype variants, i.e. n = 16,344. In this framework, Eq. (10) takes on the role of quantifying the separation between the query and the database mean, while also subtracting the query's separation from each sub-population. In the competition, this method requires 400 × k evaluations of Eq. (10), since the challenge consists of testing 400 queries. For a choice of k = 5, 2,000 evaluations are required, which is three orders of magnitude fewer operations than the naïve solution that requires 800,000 cross-correlation evaluations, as depicted in Fig. . Since, in Eq. (10), the mean μ and the cluster centroids c_j are precomputed offline and used during inference, we consider it a supervised approach. This assumes that the characteristics of the underlying population mixture from the challenge dataset are sufficient to generalize predictions to unknown query data.
For this reason, the algorithm that computes the inference in Eq. (10) requires only multiplicative depth L = 1, greatly reducing the multiplicative depth complexity and the latency associated with it. This algorithm runs in two steps: first, it evaluates the database mean and computes the k cluster centroids in the cleartext domain; secondly, it evaluates Eq. (10) to output the kinship prediction scores. Step one is done offline, with the genomic database still on the database owner's premises; the database mean and cluster centroids are then encrypted and sent to the computing entity as part of the input to the encrypted evaluation of Eq. (10). Since the multiplicative depth equals 1, the coefficient modulus size log Q does not need to comprise many bits; the security level, parameter selection, packing, and encrypted algorithm for this method are those described earlier (N = 2^12, log Q = 109, Δ = 2^29, co-prime bit lengths {40, 29, 40}, and Algorithm 5).
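The offline step (computing the database mean and mining k = 5 centroids with Lloyd's k-means) can be sketched with scikit-learn. The synthetic database below is only a stand-in; in the protocol, the resulting mean and centroids are encrypted and sent to the computing entity.

```python
# Offline centroid mining for the clustering-based method: database mean plus
# k = 5 centroids from Lloyd's k-means, computed in cleartext on the owner's side.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
database = rng.integers(0, 3, size=(2_000, 16_344)).astype(float)   # synthetic stand-in

mu = database.mean(axis=0)                                           # database mean
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(database)
centroids = kmeans.cluster_centers_                                  # shape (5, 16344)
print(mu.shape, centroids.shape)
```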
This relaxation to the original equation is valuable to improve the computational efficiency in the encrypted domain, and it was empirically verified not to affect the accuracy. 9 [12pt]{minimal} $$ {f^{ }}(q_{ },D) & = }} _{i=1}^{n}(| q_{ ,i}- _i| - | q_{ ,i} - c_{j,i}| ), \\ {where}~{f}(q_{ },D) & = {f^{ }}(q_{ },D) $$ f ′ ( q ℓ , D ) = max j ∑ i = 1 n ( q ℓ , i - μ i - q ℓ , i - c j , i ) , where f ( q ℓ , D ) = α f ′ ( q ℓ , D ) This effectively reduces the amount of required computation. Then, we further simplify the prediction function by replacing the operator [12pt]{minimal} $$_j$$ max j with [12pt]{minimal} $$ _{j=1}^{k}$$ ∑ j = 1 k , the sum over all computations across k centroids. The final score is now the aggregated voting of the k differences between the distance of query to the mean and the distance of query to centroid. Empirically, we verify that this does not alter the final predictions, such that the final objective becomes 10 [12pt]{minimal} $$ {f^{ }}(q_{ },D) = _{j=1}^{k} _{i=1}^{n}(( q_{ ,i}- _i) ^2 - ( q_{ ,i} - c_{j,i}) ^2), $$ f ″ ( q ℓ , D ) = ∑ j = 1 k ∑ i = 1 n ( q ℓ , i - μ i 2 - q ℓ , i - c j , i 2 ) , where [12pt]{minimal} $$c_{j,i}$$ c j , i is the [12pt]{minimal} $$i^{th}$$ i th genotype variant of the cluster [12pt]{minimal} $$j^{th}$$ j th centroid and n is total number of genotype variants, i.e. [12pt]{minimal} $$n=16,344$$ n = 16 , 344 . In this framework, Eq. takes on the role of quantifying the separation between the query and the database mean, while also subtracting the query’s separation from each sub-population. In the competition, this method requires [12pt]{minimal} $$400 k$$ 400 × k evaluations of Eq. since the challenges consists of testing 400 queries. For a choice of [12pt]{minimal} $$k=5$$ k = 5 , 2,000 evaluations are required, which is three orders of magnitude less operations than the naïve solution that requires 800,000 cross-correlations evaluations, as depicted in Fig. . Since, in Eq. , the mean [12pt]{minimal} $$$$ μ and the cluster centroids [12pt]{minimal} $$c_j$$ c j are precomputed offline and used during inference, we consider it as a supervised approach. This assumes that the characteristics about the underlying population mixture from the challenge dataset are sufficient to generalize predictions to unknown query data. For this reason, the algorithm to compute inference as Eq. requires only multiplicative depth [12pt]{minimal} $$L=1$$ L = 1 , greatly optimizing the multiplicative depth complexity and latency associated to it. This algorithm runs in two steps: first, it evaluates the database mean and computes the k cluster centroids in the clear text domain; secondly, it evaluates Eq. to output the kinship prediction scores. Step one is done offline with the genomic database still in the database owner’s premise; then, the database mean and cluster centroids are encrypted and sent to the computing entity as part of the input to the encrypted evaluation of Eq. . This time, the coefficient modulus size [12pt]{minimal} $$ Q$$ log Q does not have to be comprised of many bits since the multiplicative depth equal 1. Even though a smaller Q is possible, accordingly the parameters [12pt]{minimal} $$ {log}Q$$ log Q and N , the scaling factor size [12pt]{minimal} $$ {log}$$ log Δ must be carefully chosen. In this case, the scaling factor [12pt]{minimal} $$ =2^{p}$$ Δ = 2 p will dictate how much arithmetic precision to compute the target workload without corrupting the decryption. 
Given those considerations, we select a scaling factor that allows us minimize as much as possible the polynomial degree N . We found that the scaling factor [12pt]{minimal} $$ =2^{29}$$ Δ = 2 29 is sufficient to keep the arithmetic precision afloat during computation of Eq. . To ensure 128-bit security level, we use polynomial ring size of degree of [12pt]{minimal} $$N=2^{12}$$ N = 2 12 and a coefficient modulus of size [12pt]{minimal} $$ {log}Q=109$$ log Q = 109 . The coefficient modulus chain comprises co-primes with bit lengths {40, 29, 40}. This choice of parameters provide a compact and fast implementation. In order to reduce computational cost, it is crucial to streamline the organization of the CKKS ciphertexts. We perform data packing and encryption similarly to what Fig. illustrates. With polynomial ring degree of [12pt]{minimal} $$N=2^{12}$$ N = 2 12 , there are 2048 slots available to pack data within a single ciphertext, effectively accommodating all 16,344 genotype variants in 8 ciphertexts. The full instructions of the encrypted algorithm is described in Algorithm 5. Lines 5 through 8 compute [12pt]{minimal} $$( q_{ }- ) ^2$$ q ℓ - μ 2 . Lines 15 through 18 compute [12pt]{minimal} $$( q_{ } - c_{j}) ^2$$ q ℓ - c j 2 . Lines 19 through 20 compute [12pt]{minimal} $$( q_{ }- ) ^2 - ( q_{ } - c_{j}) ^2$$ q ℓ - μ 2 - q ℓ - c j 2 . Lines 21 through 22 store and aggregate the relationship scores of query [12pt]{minimal} $$q_{ }$$ q ℓ with respect to the mean [12pt]{minimal} $$$$ μ and each centroid [12pt]{minimal} $$c_j$$ c j in separate ciphertext [12pt]{minimal} $${}[ ]$$ γ ^ [ ℓ ] . Line 26 concludes by performing the sum of all the scores for query [12pt]{minimal} $$q_{ }$$ q ℓ , as in [12pt]{minimal} $$ _{j=1}^{k} _{i=1}^{n}$$ ∑ j = 1 k ∑ i = 1 n . Our clustering-based supervised approach lifts accuracy to highest possible, [12pt]{minimal} $$ {auROC}=1$$ auROC = 1 , i.e. it achieves perfectly accurate predictions. Nonetheless, its limitation lies in knowing how to optimally choose k when no knowledge about the reference population is available and in hurting latency performance as the number of reference populations k increases. The choice of k is important because it will directly impact accuracy. This technique could also be regarded as less flexible, compared to the unsupervised approach, since if the reference population expands or shrinks drastically, it could require re-mining the cluster centroids; therefore, less adaptable to changes than the unsupervised approach that can handle it naturally. To mitigate those foreseen potential issues, we envision another supervised solution based on linear regression, which does not require tuning of hyperparameter such as k , even when the characteristics of the reference population mixture is unknown, and does not increase the amount of computation as k increases. It relies on extracting features from queries by apply a masking procedure with the database mean, and then optimizing the coefficients of a linear regression model to learn the underlying patterns, captured by these features, to discern between having or not having a relative in the database. As for adaptability, this approach could arguably be more robust to small changes in the reference mixture, given that its prediction power only depends on the pattern that has been learned in order to differentiate whether a query has a relative in a genomic database given its mean, which can be easily recomputed to apply new feature transformations to the queries. 
Linear regression has been widely used for tackling secure genome problems (e.g. ). The reason for this popularity is linked to its arithmetic simplicity and robustness, and track record (e.g. s), in finding hyperplanes separating distinct patterns in high-dimensional spaces . We embrace these virtues to devise a more robust and efficient approach to the problem, nonetheless, under strong assumption that sufficient information is available in the data characterizing the reference population mixture, even if not specifically annotated. This emphasizes the supervised approaches’ major limitation: robustness and adaptability to changes in the reference populations are constrained to small variations, unlike the proposed unsupervised approach described in “ ” section. Model training The ground-truth is a collection of 200 annotated pairwise relationships between 200 query samples and 200 database samples. Eighty percent out of those pairs are used for training and the remainder 20% are saved for testing. Hence, 160 queries known to have at least one relative in the database (i.e. positive queries) are separated for feature selection and model training. From the challenge query set Q , containing 400 queries, the remaining 200 that do not appear in the ground-truth annotation are genomes known not to have their genetic data shared with any of the 2000 samples from the database; thus, we consider 160 of them (80%) to represent negative queries, i.e. examples of queries that do not have a relative in the database, for training and 40 others (20%) for testing. These samples are unique and provided as part of the challenge dataset. First, these 160 positive queries plus 160 negative queries are used for selecting the most relevant features (genotype variants) out of 16344. Then, we create more positive and negative queries out of those 320 queries to increase the sample-feature ratio, i.e. synthesize as many samples as possible to reach the ratio of about 10 samples per relevant feature. Training of the linear regression model follows suit, fed with the augmented sample set. Feature selection To help the linear regression optimizer find a more robust hyperplane, and be less predisposed to overfitting, we perform dimensionality reduction using the Variable Threshold technique. In this case, dimensions located at genotype variants whose variance are less than a certain threshold are disregarded to represent the genome sequence of a query. For feature selection, we use all 320 samples reserved for training. Depending on the value of the variance threshold, more or less features are deemed as relevant. The goal is to have as less features are possible. We found that a threshold of 0.11, by varying from 0.2 to 0.1 considering two digits after the decimal point, yields robust performance with 3893 features out of the 16344 genotype variants. Data augmentation In addition, we increase the number of samples per features to improve generalization of the model. It consists of random resampling of the 320 data points with replacement. Resampling is applied to increase the positive and negative samples by a factor of 120, such that we end up with about 10 samples per feature. We use the resample function from the Python’s sklearn package to accomplish it – we perform oversampling, consisting of repeating some of the samples in the original collection. 
Feature transformation The original features of a query q are their genome genotype variants, a sequence of values in the set [12pt]{minimal} $$ {G}=\{0,1,2\}$$ G = { 0 , 1 , 2 } . We apply a transformation to the genotype variants to create features that are derived from computing its relationship with respect to the average of genotype variants found in the target genomic database. That is, the transformation uses the genome mean [12pt]{minimal} $$$$ μ of the database. This transformation is algebraically described in Eq. , 11 [12pt]{minimal} $$ {q^ } = q , $$ q ′ = q · μ , where [12pt]{minimal} $$$$ · corresponds to element-wise multiplication between q and [12pt]{minimal} $$$$ μ components (in clear text, i.e. non-encrypted data). The training queries transformed to features [12pt]{minimal} $${q^ }$$ q ′ populate the matrix X in Eq. , where each row of X is either a positive or negative sample, for training of a logistic regression model that separates queries that correlates with the mean from those queries that do not. Training The transformed queries [12pt]{minimal} $$q^$$ q ′ are samples indexed as rows of a sparse matrix X that is used to solve for the linear regression coefficients w . These samples become further sparse after the feature selection procedure, such that certain dimensions i are zeroed out. We use the Conjugate Gradient Method to optimize the cost function, via ridge regression , shown in Eq. , which finds coefficients that minimizes the squared error of predictions [12pt]{minimal} $$=Xw$$ y ^ = X w against the ground-truth y . This objective function includes a regularization term weighted by [12pt]{minimal} $$ =0.5$$ α = 0.5 that helps minimize the risk of overfitting in addition to the dimensionality reduction by the feature selection procedure. The ground-truth vector y holds values [12pt]{minimal} $$y=1$$ y = 1 for positive queries and [12pt]{minimal} $$y=0$$ y = 0 for negative queries. 12 [12pt]{minimal} $$ } Xw - y ^{2} + w ^{2}, $$ min w ‖ X w - y ‖ 2 + α ‖ w ‖ 2 , where 13 [12pt]{minimal} $$ X = [ q_{1,1} & & q_{1,i} & & q_{1,16344} \\ & & & & \\ q_{ ,i} & & q_{ ,i} & & q_{ ,16344} \\ & & & & \\ q_{38930,1} & & q_{38930,i} & & q_{38930,16344} ] , w = [ w_{1} \\ \\ w_{i} \\ \\ w_{16344} ] , y = [ 1 \\ \\ 1 \\ 0 \\ \\ 0 ] $$ X = q 1 , 1 … q 1 , i … q 1 , 16344 ⋮ ⋱ ⋮ ⋱ ⋮ q ℓ , i ⋯ q ℓ , i ⋯ q ℓ , 16344 ⋮ ⋱ ⋮ ⋱ ⋮ q 38930 , 1 ⋯ q 38930 , i ⋯ q 38930 , 16344 , w = w 1 ⋮ w i ⋮ w 16344 , y = 1 ⋮ 1 0 ⋮ 0 We measure the training performance using different metrics. To assess precision of the predicted values, we rely on both the R2-score and root-mean-square error (RMSE). On the training set, the R2-score is reported to reach 1.0, which means perfect accuracy, and the RMSE=0.0000014. As for classification accuracy, we rely on the auROC, which summarizes the reliability on Recall and False Positive Rates with a single score. On the training set, it reported 100% successful rate with auROC=1.0. Inference The prediction phase occurs in the encrypted domain and it consists of two steps. The first step consists of applying the transformation shown in Eq. to each of the 400 encrypted queries [12pt]{minimal} $$ct_{q_{ }}=Enc_{pk}( {m}_{q_{ }})$$ c t q ℓ = E n c pk ( m → q ℓ ) , where [12pt]{minimal} $$ {m_{q_{ }}}= {Encode}(q_{ }, )$$ m q ℓ → = Encode ( q ℓ , Δ ) . 
The result is a collection of transformed input ciphertexts [12pt]{minimal} $$ct_{q_{ }^ }$$ c t q ℓ ′ computed from the component-wise multiplication between [12pt]{minimal} $$ct_{q_{ }}$$ c t q ℓ and [12pt]{minimal} $$ct_{ }$$ c t μ (see Eq. ), where [12pt]{minimal} $$ct_{ }$$ c t μ is the encrypted mean of the searchable genomic database D . 14 [12pt]{minimal} $$ ct_{q_{ }^ }=Enc_{pk}( {Encode}(q_{ } , )) ct_{q_{ }} ct_{ }, $$ c t q ℓ ′ = E n c pk ( Encode ( q ℓ · μ , Δ ) ) ≈ c t q ℓ ⊙ c t μ , where [12pt]{minimal} $$ct_{q_{ }^ }$$ c t q ℓ ′ corresponds to the encrypted feature vector of a query [12pt]{minimal} $$q_{ }$$ q ℓ computed using the feature extraction procedure described in Eq. . This implies that the inference would consume one additional multiplicative depth to account for this preprocessing step; thus, requiring an encryption configuration that allows for multiplicative depth [12pt]{minimal} $$L=2$$ L = 2 instead of [12pt]{minimal} $$L=1$$ L = 1 as explained in “ ” section. In practice, we bypass this preprocessing step for efficiency, i.e. to avoid an additional level, by directly using the encrypted queries [12pt]{minimal} $$ct_{q_{ }}$$ c t q ℓ with their original values (see Eq. ) for inference. We empirically verified that this yields comparable results, not affecting the accuracy. Hence, we keep the multiplicative depth of the encrypted circuit of this linear regression-based approach down to [12pt]{minimal} $$L=1$$ L = 1 . 15 [12pt]{minimal} $$ ct_{q_{ }}=Enc_{pk}( {Encode}(q_{ }, )), $$ c t q ℓ = E n c pk ( Encode ( q ℓ , Δ ) ) , Note that the training step uses [12pt]{minimal} $$q_{ }^$$ q ℓ ′ as features to learn the classification hyperplane. The linear regression inference function for a single query in clear text is defined as [12pt]{minimal} $${} = q_{ }w + b$$ y ^ = q ℓ w + b . In the encrypted domain, this same inference function takes a different form and it is defined as follows 16 [12pt]{minimal} $$ ct_{r_{0}} & = _{j=1}^{M}ct_{q_{ }}[j] {ct_{w}}[j], \\ ct_{r_{k+1}} & = ct_{r_{k}} + Rotate(ct_{r_{k}},2^k), 0 k < {log}_2(N/2)-1 \\ ct_{} & = ct_{b} + ct_{r_{ {log}_2(N/2)}} $$ c t r 0 = ∑ j = 1 M c t q ℓ [ j ] ⊙ c t w [ j ] , c t r k + 1 = c t r k + R o t a t e ( c t r k , 2 k ) , 0 ≤ k < log 2 ( N / 2 ) - 1 c t y ^ = c t b + c t r log 2 ( N / 2 ) where [ j ] denotes indexing at the [12pt]{minimal} $$j^{th}$$ j th ciphertext of a collection of M ciphertexts encrypting query [12pt]{minimal} $$q_{ }$$ q ℓ and the weights w . [12pt]{minimal} $$ct_{w}$$ c t w and [12pt]{minimal} $$ct_{b}$$ c t b denote the encrypted linear regression coefficients and bias, respectively. [12pt]{minimal} $$ct_{}$$ c t y ^ corresponds to the encrypted real-valued prediction that measures the likelihood of query q to share genetic data with any of the database samples. M equals [12pt]{minimal} $$ 16344/(N/2) $$ ⌈ 16344 / ( N / 2 ) ⌉ , i.e. the number of ciphertexts used to encrypt all the features of a single query q . Rotation is executed [12pt]{minimal} $$ {log}_2(N/2)$$ log 2 ( N / 2 ) times to iteratively accumulate the sum of all the elements in the slots of the output ciphertext [12pt]{minimal} $$ct_{r_0}$$ c t r 0 , where [12pt]{minimal} $$ct_{r_0}$$ c t r 0 resulted from the homomorphic pointwise multiplication of the encrypted query and the encrypted weights (see Fig. for a toy illustration). 
Optimization for performance

While Eq. (14) is an easy-to-compute element-wise vector-vector multiplication, Eq. (16) is a matrix-vector multiplication that entails one dot product per matrix row. Even though two consecutive multiplications are involved in this sequence of operations, only one level is consumed, because the modulus switch operation is postponed until after the second multiplication completes. We also optimize the number of rotations needed to accumulate the results of the element-wise multiplications involved in a dot product by first adding all the ciphertexts involved in a single query prediction (see Eq. (16)). That is, an encrypted query containing 16344 features is split into $\lceil 16344/(N/2) \rceil$ ciphertexts; after the element-wise multiplications, instead of applying $\log_2(N/2)$ rotations to each individual ciphertext to sum its internal components first, we first sum the ciphertexts into a single ciphertext and only then execute the $\log_2(N/2)$ rotations to complete the sum of the dot product.

Security level and parameters selection

Analogous to the clustering-based approach, we maintain multiplicative depth $L=1$ for the linear regression-based supervised method. Providing a precomputed database mean and postponing the modulus switch operation until after the second multiplication both help achieve this. Additionally, as briefly explained in the “ ” section, for the inference step we do not apply the feature transformation to the query but instead use the original data values directly, since the transformation would require $L=2$. This way, the same parameter values are used: coefficient modulus size $\log Q = 109$, polynomial ring size $N = 2^{12}$, scaling factor $\Delta = 2^{29}$, and a modulus chain comprising a sequence of co-primes with bit lengths {40, 29, 40}.

Encrypted algorithm

The full set of instructions describing the encrypted linear regression algorithm is shown in Algorithm 6. Lines 5 to 9 perform the component-wise multiplication of the linear coefficients and the query data (see top row of Fig. ). Line 10 performs the sum of all slot elements resulting from the product of the linear coefficients and the input data (see bottom row of Fig. and top row of Fig. for a toy illustration of the sequence of operations). Line 12 adds the linear regression dot product to the bias term (see bottom row of Fig. ). The prediction results are stored in $\hat{y}$ and returned.
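For reference, the CKKS configuration described above ($N=2^{12}$, $\Delta=2^{29}$, modulus chain {40, 29, 40}) can be instantiated in Microsoft SEAL roughly as follows. This is a sketch under our own naming, not the authors' setup code; error handling and serialization are omitted.

```cpp
#include "seal/seal.h"
#include <cmath>
#include <cstddef>
#include <iostream>

int main() {
    // CKKS parameters matching the L = 1 configuration described above:
    // N = 2^12 and a 109-bit coefficient modulus split as {40, 29, 40} bits.
    seal::EncryptionParameters parms(seal::scheme_type::ckks);
    const std::size_t poly_modulus_degree = 4096;  // N = 2^12
    parms.set_poly_modulus_degree(poly_modulus_degree);
    parms.set_coeff_modulus(
        seal::CoeffModulus::Create(poly_modulus_degree, {40, 29, 40}));

    seal::SEALContext context(parms);

    // The query client generates all key material and keeps the secret key.
    seal::KeyGenerator keygen(context);
    seal::PublicKey public_key;
    keygen.create_public_key(public_key);
    seal::RelinKeys relin_keys;
    keygen.create_relin_keys(relin_keys);
    seal::GaloisKeys galois_keys;  // required by the slot rotations of Eq. (16)
    keygen.create_galois_keys(galois_keys);

    const double scale = std::pow(2.0, 29);  // fixed-point scaling factor Delta

    seal::CKKSEncoder encoder(context);
    std::cout << "slots per ciphertext: " << encoder.slot_count()  // N/2 = 2048
              << ", scale: " << scale << std::endl;
    return 0;
}
```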
The training and evaluation data for this model are organized as follows. The ground truth is a collection of 200 annotated pairwise relationships between 200 query samples and 200 database samples. Eighty percent of those pairs are used for training and the remaining 20% are held out for testing. Hence, 160 queries known to have at least one relative in the database (i.e. positive queries) are set aside for feature selection and model training. From the challenge query set Q, containing 400 queries, the remaining 200 that do not appear in the ground-truth annotation are genomes known not to share genetic data with any of the 2000 database samples; we therefore use 160 of them (80%) as negative queries, i.e. examples of queries that have no relative in the database, for training and the other 40 (20%) for testing. These samples are unique and provided as part of the challenge dataset. First, the 160 positive plus 160 negative queries are used to select the most relevant features (genotype variants) out of 16344. Then, we create more positive and negative queries out of those 320 queries to increase the sample-to-feature ratio, i.e. we synthesize as many samples as needed to reach roughly 10 samples per relevant feature. Training of the linear regression model follows, fed with the augmented sample set.

Feature selection

To help the linear regression optimizer find a more robust hyperplane and be less prone to overfitting, we perform dimensionality reduction using the variance threshold technique. Dimensions corresponding to genotype variants whose variance falls below a certain threshold are discarded from the representation of a query's genome sequence. For feature selection, we use all 320 samples reserved for training. Depending on the value of the variance threshold, more or fewer features are deemed relevant; the goal is to keep as few features as possible. Varying the threshold from 0.2 to 0.1 in steps of 0.01, we found that a threshold of 0.11 yields robust performance with 3893 features out of the 16344 genotype variants.

Data augmentation

In addition, we increase the number of samples per feature to improve the generalization of the model. This consists of randomly resampling the 320 data points with replacement. Resampling increases the positive and negative samples by a factor of 120, so that we end up with about 10 samples per feature. We use the resample function from Python's sklearn package to accomplish this; the oversampling repeats some of the samples from the original collection.
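Both preprocessing steps (the paper uses sklearn utilities for them) are easy to express in plain C++ as well; the following sketch uses our own function names and a fixed seed purely for illustration.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Variance-threshold feature selection: return the indices of features whose
// variance across the training samples exceeds the threshold (e.g. 0.11).
std::vector<std::size_t> selectFeatures(
    const std::vector<std::vector<double>>& samples, double threshold) {
    const std::size_t numFeatures = samples.front().size();
    const double n = static_cast<double>(samples.size());
    std::vector<std::size_t> kept;
    for (std::size_t i = 0; i < numFeatures; ++i) {
        double sum = 0.0, sumSq = 0.0;
        for (const auto& s : samples) { sum += s[i]; sumSq += s[i] * s[i]; }
        const double mean = sum / n;
        const double variance = sumSq / n - mean * mean;  // population variance
        if (variance > threshold) kept.push_back(i);
    }
    return kept;
}

// Random oversampling with replacement: grow the sample set by `factor`.
std::vector<std::vector<double>> resampleWithReplacement(
    const std::vector<std::vector<double>>& samples, std::size_t factor,
    unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pick(0, samples.size() - 1);
    std::vector<std::vector<double>> augmented;
    augmented.reserve(samples.size() * factor);
    for (std::size_t k = 0; k < samples.size() * factor; ++k)
        augmented.push_back(samples[pick(rng)]);
    return augmented;
}
```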
Secure detection of relatives in forensic genomics

In this section, we provide the context, within the scope of the challenge, under which the secure protocol performance results were obtained. We present the problem space, the data, and the computing and software resources used. We also describe the use-case model in which this application is useful in practice and the performance evaluation metric. Other design considerations that affect performance are discussed before introducing the performance results of the methods.

Problem and data description

We tackle the problem of creating a secure outsourcing protocol for kinship prediction, ensuring the protection of both genotypes and model parameters, using the datasets assigned in the iDASH 2023 Track 1 competition, which we briefly describe in the following. The problem involves a query, a forensic genome comprising 16344 genotype variants, to be matched against a database containing 2000 archived genomes, each of which has the same sequence of 16344 genotype variants. The response to a single query provides the probability (or likelihood rate) that the database contains a relative of the individual from whom the query genotypes were extracted. An illustration of the genome sequence data files for queries and database is shown in Fig. . In addition to the database with 2000 entries, participants are given 400 test queries, half of which have a relative in the database, whereas for the other half this relationship is nonexistent. The primary challenge is to optimize the encrypted query search algorithm, focusing on improving accuracy and minimizing latency while enhancing its capability to generalize to new data; more details are given in the “ ” and “ ” sections.
The challenge database includes a matrix $D \in \mathcal{G}^{16344 \times 2000}$, where $\mathcal{G} = \{0,1,2\}$ is the set of genotype values; D has 2000 columns denoting genomic samples of different individuals and 16344 rows denoting the genotype variants. The query set $Q \in \mathcal{G}^{16344 \times 400}$ comprises 400 queries as column vectors, each representing the genome of 16344 genotype variants for an unidentified suspect, for which annotation is provided about whether the query sample has a relative in the database or not. This annotation is provided as a separate file containing a ground-truth binary vector of size 400, where 0 indicates no family member in the database and 1 indicates that there exists at least one family member in the database for the query genome. The inference (or query kinship prediction) is a response vector $\hat{y} \in \mathbb{R}^{400}$ (where $\mathbb{R}$ is the set of real numbers) computed from the query set Q; comparing it with the ground-truth vector $y \in \{0,1\}^{400}$ yields the prediction accuracy rates. The goal is to compute the function $\hat{y}_i = f(Q_i, D)$ as accurately as possible in the encrypted domain, for all $i \in \{1, \dots, 400\}$, where the matrix D contains all 2000 genomes of known subjects and their pedigrees, $Q_i$ denotes query genome i, i.e. Q indexed at column i, and $\hat{y}_i$ is the predicted relatedness score for query $Q_i$.

Problem setting and secure protocol

There are three parties: the Query Client (QE, short for query entity), the Data and Model Owner (DE, short for database entity), and the Evaluator (CE, short for computing entity). The QE wants to use her sensitive genotype data to perform kinship prediction by using either the DE's models or the database entries directly. The DE builds the kinship prediction models that take genotypes as input. The models contain sensitive information (e.g. IP that could be monetized) and cannot be shared in plain form; therefore, the modeler, i.e. the DE, releases her models only in encrypted form. The CE performs model evaluation using the encrypted genomes and the encrypted model parameters. The challenge involves generating the cryptographic keys (Client), building the models (Data and Model Owner), and securely evaluating the models and functions on encrypted genotype data (Evaluator). As described above, the models and genomic data are sensitive and must be encrypted before they are sent to the Evaluator. See a detailed depiction of this secure protocol in Fig. .
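The division of roles just described can be outlined with Microsoft SEAL objects as follows. This is an illustrative sketch with our own helper names of which party holds which keys, not the competition harness itself; serialization and network transport between the parties are omitted.

```cpp
#include "seal/seal.h"
#include <vector>

// Query Client (QE): owns the secret key; encrypts queries and decrypts results.
struct QueryClient {
    seal::SecretKey secret_key;
    seal::PublicKey public_key;
    seal::RelinKeys relin_keys;
    seal::GaloisKeys galois_keys;
};

QueryClient make_client(const seal::SEALContext& context) {
    seal::KeyGenerator keygen(context);
    QueryClient qe{keygen.secret_key(), {}, {}, {}};
    keygen.create_public_key(qe.public_key);
    keygen.create_relin_keys(qe.relin_keys);
    keygen.create_galois_keys(qe.galois_keys);
    return qe;
}

// Data/Model Owner (DE): encrypts model parameters (weights, bias, centroids,
// or the database mean) under the client's public key before releasing them.
seal::Ciphertext encrypt_vector(const std::vector<double>& values, double scale,
                                seal::CKKSEncoder& encoder,
                                seal::Encryptor& encryptor) {
    seal::Plaintext pt;
    encoder.encode(values, scale, pt);
    seal::Ciphertext ct;
    encryptor.encrypt(pt, ct);
    return ct;
}

// Evaluator (CE): sees only ciphertexts and evaluation keys (relin/Galois) and
// runs the encrypted inference circuit (e.g. Eq. (16)), returning ct_yhat.
// The client finally decrypts the prediction scores:
std::vector<double> decrypt_result(const seal::Ciphertext& ct_yhat,
                                   const seal::SecretKey& sk,
                                   const seal::SEALContext& context) {
    seal::Decryptor decryptor(context, sk);
    seal::CKKSEncoder encoder(context);
    seal::Plaintext pt;
    decryptor.decrypt(ct_yhat, pt);
    std::vector<double> result;
    encoder.decode(pt, result);
    return result;
}
```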
Design considerations

The challenge involves the computation of 400 kinship scores using encrypted data and an encrypted search model. There are three primary design considerations in this task. Firstly, performing computations on encrypted data is notably slow, potentially taking hours or even days instead of minutes to complete. Secondly, computations on encrypted floating-point data may introduce errors due to limitations in precision and noise budgets. Finally, it is crucial to configure the permissible number of consecutive multiplications, also known as the multiplicative depth (L), in a way that prevents data corruption during decryption of the output. Techniques like bootstrapping to increase the multiplicative depth cannot be used for this competition because low latency is the focus. These limitations may rule out advanced algorithms, such as deep neural networks, which, from the perspective of homomorphic encryption and this competition, demand excessive latency; the cost stems from heavy computations with polynomials, the basic construct of homomorphic encryption schemes. Therefore, only low-complexity, homomorphic-encryption-friendly algorithms are viable solutions under these constraints. In addition, if the prediction algorithm involves non-linear functions, the polynomial approximation of these functions in the encrypted domain can become the main bottleneck or even require several calls to the most expensive homomorphic operation, bootstrapping. We tailor the algorithm steps and optimization strategies to avoid both. To avoid high-degree polynomial approximations, we constrain the computation to specific data ranges that are sufficiently general; this may depend heavily on the dataset characteristics and the datapath of the algorithm. Additionally, we avoid certain non-linear functions by replacing them with linear and polynomial approximations and other HE-friendly reformulations. These alternatives were empirically verified to retain the functionality and behave equivalently to the original formulations. Bootstrapping operations can be avoided by carefully selecting the scaling factor $\Delta$ to achieve sufficient multiplicative depth, so that the algorithm can be computed without running out of noise budget while keeping the precision afloat.

Optimizing computing and resources

We opted for the CKKS homomorphic encryption scheme implemented in the Microsoft SEAL library, together with the Intel® HEXL (Homomorphic Encryption Acceleration) library. Our choice is motivated by the following reasons: CKKS can work with real numbers through fixed-point arithmetic, it has an efficient packing method that allows computations in a SIMD fashion, and its implementation in the Microsoft SEAL library is fast, especially when accelerated with the Intel® HEXL 1.2.3 library, in which case the code takes full advantage of hardware features such as Intel® AVX512, available in several Intel servers, including those used in the iDASH competition. The choice of data packing strategy is important because it dictates how data are organized in the ciphertexts. This affects the number of operations and the simplicity of the algorithm steps expressed with homomorphic operations. It can also motivate reordering the sequence of operations in order to decrease the required multiplicative depth. The data packing step happens before encoding and encryption. It is technically independent of the type of encoding and encryption employed, but it determines the number of ciphertexts required to encrypt all the data. As a result, it influences not only computing latency savings and optimizations in the algorithm steps but also the required memory footprint, storage capacity in DRAM and on disk, and memory bandwidth. When choosing the data packing strategy, all of these computing resource aspects should be taken into account together to conceive a good design for the target application. We decided to pack the genotypes of the same genome sequence into the available slots of the same ciphertext, in their original order; if a single ciphertext is not sufficient, a single genome is encrypted across multiple ciphertexts. Out of the few strategies we evaluated, we found this one to be effective and well suited to the target algorithms; a detailed analysis is out of the scope of this manuscript. Figure depicts the data packing strategy used in this work.
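As a concrete illustration of this per-genome packing, a genome of 16344 variants encoded at $N=2^{12}$ (2048 slots per ciphertext) occupies $\lceil 16344/2048 \rceil = 8$ CKKS ciphertexts. A minimal SEAL sketch, with our own helper name, is given below.

```cpp
#include "seal/seal.h"
#include <algorithm>
#include <cstddef>
#include <vector>

// Pack one genome (16344 genotype values) into ceil(16344 / (N/2)) ciphertexts,
// keeping the variants in their original order across consecutive slots.
std::vector<seal::Ciphertext> encrypt_genome(const std::vector<double>& genome,
                                             double scale,
                                             seal::CKKSEncoder& encoder,
                                             seal::Encryptor& encryptor) {
    const std::size_t slots = encoder.slot_count();  // N/2 = 2048 for N = 4096
    std::vector<seal::Ciphertext> chunks;
    for (std::size_t start = 0; start < genome.size(); start += slots) {
        const std::size_t end = std::min(start + slots, genome.size());
        // Unused trailing slots of the last chunk are implicitly zero-padded.
        std::vector<double> slot_values(genome.begin() + start, genome.begin() + end);
        seal::Plaintext pt;
        encoder.encode(slot_values, scale, pt);
        seal::Ciphertext ct;
        encryptor.encrypt(pt, ct);
        chunks.push_back(ct);
    }
    return chunks;  // 8 ciphertexts for 16344 variants at 2048 slots each
}
```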
Computation environments

To provide a performance characterization of our solutions in the context of the iDASH 2023 competition, we evaluated them on a dual-socket server that hosts two Intel® Xeon® Gold 6140 CPUs with 18 physical cores each. The system also hosts 32GB of DDR SDRAM and 745GB of storage. As per the competition rules, by default the execution is constrained to run on exactly 4 physical CPU cores, unless specified otherwise, on a single NUMA node. Although this hardware is older and discontinued, its configuration approximates the one employed in the competition, and we perform experiments with it for the sake of completeness. We implement the source code in C++, using the Microsoft SEAL 4.0 APIs enabled with the acceleration kernels provided by the Intel® HEXL 1.2.3 library. Finally, the code is compiled with GCC 10 on CentOS 7. These results are discussed across subsections of the “ ” section as we introduce implementation details of the different algorithms, and they are expanded in Table and Fig. .

A comparative performance analysis of all three proposed algorithms, using the same secure protocol described in Fig. , outside the context of the iDASH competition, is performed on a more contemporary system configuration. Its hardware consists of a dual-socket server that hosts two Intel® Xeon® Platinum 8480+ CPUs with 56 physical cores each. This is a more modern processor and likely to be readily available from mainstream cloud service providers. The system also hosts 256GB of DDR5 SDRAM and 447GB of disk storage. All experiments run on a single NUMA node, varying the number of cores up to a maximum of 32. To scale the execution across multiple cores, we use OpenMP 4.5. We implement the source code in C++ and program the HE support with the Microsoft SEAL 4.0 APIs. We also analyze the performance impact of enabling the acceleration kernels optimized with AVX512 offered by the Intel® HEXL 1.2.3 library. The Intel® AVX512 extension is a set of instructions that can boost performance for vector-processing-intensive workloads. With 512-bit wide vector operations, each register holds 8 double-precision or 16 single-precision floating-point numbers, or alternatively 8 64-bit or 16 32-bit integers. Intel® AVX512 also provides up to two 512-bit fused multiply-add (FMA) units. Compared with its predecessor, Intel® AVX2, it doubles both the vector width and the number of vector registers. The Intel® HEXL 1.2.3 kernels are integrated into the Microsoft SEAL 4.0 library and can be enabled at compilation time. The compiler used is GCC 11.4 and the OS is Ubuntu 22.04. The results are discussed in detail in the “ ” section.

Evaluation criteria

All tests were performed on a hold-out dataset of 400 genomes in a performance-isolated environment. Accuracy and time/memory requirements were used for the benchmark and the ranking of the solutions.
The formula used to rank the fitness of the solutions was

$$ \text{score} = \text{auROC} \times \exp\left(-\frac{t}{5}\right), \tag{17} $$

where auROC is the area under (au) the receiver operating characteristic (ROC) curve, the metric used to assess prediction accuracy, and t is the execution time in minutes. Observe that the ranking is strongly affected by the exponential weighting of the required computational time, regardless of the prediction accuracy.

Performance on iDASH competition

First, we discuss the performance results where the algorithms were tuned for the iDASH competition, from the “ ” to the “ ” sections. Then, in the “ ” section, we discuss a broader comparative performance analysis that goes beyond the constraints of the competition.

Fully unsupervised method

The fully unsupervised solution is inspired by the z-test hypothesis test. It targets the case where the database owner has to encrypt the whole database and send the encrypted samples to the computing entity, because it is unable to perform any pre-computation that would reduce the computational burden on the server. Thus, the encrypted computation is fully unsupervised in the sense that no pre-computation is needed for the prediction step on the server. Since no training is done for this approach, lower accuracy is expected. We obtain 0.90 in accuracy, recall, and precision, which naturally yields an F1-score of 0.90. The optimal auROC is 0.9794 among all of the ones plotted in Fig. ; however, in the encrypted domain the auROC lowers to 0.9685 due to the replacement of the variance by the mean, as discussed above. Its computing efficiency is marked by a latency of t = 14.44 seconds to execute the full secure protocol, including database encryption, while running on 4 CPU cores. The database encryption alone takes 8.22 seconds (see Table ). Note that the database encryption requirement imposed by the rules of the iDASH 2023 competition implies that the problem should be solved in an unsupervised manner, although this is not strictly necessary, since the mean could be precomputed ahead of time and encrypted before leaving the custody of the database owner. In that case, the latency of the full protocol drops from 14.44 to 6.22 seconds (see Table ), an improvement by a factor of 2.32x. The ranking score of this solution, including the data encryption overhead, is $\text{auROC} \times e^{-t/5} = 0.92305$, which shows that the accuracy is heavily penalized by the compute resources and total time of the protocol. This yields a throughput of about 27.7 queries per second (q/s). The unsupervised algorithm requires minimal assumptions about the dataset and offers more robust generalization at the cost of sub-optimal accuracy and recall. As previously stated, this approach does not assume any knowledge of the database and is completely unsupervised, requiring the database mean to be computed from the encrypted database samples at the third-party computing entity. In practice, if future predictions are known to be drawn from a similar data distribution with the same first-order and second-order statistics of the underlying mixture, then we can further optimize the computation required during inference by providing an encrypted precomputed mean (and possibly the inverse of the variance if higher accuracy is desired). As discussed earlier, we are then able to reduce the number of levels to L = 3 to speed up computation even more. All the results discussed in the “ ” section were obtained using this version, since that analysis is outside the iDASH competition context and we assume it is acceptable to have the mean precomputed.
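To make Eq. (17) concrete, the following hypothetical helper reproduces the unsupervised score reported above: auROC = 0.9685 at t = 14.44 s (about 0.24 min) gives roughly 0.923.

```cpp
#include <cmath>
#include <cstdio>

// iDASH ranking score, Eq. (17): auROC weighted by an exponential time penalty.
double idash_score(double auroc, double seconds) {
    const double minutes = seconds / 60.0;
    return auroc * std::exp(-minutes / 5.0);
}

int main() {
    // Reported unsupervised-method numbers from the text.
    std::printf("%.5f\n", idash_score(0.9685, 14.44));  // ~0.923
    return 0;
}
```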
Clustering-based supervised

Clustering is an unsupervised technique for mining and organizing data points with similar characteristics into distinct groups. We consider this approach supervised solely because it reuses the representative entities of these groups, the cluster centroids, as anchors of knowledge for inference on incoming queries. Essentially, the assumption is that these distinct representative groups summarize all the information about the target underlying populations. For the iDASH 2023 competition, the centroids are fixed but, in practice, they could evolve as the underlying mixture changes, so clustering could be performed offline as often as necessary, before the time comes to query an unknown suspect. The accuracy obtained for the iDASH 2023 competition reaches a 100% success rate (Recall = 1.0, Precision = 1.0, False Positive Rate = 0.0) for all 400 queries and is summarized by F1-score = 1.0 and auROC = 1.0 (see Figs. a and ). This was achieved both on the 400 queries from the challenge dataset and on the 400 unknown queries of the competition. In Fig. b, we observe forced assignment of the negative queries to clusters 1 and 4. This type of behavior is expected, since in this case we rely only on the existing clusters (patterns of population characteristics) for the final decision. The issue is resolved by subtracting the distance between the query and its assigned cluster centroid (right term of Eq. ) from the distance between the query and the database mean (left term of Eq. ). This mechanism allows us to reject the wrong assignments by producing reliable prediction values (shown along the y-axis of Fig. a) that make it possible to decide whether a query is positive or negative given a threshold value, thus achieving perfect results (see Fig. ). Figure b also shows that setting k = 5 probably led to overfitting of the centroids, with two or more centroids representing the same true underlying cluster. This likely means there are fewer populations than anticipated (i.e. fewer than k = 5); visual inspection suggests that the number of populations might be k = 2. We make an educated guess about k by analyzing the performance on the validation set with k varying from 2 to 25. As for computing performance, the time to complete the prediction of all 400 queries amounts to t = 13.36 seconds (see Table ), which includes the database encryption. However, database encryption is not needed, since only the encrypted centroids are required for the inference step; encrypting the database is merely a requirement imposed by the rules of the iDASH 2023 competition and can be ignored in practice, since the encrypted database samples are not directly used for inference. With the database encryption overhead included, the method provides a throughput of about 29.9 queries per second (q/s). The ranking score of this solution is $\text{auROC} \times e^{-t/5} = 0.9565$, a significant improvement over the unsupervised solution's score of 0.923. As for classification performance, this method creates a clear separation between related and unrelated queries (see Fig. a).
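A clear-text sketch of the decision rule just described is shown below. The names are ours, and we assume squared Euclidean distances and that the "assigned centroid" is the nearest one; the sketch is illustrative rather than a faithful reproduction of the encrypted circuit.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Squared Euclidean distance between two feature vectors.
double squaredDistance(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

// Relatedness score: distance to the database mean minus distance to the
// nearest centroid. Larger values indicate a likely relative in the database.
double clusteringScore(const std::vector<double>& query,
                       const std::vector<double>& databaseMean,
                       const std::vector<std::vector<double>>& centroids) {
    double nearest = std::numeric_limits<double>::max();
    for (const auto& c : centroids)
        nearest = std::min(nearest, squaredDistance(query, c));
    return squaredDistance(query, databaseMean) - nearest;
}

// A query is declared positive when its score exceeds a threshold tuned on the
// validation queries.
bool hasRelative(double score, double threshold) { return score > threshold; }
```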
Note that, in practice, our solution can be deployed more efficiently in a more general setting, achieving better latency and higher throughput. Without the data encryption overhead, the full protocol completes in 7.53 seconds, yielding a higher throughput of 53 q/s. We also measured the computing performance of the full protocol without the iDASH competition constraints and verified that the latency can be as low as 2.38 seconds for all 400 query predictions when running on 32 CPU cores (see Table for more details). As a result, throughput can be as high as 168 query predictions per second (see Table for more details), while keeping the required computation lean (i.e., avoiding unhelpful computing work).

Linear regression model

The performance of the trained linear regression model was evaluated on a test set containing 40 positive and 40 negative queries. The model achieves perfect accuracy, precision, and recall, yielding F1-score = 1.0 and auROC = 1.0 (see Fig. a and b). As for the precision of the predicted values compared with the expected ones, the model reaches an R2-score of 0.8363 and an RMSE of 0.2022 on the test set. Computing performance is characterized by an average latency of t = 5.92 seconds for predicting a batch of 400 queries using 4 CPU cores (see Table ). This yields a throughput of about 68 queries per second (see Table ). The estimated iDASH 2023 score for this solution is $\text{auROC} \times e^{-t/5} = 0.9804$, an improvement of about 2.5% over the clustering-based solution. This method creates a clear separation between related and unrelated queries (see Fig. b). For a more general view of latency, Fig. shows that latency scales almost linearly with the number of cores, predicting 400 queries in 2.22 seconds when running on 16 CPU cores. The model also outputs prediction values in a range [-0.2, 1.2] that approximates the probability range [0.0, 1.0] (with the optimal prediction threshold found at 0.4958), making the scores more naturally interpretable. It is instructive to discuss why the true positive rate and precision are perfect, yielding an auROC score of 1.0. A few factors justify this result and indicate that it does not reflect overfitting of the trained model. First, we employ feature selection to find the most relevant predictors (features) capturing the main characteristics of the underlying mixture, which in turn helps avoid overfitting of the linear regression model. Additionally, we resort to data augmentation to increase the number of samples per feature and encourage better generalization of the model, even though the curated dataset is not imbalanced. Finally, during the optimization of the objective function, we introduce a regularization term that also reduces the chances of the model overfitting and instead guides it toward better generalization. However, it is true that the model is trained to make predictions under the assumption of a known underlying mixture and that the provided dataset holds sufficient and representative statistics of the target populations.
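In clear text, the per-query scoring and decision of this model reduce to a dot product plus the 0.4958 threshold reported above. A minimal sketch, with our own names:

```cpp
#include <cstddef>
#include <vector>

// Clear-text counterpart of the encrypted inference: yhat = q . w + b.
double linearScore(const std::vector<double>& query,
                   const std::vector<double>& weights, double bias) {
    double yhat = bias;
    for (std::size_t i = 0; i < query.size(); ++i) yhat += query[i] * weights[i];
    return yhat;
}

// Decision rule using the optimal threshold found on the test set (0.4958):
// scores above the threshold indicate that a relative is likely in the database.
bool predictsRelative(double yhat, double threshold = 0.4958) {
    return yhat > threshold;
}
```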
Comparative performance analysis

We contrast and compare the throughput and latency of the three prediction algorithms. Our analysis is conducted on a contemporary Intel server processor, specifically the Intel® Xeon® Platinum 8480+ CPU. In the previous section, the performance results were obtained with an older-generation Intel processor that approximates the specification of the one used in the iDASH 2023 competition. Ideally, these workloads would run on later generations with the latest performance features. This section discusses the performance gains due to different software configurations and the choice of algorithm on a platform powered by the Intel® Xeon® Platinum 8480+ server processor.

Experimental setup

The performance benchmark is organized into 4 scenarios. The first scenario is the baseline performance configuration. Two others bring in specific capabilities in isolation, namely the use of vectorized instructions with the Intel® AVX512 feature and the use of OpenMP 4.5 to parallelize the encryption, decryption, and inference code across multiple cores. The fourth scenario combines the second and third configurations, i.e. it simultaneously leverages instruction-level and core-level data parallelism. We describe the scenarios in detail below. The baseline configuration entails single-core execution of the workloads programmed with the Microsoft SEAL 4.0 API. In the second scenario, the workload executes with vectorized instructions using Intel® AVX512 by enabling Intel® HEXL 1.2.3 in the Microsoft SEAL 4.0 API at compilation time. The third scenario enables parallel processing with OpenMP 4.5, where the parallelizable parts of the workloads are executed on multiple cores; the number of cores varies from 2 to 32. The fourth scenario combines scenarios 2 and 3. The performance metrics used for the analysis are throughput (queries per second), latency (seconds), and normalized performance (throughput divided by the baseline throughput). Each experiment consists of the workload processing a batch of 400 inference predictions.

Performance gain due to algorithm choice

The choice of algorithm for the application depends on several factors, including security, accuracy, and computing efficiency. For example, if little is known about the population mixture, then the z-test-inspired approach, i.e. the unsupervised method, can generalize better than the supervised approaches and can be less predisposed to overfitting, thus providing less biased predictions. If statistically sufficient information about the population mixture is known, then parameter learning techniques allow more computationally efficient inference and can deliver more accurate predictions. In this analysis, we assume the latter case and measure how much performance gain can be expected when sufficient knowledge about the population mixture is known a priori. This means that we precompute the mean and variance for the z-test-inspired method, turning it into a supervised approach, so that it becomes more computationally efficient because it requires fewer levels (multiplicative depth). In Fig. , each bar represents the expected performance gain in a specific scenario (i.e., baseline single-core execution, vector instructions, multicore, and multicore plus vector instructions). In each scenario, we collect the performance numbers of all three algorithms.
We then compute the normalized performance between algorithms as the throughput ratio $T_A/T_B$, where workload A performed significantly better than workload B; the ratio thus corresponds to the performance gain of algorithm A over algorithm B (see Table for more). The expected performance gain due to the choice of algorithm, when one algorithm performs better than another, is given by the geometric mean of all the ratios $T_A/T_B$ with $T_A \gg T_B$, under the constraints of a particular execution environment scenario. In short, we take each algorithm's throughput, calculate its relative gain over each worse-performing counterpart, and report the expected performance gain, shown in Fig. , as the geometric mean over all these ratios computed within a specific scenario. In Table , we observe that in the multicore execution environment, the z-test and clustering workloads perform comparably, in which case one could choose either; therefore, we do not include their ratios when computing the geometric mean of the performance gain. Additionally, the linear regression method benefits the most from the multicore execution environment, owing to the simplicity of its instruction sequence. Overall, the expected performance gain across all scenarios due to algorithm choice is characterized by a geometric mean of 1.52x.

Performance gain due to vector instructions

Intel® has actively contributed open-source code to accelerate HE's arithmetic computing kernels. Products of these efforts are the Intel® HE Acceleration Library (Intel® HEXL) and the Intel® HE Acceleration Library for FPGAs (Intel® HEXL-FPGA). Several existing HE API libraries, such as Microsoft SEAL, incorporate Intel® HEXL kernels to accelerate their HE API calls on Intel platforms. These tools leverage the AVX512 vector instructions offered as hardware features by Intel® Xeon® CPUs, e.g. the Intel® Xeon® Platinum 8480+. The results are summarized in Fig. . Overall, the expected performance gain across all workloads has a geometric mean of 2.06x. The throughput achieved when enabling execution with vector instructions is normalized against the baseline. It is worth noting that this gain is based on single-core execution; the performance impact of combining vector instructions with multiple working cores is discussed in the “ ” section.

Performance gain due to parallel processing

To increase throughput, executing on multiple cores is essential. We test the throughput scalability of these workloads through data-parallel processing on multiple cores. Each core gets a shard of the 400 inferences, and other parallelizable areas of the code are also executed with multiple threads, each pinned to a specific physical core. In Fig. , we show how latency scales with an increasing number of cores for each workload. The final performance gain for each workload is the highest speed-up achieved with a specific number of cores, i.e. the optimal number of cores for that workload; typically, either 16 or 32 cores performs best. The performance gain is the throughput using multiple cores normalized by the baseline (single core), as presented in Fig. . Overall, the expected performance gain across all scenarios owing to parallel processing with the optimal number of cores amounts to a geometric mean of 15.39x.
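The data-parallel sharding of the 400-query batch can be sketched with OpenMP as follows. Thread count and pinning are assumed to be controlled through the usual OpenMP environment settings (e.g. OMP_NUM_THREADS, OMP_PLACES=cores, OMP_PROC_BIND=close), and the callable `predict_one` stands in for the per-query encrypted inference of any of the three methods; this is an illustration, not the benchmark harness itself.

```cpp
#include <omp.h>
#include <cstddef>
#include <functional>
#include <vector>

#include "seal/seal.h"

// Shard a batch of encrypted queries across cores with OpenMP. Each query is
// itself a vector of ciphertexts (the packed genome chunks).
std::vector<seal::Ciphertext> predict_batch(
    const std::vector<std::vector<seal::Ciphertext>>& encrypted_queries,
    const std::function<seal::Ciphertext(const std::vector<seal::Ciphertext>&)>&
        predict_one) {
    std::vector<seal::Ciphertext> results(encrypted_queries.size());
    // Each of the 400 predictions is independent, so a parallel-for suffices.
    #pragma omp parallel for schedule(static)
    for (std::ptrdiff_t i = 0;
         i < static_cast<std::ptrdiff_t>(encrypted_queries.size()); ++i) {
        const auto idx = static_cast<std::size_t>(i);
        results[idx] = predict_one(encrypted_queries[idx]);
    }
    return results;
}
```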
Performance gain due to vector instructions and parallel processing

We also assess the performance gain obtained from the combination of vector instructions and parallel processing on multiple cores. The performance analysis for this scenario is summarized in Fig. and follows the same methodology used to compute the performance gain for parallel processing alone. On average, the expected performance gain is 18.59x. In Fig. , we observe that the latency of the workloads under multicore execution with AVX512 does not scale at the same rate as under multicore execution alone (see Fig. ), despite showing lower latency, as reported in Table in contrast to the values in Table . Note also that the clustering-based workload scales more effectively in both scenarios. This can be attributed to the clustering-based algorithm featuring more parallelism-friendly code sections than the other algorithms. Specifically, in the linear regression code, the homomorphic rotations are executed sequentially due to data dependencies and constitute a substantial portion of the computational time. In the case of the z-test-inspired workload, a significant portion of the algorithm code is non-parallelizable, particularly the linear approximation involving division by the mean.
The query set $Q \in \mathcal{G}^{16344 \times 400}$ comprises 400 queries as column vectors, each representing the genome of 16344 genotype variants of an unidentified suspect, for which an annotation is provided stating whether the query sample has a relative in the database or not. This annotation is provided as a separate file containing a ground truth binary vector of size 400, where 0 indicates no family member in the database and 1 indicates that there exists at least one family member in the database for the query genome. The inference (or query kinship prediction) is a response vector $\hat{y} \in \mathbb{R}^{400}$ (where $\mathbb{R}$ is the set of real numbers) computed from the query set $Q$, which, when compared to the ground truth vector $y \in \{0,1\}^{400}$, yields the prediction accuracy rates. The goal is to compute the function $\hat{y}_i = f(Q_i, D)$ as accurately as possible in the encrypted domain, for all $i \in \{1, \dots, 400\}$, where the matrix $D$ contains all 2000 genomes of known subjects and their pedigrees, $Q_i$ denotes query genome $i$, i.e. $Q$ indexed at column $i$, and $\hat{y}_i$ is the predicted relatedness score for query $Q_i$. Problem setting and secure protocol There are three parties: Query Client (QE, short for query entity), Data and Model Owner (DE, short for database entity), and Evaluator (CE, short for computing entity). The QE wants to use her sensitive genotype data to perform kinship prediction by using either the DE’s models or the database entries directly. The DE builds the kinship prediction models that take genotypes as input. Models contain sensitive information (e.g. IP that could be monetized) and cannot be shared in plain form. Therefore, the modeler, i.e. the DE, releases her models only in encrypted form. The CE performs model evaluation using encrypted genomes and encrypted model parameters. The challenge involves generating cryptographic keys (Client), building the models (Data and Model Owner), and the secure evaluation of the models and functions on encrypted genotype data (Evaluator). As described above, the models and genomic data are sensitive and must be encrypted before they are sent to the Evaluator. See a detailed depiction of this secure protocol in Fig. . Design considerations The challenge involves the computation of 400 kinship scores using encrypted data and an encrypted search model. There are three primary design considerations in this task. Firstly, performing computations on encrypted data is notably slow, potentially taking hours or even days instead of just minutes to complete. Secondly, conducting computations on encrypted floating-point data may introduce errors due to limitations in precision and noise budgets. Finally, it is crucial to configure the permissible number of consecutive multiplications, also known as the multiplicative depth ($L$), in a way that prevents data corruption during the decryption of the output. Techniques like bootstrapping to increase the multiplicative depth cannot be used for this competition because low latency is the focus. These limitations might restrict the use of advanced algorithms, such as deep neural networks, which, from the perspective of homomorphic encryption and this competition, demand excessive latency.
Therefore, only low-complexity, homomorphic-encryption-friendly algorithms are viable solutions to address these constraints. This is due to the heavy computations with polynomials, the basic construct of homomorphic encryption schemes. In addition, if the prediction algorithm involves non-linear functions, the polynomial approximation of these functions in the encrypted domain could become the main bottleneck or even require several calls to the most expensive homomorphic operation, bootstrapping. We tailor the algorithm steps and optimization strategies to avoid both. To avoid the use of high-degree polynomial approximations, we constrain the computation to specific data ranges that are sufficiently general – this may depend heavily on the dataset characteristics and the datapath of the algorithm. Additionally, we also avoid certain non-linear functions by replacing them with linear and polynomial approximations and other HE-friendly reformulations. These alternatives were empirically verified to retain the functionality and behave equivalently to the original formulation. Bootstrapping operations can be avoided by carefully selecting the scaling factor $\Delta$ to achieve sufficient multiplicative depth, so that the algorithm can be computed without running out of noise budget while preserving sufficient precision. Optimizing computing and resources We opted to use the CKKS homomorphic encryption scheme implemented in the Microsoft SEAL library , together with the Intel ® HEXL (Homomorphic Encryption Accelerated) library . Our choice is motivated by the following reasons: it can work with real numbers through fixed-point arithmetic, it has an efficient packing method that allows computations in a SIMD fashion, and its implementation in the Microsoft SEAL library is fast, especially when accelerated with the Intel ® HEXL 1.2.3 library, in which case the code takes full advantage of hardware features such as Intel ® AVX512, available on several Intel servers, including the one used in the iDASH competition. The choice of data packing strategy is important because it dictates how data will be organized in the ciphertexts. It directly affects the reduction in the number of operations and the simplification of the algorithm steps implemented with homomorphic operations. Additionally, it can also enable reordering of the sequence of operations in order to decrease the required multiplicative depth. The data packing step happens before encoding and encryption. It is technically independent of the type of encoding and encryption employed, but it determines the number of ciphertexts required to encrypt all data. As a result, it influences not only computing latency savings and optimizations in the algorithm steps but also the required memory footprint, DRAM and disk storage capacity, and memory bandwidth. When choosing the data packing strategy, all of these computing resource aspects should be taken into account together to conceive a good design for the target application. We decided to pack the genotypes of one and the same genome sequence into the available slots of a single ciphertext, in their original order – if a single ciphertext is not sufficient, a single genome is encrypted across multiple ciphertexts. Out of a few evaluated strategies, we found this one to be the most suitable for the target algorithms; a detailed analysis is out of the scope of this manuscript. Figure depicts the data packing strategy used in this work.
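To make the packing strategy concrete, the following is a minimal sketch, not the authors' code, of how one genome of 16344 genotypes could be packed and encrypted with Microsoft SEAL's CKKS API. The encryption parameters (polynomial degree 16384, a four-prime coefficient modulus, scale 2^40) and the helper name encrypt_genome are illustrative assumptions, not the exact competition configuration.

#include <seal/seal.h>
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative helper: lay out the genotypes of one genome, in their original
// order, across the slots of as many CKKS ciphertexts as needed.
std::vector<seal::Ciphertext> encrypt_genome(const std::vector<double> &genome,
                                             const seal::SEALContext &context,
                                             const seal::PublicKey &public_key)
{
    seal::CKKSEncoder encoder(context);
    seal::Encryptor encryptor(context, public_key);
    const double scale = std::pow(2.0, 40);
    const size_t slots = encoder.slot_count();  // N/2 slots per ciphertext

    std::vector<seal::Ciphertext> chunks;
    for (size_t start = 0; start < genome.size(); start += slots) {
        const size_t end = std::min(start + slots, genome.size());
        std::vector<double> block(genome.begin() + start, genome.begin() + end);
        seal::Plaintext pt;
        encoder.encode(block, scale, pt);  // remaining slots are zero-padded
        seal::Ciphertext ct;
        encryptor.encrypt(pt, ct);
        chunks.push_back(std::move(ct));
    }
    return chunks;
}

int main()
{
    seal::EncryptionParameters parms(seal::scheme_type::ckks);
    parms.set_poly_modulus_degree(16384);
    parms.set_coeff_modulus(seal::CoeffModulus::Create(16384, {60, 40, 40, 60}));
    seal::SEALContext context(parms);

    seal::KeyGenerator keygen(context);
    seal::PublicKey public_key;
    keygen.create_public_key(public_key);

    std::vector<double> genome(16344, 1.0);  // dummy genome of 16344 genotypes
    auto cts = encrypt_genome(genome, context, public_key);
    return static_cast<int>(cts.size());     // 8192 slots per ciphertext -> 2 ciphertexts
}

With these assumed parameters a ciphertext carries 8192 slots, so one genome of 16344 genotypes occupies two ciphertexts; the trade-offs between slot utilization, ciphertext count and memory footprint are exactly the resource aspects discussed above.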
Computation environments To provide a performance characterization of our solutions in the context of the iDASH 2023 competition, we evaluated them on a dual-socket server that hosts two Intel ® Xeon ® Gold 6140 CPUs with 18 physical cores each. The system also hosts 32GB of DDR SDRAM and 745GB of storage. As per the competition rules, by default the execution is constrained to run on exactly 4 physical CPU cores, unless specified otherwise, on a single NUMA node. Although obsolete and discontinued, this hardware configuration approximates the one employed in the competition, and we perform experiments with it for the sake of completeness of this study. We implement the source code in C++, using the Microsoft SEAL 4.0 APIs enabled with the acceleration kernels provided by the Intel ® HEXL 1.2.3 library. Finally, the code is compiled with GCC 10 on CentOS 7. These results are discussed across the subsections of the “ ” section as we introduce the implementation details of the different algorithms, and they are expanded in Table and Fig. . A comparative performance analysis of all three proposed algorithms, using the same secure protocol described in Fig. but outside the context of the iDASH competition, is performed on a more contemporary system configuration. Its hardware consists of a dual-socket server that hosts two Intel ® Xeon ® Platinum 8480+ CPUs with 56 physical cores each. This is a more modern processor and is likely to be readily available from mainstream cloud service providers. The system also hosts 256GB of DDR5 SDRAM and 447GB of disk storage. All experiments are run on a single NUMA node, varying the number of cores up to a maximum of 32. To scale the execution across multiple cores, we use OpenMP 4.5. We implement the source code in C++ and program the HE support using the Microsoft SEAL 4.0 APIs. We also analyze the performance impact of enabling the acceleration kernels optimized with AVX512 offered in the Intel ® HEXL 1.2.3 library. The Intel ® AVX512 extension is a set of instructions that can boost performance for vector-processing-intensive workloads. With 512-bit-wide vector operations, a core can execute up to 32 double-precision and 64 single-precision floating-point operations per clock cycle, and each 512-bit register can pack eight 64-bit or sixteen 32-bit integers. Intel ® AVX512 also provides up to two 512-bit fused multiply-add (FMA) units. Compared with its predecessor, Intel ® AVX2, it doubles both the width and the number of the vector registers. The Intel ® HEXL 1.2.3 library offerings are currently integrated into the Microsoft SEAL 4.0 library and can be enabled at compilation time. The compiler used is GCC 11.4 and the OS is Ubuntu 22.04. The results are discussed in detail in the “ ” section. Evaluation criteria All tests were performed on a hold-out dataset of 400 genomes in a performance-isolated environment. Accuracy and time/memory requirements were used for the benchmark and ranking of the solutions. The formula used to rank the fitness of the solutions was $\text{score} = \text{auROC} \cdot e^{-t/5}$, where auROC is the area under (au) the receiver operating characteristic (ROC) curve, the metric used to assess the accuracy of the predictions, and $t$ is the execution time in minutes. Observe that the ranking is heavily influenced by the exponential weighting of the required computational time, regardless of the prediction accuracy.
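As a small worked example of the evaluation criteria, the ranking formula can be written directly in code; the snippet below is only an illustration of the published formula, with the runtime converted from seconds to minutes.

#include <cmath>
#include <cstdio>

// Ranking score used by the competition: score = auROC * exp(-t / 5),
// with t the total protocol time in minutes.
double idash_score(double auroc, double runtime_seconds)
{
    const double t_minutes = runtime_seconds / 60.0;
    return auroc * std::exp(-t_minutes / 5.0);
}

int main()
{
    // For example, auROC = 0.9685 with t = 14.44 s gives roughly 0.923,
    // matching the score reported for the unsupervised solution below.
    std::printf("%.4f\n", idash_score(0.9685, 14.44));
    return 0;
}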
Firstly, we discuss the performance results where the algorithm performance was tuned for the iDASH competition, from the “ ” to “ ” sections. Then, in the “ ” section, we discuss a broader comparative performance analysis that goes beyond the constraints of the competition. Fully unsupervised method The fully unsupervised solution is inspired by the z-test hypothesis test. It targets cases where the database owner has to encrypt the whole database and send the encrypted samples to the computing entity because it is unable to perform any pre-computation to reduce the computational burden on the server. Thus, the encrypted computation is fully unsupervised in the sense that there is no pre-computation needed for the prediction step on the server. Since no training is done for this approach, lower accuracy is expected. We obtain 0.90 in accuracy, recall and precision, which naturally yields an F1-score of 0.90. The optimal auROC is 0.9794 out of all of the ones plotted in Fig. ; however, in the encrypted domain the auROC value lowers to 0.9685 due to the replacement of the variance by the mean, as discussed above. Its computing efficiency is marked by a latency of $t = 14.44$ seconds to execute the full secure protocol, including database encryption, while running on 4 CPU cores. The database encryption alone takes 8.22 seconds (see Table ). Note that the database encryption requirement imposed by the rules of the iDASH 2023 competition implies that the problem should be solved in an unsupervised manner, although this is not strictly necessary, since the mean could be precomputed ahead of time and encrypted before leaving the custody of the database owner. In fact, in this case the latency of the full protocol reduces from 14.44 to 6.22 seconds (see Table ), an improvement by a factor of 2.32x. The ranking score of this solution, including the data encryption overhead, is $\text{auROC} \cdot e^{-t/5} = 0.92305$, which shows that accuracy is heavily penalized by the compute resources and total time of the protocol. This yields a throughput of about 27.7 queries per second (q/s).
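A minimal sketch of the pre-computation that the unsupervised variant pushes to the computing entity is given below: the per-variant database mean can be formed directly on the encrypted genomes by slot-wise summation followed by one plaintext multiplication with 1/N. The sketch assumes, for illustration only, that each genome fits in a single ciphertext, and the function name is hypothetical.

#include <seal/seal.h>
#include <vector>

// Illustrative sketch (one ciphertext per genome assumed): sum the encrypted
// database samples slot-wise and scale by 1/N to obtain the encrypted
// per-variant mean at the computing entity, without ever decrypting.
seal::Ciphertext encrypted_database_mean(const std::vector<seal::Ciphertext> &db,
                                         seal::Evaluator &evaluator,
                                         seal::CKKSEncoder &encoder)
{
    seal::Ciphertext sum = db.front();
    for (size_t i = 1; i < db.size(); ++i) {
        evaluator.add_inplace(sum, db[i]);  // additions do not consume levels
    }
    seal::Plaintext inv_n;
    encoder.encode(1.0 / static_cast<double>(db.size()), sum.scale(), inv_n);
    evaluator.multiply_plain_inplace(sum, inv_n);  // single plaintext multiplication
    evaluator.rescale_to_next_inplace(sum);        // consumes one level
    return sum;
}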
The unsupervised algorithm required minimal assumptions about the dataset and offered more robust generalization at the cost of sub-optimal accuracy and recall. As previously stated, this approach does not assume any knowledge of the database, and it is completely unsupervised, requiring the database mean to be computed from the encrypted database samples at the third-party computing entity. In practice, if future predictions are known to be drawn from a similar data distribution with the same first-order and second-order statistics of the underlying mixture, then we can further optimize the computation required during inference by providing an encrypted precomputed mean (and possibly the inverse of the variance, if higher accuracy is desired). As discussed earlier, we are then able to reduce the number of levels to $L = 3$ to speed up computation even more. All the results discussed in the “ ” section were obtained using this version, since it is outside the iDASH competition context and we assume that it is acceptable to have the mean precomputed. Clustering-based supervised Clustering is an unsupervised technique to mine and organize data points of similar characteristics into distinct groups. We consider this approach supervised solely because it re-uses the representative entities of these groups, the cluster centroids, as anchors of knowledge for the inference on incoming queries. Essentially, the assumption is that these distinct representative groups summarize all information about the target underlying populations. For the iDASH 2023 competition the centroids are fixed but, in practice, they could evolve as the underlying mixture changes, such that clustering could be performed offline as often as necessary, before an unknown suspect is queried. The accuracy obtained for the iDASH 2023 competition hits a 100% success rate (Recall=1.0, Precision=1.0, False Positive Rate=0.0) for all 400 queries and is summarized by F1-score=1.0 and auROC=1.0 (see Figs. a and ). This was achieved both on the 400 queries from the challenge dataset and on the 400 unknown queries of the competition. In Fig. b, we observe forced assignment of the negative queries to clusters 1 and 4. This type of behavior is expected, since in this case we rely only on the existing clusters (patterns of population characteristics) for the final decision. This issue is resolved by subtracting the distance between the query and the assigned cluster centroid (right term of Eq. ) from the distance between the query and the database mean (left term of Eq. ). This mechanism allows us to reject the wrong assignments by producing reliable prediction values (shown along the y-axis of Fig. a) that make it easy to decide, with the choice of a threshold value, whether a query is positive or negative, thus achieving perfect results (see Fig. ). Figure b also shows that setting $k = 5$ probably led to overfitting of the centroids, with two or more centroids representing the same true underlying cluster. This probably means there are fewer populations than anticipated (i.e., fewer than $k = 5$), and visual inspection suggests that the number of populations might be $k = 2$. We make an educated guess about $k$ by analyzing performance on the validation set with $k$ varying from 2 to 25.
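In plaintext form, the decision rule just described amounts to taking the distance of a query to the database mean and subtracting its distance to the closest centroid. The sketch below illustrates this rule; the use of the squared Euclidean distance is an assumption made for illustration, and the threshold applied to the resulting score is calibrated separately on the validation set.

#include <algorithm>
#include <limits>
#include <vector>

// Plaintext sketch of the clustering-based relatedness score: a large positive
// value means the query is much closer to some learned population centroid
// than to the overall database mean, i.e. it likely has a relative in the database.
double kinship_score(const std::vector<double> &query,
                     const std::vector<double> &db_mean,
                     const std::vector<std::vector<double>> &centroids)
{
    auto sq_dist = [](const std::vector<double> &a, const std::vector<double> &b) {
        double d = 0.0;
        for (size_t i = 0; i < a.size(); ++i) {
            const double diff = a[i] - b[i];
            d += diff * diff;
        }
        return d;
    };

    double nearest = std::numeric_limits<double>::max();
    for (const auto &c : centroids) {
        nearest = std::min(nearest, sq_dist(query, c));  // distance to the assigned centroid
    }
    return sq_dist(query, db_mean) - nearest;  // left term minus right term of the rule
}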
As for computing performance, the time to complete the prediction of all 400 queries amounts to $t = 13.36$ seconds (see Table ), which includes the database encryption. However, database encryption is not needed, since only the encrypted centroids are required for the inference step. The database encryption is merely a requirement imposed by the rules of the iDASH 2023 competition. In practice, this can be ignored, since the encrypted database samples are not directly utilized for inference. With the database encryption overhead included, the protocol provides a throughput of about 29.9 queries per second (q/s). This solution's ranking score is $\text{auROC} \cdot e^{-t/5} = 0.9565$, which is a significant improvement over the unsupervised solution score of 0.923. As for the classification performance, this method creates a clear separation between related queries and unrelated queries (see Fig. a). Note that, in practice, our solution can be deployed more efficiently in a more general setting, achieving better latency and higher throughput. Without the data encryption overhead, the full protocol completes in 7.53 seconds, yielding a higher throughput of 53 q/s. We measured the computing performance of the full protocol without the iDASH competition constraints and verified that latency can be as low as 2.38 seconds for all 400 query predictions when running with 32 CPU cores (see Table for more details). As a result, throughput can be as high as 168 query predictions per second (see Table for more details), while keeping the required computation lean (i.e., avoiding unhelpful computing work). Linear regression model As for the trained linear regression model, the performance was evaluated on a test set containing 40 positive queries and 40 negative queries. The model achieves perfect accuracy, precision and recall, yielding F1-score=1.0 and auROC=1.0 (see Fig. a and b). As for the precision of the predicted values compared with the expected ones, the model reaches an R2-score of 0.8363 and an RMSE of 0.2022 on the test set. Computing performance is characterized by an average latency of $t = 5.92$ seconds for predicting a batch of 400 queries using 4 CPU cores (see Table ). This yields a throughput of about 68 queries per second (see Table ). The estimated iDASH 2023 score for this solution is $\text{auROC} \cdot e^{-t/5} = 0.9804$, which is an improvement of about 2.5% over the clustering-based solution. This method creates a clear separation between related queries and unrelated queries (see Fig. b). For a more general understanding of latency performance, we show in Fig. that the latency scales almost linearly with the number of cores, predicting 400 queries in 2.22 seconds when running on 16 CPU cores. The model also outputs prediction values in a range [-0.2, 1.2] that approximates the probability range [0.0, 1.0] (with the optimal prediction threshold found at 0.4958), and is thus more naturally interpretable. It is instructive to discuss why the true positive rate and precision are perfect, yielding an auROC score of 1.0. We point to a few factors to justify this and to argue that it does not imply overfitting of the trained model. First, we employ feature selection to find the most relevant predictors (features) capturing the main characteristics of the underlying mixture, which in turn helps to avoid overfitting of the linear regression model.
Additionally, we resort to data augmentation to increase the number of samples per feature and encourage better generalization of the model, even though the curated dataset is not imbalanced. Finally, during the optimization of the objective function, we introduce a regularization term that further reduces the chance of overfitting and instead guides the model towards better generalization. However, it is true that the model is trained to make predictions under the assumption of a known underlying mixture and that the provided dataset holds sufficient and representative statistics of the target populations.
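In the encrypted domain the linear model's prediction reduces to an inner product between the encrypted feature vector and the encrypted weights. The sketch below illustrates this step with CKKS, assuming for illustration that the selected features fit in one ciphertext, that unused slots are zero, and that the number of used slots is a power of two; the sequential rotate-and-add reduction is the data dependency referred to later in the scaling discussion.

#include <seal/seal.h>

// Illustrative sketch of the encrypted inner product <w, q> with CKKS: an
// elementwise ciphertext-ciphertext product followed by a rotate-and-add
// reduction; after log2(used_slots) rotations, slot 0 holds the sum.
seal::Ciphertext encrypted_dot(const seal::Ciphertext &enc_query,
                               const seal::Ciphertext &enc_weights,
                               size_t used_slots,
                               seal::Evaluator &evaluator,
                               const seal::RelinKeys &relin_keys,
                               const seal::GaloisKeys &galois_keys)
{
    seal::Ciphertext prod;
    evaluator.multiply(enc_query, enc_weights, prod);  // slot-wise w_j * q_j
    evaluator.relinearize_inplace(prod, relin_keys);
    evaluator.rescale_to_next_inplace(prod);

    for (size_t step = 1; step < used_slots; step <<= 1) {
        seal::Ciphertext rotated;
        evaluator.rotate_vector(prod, static_cast<int>(step), galois_keys, rotated);
        evaluator.add_inplace(prod, rotated);  // rotations must run sequentially
    }
    return prod;  // slot 0 carries the prediction; the intercept is added separately
}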
We contrast and compare the throughput and latency performance of the three different prediction algorithms. Our analysis is conducted using a contemporary Intel server processor, in particular the Intel ® Xeon ® Platinum 8480+ CPU. In the previous section, the performance results were obtained with an older-generation Intel processor that approximates the specification of the one used by the iDASH 2023 competition.
Ideally, these workloads should run on later processor generations with the latest performance features. This section is intended to discuss the performance gains due to different software configurations and the choice of algorithm on a platform powered by the Intel ® Xeon ® Platinum 8480+ server processor. Experimental setup The performance benchmark is organized into 4 different scenarios. The first scenario consists of the baseline performance configuration. Two others bring specific capabilities in isolation, namely the use of vectorized instructions with the Intel ® AVX512 feature and the use of OpenMP 4.5 to parallelize encryption, decryption and the inference code across multiple cores. The fourth scenario is a combination of the last two configurations, i.e. simultaneously leveraging instruction-level and core-level data parallelism. In the following, we describe the scenarios in detail. The baseline performance configuration entails a single-core execution of the workloads programmed with the Microsoft SEAL 4.0 API. In the second scenario the workload is programmed to execute with vectorized instructions using Intel ® AVX512, by enabling Intel ® HEXL 1.2.3 in the Microsoft SEAL 4.0 API at compilation time. The third scenario involves enabling parallel processing with OpenMP 4.5, where parallelizable parts of the workloads are executed on multiple cores - the number of cores varies from 2 to 32. The fourth scenario combines scenarios 2 and 3. The performance metrics used for analysis are throughput (queries per second), latency (seconds), and normalized performance (throughput divided by baseline throughput). The experiments consist of the workload processing a batch of 400 inference predictions. Performance gain due to algorithm choice The choice of the algorithm for the application depends on several factors, including security, accuracy, and computing efficiency. For example, if little is known about the population mixture, then the z-test-inspired approach, i.e. the unsupervised method, can generalize better than the supervised approaches and can be less predisposed to overfitting, thus providing less biased predictions. If statistically sufficient information about the population mixture is known, then parameter learning techniques allow more computationally efficient inference and can deliver more accurate predictions. In this regard, we assume the latter case and analyze how much performance gain is expected if sufficient knowledge about the population mixture is known a priori. This means that we precompute the mean and variance for the z-test-inspired method, turning it into a supervised approach, such that it becomes more computationally efficient since it requires fewer levels (a smaller multiplicative depth). In Fig. , each bar represents the expected performance gain in a specific scenario (i.e., baseline single-core execution, vector instructions, multicore, and multicore plus vector instructions). In each scenario, we collect the performance numbers of all three algorithms. We then compute the normalized performance between each pair of algorithms as the throughput ratio $T_A/T_B$, where workload A performed significantly better than workload B; the ratio thus corresponds to the performance gain of algorithm A over algorithm B (see more in Table ).
We say that the expected performance gain due to the choice of the algorithm, when one algorithm performs better than another, is given by the geometric mean of all the ratios $T_A/T_B$ with $T_A \gg T_B$, under the constraints of a particular execution environment scenario. In short, we pick each algorithm’s throughput and calculate its relative performance gain over each worse counterpart, then we report the expected performance gain, shown in Fig. , as the geometric mean over all these ratios computed within a specific scenario. In Table , we can observe that in the multicore execution environment the workloads z-test and cluster perform comparably, in which case one could choose either; therefore, we do not consider their ratios when computing the geometric mean of the performance gain. Additionally, the linear regression method notably benefits the most from the multicore execution environment, owing to the simplicity of its set of instructions. Overall, the expected performance gain across all scenarios due to algorithm choice is characterized by a geometric mean of 1.52x. Performance gain due to vector instructions Intel ® has actively participated in open-source code contributions to accelerate HE’s arithmetic computing kernels. Subproducts of these efforts are the Intel ® HE Acceleration Library (Intel ® HEXL) and the Intel ® HE Acceleration Library for FPGAs (Intel ® HEXL-FPGA). Several existing HE API libraries, such as Microsoft SEAL, incorporate Intel ® HEXL kernels to accelerate their HE API calls on Intel platforms. These tools leverage the AVX512 vector instructions offered as hardware features by Intel ® Xeon ® CPUs, e.g. the Intel ® Xeon ® Platinum 8480+. The results are summarized in Fig. . Overall, the expected performance gain across all workloads has a geometric mean of 2.06x. The throughput achieved when enabling execution with vector instructions is normalized against the baseline. It is worth noting that this gain is based on single-core execution. The performance impact of capitalizing on vector instructions when running with multiple working cores is discussed in the “ ” section. Performance gain due to parallel processing To increase throughput, executing on multiple cores is essential. We test the throughput scalability of these workloads through data-parallel processing on multiple cores. Each core gets a shard of the 400 inferences, and other parallelizable areas of the code are also executed with multiple threads, each pinned to a specific physical core. In Fig. , we show how latency scales with an increasing number of cores for each workload. The final performance gain for each workload is the highest speed-up achieved with a specific number of cores, i.e. the optimal number of cores to run that workload. Typically, either 16 or 32 cores performs best. The performance gain is the throughput using multiple cores normalized by the baseline (single core), as presented in Fig. . Overall, the expected performance gain across all scenarios owing to parallel processing with the optimal number of cores amounts to a geometric mean of 15.39x. Performance gain due to vector instructions and parallel processing We also assess the performance gain achieved from the combination of vector instructions and parallel processing using multiple cores. The performance analysis for this scenario is summarized in Fig. and follows the same methodology used to compute the performance gain for parallel processing only.
On average, the expected performance gain is 18.59x. In Fig. , we observe that the latency of the workloads under multicore execution with AVX512 does not scale at the same rate as during multicore execution alone (see Fig. ), despite displaying lower latency, as reported in Table in contrast to the values in Table . Note also that the clustering-based workload scales more effectively in both scenarios. This phenomenon can be attributed to the clustering-based algorithm featuring more parallelism-friendly code sections compared to the other algorithms. Specifically, in the linear regression code, homomorphic rotations are executed sequentially due to data dependency, constituting a substantial portion of the computational time. In the case of the z-test-inspired workload, a significant portion of the algorithm code is non-parallelizable, particularly the linear approximation involving division by the mean.
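The data-parallel execution used in the multicore scenarios can be pictured with a short OpenMP sketch: the batch of 400 encrypted queries is sharded statically across the worker threads, one per physical core. The predict_one callback stands in for any of the three encrypted inference routines, and core pinning is assumed to be configured externally (e.g. OMP_PLACES=cores, OMP_PROC_BIND=close); this is an illustration, not the authors' exact code.

#include <omp.h>
#include <seal/seal.h>
#include <functional>
#include <vector>

// Shard a batch of encrypted queries across num_cores OpenMP threads.
std::vector<double> predict_batch(const std::vector<seal::Ciphertext> &queries,
                                  const std::function<double(const seal::Ciphertext &)> &predict_one,
                                  int num_cores)
{
    std::vector<double> scores(queries.size());
    omp_set_num_threads(num_cores);
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < static_cast<long>(queries.size()); ++i) {
        scores[i] = predict_one(queries[i]);  // each thread processes its shard of the batch
    }
    return scores;
}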
We propose three different methods to query kinship in a genomic database. We submitted two of these methods, specifically the ones described in the “ ” and “ ” sections, as solutions to the iDASH 2023 Track 1 competition. The submissions served as case studies to validate the robustness of these methods on unseen data. Accordingly, we put emphasis on low latency, since it bears an exponential weight on the final score used to rank the submissions. To comply with these rules, we also avoided any processing in the cleartext domain at runtime during inference on unseen data. Our solutions improve the computing latency by three orders of magnitude over the naive solution. The performance results of the submissions to the iDASH 2023 competition are summarized in Table . We placed 3rd with the supervised solution described in the “ ” section. Our solutions also guarantee 128-bit security (through a lattice cryptography scheme), ensuring genomic data privacy protection during computation of the predictions. This directly addresses the weakness of the other privacy protection methodologies discussed in the “ ” section, in which the private data can still leak or become unprotected. Although our methods are strongly influenced by the iDASH 2023 competition challenge, a broader study on performance and design was carried out in the “ ” section, allowing us to expand the scope of our findings. The proposed methods are sufficiently functional, adaptable and practically feasible to address secure computation in genomics applications related to the FGG use case. Their applicability goes beyond what has been demonstrated in this work. For example, changing the comparison reference from the database mean to an individual genome expands the scope of application to predicting exact or partial matches between pairs of genomes and estimating their familial relationship (step 5 in the FGG task) given the predicted score. Generally, we also note that considerable streamlining of the prediction algorithm and reformulation of its objectives are imperative for rendering it amenable to homomorphic encryption while ensuring computational efficiency. In scenarios where the domain is well established and constrained, supervised solutions prove to be the more efficient and precise choice, particularly with admixture populations.
Conversely, unsupervised solutions, although entailing greater computational cost and a higher number of multiplicative levels, tend to exhibit superior generalization capabilities when the population statistics are uncertain. The obtained results demonstrate that privacy-preserving solutions based on homomorphic encryption can be computationally practical for protecting genomic privacy during the stage of filtering candidate matches for further genealogy study in Forensic Genetic Genealogy (FGG). The screening of the searchable databases can happen in seconds and with high accuracy, thus providing the ability to expedite the identification process of unknown suspects by narrowing down the number of databases in which to perform genealogy analysis without compromising genomic privacy.
Health literacy in parents of children with Hirschsprung disease: a novel study

Hirschsprung disease (HD) affects one in 5000 newborns with a male predominance of four to one. The condition involves the lack of ganglion cells in the myenteric and submucosal plexuses along a variable length of the distal gut, causing functional bowel obstruction . Up to 30% of patients with HD have other comorbidities, Down syndrome being the most common, involving around 10% of cases . Although primary surgery for HD is generally successful, post-operative bowel dysfunction is common to varying degrees long term . Bowel management in children with HD can be complex, involving medication, bowel evacuation routines, and special diets that require close parental control . Parents coordinate care, communicate with daycare and schools, and are central in treatment decisions. They often need to cope with mental, physical, and social stress related to their child's condition, which can negatively impact the daily life of both the child and the rest of the family .

Health literacy (HL) is the ability to access, comprehend, evaluate, and apply health-related information . Enhanced HL is considered fundamental for future healthcare, enabling digitalization, home-based care, shared decision-making, and equity . Parental HL encompasses a range of skills and competencies that allow parents to effectively navigate the healthcare system, understand medical instructions, communicate with healthcare providers (HCP), and make informed choices about their child's health . A recent systematic review on the relationship between parental HL and health outcomes for children with chronic diseases found a clear link between parental HL, health behavior and child health outcomes . HL in parents of children with HD has not previously been studied. The aim of this study was, therefore, to explore parental HL in the context of HD and to investigate the possible effects of demographic factors and self-efficacy on parental HL.
A cross-sectional study was conducted with parents of children under 16 years who had undergone HD surgery at Oslo University Hospital. The hospital is a tertiary referral center for pediatric surgery and treats around 80% of the country’s HD patients. The department participates in the European reference network ERNICA and offers multidisciplinary follow-up, including psychosocial support to families. We identified 137 patients who underwent HD surgery between 2007 and 2024 through patient records. Two patients had died and five had moved abroad, leaving 130 eligible participants. Primary caregivers able to answer the questionnaire in Norwegian were invited via mailed invitations or at the outpatient clinic by an independent person from October 2023 to May 2024. Participants could complete the form online or on paper and non-responders received a reminder after three weeks.
Clinical data such as diagnosis, surgeries, comorbidities and age at diagnosis were collected from records. Caregiver information, such as living situation, education, home language and work situation was collected via questionnaire.
A study-specific questionnaire on general knowledge about HD included 6 statements on basic facts and misconceptions (Fig. ). This questionnaire was designed to get a general impression of the participants’ disease-specific knowledge about HD. We tested the questions with parents and colleagues and revised them locally. Parents used a 5-point Likert scale to indicate agreement. For analysis, “strongly agree” and “agree” were grouped as “agree”, and “strongly disagree” and “disagree” as “disagree”. A comment section was provided for additional remarks.
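As a minimal sketch of the response grouping described above, the snippet below collapses 5-point Likert answers into agree, neutral, and disagree categories; the exact label of the middle response option is an assumption.

```python
# Collapse 5-point Likert answers into agree / neutral / disagree,
# mirroring the grouping described above. The middle label is assumed.
COLLAPSE = {
    "strongly agree": "agree",
    "agree": "agree",
    "neither agree nor disagree": "neutral",
    "disagree": "disagree",
    "strongly disagree": "disagree",
}

def collapse(responses):
    """Map raw Likert labels to collapsed categories (case-insensitive)."""
    return [COLLAPSE[r.lower()] for r in responses]

answers = ["Strongly agree", "Agree", "Disagree", "Neither agree nor disagree"]
print(collapse(answers))  # ['agree', 'agree', 'disagree', 'neutral']
```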
The HLQ is a generic, multidimensional instrument designed to assess an individual’s HL skills and abilities . We used the parent-specific version of the HLQ (HLQ-p), which is a validated tool used to assess HL levels among parents in relation to the healthcare of their children . It evaluates the ability of parents to access, understand and communicate health information related to their child’s healthcare. The questionnaire consists of nine domains relating to parental HL; (1) feeling that healthcare providers understand and support their child’s situation, (2) having sufficient information to manage their child’s health, (3) actively managing their child’s health, (4) social support for health, (5) appraisal of health information, (6) ability to actively engage with healthcare providers, (7) navigating the healthcare system, (8) ability to find good health information, and (9) understanding health information well enough to know what to do. Responses for domains 1 to 5 are measured on a 4-point Likert scale ranging from “strongly disagree” to “strongly agree”, while domains 6 to 9 utilize a 5-point Likert scale assessing capability/difficulty ranging from “can’t do/always difficult” to “always easy”. A total score is not calculated for the nine HLQ-p scales. Instead, mean scale scores are calculated and interpreted separately (10). Scores > 2 on scales 1–5 signal a change from “disagree” to “agree”, and > 3 on scales 6–9 suggest a shift from “sometimes difficult” to “usually easy”, reflecting changes in HL without setting a fixed threshold for limitation. The HLQ-p has been validated in Norwegian with satisfactory results (11). The reliability of the HLQ-p was assessed using Cronbach’s alpha (Cronbach’s α 0.8–0.9).
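The sketch below illustrates how mean scale scores per HLQ-p domain can be computed and compared against the cut-points mentioned above. The item-to-domain grouping and the respondent values are hypothetical placeholders (the full item set is not reproduced here), and the "low" flag is purely illustrative, since the instrument defines no fixed threshold for limitation.

```python
from statistics import mean

# Domains 1-5 use a 1-4 agreement scale (cut-point 2); domains 6-9 use a
# 1-5 capability scale (cut-point 3), mirroring the interpretation above.
THRESHOLD = {d: (2.0 if d <= 5 else 3.0) for d in range(1, 10)}

def domain_scores(item_scores):
    """Mean scale score per HLQ-p domain; no total score is computed."""
    return {d: mean(items) for d, items in item_scores.items()}

def low_domains(scores):
    """Domains at or below the cut-point (illustrative only)."""
    return [d for d, s in scores.items() if s <= THRESHOLD[d]]

# Hypothetical respondent: item values grouped by domain (placeholder map).
respondent = {1: [2, 2, 3, 2], 4: [1, 2, 2, 2], 6: [4, 3, 3, 4], 9: [5, 4, 4, 5]}
scores = domain_scores(respondent)
print({d: round(s, 2) for d, s in scores.items()}, "low:", low_domains(scores))
```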
The eHEALS assessed the participants’ perceived level of electronic HL (eHL), regarding finding, evaluating and utilizing online healthcare information . The 8-question survey uses a 5-point Likert scale, with a higher score suggesting a higher level of eHL . The tool has shown robust construct validity and reliability across various settings . Cronbach’s alpha was 0.9.
The GSES is a psychological assessment tool measuring the participants’ belief in their ability to handle challenges and accomplish goals . Comprising 10 items, scores range from 10 to 40, with higher scores indicating higher self-perceived self-efficacy. For this study, scores were normalized to a 1 to 4 scale. The GSES has shown validity and reliability in studies on patients with different conditions . Cronbach’s alpha was 0.9.
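To illustrate the scoring just described, the following sketch rescales a GSES response to the 1 to 4 range by taking the item mean (the assumption being that the 10-40 total was divided by the number of items) and computes Cronbach's alpha with the standard formula; the response data are invented.

```python
from statistics import pvariance

def gses_normalized(items):
    """GSES items are scored 1-4; dividing the 10-item total (10-40) by the
    number of items gives the 1-4 scale used in the study."""
    assert len(items) == 10
    return sum(items) / len(items)

def cronbach_alpha(matrix):
    """Cronbach's alpha for a respondents-by-items score matrix, using
    population variances: k/(k-1) * (1 - sum(item var) / total var)."""
    k = len(matrix[0])
    item_vars = [pvariance([row[i] for row in matrix]) for i in range(k)]
    total_var = pvariance([sum(row) for row in matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented responses from four participants, for illustration only.
data = [
    [3, 3, 4, 3, 3, 4, 3, 3, 4, 3],
    [2, 2, 3, 2, 2, 3, 2, 2, 3, 2],
    [4, 4, 4, 3, 4, 4, 4, 3, 4, 4],
    [3, 2, 3, 3, 2, 3, 3, 2, 3, 3],
]
print([round(gses_normalized(r), 1) for r in data])
print(f"Cronbach's alpha = {cronbach_alpha(data):.2f}")
```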
Data analyses were performed using Stata 18.0. Initially, general characteristics were summarized using means and standard deviations. Independent t-tests were used to compare differences in HLQ-p domains across parent and child factors. To assess the connections between HLQ-p, eHEALS, and various parental and child factors, bivariate correlation (Pearson's R) was utilized. Next, a hierarchical linear multiple regression analysis in three steps was performed using the enter method: Step 1 included age, education and language; Step 2 added living arrangements; and Step 3 adjusted for GSES score. The selection of variables included in the regression models was guided by the initial analyses. The associations are presented as standardized beta coefficients. Adjusted R2 expressed the explained variation in the associations. A cluster variable for paired parents (88 pairs) ensured valid regression analysis; however, adjustments for clustering revealed no significant differences, so all parents (n = 132) were included. Significance was set at p < 0.05. The online form ensured no missing data for the HLQ-p, GSES and HD-questionnaire by making responses mandatory. We achieved a 98% completion rate for the optional eHEALS; the small amount of missing data was deemed negligible for the analysis.
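A minimal sketch of the three-step hierarchical regression described above, using synthetic data and Python's statsmodels in place of Stata. Variable names, effect sizes, and the clustering variable are placeholders, and only the continuous variables are z-scored here so that their coefficients can be read roughly as standardized betas.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
# Synthetic stand-in for the parent data set (illustration only).
df = pd.DataFrame({
    "age_over_40":    rng.integers(0, 2, n),
    "higher_edu":     rng.integers(0, 2, n),
    "norwegian_only": rng.integers(0, 2, n),
    "cohabiting":     rng.integers(0, 2, n),
    "gses":           rng.normal(3.2, 0.5, n),
    "family_id":      rng.integers(0, 60, n),   # cluster id for parent pairs
})
df["hlq_domain"] = (2.5 + 0.3 * df["higher_edu"] + 0.4 * df["gses"]
                    + rng.normal(0, 0.4, n))

# z-score the continuous variables so their coefficients approximate betas.
for col in ["gses", "hlq_domain"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

steps = [
    "hlq_domain ~ age_over_40 + higher_edu + norwegian_only",                          # step 1
    "hlq_domain ~ age_over_40 + higher_edu + norwegian_only + cohabiting",             # step 2
    "hlq_domain ~ age_over_40 + higher_edu + norwegian_only + cohabiting + gses",      # step 3
]
for i, formula in enumerate(steps, start=1):
    fit = smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["family_id"]})
    print(f"step {i}: adjusted R2 = {fit.rsquared_adj:.2f}")
```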
The project was ethically approved by the Regional Committee for Medical Ethics (REK; 402,216) and the Hospital’s Data Protection Officer (22/03367). All parents gave written consent. The children received age-appropriate information about the study.
Cohort characteristics

Parents of 91/130 (70%) children completed the questionnaires. The median age of the children was 8 (0–15) years (Table ). We received 132 parent responses, of which 79 (60%) were from mothers. Responses included 44 cases where both parents participated. In addition, one parent responded for 3 siblings, and another parent responded for 2 siblings. For the remaining 42 children, one parent responded for the child, providing a total number of 132 responses. Mean parental age was 39.8 years (SD 6.8), with no significant age difference between fathers and mothers (41.1 versus 38.9 years, p = 0.7). Most parents lived with the other parent of the child (79.5%), had higher education (52%), and worked full time (79%). Thirty-six percent of the parents spoke another language or combined another language with Norwegian at home. Of the children, 75% were male, 74% had short segment HD and 28% had additional comorbidity, Down syndrome being the most common (13%).

General knowledge about Hirschsprung disease

The results from the HD study-specific questionnaire indicated that most parents had a good general knowledge about the congenital nature and rarity of HD (Fig. ). They also recognized the necessity for regular bowel movements and acknowledged that children with HD can get more ill from stomach flu than other children. Awareness about the existence of a national patient association was limited.

Health literacy, eHEALS and GSES scores

The average HLQ-p scores were above the critical low thresholds, with the highest scores in the domains "understanding health information well enough to know what to do" (domain 9) and "active engagement" (domain 6) (Table ). The lowest scores were observed in the domains "feeling that HCP understands and supports my child's situation" (domain 1), "appraisal of health information" (domain 5) and "social support" (domain 4). The parents generally demonstrated high eHL scores, with 82% of the parents having a total score of > 3 points (maximum score 5), suggesting good ability to use electronic resources to manage their child's health. For self-efficacy, the mean GSES score was 3.2 (SD 0.5, maximum score 4), with 69% scoring high, defined as a score > 2. GSES scores were comparable between mothers and fathers (mean score 3.2 versus 3.1, p = 0.5) (Table ).

Factors influencing health literacy

Higher self-efficacy, living with the child's other parent, and higher education correlated with higher scores in most HLQ-p domains (Table ). Norwegian-only speakers at home and parents over 40 years also scored higher in certain domains. Parental sex and child-related factors such as the child's age, time since diagnosis, length of aganglionosis, comorbidity or syndromes showed no correlation with HLQ-p scores and were therefore excluded from the multivariate regression analysis. In summary, the regression analysis revealed that parental age, language spoken at home, education and living arrangements significantly influenced HL scores (Table , supplement). Parents over 40 years scored higher in understanding health information and managing their child's health (domains 2 and 9, St. β 0.2). Norwegian-only speakers scored higher in communication and healthcare system navigation (domains 6 and 7, St. β 0.3). Higher education correlated with higher scores in all domains (St. β 0.2 to 0.5). When including living arrangements (Step 2), parents living together scored higher in most domains, except domain 7 (navigation) and domain 3 (active management) (St. β 0.2 to 0.5).
Meanwhile, higher education remained significant for all domains except domain 3 (active management) (St. β 0.2 to 0.4). Norwegian-only parents continued to score higher in communication and navigation (domains 6 and 7, St. β 0.3 to 0.4). When adding the GSES score (Step 3), higher self-efficacy correlated with higher scores across all HLQ-p domains (St. β 0.2 to 0.7). Cohabiting parents still scored higher in HCP support and communication (domains 1 and 6, St. β 0.3, 0.5), social support (domain 4, St. β 0.5), critical appraisal (domain 5, St. β 0.2) and finding and understanding health information (domains 8 and 9, St. β 0.3, 0.4). The final model explained 20–50% of the variance in the HLQ-p scales.
The main finding of this study exploring HL in parents of HD children is that the parents generally have good knowledge about the disease, but struggle with social and emotional aspects of caring for their child. A comprehensive study on HL in parents of children with HD has not been conducted previously, and our results offer several new insights.

Parents reported a lack of social support related to their child's HD. We do not know the reasons for this but hypothesize that the stigma associated with defecation problems and the rarity of HD contribute to the sense of isolation . Furthermore, Norway's geography makes finding peers and support networks locally challenging. Besides, only half of the parents in this study were aware of the HD patient association, suggesting that a possible source of peer support and shared experiences may be underutilized. Previous research has found similar issues among HD families , with one study stressing parents' lack of self-efficacy in seeking social support when caring for a child with HD . Parents of children with anorectal malformation (ARM) experience similar psychosocial burdens , indicating a need for accessible support systems. Nevertheless, HCP should inform families about patient groups and support networks.

Parents generally perceived a lack of support and understanding from HCP about the child's situation, which is surprising as our center offers HD families direct contact with their care team, including stoma nurses, and patients are routinely followed until age 18 with a transition consultation to prepare for adult healthcare systems. The reasons for this perception are not clear, but it is possible that specialized HD professionals unintentionally make parents feel overlooked in their efforts to normalize the condition and reduce over-medicalization. Additionally, some parents found interactions with general practitioners and emergency room staff challenging due to their unfamiliarity with HD, leading to difficulties in symptom interpretation and appropriate treatment. Effective family-centered care requires HCP to provide parents with appropriate information, discuss treatment options and value their preferences and concerns . Improving these aspects is crucial in building trust and ensuring parents and their children feel supported and informed.

HD parents struggled with evaluating the quality and relevance of health information. This may contribute to their perception of being less capable of managing their child's condition compared to parents of children with other chronic illnesses . Since HD management is different for every child, parents need to adapt advice to their child's specific needs, which requires critical HL skills . If HCP acknowledge these challenges, they can give better support and help families feel confident.

Sociodemographic factors influencing health literacy

Parental sex did not influence HL levels in this study. Some research suggests fathers are less engaged in health services than mothers . However, one study found higher communicative HL in fathers, although they were also more educated than the mothers . Our study is unique due to the high participation of fathers, possibly reflecting Norway's emphasis on equal parental rights and social-gender equality. The similar HL levels in mothers and fathers may reflect mutual involvement in caring for a child with HD. Nevertheless, parental collaboration is crucial in alleviating the adverse impacts of chronic conditions on a child's overall well-being .
We found that younger HD parents had lower HL, aligning with some, but not all, studies on parental HL . Interestingly, time since HD diagnosis (a measure of experience) did not influence HL levels, suggesting that age, rather than experience, plays a role in enhancing HL. This may be due to parental experience and maturity and implies that young parents need extra support.

The finding that lower education predicts low HL is expected and consistent with global literature . One study linked reduced HL to lower socioeconomic status, revealing barriers to care access and shared decision-making for those parents . Academic education likely improves HL through accumulated knowledge and skills . However, higher education does not guarantee high HL, as many highly educated parents also had HL challenges.

Parents not living with the child's other parent had more HL challenges. Research on social determinants for health in HD found that parental marital status affected a child's risk of developing Hirschsprung-associated enterocolitis . Similarly, unmarried maternal status has been linked to increased birth-related risks . These findings underscore the need to consider family structure in HD management, suggesting targeted interventions for HL challenges in diverse family settings.

Language barriers and cultural disparities are known to complicate communication and HL and may even affect postoperative outcomes . Immigrants and their Norwegian-born children make up roughly 20% of Norway's population, and significant HL disparities exist among these communities . Parents who spoke only Norwegian at home had better engagement with HCP and understanding of healthcare systems compared to those who also spoke another language. This suggests that even proficient Norwegian-speaking bilingual parents may have HL challenges related to language and that the use of interpreter services is crucial. Besides, excluding non-Norwegian-speaking parents likely skews our findings towards higher HL.

No child-related factors influenced parental HL. Since children with complex HD or comorbidity have more interactions with healthcare, we expected their parents to have increased HL. However, having a child with comorbidity, long-segment HD, permanent stoma or appendicostomy showed no link to improved HL. Research has not conclusively established a relationship between comorbidities and parental HL, and one study in fact linked comorbidity to lower HL . Comorbidities may require parents to comprehend diverse information, potentially challenging their HL skills.

Our results point to self-efficacy as a strong predictor of parental HL, consistent with existing research in various pediatric patient groups . Enhancing self-efficacy through tailored interventions like education, mastery classes, and support networks could effectively improve HL in HD parents. Furthermore, parents demonstrated high levels of eHL, similar to findings among Swedish parents of children needing surgical care , suggesting eHL interventions could be effective. Electronic resources can provide accessible, tailored information, enabling informed decisions and active participation in their child's care .

Strengths and weaknesses

An important strength of the study is the authentic representation of the parent population. Oslo University Hospital treats about 80% of HD patients in Norway, and we evaluated HL in parents of 70% of these children.
Families not included are those living in the northern part of Norway, typically with longer distances to the local hospital. Additionally, the study includes a substantial number of fathers and non-native speakers. Offering both online and paper surveys ensured diverse eHL levels. Another strength lies in the use of validated tools. Weaknesses involve the cross-sectional design lacking long-term follow-up, the relatively small population limiting advanced statistical analysis, lack of data on non-responders, and insufficient information on non-Norwegian-speaking parents’ HL. Lastly, the study-specific questionnaire has not undergone formal validation, so we cannot be certain that it accurately measures parents’ knowledge about HD.
Parents of children with HD feel HCP lack understanding of their child’s challenges, experience limited social support and struggle with health information interpretation. HCP should address these barriers and offer targeted HL efforts to young, lower-educated, non-cohabitating parents, and to those who do not primarily speak the official language at home. Understanding these factors can guide tailored HL interventions to specific groups.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 20 KB)
Harnessing digital health interventions to bridge the gap in prevention for older adults

The current global population of older adults is undergoing a notable and swift growth, which presents substantial health-related obstacles to public health systems on a global scale . The anticipated doubling of the aging population by 2050 has resulted in an increased incidence of age-related illnesses, including falls, sarcopenia, and dementia . The health issues associated with aging not only have a significant impact on the overall wellbeing of older individuals but also put considerable pressure on the healthcare system . The implementation of preventive measures aimed at addressing falls, sarcopenia, and dementia is therefore of utmost importance in order to minimize the negative impact of these conditions on the wellbeing, quality of life, and autonomy of older individuals . Nevertheless, it is important to acknowledge that public health systems encounter distinct obstacles when it comes to efficiently tackling these concerns . These constraints encompass, for example, restricted resources, insufficient infrastructure, and discrepancies in healthcare accessibility . The World Health Organization (WHO) highlighted five recommendations to promote physical activity: strengthen government (ownership and leadership), provide practical tools and guidance, support partnerships and build capacity, reinforce data systems and knowledge translation, and secure and align funding with national policy . Therefore, it is crucial to prioritize the development and implementation of preventative measures that take into account the unique requirements and circumstances of older persons in various socioeconomic contexts. This is essential in order to promote healthy aging and enhance public health outcomes on a global scale .

In recent times, there has been a notable emergence of digital health treatments, such as wearable sensors, mobile health apps, and virtual reality, that show promise in the domain of preventative care for older adults . These interventions present creative solutions aimed at effectively addressing the health concerns faced by this demographic. The pervasive utilization of mobile health applications and wearable sensors has fundamentally transformed the delivery of healthcare, enabling the provision of real-time monitoring, tailored interventions, and remote healthcare assistance . The use of digital platforms offers a unique and unparalleled prospect of effectively engaging a broader demographic of older individuals, while simultaneously reducing expenses in comparison to conventional healthcare approaches . Through the use of technological advancements, digital health interventions have the potential to enable older adults to actively engage in the management of their health, hence facilitating early identification, prevention, and the provision of individualized care. The potential for altering public health methods and improving healthcare solutions for older individuals globally is considerable through the seamless integration of these technologies into preventive initiatives .

Falls and sarcopenia are prominent health issues that exert a substantial influence on the overall welfare and autonomy of older adult individuals. The older population is particularly vulnerable to falls , which present a substantial risk resulting in injuries, hospitalizations, and reduced mobility .
Sarcopenia, an age-related phenomenon characterized by the progressive reduction in muscle mass and strength, is a significant factor in the development of frailty and functional decline among older adults . This condition increases the risk of falls and several other health concerns. In order to tackle these aforementioned issues, the utilization of wearable sensors and digital health has emerged as a valuable means for assessing fall risk and providing balance training in the older population . The sensors have the ability to track alterations in gait and balance, hence facilitating the prompt recognition of persons who may be susceptible to falling. Wearable sensors facilitate the customization of balance training programs for individuals by offering real-time data on gait patterns and postural stability . This capability empowers healthcare providers to mitigate the likelihood of falls and enhance overall mobility. In conjunction with fall prevention, mobile applications present considerable opportunities for strength training and tele-rehabilitation among the older population . These applications have the ability to provide personalized workout routines that specifically focus on particular muscle groups, hence assisting in the reduction of sarcopenia's impact and enhancing overall physical functionality . In addition, the use of mobile applications for telerehabilitation facilitates the provision of rehabilitation services from a distance, therefore offering significant advantages to older adult individuals residing in geographically isolated or underserved regions. This technique facilitates self-regulation and compliance with exercise protocols, hence encouraging sustained enhancements in functionality and autonomy among older adults. Through the use of wearable sensors and mobile applications, healthcare practitioners have the ability to customize preventive treatments (i.e., personalized rehabilitation) to the exact needs and abilities of the older adults, thereby effectively addressing issues such as falls, sarcopenia, and other health-related concerns. Digital health solutions have the capacity to increase the quality of life for older individuals, diminish healthcare expenditures, and foster the promotion of healthy aging inside a progressively aging global population . The increasing prevalence of dementia and cognitive impairment in older populations has emerged as a major public health concern . The increasing worldwide population aging phenomenon has led to a corresponding rise in the prevalence of cognitive illnesses, such as dementia. These disorders not only have an impact on an individual's cognitive ability but also have substantial effects on their daily functioning and overall quality of life. Additionally, they impose a burden on healthcare systems and carers . To overcome this challenge, the use of digital tools has emerged as a valuable means of cognitive assessment and monitoring among older persons . These instruments have the capability to offer impartial and consistent assessments of cognitive abilities, hence enabling the timely identification and implementation of appropriate measures. Through the use of digital cognitive evaluations, healthcare providers are able to discern minor alterations in cognitive capabilities . This enables them to implement interventions that are both timely and individualized, with the aim of mitigating or managing cognitive decline. 
In addition, cognitive training applications present encouraging prospects for enhancing cognitive abilities among the older population . These applications often comprise stimulating and interactive activities that are specifically developed to address several cognitive areas, including memory, attention, and problem-solving. The consistent utilization of these applications has demonstrated promise in augmenting cognitive functioning and fostering neuroplasticity among older adults . The customization of cognitive training allows for an individualized and inclusive method to effectively target cognitive decline in older people. The incorporation of digital tools in cognitive healthcare presents a promising prospect for revolutionizing our approach to addressing dementia and cognitive decline in the older population. The use of digital assessments and cognitive training applications possesses the capacity to facilitate timely intervention and boost cognitive wellbeing in older populations. The use of digital therapies becomes progressively imperative in addressing the escalating incidence of cognitive disorders and promoting healthy aging among the aging global population .

The implementation of digital health treatments for older individuals is hindered by the considerable hurdles posed by the insufficiency of resources and the inadequacy of healthcare infrastructure. Insufficient technology and internet connectivity, limited digital literacy skills, data privacy and security threats, limited device performance, and inadequate healthcare infrastructure can impede the widespread implementation of digital solutions in these settings . Furthermore, the limited availability of adequately qualified healthcare personnel and financial limitations may pose obstacles to the advancement and execution of digital health initiatives targeting the older population . In order to optimize the efficacy of digital treatments, it is imperative to customize and design these solutions with older adults to align with the cultural and educational contexts of older individuals across various geographical areas . The design and execution of digital health initiatives necessitate the consideration of cultural views, linguistic preferences, and diverse degrees of digital competence. Tailoring interventions to be congruent with local cultures and educational levels and gaining insight into the technical challenges that older adults encounter will empower developers to craft future digital interventions that are precisely attuned to their needs . The successful implementation of digital health interventions necessitates the resolution of obstacles pertaining to the acceptability and utilization of technology among the senior population. Older adult individuals may exhibit apprehension about technology due to perceived complexities and apprehensions around privacy and data security . It is of utmost importance to address these concerns by implementing user-friendly interfaces with fewer buttons, larger text, and improved color contrast, by providing clear instructions, and by adopting transparent data management methods . The present systems struggle to cope with the demands of processing and securing the vast influx of multi-sensor data captured. Employing advanced data processing techniques becomes imperative to seamlessly integrate the rich information acquired from wearable sensors, translating it into clinically relevant outputs.
The provision of sufficient training and assistance for older persons in the utilization of digital technologies has the potential to enhance their self-assurance and self-efficacy and ease in engaging with these treatments for the purpose of promoting their health and overall wellbeing. In addition, implementing rigorous regulations, fostering strong societal support, and engaging proactively with older adults can contribute to enhancing their utilization and confidence in digital healthcare. This multifaceted approach also ensures proper usage, privacy, and security. Digital health treatments have the potential to significantly narrow the healthcare disparity among older individuals in low- and middle-income countries by effectively acknowledging and tackling the obstacles related to limited resources, cultural diversity, and adoption of technology . Tailored digital solutions possess the capacity to empower the older population, enhance their accessibility to preventive healthcare, and augment their general wellbeing and quality of life. Furthermore, through the promotion of cooperation among policymakers, healthcare practitioners, and technology innovators, it is possible to collectively establish enduring and all-encompassing digital health initiatives that yield advantages for older individuals in many socioeconomic contexts . displays our proposed suggestions for advancing research and enhancing the efficacy of digital health interventions targeted toward the older population. The table was divided into three distinct areas, including the suggestions, the possible impact of digital health on the healthcare sector, and initiatives aimed at ensuring sustainability. In , we presented an adapted version of the NASSS framework (Non-adoption, Abandonment, Scale-up, Spread, Sustainability) as an analytical tool to depict both the challenges and opportunities . This framework offers a comprehensive perspective on the complexities associated with implementing healthcare technologies, particularly in the context of older adults. The NASSS framework acknowledges that the successful adoption of technology depends on a myriad of factors, encompassing technological, organizational, social, and contextual dimensions. Given the unique challenges inherent in introducing digital health interventions to older adults, the NASSS framework proves invaluable in identifying potential obstacles and essential considerations throughout the implementation process. Through the application of this framework, we delved into various facets, including the interaction between technology and the older adult population, organizational readiness, societal attitudes, and the overall ecological context. Furthermore, we integrated the NASSS framework with the WHO People-Centered Health Service Strategies. These strategies underscore the significance of tailoring health services to individual needs, preferences, and experiences. By combining these frameworks, we offer a more holistic understanding of the interplay between technology implementation and the human-centric aspects of healthcare delivery for older adults. This synergistic approach allows us to explore not only the technical intricacies of digital health interventions but also their alignment with the preferences and values of the older adult population. The urgent need for quick action is evident in the imperative to address the preventative gap among older persons. 
The prevalence of age-related illnesses, including falls, sarcopenia, and dementia, is increasing in tandem with the global aging population. In the absence of efficient preventive measures, the welfare and autonomy of older adult individuals are in jeopardy, potentially leading to significant burdens on healthcare systems in terms of delivering sufficient care and assistance. This underscores the importance of promptly and resolutely implementing preventive measures aimed at mitigating the effects of these disorders on the lives of older adults.

The incorporation of digital health interventions presents a potentially effective approach to improve the physical function and cognitive health of older adult individuals. Through the use of wearable sensors, smartphone applications, and tele-rehabilitation platforms, digital treatments have the potential to enable older adults to actively engage in the management of their health. These interventions have the potential to enhance the timely identification of health hazards, provide tailored exercise and cognitive training programs, and encourage the adoption of healthy behaviors. In addition, the accessibility and scalability of digital health solutions have the potential to expand their impact on older persons residing in distant or underserved regions, hence mitigating gaps in healthcare accessibility.

The transformative impact of digital health interventions on preventative care for older individuals should not be underestimated. By providing older adults with the tools and resources to actively monitor their health, participate in preventive activities, and effectively manage chronic illnesses, these interventions possess the potential to significantly improve overall wellbeing and foster healthy aging on a global level. The digital health revolution has far-reaching effects that extend beyond the realm of individual health outcomes. It has the potential to alleviate the strain on healthcare systems, mitigate healthcare expenditures, and enhance the overall quality of life for older persons and their carers.

In summary, it is of utmost importance to address the pressing need for closing the gap in preventative measures for the older population. The incorporation of digital health interventions into the realm of preventative care for older individuals on a global scale presents a remarkable prospect for advancing physical function and cognitive health. This has the potential to bring about a transformative shift in the field. By strategically allocating resources toward research, development, and implementation endeavors, it is possible to enhance the autonomy and wellbeing of older adults, thereby securing a more promising outlook for aging populations on a global scale.

KD: Visualization, Writing—original draft, Writing—review & editing. BB: Conceptualization, Supervision, Writing—original draft, Writing—review & editing.
The Multifaceted Impact of Bioactive Lipids on Gut Health and Disease

Lipids represent a broad class of hydrophobic molecules categorized based on structure, functional groups, and carbon chain properties. Bioactive lipids are lipids that are capable of affecting cell function via changes in their concentrations. While lipids may have once been considered simply structural components of cellular membranes, the concept of lipids modulating cellular biology has spurred great interest in recent years. This is largely due to improvements in methods of detecting, quantifying, and characterizing lipids and the emergence of the field of lipidomics, which seeks to profile the total lipids in a system, or the "lipidome". The field of lipidomics has been advanced through methods like mass spectrometry, enabling detailed lipid characterization . Techniques such as electron spray ionization and liquid chromatography allow for efficient separation and identification of lipids, despite challenges related to lipid insolubility in water. These methods have also helped reveal the role of lipids in health, particularly through targeted vs. untargeted analyses, such as those found in the LIPID MAPS database.

A major focus area of lipid research is their interactions in the intestinal tract, which sees a variety of lipids including those made by the host, ingested dietary lipids, and those produced by the gut microbiome. There are several major lipid classes that have been investigated and found to modulate host health and disease. Short-chain fatty acids, such as butyrate, are produced by gut bacteria and play significant roles in immune modulation by suppressing inflammation . In contrast, long-chain saturated fatty acids, like dietary palmitic acid, can induce inflammation, especially in obesity-related conditions . Conversely, unsaturated fatty acids like omega-3s have been shown to reduce inflammation and promote beneficial immune outcomes in diseases . Likewise, fatty acid derivatives, such as prostaglandins, which are generally produced by host cells, can also modulate inflammation and help control immune activity by regulating immune cell behavior . Though their direct effects on gut immunity are less explored, bile acids (sterol lipids) modulate immune responses in the gut. Finally, sphingolipids, which are produced by host cells and bacteria, are essential for cell signaling, immune modulation, and membrane structure, and their presence in both eukaryotic and bacterial cells highlights their broad significance. Moreover, the interplay between bacterial sphingolipids and host immune responses points to their potential as therapeutic targets .

While all the bioactivities of the lipids of the gut are of interest to science, one of the greatest interests is the effects of these lipids in the context of human disease and immunity. For example, many major lipid categories are known to play a role in inflammatory bowel diseases and colon cancer. The complex interactions between dietary lipids, gut-derived lipids, and the microbiome emphasize the role of lipids in maintaining immune balance. Future research will likely uncover more about the lipidome's impact on systemic health and disease states beyond the gut, such as non-alcoholic steatohepatitis and neurological conditions.

Lipids are a very broad class of hydrophobic/amphiphilic hydrocarbon-containing molecules.
Lipids have been defined for over 200 years, but due to the complexity of identifying and categorizing these molecules, lipidomics as a field has only evolved over the last two decades . Identification of lipids involves separating molecules into categories based on common structure and origin, resulting in eight main lipid classes. These include (1) fatty acids consisting of elongated carbon chains starting from acetyl-CoA, (2) glycerolipids, which consist of glycerol with one or more fatty acid substitutions often by ester bonds, (3) glycerophospholipids, which are structurally similar to glycerolipids but have a phosphate group substitution on the glycerol head, (4) sphingolipids that originate from a sphingosine base and are formed by the addition of serine to the head of a fatty acid, (5) sterol lipids, which have a distinct four-ring structure, (6) prenol lipids that consist of 5 carbon isoprenoid structures, (7) saccharolipids, which are made of a sugar backbone with fatty acid substitutions, and (8) polyketides, which are defined by alternating ketone and methyl groups. Subcategories of these main lipid classes are based on common modifications to and variants of the head groups as well as subdivisions of lipid biogenesis pathways . Specific lipid identification involves separating them into categories based on common structure, then into subcategories based on the number of carbons as well as the bond positions in the lipid tail groups . Lipid identification is largely dependent on mass spectrometry (MS), which is a complex process that involves aerosolizing and ionizing the lipid molecules, and then separating the particles/molecules by shooting them through a charged chamber, allowing for separation by mass and charge upon detection . Recent advancements in lipid analysis include the development of targeted vs. untargeted lipidomics. Untargeted methods allow for analysis of the global lipidome while targeted methods rely on the properties of specific lipid categories to separate and further identify specific functional lipid groupings. Lipidomics has also expanded by further characterizing the known categories of lipids, improving detection of smaller concentrations of lipids with more complex mass spectrometry methods , and advancing the study of lipids and their function in the context of health and disease . Multiple forms of MS are used to analyze lipids. In all lipid MS applications, the lipids must first be isolated, which requires their separation from the aqueous phase, usually with non-polar solvents. This need for separation presents unique challenges when compared to MS of water-soluble compounds like most metabolites and often requires adaptation of techniques to fit the unique properties of lipids. Even though lipids are not soluble in water, they can still be analyzed by liquid chromatography (LC) prior to MS. This commonly used technique separates categories of lipids prior to MS analysis and allows for broad analysis of complex samples. LC-MS enriches low-concentration lipid categories and can also aid in differentiation of isomeric states of lipids prior to MS . Electron spray ionization (ESI)–MS is another technique commonly used to identify lipids. In this process, a strong electric field is applied to the liquid lipid extract as it slowly passes through a capillary tube, which induces either a positive or negative charge on the lipids prior to their separation by spray through an inert gas . 
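To make the shorthand notation and the m/z-based matching step concrete, the sketch below is an illustrative example only: it parses a lipid shorthand string into its class, total chain carbons, and total double bonds, computes expected mass-to-charge (m/z) values for a few common ESI adducts from an assumed monoisotopic mass, and checks a hypothetical observed peak against a parts-per-million (ppm) tolerance. The adduct mass shifts are standard approximations, while the example lipid mass, the observed peak, and the 5 ppm tolerance are assumptions chosen for demonstration rather than values drawn from the studies cited here.

```python
"""Illustrative sketch: parse lipid shorthand and match ESI-MS peaks by m/z.
All example values (lipid mass, observed peak, tolerance) are assumptions."""

import re
from typing import NamedTuple


class LipidSpecies(NamedTuple):
    lipid_class: str   # e.g., "PC" (phosphatidylcholine)
    carbons: int       # total carbons in the fatty acyl chains
    double_bonds: int  # total C=C double bonds in the chains


def parse_shorthand(name: str) -> LipidSpecies:
    """Parse shorthand like 'PC 34:1' or 'PC 16:0/18:1'."""
    match = re.fullmatch(r"([A-Za-z]+)\s+([\d:/]+)", name.strip())
    if not match:
        raise ValueError(f"Unrecognized lipid shorthand: {name!r}")
    lipid_class, chains = match.groups()
    carbons = double_bonds = 0
    for chain in chains.split("/"):           # each chain is 'carbons:double bonds'
        c, d = (int(x) for x in chain.split(":"))
        carbons += c
        double_bonds += d
    return LipidSpecies(lipid_class, carbons, double_bonds)


# Approximate mass shifts (Da) for common ESI adducts: positive mode adds a
# proton or sodium ion, negative mode removes a proton.
ADDUCT_SHIFTS = {
    "[M+H]+": 1.00728,
    "[M+Na]+": 22.98922,
    "[M-H]-": -1.00728,
}


def expected_mz(monoisotopic_mass: float, adduct: str, charge: int = 1) -> float:
    """Expected m/z of an adduct ion formed from a neutral lipid."""
    return (monoisotopic_mass + ADDUCT_SHIFTS[adduct]) / charge


def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy in parts per million, used to match observed peaks."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6


if __name__ == "__main__":
    species = parse_shorthand("PC 16:0/18:1")
    print(species)  # LipidSpecies(lipid_class='PC', carbons=34, double_bonds=1)

    mono_mass = 759.578  # illustrative monoisotopic mass for PC 16:0/18:1 (POPC)
    for adduct in ADDUCT_SHIFTS:
        print(f"{adduct}: expected m/z ~ {expected_mz(mono_mass, adduct):.4f}")

    observed = 760.5858  # hypothetical peak from a positive-mode spectrum
    err = ppm_error(observed, expected_mz(mono_mass, "[M+H]+"))
    print(f"observed {observed} vs [M+H]+: {err:+.1f} ppm "
          f"({'match' if abs(err) <= 5 else 'no match'})")
```

In practice, dedicated lipidomics software handles far more cases (isotopologues, isomeric species, in-source fragments, and many adduct types), but the same parse-then-match logic underlies targeted identification workflows.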
By exploiting the acquisition of either positive or negative charges by different lipid classes, this process enables the separation of lipids by m/z ratio without the need for prior chromatographic separation, which adds significant complexity and possible experimental variation to the process . Removal of LC for lipid analysis makes ESI-MS a faster and less complex technique than LC-MS. However, this comes with the trade-off of reduced sensitivity in samples containing isobaric lipid species or with significant lipid–lipid interactions that may cause difficulties in ionization . Other variations of MS are also used to identify lipid species, and many of these are modifications on the principles of ESI. These include paper spray ionization (PSI), in which the capillary tube of ESI is replaced by a triangular piece of paper with inherent porosity and absorptive properties, making this technique inexpensive and simpler to execute by comparison . PSI also allows for analysis of more crude extracts, including extracts containing particulate matter such as bacterial cultures . Differential mobility separation (DMS)–MS is another alternate technique, which separates charged ions in a gaseous phase in the presence of low- and high-strength electric fields prior to MS analysis. DMS-MS functions similar to ESI-MS but can confer greater sensitivity as it separates molecules with variable electric fields . While the hydrophobic nature and diversity of lipids has made them difficult to properly analyze in the past, this diversity is now being exploited through a variety of mass spectrometry techniques leading to significant gains in the field of lipidomics. 3.1. Short-Chain Fatty Acids One subcategory of lipids that has been studied a great deal in human health is short chain fatty acids (SCFAs, ). These fatty acids are carboxylic acids with small, two to six carbons, aliphatic tails, which are produced by anaerobic bacteria that reside in the gastrointestinal tract and result from bacterial breakdown of certain dietary carbohydrates (i.e., fiber; Ref. ). SCFAs, especially butyrate, are major contributors to the health benefits associated with consumption of dietary fiber . Research using wild-type mouse models has shown that butyrate can activate nuclear erythroid 2-related factor 2 (Nrf2), which serves as a master regulator for antioxidant genes leading to a positive correlation with host health . Moreover, in a diabetes animal model, both knockout of Nrf2 and inhibition of co-activator P300 eliminated the benefits of butyrate on aortic oxidative stress and increased aortic damage . These data confirm Nrf2 as a major pathway through which these lipids function and the system-wide benefits of butyrate in both healthy and diseased hosts. SCFAs have also been associated with a wide range of immune signals in the gut, and commonly function as suppressors of those signals. For example, SCFAs can inhibit the production of both MCP1 and IL10 in human monocytes in the presence of lipopolysaccharide (LPS—another lipid associated molecule with potent immune modulatory effects . SCFAs can also affect the expression of adhesion molecules on leukocytes (L-selectin) and endothelial cells (ICAM-1 and VCAM-1), impacting leukocyte and neutrophil recruitment to sites of inflammation. Although the research on SCFAs’ role in cell recruitment has shown mixed results, this is likely due to the complex nature of the process, which involves multiple signals and cell types. 
Under inflammatory conditions, SCFAs reduce endothelial adhesion molecule expression . Neutrophil migration is thereby decreased, which exerts an anti-inflammatory effect. However, SCFAs can also increase L-selectin expression and neutrophil chemotaxis under some circumstances while lowering the expression of pro-inflammatory cytokines such as tnfα, cinc2α/β, and nfκb under other conditions . This leads to varied effects on inflammation. One important caveat is that these studies were performed using mouse models of disease, and their translational relationship to human disease is in question. Ultimately, SCFAs play a significant, condition-dependent role in modulating host inflammation in the gut via leukocyte recruitment. SCFAs have also been shown to improve the function and homeostasis of the intestinal barrier. For example, propionate given to murine intestinal epithelial cells (IECs) in culture was shown to promote the migration of the IECs through enhanced cell polarization and actin remodeling . In epithelial organoid models, it has been shown that acetate treatment decreases expression of inflammatory cytokines, further supporting the positive impact of SCFAs on intestinal health . On a more general level, the impact of SCFAs extends to modulating the gut microbiota composition. A balanced microbial community is essential for preventing dysbiosis, a common feature of IBD that leads to reduced SCFA production, and also plays a significant role in cancer prevention. High levels of SCFAs promote the growth of beneficial bacteria, which further produce anti-inflammatory molecules, creating an environment that is less conducive to cancer development and more resistant to pathogenic bacteria, which can alleviate IBD symptoms . 3.2. Eicosanoids Another type of lipid shown to have anti-inflammatory effects is the eicosanoids, a family of immune-regulating poly-unsaturated fatty acids produced by mammals. These lipids include the well-known prostaglandins and other derivatives of arachidonic acid and similar unsaturated fatty acids . Prostaglandins are generally thought to be pro-inflammatory, as they are associated with inflammation and help to regulate blood flow and pain sensation . However, their effects are likely determined by the context in which these lipids are being produced. One prostaglandin found in the gut, prostaglandin E2 (PGE2), is known to suppress innate immune responses by binding to the EP4 receptor. This activity makes it one of the many tolerogenic signals that allows for the maintenance of the gut microbiome . PGE2 may also reduce the production of inflammatory TNFα while increasing the production of anti-inflammatory IL-10, leading to a localized reduction in inflammation . Another group of eicosanoids is the specialized pro-resolving lipid mediators (SPMs). These lipids can inhibit the production of inflammatory eicosanoids while increasing the production of more SPMs . 3.3. Dietary Lipids In addition to SCFAs and eicosanoids, other dietary lipids have been associated with immune modulation in the intestinal tract .
The term “fat” is generally used to refer to dietary lipids, which include saturated and unsaturated fatty acids. Saturated fatty acids (SFAs) have no double bonds in the carbon backbone and are common in animal meats and the western diet in general. SFAs are not their own class of lipid and can generally be free fatty acids (like the SCFAs) or associated with other molecules like glycerol, as in the case of mono-, di-, and triacylglycerols, which are part of the glycerolipid category of lipids . SFAs, and especially triacylglycerol (triglyceride), are the main components of high-fat diets , and heavy consumption of these lipids is generally associated with negative health effects. For example, mice fed a diet of saturated fats had increased sensitivity to LPS and increased mortality . This same study also found that pre-treating mice with palmitic acid resulted in hyper-inflammatory responses by macrophages and that this response was mediated by ceramides (another type of lipid discussed below). The saturated fat palmitic acid has also been shown to increase the production of the pro-inflammatory cytokines MCP1, IL-6, and IL-8 in macrophages, as well as to attract neutrophils . The induction of these cytokines in macrophages and dendritic cells is thought to result from palmitic acid functioning as a TLR4 agonist. This ultimately leads to activation of the NLRP3 inflammasome pathway to release these cytokines downstream . In fact, the TLR4 agonist theory helped explain how long-chain SFAs could mediate the low-grade, persistent inflammation seen in obesity and associated conditions. However, work by Lancaster et al. found that long-chain fatty acids still induced inflammatory responses in macrophages even in a TLR4 knockout, but only if other toll-like receptors were activated first. This suggested that long-chain fatty acids function as a “second hit” to activate inflammatory responses . This finding is especially interesting in the context of the gut, since LPS, a canonical TLR4 agonist, is a ubiquitous component of gram-negative bacteria and is thus found throughout the intestinal tract. Furthermore, in addition to macrophages and dendritic cells, palmitic acid also induces pro-inflammatory phenotypes in T cells . Altogether, this ability to induce pro-inflammatory states in a wide array of immune cell types highlights how SFAs can modify the immune system in the gut and contribute to chronic inflammation without having to directly induce the inflammatory environment. Unlike saturated fatty acids, some unsaturated fatty acids, which are acquired from dietary intake of oils from plants or fish, are viewed as healthy components of the diet. Unsaturated fatty acids have at least one double bond in their carbon chain, and a primary example of the beneficial effects of these lipids is the activity of omega-3 fatty acids. Omega-3s have been shown to help reduce intestinal inflammation in both healthy and disease conditions . It is thought that omega-3 fatty acids compete with arachidonic acid for conversion into prostaglandins and leukotrienes; by limiting the production of these pro-inflammatory mediators from arachidonic acid, they lead to a reduction in inflammation . Omega-3 fatty acids also demonstrate their beneficial effects through modulation of cytokine expression and chemotaxis . For example, docosahexaenoic acid (DHA) has been shown to increase M2 polarization in macrophages , and the M2 activation state is an anti-inflammatory and immunoregulatory phenotype .
In addition, DHA decreases expression of pro-inflammatory genes in macrophages . These findings highlight the potential of omega-3 fatty acids as powerful dietary tools for modulating inflammation and promoting immune balance. 3.4. Glycerophospholipids Glycerophospholipids are one of eight major lipid classes. These lipids are composed of fatty acid chains linked to a glycerol by ester linkages, and a phosphate group associated with the glycerol . Glycerophospholipids are the major class of lipids in biological membranes, and their structure is largely the same between bacteria and eukaryotes . There are slight differences between the kingdoms due to the structure of the fatty acid chains, where bacteria may produce shorter fatty acid chains or have different double bond positions . The greatest difference in glycerophospholipid structure is in those produced by archaea. Archaeal glycerophospholipids utilize isoprene chains rather than fatty acids and are linked by ether linkages to L-glycerol rather than the D stereoisomer found in bacteria and eukaryotes . These differences are likely due to the different requirements of cellular membranes across these domains. Despite their structural and functional importance across all domains of life, research on the health impacts of glycerophospholipids is limited. Findings, such as changes to their metabolism being observed in models of depression , have been reported. In fact, their extreme ubiquity, making up most of the membrane of most cells, can make it difficult to study the effects of these lipids. For example, experiments involving deletion or extreme modification of these lipids are not possible as they are often lethal to the organism. Their ubiquity also makes glycerophospholipids a poor choice for receptor binding, as receptors would be saturated in most situations. Yet, having these lipids readily available in cell membranes, where receptors are often found, does open up their uses as signaling molecules. For example, it is more favorable for an organism to use a modified membrane lipid to signal receptor activation than to produce a new molecule for signaling receptor activation. Overall, these characteristics underscore the dual role of glycerophospholipids as essential structural components and versatile signaling molecules, highlighting their significance in both cellular functionality and potential health implications regardless of the challenges in studying their specific effects. 3.5. Bile Acids Cholesterol, which is a sterol lipid, is a well-known molecule of interest in the context of health, possibly best known for its contribution to heart and circulatory diseases, but also of critical function in cell membranes. There is little research showing an impact of cholesterol on gut immunity. However, other sterol lipids (which are derived from cholesterol metabolism) are known to modify immunity and potentially the gut microbiota . One example is bile acids, which are the main component of fat-digesting bile that is released into the small intestine . Research on the regionalization of norovirus infection within the gut has shown that microbiome-dependent regionalization of infection in the gut is largely due to bile acid, in which host-derived bile acids promote a pro-inflammatory environment, while those modified by gut bacteria mediate anti-inflammatory effects . During times of disease and inflammation, bile acid metabolism is often disrupted, which ultimately leads to additional inflammatory potential . 
Moreover, bile acids also interact with the intestinal microbiome and can shape the bacterial composition by promoting or inhibiting the growth of select bacterial species . While research into the effects of sterol lipids on the immune responses of the gut still needs to be developed, these lipids and their derivatives are among the most significant bioactive molecules in immune regulation and beyond. 3.6. Sphingolipids Another class of bioactive lipids that have been of significant interest are sphingolipids. These are defined by a sphingosine backbone and contain associated groups and residues that give each of them a specific subclass and structure . Sphingolipids contain a long chain of carbons that are associated with a hydrophilic head group. Many sphingolipids also contain a fatty acid residue. When this is the only addition to the sphingosine head, then the lipid is referred to as a ceramide. Since their discovery in 1870, these lipids have been associated with a variety of functions, including use in cellular membrane domains, cell signaling, cell proliferation, cell death, cell migration and invasion, central nervous system development, and immunomodulation . In fact, sphingolipids are essential for the maintenance of most eukaryotic cell membranes, although they have traditionally been observed within the context of the nervous system . One of the more studied functions of sphingolipids in immunomodulation is the trafficking of immune cells. This includes T cells, which are often needed to clear viral and bacterial infections. To perform this function, T cells need to enter circulation after exiting the lymph nodes. Sphingosine 1 phosphate (S1P) has been observed to play a major role in this process. This process benefits from the presence of an S1P gradient between circulation and tissue, in which there is a greater expression of S1P in circulation . Moreover, when hematopoietic cells are unable to express the receptor S1PR, T cell egress from lymph nodes is significantly reduced . Sphingolipids are also involved in immune processes in macrophages. For example, the stimulation of TLR2 and TLR4 by bacterial lipopeptides and LPS induces S1P production by upregulating expression of its activating kinase . It is suggested that S1P interaction with its receptor (S1PR3) contributes to the production of the chemokine MCP1, which in turn recruits monocytes to the site of infection . Stimulation of another S1P receptor (S1PR2) has also been shown to induce expression of inflammatory molecules IL-1β and IL-18 in mice . These findings, among others, demonstrate the critical role of sphingolipids in immune cell trafficking and inflammatory regulation, emphasizing their importance as mediators of the immune response and pointing to their involvement in inflammatory diseases. 3.7. Bacterial Sphingolipids Sphingolipids were first described and initially known only to exist in eukaryotic organisms. However, in recent years, production of these lipids has been observed in prokaryotes. Structural differences exist between bacterial and eukaryotic sphingolipids, which may be due to differences in associated head groups or fatty acids, similar to in glycerophospholipids. However, one primary difference that makes eukaryotic sphingolipids distinguishable from bacterial sphingolipids is that eukaryotic lipids have 18 carbons with a single double bond in the sphingolipid base . 
Sphingolipid production has been best described for the phyla Bacteroidetes and Proteobacteria ; however, prokaryotes outside of these phyla may also produce sphingolipids . The most studied source of bacterial sphingolipids in the gut are members of the Bacteroidetes phylum, including Bacteroides thetaiotaomicron (which is one of the most abundant bacterial groups in the human gut microbiome). Lipidomic analysis of B. thetaiotaomicron has shown that sphingolipids are one of the most abundant lipid classes in their membranes . The same has also been shown for the lipid content of outer membrane vesicles (OMVs) produced by this bacterium . Like eukaryotic sphingolipids, bacterial sphingolipids also have immune modulatory capabilities. In germ-free mice that were monocolonized with either wild-type or a sphingolipid knockout strain of B. thetaiotaomicron , a significant elevation in IL-6 and MCP1 levels and crypt hyperplasia were observed in mice colonized with the sphingolipid mutant . Moreover, it was also shown that host-derived sphingolipids were increased in IBD patients and positively correlated with inflammation and those derived from Bacteroides were decreased and negatively correlated with inflammation . While the mechanism by which Bacteroides sphingolipids modulate host inflammation is still unclear, it has been suggested that these lipids may have a role in tolerogenic responses in T cells . Taken together it may be that as bacterial-derived sphingolipids are decreased due to dysbiosis, host-derived sphingolipids increase, and this dysregulation leads to inflammation. The inflammation may be a result of increased immune cell trafficking toward sphingolipids like S1P, or it may be a result of the loss of tolerogenic signals from bacterial sphingolipids to regulatory T cells, but this area requires more study. Both bacterial sphingolipids and SCFAs have been shown to reduce the expression of MCP1 among other inflammatory signals. The opposite has been shown of other lipids, such as the long-chain fatty acid, palmitic acid. Taken together, the propensity for lipids to modulate immune signaling in the gut suggests that the gut microbiome and diet, by way of lipids, help mediate inflammation and that modification to lipid profiles of immune cells may also induce inflammation.
4.1. Inflammatory Bowel Disease Lipids play diverse roles in modulating inflammatory and metabolic diseases through their ability to mediate inflammation. While some lipids exacerbate inflammation and disease, others reduce them and thus aid in prevention and resolution of disease. Among the lipids that reduce inflammation, SCFAs are crucial in managing IBD conditions such as Crohn’s disease and ulcerative colitis . Molecules such as butyrate, acetate, and propionate function as an energy source for colon cells and promote a healthy gut barrier.
Butyrate, for instance, strengthens colonocytes, which helps maintain the integrity of the intestinal barrier , preventing harmful substances from crossing into the bloodstream—a critical factor in managing IBD symptoms . 4.1.1. Bioactive Lipids Associated with Reducing Intestinal Inflammation As discussed above, SCFAs have anti-inflammatory properties, which are crucial for reducing the immune response associated with disease . SCFAs stimulate the activity of regulatory T cells (Tregs—immune cells that help prevent excessive inflammation) through pathways involving G-protein coupled receptors such as FFAR2 (GPR43) and HCAR2 (GPR109A) on dendritic cells (DCs) and macrophages. This activation triggers IL-10 production, which facilitates Treg cell differentiation and ultimately aids in maintaining gut homeostasis and reducing inflammatory responses . Through these mechanisms, SCFAs reduce inflammation and improve gut health. Moreover, murine studies have demonstrated that supplementation with butyrate and other SCFAs enhances Treg function and subsequently lowers inflammation . In addition, clinical interventions focusing on dietary fiber intake and probiotics to boost SCFA production have shown promise in supporting gut health and reducing the severity of IBD, emphasizing the importance of SCFAs in disease management . This indicates that increasing dietary SCFA levels shows promise in IBD management. In addition to their direct effects on immune modulation of disease, SCFAs also exert cellular effects . One way through which this occurs is via epigenetic mechanisms, notably the inhibition of histone deacetylases (HDACs). HDAC inhibition enhances the expression of genes associated with anti-inflammatory pathways: HDAC inhibition by SCFAs promotes the acetylation of histones, leading to the activation of FoxP3+ Tregs. For instance, butyrate enhances the production of Treg cells in the colon by directly increasing the expression of FoxP3 through histone acetylation at conserved sequences, thus contributing to a reduced inflammatory state . These molecular changes bolster the production of IL-10, further supporting Treg cell function and mitigating inflammation in IBD . Interestingly, HDAC inhibition by SCFAs may also exert control over epithelial barrier integrity. In murine IECs in vitro, propionate treatment showed that HDAC inhibition was directly related to improved IEC migration, which is associated with improved barrier integrity . Additionally, SCFAs indirectly promote Treg activity by modulating metabolic pathways within T cells. Butyrate increases glycolysis and mitochondrial activity, providing the necessary energy and substrates for Treg proliferation and function. By acting as a substrate for acetyl-CoA in T cells, SCFAs also enhance their metabolic flexibility, enabling them to respond efficiently to inflammatory signals. This metabolic support underlies SCFAs’ role in maintaining an anti-inflammatory environment in the gut, highlighting their potential as therapeutic agents in IBD . Overall, these findings suggest that increasing SCFA levels, either through dietary interventions or targeted therapies, may be beneficial for IBD patients. Like SCFAs, some omega-3 fatty acids (e.g., EPA and DHA) also exhibit anti-inflammatory effects and have shown potential in managing IBD . Interestingly, these fatty acids modulate inflammation in the gut by altering cell membrane composition.
The incorporation of EPA and DHA into cell membranes reduces the presence of arachidonic acid, which is a precursor to pro-inflammatory eicosanoids . By shifting this balance, omega-3s decrease the production of inflammatory molecules and increase the synthesis of anti-inflammatory mediators, which help resolve inflammation . Mechanistically, omega-3 fatty acids influence gene expression through nuclear receptors such as peroxisome proliferator-activated receptor gamma (PPARγ; ). When activated by omega-3s, PPARγ reduces the expression of pro-inflammatory cytokines, including TNF-α and IL-6, both of which are elevated in IBD. This modulation of gene expression helps in reducing inflammation in the gut mucosa and supports tissue healing, making omega-3 fatty acids beneficial as a dietary supplement in managing IBD flare-ups . 4.1.2. Bioactive Lipids Associated with Inducing Intestinal Inflammation In contrast to SCFAs and omega-3 fatty acids whose presence leads to improved barrier integrity and reduction of IBD symptoms, saturated fatty acids promote intestinal inflammation and disruption of gut homeostasis. Thus, these lipids can contribute significantly to the development and exacerbation of IBD . High saturated fatty acid (SFA) intake can lead to an altered gut microbiome, favoring the growth of pro-inflammatory bacteria while reducing beneficial species . This microbial dysbiosis enhances the intestinal inflammatory response, which is central to IBD pathology. The resulting inflammation can reduce gut barrier integrity and aggravate IBD symptoms . TLR4 is one of the primary pro-inflammatory pathways activated by SFAs within the gut . Stimulation of TLR4 by SFAs triggers the release of TNF-α and IL-1β which can lead to the chronic inflammatory state characteristic of IBD . Studies have noted that elevated TLR4 signaling due to high SFA intake is associated with increased disease severity in Crohn’s disease and ulcerative colitis patients . In addition, SFAs induce oxidative stress in the intestinal lining, leading to damage of cellular structures and DNA. This oxidative stress occurs when SFAs increase the production of reactive oxygen species (ROS), causing damage to intestinal cells and deterioration of the gut barrier . This compromised barrier not only allows pathogens to enter but also maintains an inflammatory cycle that perpetuates tissue damage. For IBD patients, this oxidative stress is particularly detrimental, as it contributes to the progression of inflammation and increases disease severity . Moreover, SFA-rich environments are linked to impaired Treg function, which leads to uncontrolled inflammation and exacerbation of IBD . Therefore, diets high in SFAs contribute to the persistence and severity of IBD, which also underscores the importance of dietary management in these affected individuals . Like SFAs, the poly-unsaturated eicosanoids also drive inflammation but do so through differing pathways that involve cyclooxygenase (COX) and lipoxygenase (LOX) enzymes . The COX-2 enzyme produces high levels of PGE2, which acts on receptors in immune and epithelial cells to promote inflammation, increase vascular permeability, and recruit immune cells to the gut lining . This process sustains the chronic inflammatory environment that is characteristic of IBD, contributing to tissue damage and exacerbating disease symptoms. Leukotrienes, produced by the LOX pathway, also contribute to IBD pathology by attracting immune cells like neutrophils to inflamed areas in the intestines . 
Leukotriene B4 amplifies the inflammatory response by binding to its receptors on immune cells, which promotes further cytokine release and intensifies gut inflammation. Moreover, PGE2 and leukotrienes also activate NF-κB amplifying inflammatory gene expression and leading to sustained inflammation . This leukotriene-driven recruitment of immune cells is especially problematic in Crohn’s disease, where it contributes to deep tissue damage and may worsen disease severity . Eicosanoids also play a role in altering the epithelial barrier integrity in IBD. Eicosanoid induction of PGE2 and thromboxanes can disrupt tight junctions between epithelial cells, making the intestinal barrier more permeable and contributing to the “leaky gut” phenomenon observed in many IBD patients . This compromised barrier function exacerbates inflammation and increases the risk of infection, highlighting the role of eicosanoids in disease progression . Sphingolipids are also associated with IBD; however, their role in disease is more complex. Some sphingolipids reduce inflammation and colonic damage, while others mediated immune cell trafficking and inflammation. To further complicate the role of sphingolipids in disease, those produced by the gut microbiota appear to modulate inflammation, suggesting a potential therapeutic avenue for these lipids. The involvement of sphingolipids in IBD was first considered due to the association between TNFα ( ; a common target in IBD therapies) signaling and sphingolipids . It was also found that sphingolipids are one of the most differentially abundant metabolites in IBD patients compared to healthy patients . Based on these observations, further studies using dextran sulfate sodium-induced colitis in mice were performed and showed that a sphingomyelin analog significantly reduced colonic damage and cytokine production associated with colitis . This was thought to result from the sphingomyelin leading to a reduction in ceramide production, which was shown to increase colonic cell viability . In an IL10 −/− mouse model of IBD, another S1PR agonist, KRP-203 was shown to reduce TH1 response and colitis, which was associated with T cell trafficking and linked to previously identified effects of S1P on T cells . However, it should be noted that KRP-203 was tested for efficacy as an IBD treatment and did not pass clinical trials . Taken together, sphingolipids may have paradoxical effects in the gut, both decreasing inflammation and increasing it in models of colitis. This becomes more interesting when exploring the effects of sphingolipids derived from the gut microbiome. 4.2. Colon Cancer Colon cancer development is influenced by a complex interplay of dietary, microbial, and host-derived lipids, which can either promote tumorigenesis or provide protective effects through immune and metabolic modulation. As with IBD, protective lipids include SCFAs and omega-3 fatty acids. These lipids share some overlapping mechanisms with IBD that are also protective against cancer. One example is the ability of butyrate to serve as an energy source for colonocytes, strengthening the cells lining the colon, and thus providing a protective role against colon cancer . However, these lipids also have cancer-specific effects. For example, the ability of butyrate to inhibit HDACs leading to changes in gene expression that suppress cancer cell proliferation and promote apoptosis in cancerous cells, highlighting its potential as a therapeutic agent . 
Beyond epigenetic regulation, SCFAs impact cellular metabolism through the mTOR pathway, further contributing to tumor suppression by enhancing autophagy. SCFAs can suppress mTOR activity through a long non-coding RNA (RMST) activated pathway, which ultimately enhances autophagy . Autophagy induced by SCFAs prevents the accumulation of cellular damage and supports normal cell function, while limiting the growth of cancerous cells . SCFAs can also contribute to immune regulation in the colon, influencing inflammatory responses that are often implicated in colorectal cancer. By activating receptors such as GPR43 (FFAR2) and GPR109A, SCFAs enhance the release of anti-inflammatory cytokines, including IL-10, to aid in maintaining homeostasis . This reduction in inflammation can limit the conditions that promote tumor formation. Additionally, SCFAs play a role in maintaining gut barrier integrity through mechanisms that reduce oxidative stress and inflammation, which are both risk factors for cancer development . Epigenetically, SCFAs influence DNA methylation and histone modification processes, which are essential for regulating gene expression in colon cells. The inhibition of HDACs by butyrate leads to increased acetylation of histones, thereby promoting the expression of tumor suppressor genes . This epigenetic modulation by SCFAs results in decreased tumor growth and improved cancer cell apoptosis rates. Studies have demonstrated that diets high in fiber, which increase SCFA production, are associated with lower rates of colon cancer, emphasizing the potential of dietary interventions as preventative measures . Thus, like IBD, it is the presence of SFCAs and the anti-inflammatory impact of these lipids that can potentially prevent the onset and progression of disease. Interestingly, omega-3 fatty acids can impact colon cancer through altering cancer cell metabolism . Cancer cells rely heavily on lipid synthesis for growth and survival, and omega-3s can interfere with this process by modulating enzymes involved in lipid metabolism. For example, they reduce the expression of fatty acid synthase, which is an enzyme often overexpressed in colorectal cancer that supports cancer cell proliferation. By downregulating this enzyme, omega-3s can inhibit cancer growth and make cells more susceptible to apoptosis, thereby slowing cancer progression . Omega-3 fatty acids also enhance the effectiveness of conventional cancer therapies like chemotherapy and radiation by increasing oxidative stress within cancer cells. This selective increase in oxidative stress renders cancer cells more vulnerable to treatment, enhancing therapeutic outcomes. Moreover, the anti-inflammatory and immunomodulatory effects of omega-3s help mitigate some of the inflammatory side effects associated with these treatments, providing additional support for patients undergoing cancer therapy . In contrast to the protective effects of SCFAs and omega-3 fatty acids, saturated fats and pro-inflammatory eicosanoids exacerbate cancer progression by promoting chronic inflammation and oxidative stress . In addition to inducing inflammation, saturated fatty acids (SFAs) also interact with lipid metabolic pathways that are often dysregulated in cancer. High levels of SFAs in the diet can elevate expression of enzymes like SCD1, which converts SFAs into monounsaturated fatty acids (MUFAs). SCD1 overexpression has been correlated with poor cancer prognosis, as it enhances cancer cell survival and resistance to apoptosis . 
By promoting the synthesis of MUFAs, SFAs indirectly support lipid signaling and membrane synthesis crucial for rapid cancer cell proliferation. The upregulation of SCD1 and its associated pathways highlights how dietary SFAs may stimulate lipid-dependent mechanisms, exacerbating cancer progression . Additionally, SFAs influence oxidative stress within the colon, which can damage DNA and lead to mutations, further contributing to cancer risk. Reactive oxygen species (ROS), which are produced as a byproduct of high SFA levels, cause oxidative damage in cellular structures and DNA. This oxidative stress, combined with the inflammatory environment created by SFA-induced TLR activation, fosters a cycle of DNA damage and cell proliferation, increasing the likelihood of malignant transformations in colon cells . Together, these findings highlight the dual role of lipids in colon cancer, with dietary interventions targeting SCFAs and omega-3 fatty acids offering promising therapeutic potential to mitigate tumor progression and improve patient outcomes.
For instance, butyrate enhances the production of Treg cells in the colon by directly increasing the expression of FoxP3 through histone acetylation at conserved sequences, thus contributing to a reduced inflammatory state . These molecular changes bolster the production of IL-10, further supporting Treg cell function and mitigating inflammation in IBD . Interestingly, HDAC inhibition by SCFAs may also exert control over epithelial barrier integrity. In murine IECs in vitro, propionate treatment showed that HDAC inhibition was directly related to improved IEC migration, which is associated with improved barrier integrity . Additionally, SCFAs indirectly promote Treg activity by modulating metabolic pathways within T cells. Butyrate increases glycolysis and mitochondrial activity, providing the necessary energy and substrates for Treg proliferation and function. By acting as a substrate for acetyl-CoA in T cells, SCFAs also enhance their metabolic flexibility, enabling them to respond efficiently to inflammatory signals. This metabolic support underlies SCFAs’ role in maintaining an anti-inflammatory environment in the gut, highlighting their potential as therapeutic agents in IBD . Overall, these findings suggest that increasing SCFA levels, either through dietary interventions or targeted therapies, may be beneficial for IBD patients. Like SCFAs, some omega-3 fatty acids (e.g., EPA and DHA) also exhibit anti-inflammatory effects and have shown potential in managing IBD . Interestingly, these fatty acids modulate inflammation in the gut by altering cell membrane composition. The incorporation of EPA and DHA into cell membranes reduces the presence of arachidonic acid, which is a precursor to pro-inflammatory eicosanoids . By shifting this balance, omega-3s decrease the production of inflammatory molecules and increase the synthesis of anti-inflammatory mediators, which help resolve inflammation . Mechanistically, omega-3 fatty acids influence gene expression through nuclear receptors such as peroxisome proliferator-activated receptor gamma (PPARγ) . When activated by omega-3s, PPARγ reduces the expression of pro-inflammatory cytokines, including TNF-α and IL-6, both of which are elevated in IBD. This modulation of gene expression helps in reducing inflammation in the gut mucosa and supports tissue healing, making omega-3 fatty acids beneficial as a dietary supplement in managing IBD flare-ups . 4.1.2. Bioactive Lipids Associated with Inducing Intestinal Inflammation In contrast to SCFAs and omega-3 fatty acids, whose presence leads to improved barrier integrity and reduction of IBD symptoms, saturated fatty acids promote intestinal inflammation and disruption of gut homeostasis. Thus, these lipids can contribute significantly to the development and exacerbation of IBD . High saturated fatty acid (SFA) intake can lead to an altered gut microbiome, favoring the growth of pro-inflammatory bacteria while reducing beneficial species . This microbial dysbiosis enhances the intestinal inflammatory response, which is central to IBD pathology. The resulting inflammation can reduce gut barrier integrity and aggravate IBD symptoms . TLR4 signaling is one of the primary pro-inflammatory pathways activated by SFAs within the gut . Stimulation of TLR4 by SFAs triggers the release of TNF-α and IL-1β, which can lead to the chronic inflammatory state characteristic of IBD .
Studies have noted that elevated TLR4 signaling due to high SFA intake is associated with increased disease severity in Crohn’s disease and ulcerative colitis patients . In addition, SFAs induce oxidative stress in the intestinal lining, leading to damage of cellular structures and DNA. This oxidative stress occurs when SFAs increase the production of reactive oxygen species (ROS), causing damage to intestinal cells and deterioration of the gut barrier . This compromised barrier not only allows pathogens to enter but also maintains an inflammatory cycle that perpetuates tissue damage. For IBD patients, this oxidative stress is particularly detrimental, as it contributes to the progression of inflammation and increases disease severity . Moreover, SFA-rich environments are linked to impaired Treg function, which leads to uncontrolled inflammation and exacerbation of IBD . Therefore, diets high in SFAs contribute to the persistence and severity of IBD, which also underscores the importance of dietary management in these affected individuals . Like SFAs, the polyunsaturated fatty acid-derived eicosanoids also drive inflammation, but do so through differing pathways that involve cyclooxygenase (COX) and lipoxygenase (LOX) enzymes . The COX-2 enzyme produces high levels of PGE2, which acts on receptors in immune and epithelial cells to promote inflammation, increase vascular permeability, and recruit immune cells to the gut lining . This process sustains the chronic inflammatory environment that is characteristic of IBD, contributing to tissue damage and exacerbating disease symptoms. Leukotrienes, produced by the LOX pathway, also contribute to IBD pathology by attracting immune cells like neutrophils to inflamed areas in the intestines . Leukotriene B4 amplifies the inflammatory response by binding to its receptors on immune cells, which promotes further cytokine release and intensifies gut inflammation. Moreover, PGE2 and leukotrienes also activate NF-κB, amplifying inflammatory gene expression and leading to sustained inflammation . This leukotriene-driven recruitment of immune cells is especially problematic in Crohn’s disease, where it contributes to deep tissue damage and may worsen disease severity . Eicosanoids also play a role in altering epithelial barrier integrity in IBD. Induction of PGE2 and thromboxanes can disrupt tight junctions between epithelial cells, making the intestinal barrier more permeable and contributing to the “leaky gut” phenomenon observed in many IBD patients . This compromised barrier function exacerbates inflammation and increases the risk of infection, highlighting the role of eicosanoids in disease progression . Sphingolipids are also associated with IBD; however, their role in disease is more complex. Some sphingolipids reduce inflammation and colonic damage, while others mediate immune cell trafficking and inflammation. To further complicate the role of sphingolipids in disease, those produced by the gut microbiota appear to modulate inflammation, suggesting a potential therapeutic avenue for these lipids. The involvement of sphingolipids in IBD was first considered due to the association between TNFα (a common target in IBD therapies) signaling and sphingolipids . It was also found that sphingolipids are one of the most differentially abundant metabolites in IBD patients compared to healthy individuals .
Based on these observations, further studies using dextran sulfate sodium-induced colitis in mice were performed and showed that a sphingomyelin analog significantly reduced colonic damage and cytokine production associated with colitis . This was thought to result from the sphingomyelin leading to a reduction in ceramide production, which was shown to increase colonic cell viability . In an IL10 −/− mouse model of IBD, another S1PR agonist, KRP-203 was shown to reduce TH1 response and colitis, which was associated with T cell trafficking and linked to previously identified effects of S1P on T cells . However, it should be noted that KRP-203 was tested for efficacy as an IBD treatment and did not pass clinical trials . Taken together, sphingolipids may have paradoxical effects in the gut, both decreasing inflammation and increasing it in models of colitis. This becomes more interesting when exploring the effects of sphingolipids derived from the gut microbiome. Colon cancer development is influenced by a complex interplay of dietary, microbial, and host-derived lipids, which can either promote tumorigenesis or provide protective effects through immune and metabolic modulation.
As with IBD, protective lipids include SCFAs and omega-3 fatty acids. These lipids share some overlapping mechanisms with IBD that are also protective against cancer. One example is the ability of butyrate to serve as an energy source for colonocytes, strengthening the cells lining the colon, and thus providing a protective role against colon cancer . However, these lipids also have cancer-specific effects. For example, the ability of butyrate to inhibit HDACs leads to changes in gene expression that suppress cancer cell proliferation and promote apoptosis in cancerous cells, highlighting its potential as a therapeutic agent . Beyond epigenetic regulation, SCFAs impact cellular metabolism through the mTOR pathway, further contributing to tumor suppression by enhancing autophagy. SCFAs can suppress mTOR activity through a long non-coding RNA (RMST)-activated pathway, which ultimately enhances autophagy . Autophagy induced by SCFAs prevents the accumulation of cellular damage and supports normal cell function, while limiting the growth of cancerous cells . SCFAs can also contribute to immune regulation in the colon, influencing inflammatory responses that are often implicated in colorectal cancer. By activating receptors such as GPR43 (FFAR2) and GPR109A, SCFAs enhance the release of anti-inflammatory cytokines, including IL-10, to aid in maintaining homeostasis . This reduction in inflammation can limit the conditions that promote tumor formation. Additionally, SCFAs play a role in maintaining gut barrier integrity through mechanisms that reduce oxidative stress and inflammation, which are both risk factors for cancer development . Epigenetically, SCFAs influence DNA methylation and histone modification processes, which are essential for regulating gene expression in colon cells. The inhibition of HDACs by butyrate leads to increased acetylation of histones, thereby promoting the expression of tumor suppressor genes . This epigenetic modulation by SCFAs results in decreased tumor growth and improved cancer cell apoptosis rates. Studies have demonstrated that diets high in fiber, which increase SCFA production, are associated with lower rates of colon cancer, emphasizing the potential of dietary interventions as preventative measures . Thus, as in IBD, it is the presence of SCFAs and the anti-inflammatory impact of these lipids that can potentially prevent the onset and progression of disease. Interestingly, omega-3 fatty acids can impact colon cancer by altering cancer cell metabolism . Cancer cells rely heavily on lipid synthesis for growth and survival, and omega-3s can interfere with this process by modulating enzymes involved in lipid metabolism. For example, they reduce the expression of fatty acid synthase, which is an enzyme often overexpressed in colorectal cancer that supports cancer cell proliferation. By downregulating this enzyme, omega-3s can inhibit cancer growth and make cells more susceptible to apoptosis, thereby slowing cancer progression . Omega-3 fatty acids also enhance the effectiveness of conventional cancer therapies like chemotherapy and radiation by increasing oxidative stress within cancer cells. This selective increase in oxidative stress renders cancer cells more vulnerable to treatment, enhancing therapeutic outcomes. Moreover, the anti-inflammatory and immunomodulatory effects of omega-3s help mitigate some of the inflammatory side effects associated with these treatments, providing additional support for patients undergoing cancer therapy .
In contrast to the protective effects of SCFAs and omega-3 fatty acids, saturated fats and pro-inflammatory eicosanoids exacerbate cancer progression by promoting chronic inflammation and oxidative stress . In addition to inducing inflammation, saturated fatty acids (SFAs) also interact with lipid metabolic pathways that are often dysregulated in cancer. High levels of SFAs in the diet can elevate expression of enzymes like SCD1, which converts SFAs into monounsaturated fatty acids (MUFAs). SCD1 overexpression has been correlated with poor cancer prognosis, as it enhances cancer cell survival and resistance to apoptosis . By promoting the synthesis of MUFAs, SFAs indirectly support lipid signaling and membrane synthesis crucial for rapid cancer cell proliferation. The upregulation of SCD1 and its associated pathways highlights how dietary SFAs may stimulate lipid-dependent mechanisms, exacerbating cancer progression . Additionally, SFAs influence oxidative stress within the colon, which can damage DNA and lead to mutations, further contributing to cancer risk. Reactive oxygen species (ROS), which are produced as a byproduct of high SFA levels, cause oxidative damage to cellular structures and DNA. This oxidative stress, combined with the inflammatory environment created by SFA-induced TLR activation, fosters a cycle of DNA damage and cell proliferation, increasing the likelihood of malignant transformations in colon cells . Together, these findings highlight the dual role of lipids in colon cancer, with dietary interventions targeting SCFAs and omega-3 fatty acids offering promising therapeutic potential to mitigate tumor progression and improve patient outcomes. A healthy gut microbiome and lipidome inhibit inflammation, whereas an aberrant one can lead to inflammation and disease. Advancements in mass spectrometry have revolutionized the field of lipid research, enabling more precise identification and characterization of diverse lipid species. This has ultimately expanded our understanding of bioactive lipids and their diverse roles in biological processes and disease. Techniques such as LC-MS, ESI-MS, and their variations have addressed the challenges posed by the hydrophobic nature and structural diversity of lipids, allowing for both targeted and untargeted analyses. These innovations have significantly expanded our understanding of lipids, their biogenesis, and their roles in health and disease, marking a new era in lipid research. Among the most studied bioactive lipids, SCFAs have been shown to play a pivotal role in gut health, acting as key modulators of inflammation and contributors to microbiota balance . Their ability to suppress inflammatory signals, regulate leukocyte recruitment, and promote the growth of beneficial bacteria highlights their broad impact on both immune function and disease prevention. These findings emphasize the therapeutic potential of dietary fiber and probiotics in boosting SCFA production to support gut health and manage conditions such as IBD and metabolic disease. Moreover, recent research has shown that SCFAs have benefits beyond gut health. For example, murine models have shown these fatty acids can reduce inflammation in non-alcoholic steatohepatitis . In addition, their ability to improve gut barrier integrity is also linked to improved outcomes of pancreatitis in mouse models . Future research exploring the systemic impact of SCFAs will likely reveal additional benefits in host health and remediation of disease.
Eicosanoids are another critical class of regulators of immune responses, with their effects largely determined by the context in which they are produced. Prostaglandin E2 and SPMs exemplify the anti-inflammatory potential of these lipids, highlighting their role in maintaining gut homeostasis and reducing localized inflammation . These findings underscore the importance of eicosanoids in modulating inflammatory conditions such as IBD, colon cancer, and diverticulitis. Dietary lipids are also heavily involved in shaping immune responses in the gut, with saturated and unsaturated fatty acids exerting contrasting effects. Saturated fats, such as palmitic acid, are associated with pro-inflammatory states, contributing to chronic inflammation through mechanisms involving toll-like receptor activation and inflammatory cytokine production . Conversely, unsaturated fatty acids, particularly omega-3s, demonstrate anti-inflammatory properties by modulating cytokine expression and promoting immune-regulatory phenotypes like M2 macrophage polarization. These data also highlight the therapeutic potential of unsaturated fatty acids and SCFAs in managing inflammation and promoting overall health. Future research investigating the direct interaction of omega-3s with nuclear receptors, transcription factors, and signaling cascades specific to intestinal immunity would further clarify the role of omega-3 fatty acids in gut health. The less heavily researched glycerophospholipids serve as both indispensable structural components of cellular membranes and versatile signaling molecules, emphasizing their critical role in cellular functionality. Although their ubiquity poses challenges for targeted research, their involvement in signaling and metabolism underscores their potential health implications. These dual roles highlight the need for innovative approaches and future research to further understand their complex biological functions. Likewise, bile acids, which exert both pro- and anti-inflammatory effects depending on their source, demonstrate the ability to interact with and influence the microbiome. These findings highlight the complex role of sterol lipids in immune regulation and the need for further investigation into their functions in gut health. Likewise, sphingolipids play a pivotal role in cellular and immune processes, serving as key mediators in immune cell trafficking and inflammatory regulation . Their involvement in T cell egress, macrophage activation, and chemokine production demonstrates their importance in maintaining immune homeostasis and responding to infections. The ability of sphingolipids like S1P to influence inflammatory pathways highlights their dual role in promoting immune responses and contributing to inflammatory diseases. These findings emphasize the need for further exploration into sphingolipid-mediated mechanisms as potential therapeutic targets in immune-related conditions. In conclusion, bioactive lipids are instrumental in shaping immune responses. Short-chain fatty acids like butyrate exhibit anti-inflammatory effects, whereas long-chain saturated fats, such as palmitic acid, often induce inflammation. The interplay between dietary lipids, immune modulation, and gut health underscores their significance in chronic conditions such as IBD, metabolic disorders, and obesity.
Meanwhile, unsaturated fats, particularly omega-3 fatty acids, offer protective effects by reducing inflammation and promoting immune balance, suggesting that dietary interventions such as modulation of lipid consumption could be employed as a therapeutic strategy for improving gut health and preventing or resolving chronic disease. |
The significance of the One Health and Planetary Health concepts for environmental medicine in the 21st century | c94f5240-9997-4ed2-8f25-b2114116dd87 | 10235829 | Preventive Medicine[mh] | In the 21st century, human-made risks and threats are increasing. Examples include climate change, biodiversity loss, newly emerging infectious diseases that can lead to pandemics, growing inequalities and wars. The resulting health hazards can take on catastrophic dimensions, particularly when so-called tipping points are exceeded. Climate change, for example, can bring about severe changes, including some that are barely foreseeable today because the underlying ecological processes are insufficiently understood . In connection with anthropogenic environmental change and the destabilization of the Earth system, new multifaceted and systemic health approaches such as Eco Health, Geo Health, Conservation Medicine, One Health and Planetary Health have emerged. The last two in particular have achieved wide dissemination and popularity. Using a systemic approach, Planetary Health and One Health seek to capture the interrelations between environment and health and beyond, and to find possible solutions to these far-reaching problems. Questions of environmental medicine have been of great human interest since the origins of the healing arts and have been shaped over the centuries by the respective social and political developments. Today, physicians in Germany can qualify in the field of environmental medicine by obtaining the specialist title in "Hygiene and Environmental Medicine" or by participating in a curricular continuing-education program aimed primarily at clinically active physicians from other specialties or in the public health service . The field of environmental medicine deals on the one hand with the clinical treatment of individuals with environment-associated diseases (individual level); on the other hand, it also considers the population as a whole in the sense of public health or global health. The growing popularity of the One Health and Planetary Health approaches is influencing environmental-medical thinking in everyday clinical practice and in research. This gives rise to new perspectives and conflicting goals that require societal discussion. This discussion article first provides an overview of the development of the One Health and Planetary Health concepts. It then outlines the tasks of environmental medicine over time and shows the significance of One Health and Planetary Health for the further development of this field. "One Medicine", which can be regarded as a precursor of the One Health concept, is attributed mainly to Calvin Schwabe and his 1964 textbook Veterinary medicine and human health, which deals with human and animal health and their interrelations . The term was expanded to include ecosystems/the environment in 2004 within the framework of the 12 "Manhattan Principles" of the Wildlife Conservation Society. The One Health concept built on these principles focused on the prevention of newly emerging and contagious diseases, especially zoonoses . Human health is regarded as part of the animal kingdom, which in turn is part of a shared environment .
Over recent years, particularly in the context of antibiotic resistance and drug use in intensive livestock farming, the concept has also become known outside expert circles. Since the COVID-19 pandemic, the importance of One Health as a key approach in health protection, and pandemic preparedness in particular, has continued to grow . In recent years, the One Health approach has been broadened, as an excessive focus on veterinary medicine and the neglect of the environment were increasingly criticized. Whereas the focus of One Health over the past two decades lay on antimicrobial resistance, food safety and zoonoses, the "Berlin Principles" by Gruetzmacher et al. in 2019, as an update of the Manhattan Principles, also took up topics such as climate change and biodiversity, the communication of Planetary Health approaches, and an awareness of the existence of a global community . The importance and protection of biodiversity and ecosystems for human health and well-being play a central role in the 10 core statements of the Berlin Principles. Intact ecosystems are regarded as the basis for human health and well-being. This is relevant both for prevention and for dealing with infectious and non-communicable diseases. The authors of the Berlin Principles also advocate strengthening national and global cooperation as well as the public sector so that scientific findings can be translated into political decisions and into practice . In this way, the One Health approach is embedded in economic and sociopolitical contexts that expand the fundamental interrelations of the concept and incorporate health challenges of the 21st century such as climate change, biodiversity loss, and infectious and non-communicable diseases. Whereas One Health was criticized a few years ago for concentrating too strongly on veterinary and human-medical topics, the Berlin Principles are an example of the stronger integration of the environmental sector and the convergence of the Planetary Health and One Health approaches . In view of the complex problems already mentioned at the interfaces between humans, animals and the environment, the "One Health High-Level Expert Panel" (OHHLEP) was founded in 2021, a multidisciplinary panel of 26 international expert members. Its task is to support and advise partners from science and policy on One Health topics and to strengthen cross-sectoral collaboration, including between the organizations of the "Quadripartite Alliance", in which the World Health Organization (WHO), the World Organisation for Animal Health (WOAH), the Food and Agriculture Organization of the United Nations (FAO) and the United Nations Environment Programme (UNEP) work together within the framework of One Health. In 2022, the Quadripartite published the joint "One Health Joint Plan of Action", which is intended to promote and structure collaboration on health threats in the Anthropocene. To create a shared understanding of One Health, the OHHLEP developed a definition of the concept that met with broad international approval. The accompanying figure (Fig. ) shows how the implementation of the theoretical content of One Health can succeed in practice.
To this end, the four "Cs" of communication, coordination, capacity building and collaboration (in the German original the four "Ks": Kommunikation, Koordination, Kapazitätsaufbau und Kollaboration) are essential in order to enable health and well-being for humans, animals and ecosystems . Similar to One Health, the Planetary Health approach regards human health as an integral component of the Earth's intact ecosystems. The concept was first presented in 2014 by Richard Horton and colleagues in an article in the Lancet and was defined in 2015 in the report by Whitmee et al. . The report describes the anthropogenic impacts on the planet's natural systems and contrasts the civilizational achievements of recent decades (e.g., increased life expectancy and reduced poverty) with negative global developments (e.g., rising energy and water consumption, loss of biodiversity and of tropical rainforest, progressive global warming). Most of these trends begin with industrialization towards the end of the 19th century and intensify especially from the 1950s onward . The sharp increase in human activities on Earth from the second half of the 20th century onward is referred to as the "Great Acceleration" . According to Whitmee et al., an integrative approach to the Earth's political, economic and social systems is required, as these substantially influence the state of natural systems and thus also enable human health, well-being and justice. At present, however, human disregard for the natural regenerative capacities of ecosystems endangers the quality of life of current and future generations and threatens to undo the health gains already achieved in recent decades . Planetary Health builds on the model of planetary boundaries, introduced in 2009 by Rockström et al. and revised in 2015 by Steffen et al., which defines 9 systems that ensure the stability of the entire Earth system. These include, for example, climate change, biosphere integrity, biogeochemical flows, ocean acidification and land-use change. For 7 of the 9 sectors, so-called tipping points have been quantified; exceeding them results in non-linear, relatively abrupt and irreversible changes to the Earth system, damages the integrity of ecosystems and biocoenoses, and thereby challenges the socio-ecological resilience of societies. These changes can take on catastrophic dimensions for societies as well as individuals. Within the defined boundaries lies the "safe operating space" for human development, within which the Earth system can be kept in a state that continues to allow human life under the best conditions, a state that has lasted for about 10,000 years and is referred to as the geological epoch of the "Holocene". The areas of climate change and the biosphere (understood here as the functional and genetic diversity of biodiversity) are considered the core sectors of the planetary boundaries model. If the safe operating space is exceeded in these sectors, numerous further tipping points can be triggered, placing the Earth system in a state in which the well-being and potentially also the continued existence of humanity are endangered . According to Steffen et al., humanity had already exceeded the safe range by 2015 in the areas of climate change, biosphere integrity with its various biogeochemical cycles (the exchange of elements and substances between different components of the biosphere) and land-use change . As the unsustainable use of resources continues to rise, this development has continued since then . The planetary boundaries model is being continuously refined. In 2022, for instance, Persson et al. proposed threshold values for the previously unquantified area of "novel entities" (which include, among other things, materials, chemicals and organisms that do not occur naturally), whose application likewise points to planetary capacities being exceeded in this area . In the same year, Wang-Erlandsson et al. argued for additionally integrating "green water" (precipitation, soil water content) into the "global freshwater use" sector, which until then had considered only "blue water" (surface water and groundwater). Measurements of soil moisture, proposed as a marker for the extent to which this sector has been exceeded, indicate that humanity has already left the safe operating space for green water as well . Such changes in vital Earth systems have grave consequences not only for the further development of humankind but also, in particular, for human health and well-being. Heat-related illness and deaths associated with increasingly frequent heat extremes resulting from climate change are just one example of health impacts that are already noticeable in Germany as well. The "doughnut model" (Fig. ), developed in 2012 by the economist Kate Raworth, takes up the planetary boundaries approach and expands it to include social determinants that influence health, for example equal access for all people to education, water, food and housing, as well as equal social opportunities . Among other things, this model forms the basis of the new economic model of the "economy of well-being" . Taking planetary boundaries and human well-being into account in such models can, if put into practice, have positive effects on our health systems. In this context, possible "triple wins" are also described , relating to the areas of health equity, social justice and sustainability. The city of Amsterdam is one of the first examples in which an economic model based on the "doughnut model" is currently being transformed into a circular economy . The fact that climate protection also means health protection is illustrated in particular by so-called co-benefits: changes implemented for the sake of climate protection usually entail an additional benefit (co-benefit) for health, and vice versa. For example, it has been shown that a predominantly plant-based diet not only protects natural resources and is climate-friendly but also reduces the risk of cardiovascular disease and cancer, among other things. The same applies to environmentally friendly modes of transport and other lifestyle factors .
Compared with One Health, the focus of Planetary Health lies more strongly on the topics of climate change, biodiversity, environmental pollution, environmental justice and societal transformation, whereas One Health, owing to its origins, focuses primarily on the interface between veterinary and human medicine. Particularly characteristic of Planetary Health are, among other things, the aspects of transdisciplinarity and the urgency of transformative measures . Representatives of the One Health approach have therefore recently sought a stronger integration of the environmental sector into the concept, so that a clear separation of One Health and Planetary Health is by now not always possible or sensible . Both approaches stand for an integrative view of health topics against the background of other sciences, such as the social sciences, natural sciences or humanities, which enables a more comprehensive understanding of health promotion, general prevention, and the causes, diagnosis and therapy of disease. Much of the content of Planetary Health overlaps with the millennia-old traditions of indigenous population groups. It is therefore important to mention that Planetary Health is a Western, primarily scientifically grounded concept. By contrast, indigenous peoples regard their natural environment more as part of their cultural identity, which is to be protected, cared for and respected. Although indigenous traditions can only be considered within their own context, since they relate to place, people, and the ethics and beliefs prevailing there, they can call into question today's anthropocentric views and ideas and contribute to a more sustainable way of dealing with the Earth's systems . In 2014, the Whanganui River in New Zealand, which is of great significance for the local Maori, was granted the status of a legal person. Examples like this show how human conceptions of nature can shift from pure possession to the recognition of its intrinsic value . Although environmental medicine as understood today is a relatively young field, environmental and health topics have been of human interest since antiquity . In terms of content, environmental medicine sees itself as a science that investigates the development of disease and the preservation of human health in relation to aspects of the human environment. The term "environment" carries a certain vagueness and can be defined in many ways, for example as the natural, social, cultural or economic environment. To give the field a manageable framework, environmental medicine has to date primarily dealt with the biological, physical and chemical factors arising from anthropogenic influences and their effects on humans . In Germany, this scientifically oriented subfield of medicine comprises, on the one hand, a primary-prevention approach in the sense of identifying, assessing and avoiding harmful environmental influences with regard to the population as a whole, the individual or particular exposed groups.
On the other hand, the clinical branch of environmental medicine concerns the treatment and care of patients who already show altered clinical parameters (secondary prevention) or manifest diseases that are environmentally caused or influenced by environmental factors (tertiary prevention) . Proving causality between an environmental exposure and damage to health often proves difficult, since the clinical pictures frequently involve a wide range of symptoms and are influenced by further factors, for example the psychosocial context, genetic predisposition or individual susceptibility . Over the course of the 20th century, questions of environmental medicine took on an increasingly global perspective that gave greater consideration to anthropogenic influences on the environment. Publications such as Silent Spring by Rachel Carson (1962), Limits to Growth by the Club of Rome (1972) and the Ottawa Charter (1986), as well as environmental disasters such as the Seveso accident in 1976 and Chernobyl in 1986, created a collective environmental awareness that had hardly existed before. In particular, they highlighted the complex entanglements between humans and the environment, which represent "the basis for a socio-ecological path to health". These developments substantially influenced the environmental movement of the 1970s and 1980s and, in Germany, led for example to the founding of the party "Die Grünen" and of the Environment Ministry. They also fuelled the discussions on the topic of "environment and health in industrialized nations" . In Germany, recognition of the close connection between ecology and medicine was already reflected in the medical licensing regulations (Approbationsordnung) of 1972, in which the areas of social medicine, occupational medicine, forensic medicine, hygiene, information processing and medical statistics were grouped under the term "ecological subjects" . In international comparison, a cross-country view of the field of environmental medicine proves difficult because, although the influences on the development of environmental-medical currents are similar, the institutionalization and handling of environmental-medical content sometimes differ considerably. In the English-speaking world, for instance, both the terms "environmental health" and "environmental medicine" exist; the former is understood more as an overarching area with a focus on public health, while the latter refers to the clinical area of environmental medicine . The distinction is blurred, however, and the two terms are sometimes used synonymously. The German university and research landscape is now also influenced by international environmental-medical disciplines, as shown by the establishment of a research focus at the medical faculty of the University of Augsburg under the name of the related field "Environmental Health Sciences" from the USA. The view of environmental-medical questions is often a mirror of contemporary events and current societal tasks. The developments of the 20th century in particular have directed the focus of environmental medicine towards anthropogenic environmental influences and their health effects. Current examples such as climate change, biodiversity loss and pandemics, along with their consequences, show that irreversible changes to our environment have already occurred today that affect the health of the world's population.
The consequences of how humans deal with the Earth's natural systems are manifold and interlinked and, as a result of globalization and the sheer scale of human resource use and consumption, affect the whole world. The aspect of environmental justice plays a central role here: rich nations in the global North cause significantly higher CO2 emissions and bear the main responsibility for exceeding planetary boundaries, whose effects are felt most strongly by poorer countries in the global South . Health burdens from anthropogenic environmental influences have progressed steadily since the beginning of the industrial age and are now possible everywhere; the separation between untouched nature and human settlement is increasingly questionable . Environmental pollution can be detected in the most remote regions of the Earth, and the consequences of climate change are measurable and visible . While One Health strives for an integrative view of humans, animals and the environment, Planetary Health extends the concept of health to the entire planet. This means not only the close interconnection of systems worldwide but also the inclusion of future generations, as the original definition in the article by Whitmee et al. already indicates. Whereas until a few years ago clinical environmental medicine mainly considered the linear, causal and predominantly local chain of effect between an environmental influence and the individual, in the sense of Planetary Health and One Health a systemic approach to disease etiology, diagnosis and therapy must be chosen. Not only does a single environmental factor have multiple effects on human health at the individual and population level, but diagnosis and therapy in the broadest sense can also have global consequences. For example, hospitals and medical practices currently have a high consumption of single-use products, which is frequently justified on the grounds that hygiene requirements can be met more easily. Likewise, the drug diclofenac, which is frequently prescribed in various dosage forms (including for topical use), leads to contamination of water bodies with consequences for aquatic organisms . Although the use of these products is justified in many cases, an overall assessment must also take into account the effects on the health of all people now and in the future. Hygiene measures that protect the health of an individual, for instance, can impair the health of current and future generations if they are not sustainable and damage the environment. The inclusion of global environmental and health effects in matters previously considered mainly at the local level shows that, in the sense of Planetary Health and One Health, an environmental-medical assessment must always take place on two levels: first the level of the individual case is considered, and in a second step the regional, national and global effects triggered, for example, by resource consumption or damage to the environment. Against this background, the difficulty can arise that, in concrete cases, the health protection of different groups of people at different points in time (today and in the future) must be weighed against each other and prioritized, or a compromise may have to be found.
Taking the One Health and Planetary Health concepts into account, a discussion is needed in almost all medical fields (the compilation by Traidl-Hoffmann et al. provides a good overview of many of the disciplines concerned ). This will often mean weighing medical evidence (e.g., the demonstrable advantage of single-use products) against increased resource consumption or environmentally harmful influences, which in turn can be detrimental to health. A societal discussion is also required in this context. The latter calls for health issues to be considered in all policy fields, in the sense of a "Health in All Policies" approach. According to the concept of "social tipping points", environmental and health policy interventions can trigger effective transformation processes within societies, for example through multiplier effects and the removal of misguided incentives. Since environmental medicine has the task of assessing environmentally related health effects and proposing solutions at the individual as well as the population level, it is likely that the two new concepts will bring about a paradigm shift in how assessment questions are approached in this field. In both individual and population-based environmental medicine (the area of so-called environmental public health), questions of sustainability will play a considerably greater role in counselling and assessment processes in the future.
Advances in pediatrics in 2023: choices in allergy, analgesia, cardiology, endocrinology, gastroenterology, genetics, global health, hematology, infectious diseases, neonatology, neurology, pulmonology | 5ccf48b1-1732-4225-9536-57d4e26794d5 | 11562862 | Internal Medicine[mh] | The most important papers from distinct specialties that were published in the Italian Journal of Pediatrics in the first half of 2023 have been included in this review. We have selected key information based on those articles that were most cited or accessed on our website. The aim is to provide an overview of the most influential published papers of the past year in the fields of allergy, analgesia, cardiology, endocrinology, gastroenterology, genetics, global health, infectious diseases, neonatology, neurology and pulmonology. The papers in our analysis covered a variety of novel insights into risk factors, mechanisms, diagnosis, treatment options and prevention. The advances most relevant to clinical practice have been commented on with a view to the future. Eosinophilic gastrointestinal disorders Eosinophilic gastrointestinal disorders (EGID) are a group of disorders characterized by pathological eosinophilic infiltration of the esophagus, stomach, small intestine or colon, leading to organ dysfunction and clinical symptoms. In recent years, there has been an increase in reports of these disorders, above all of eosinophilic esophagitis (EoE) . Votto et al. studied 60 patients with EGIDs. EGID diagnosis was made approximately 12 months after symptom onset, which was shorter than the delay observed in other studies . However, the diagnosis was more delayed in children with EoE who had failure to thrive and feeding problems than in children without growth and feeding problems. So, a prompt diagnosis is crucial to prevent failure to thrive. Furthermore, they observed an increased frequency of coexisting allergic diseases, especially food allergy. An elimination diet is beneficial in most children with EoE. These elements suggest that a mixed IgE-mediated/non-IgE-mediated mechanism may be involved in the pathogenesis. An oral food challenge to the food in question would be necessary to reach the diagnosis. Mastocytosis Pediatric mastocytosis is a rare and heterogeneous group of disorders characterized by an abnormal clonal expansion of mast cells that accumulate in the skin (Cutaneous Mastocytosis) and/or, less frequently, in other organs or tissues (Systemic Mastocytosis). The release of mast cell mediators, including histamine and other vasoactive substances, is responsible for the clinical manifestations. Cutaneous Mastocytosis is defined by typical skin lesions with a positive Darier’s sign. Diagnosis of systemic mastocytosis is based on organ enlargements, elevated serum tryptase levels, cytoreduction and characteristic histopathological findings in biopsies of affected tissue . Children with systemic mastocytosis are at risk of severe reactions due to mediator release mainly induced by allergens such as hymenoptera venom and foods , by non-IgE-mediated stimuli, or occurring spontaneously. Management is based on identifying triggers by IgE testing . It is aimed at preventing the release of mast cell mediators and controlling symptoms with second-generation anti-H1 antihistamines, systemic corticosteroids and organ-specific drugs. Bossi et al.
Bossi et al. highlighted that a child affected by systemic mastocytosis, with persistent rash, diarrhea, abdominal pain, palpitations, musculoskeletal symptoms and fatigue refractory to anti-H1 antihistamines and oral steroids, quickly became asymptomatic following administration of omalizumab, a monoclonal antibody against IgE. Symptoms recurred when omalizumab was suspended, and the child responded again when omalizumab was restarted. No side effects of omalizumab were recorded.

Adverse reactions to ibuprofen or paracetamol
Type 1 (or type "A", augmented) adverse reactions to drugs are dose-dependent, related to the pharmacologic mechanism and occur in normal subjects. Type 2 (or type "B", bizarre) reactions are not dose-dependent, are unrelated to the pharmacologic mechanism and occur in predisposed subjects; they include anaphylaxis and severe cutaneous reactions.
In children, the first-line treatment for mild-to-moderate pain and fever is either ibuprofen or paracetamol, which have similar safety and tolerability profiles. Marano et al. analyzed 351 patients who contacted the hospital's pediatric poison control center (PPCC) for exposure to ibuprofen or paracetamol from January 1, 2018 to September 30, 2022, to assess the incidence of any adverse reactions. Misuse or accidental ingestion was the most common reason for inappropriate oral use of paracetamol or ibuprofen, with a fifth of patients taking the drug for suicidal purposes. Most patients were not intoxicated, and hospitalization was necessary for 30.5% of children. Type 1 adverse reactions were recorded in 10.8% of patients taking paracetamol and in 10.1% of cases after ibuprofen. The most common adverse reactions to paracetamol were vomiting, hypertransaminasemia, coagulopathy and headache; those to ibuprofen were nausea, vomiting, abdominal pain, increased serum creatinine and dizziness.

Pain in the emergency department
Pain is one of the most frequent reasons for referral to the pediatric emergency department, especially in younger children and those with special needs, a category in which undertreatment of pain (so-called "oligoanalgesia") is very common. Oligoanalgesia is related to long-term negative behavioral and psychological consequences. Management of pain and anxiety is of fundamental importance, and good pain control can help the entire medical team in the evaluation and treatment of a child. Several studies have shown that the treatment of pain in children is very often inadequate and have highlighted the importance of adequate pain treatment for the patient's immediate and future well-being and neurological development. Bevacqua et al. report the current state of the art of pediatric sedation and analgesia in Italian emergency rooms and identify existing gaps that need to be addressed. The survey proposed a case vignette and questions addressing different domains, such as pain management, availability of medications, protocols and safety aspects, staff training, and availability of human resources regarding sedation and procedural analgesia. Eighteen Italian sites participated in the study, 66% of which were university hospitals and/or tertiary care centres. It was found that 27% of patients receive inadequate sedation. In many emergency departments some drugs, such as nitrous oxide, are unavailable, intranasal fentanyl and topical anesthetics are not used at triage, the use of safety protocols and pre-procedural checklists is rare, and there is a lack of staff training and of space. Moreover, the availability of child life specialists and of hypnosis as a non-pharmacological practice of sedation and analgesia is insufficient. The study highlights that, although much progress has been made in recent years in the treatment of pain in the pediatric emergency department setting, much work remains to be done because of the complexity of pediatric patients and, sometimes, the need for adequate instruments and medicines as well as for training of health personnel.

Pain in surgery, oncology and hematology
Pain control is universally recognized as a human right, and the correct assessment of pain is now one of the standards for the accreditation of health institutions. Proper pain management can reduce the incidence of complications, shorten hospital stays, achieve faster discharges and decrease the use of hospital resources.
On the contrary, inadequate pain management can lead to persistent or chronic pain, alterations in nociception and emotional and psychological complications; pain can have negative effects on the physical and mental condition of hospitalized patients, worsening quality of life and increasing costs. However, the assessment and especially the treatment of pain are still important health problems in hospitalized patients. Marchetti et al. compared a one-day survey conducted in 2016, which analyzed the prevalence of pain, pain intensity and pain therapy and showed suboptimal pain management in the surgery and oncohematology departments, with the same survey conducted in 2020. They found a higher prevalence of moderate/severe pain in the 2020 survey than in the 2016 survey, both during hospitalization and in the 24 h preceding the day of the survey, despite hospital training initiatives on pain therapy aimed at doctors and nurses. On the other hand, the daily prescription of pain therapy improved significantly, both for scheduled and for as-needed dosing, and fewer children were left without any prescribed pain therapy compared with the 2016 survey. However, the quality of analgesic therapy was low in 2020, also compared with 2016: the therapy administered resulted in statistically significant undertreatment of pain and was unable to alleviate moderate/severe pain. In short, many steps forward still need to be taken, not so much in pain assessment, which most of the time appears correct, but in the correct use of drugs, also in relation to the type of pain.
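The comparison between the 2016 and 2020 surveys is essentially a comparison of two independent proportions. As a purely illustrative aid, the sketch below shows how such a difference in prevalence is commonly tested with a chi-square test; the counts are hypothetical placeholders, not data from Marchetti et al.

# Illustrative sketch only: testing a difference in pain prevalence between two
# one-day surveys (e.g., 2016 vs. 2020). The counts below are hypothetical
# placeholders, not data from Marchetti et al.
from scipy.stats import chi2_contingency

# rows: survey year; columns: [children with moderate/severe pain, children without]
table = [
    [18, 42],  # hypothetical 2016 survey
    [30, 35],  # hypothetical 2020 survey
]

chi2, p_value, dof, expected = chi2_contingency(table)
prev_2016 = table[0][0] / sum(table[0])
prev_2020 = table[1][0] / sum(table[1])
print(f"moderate/severe pain prevalence: 2016 = {prev_2016:.1%}, 2020 = {prev_2020:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")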
Intravenous immunoglobulin in Kawasaki disease
Kawasaki disease (KD), although typically a self-limited condition lasting on average 12 days without therapy, is the main cause of acquired heart disease in Western countries, since patients may develop cardiovascular complications, mainly coronary artery aneurysms, with life-threatening consequences such as coronary occlusion and sudden cardiac death. Treatment with intravenous immune globulin (IVIG) has dramatically changed the outcome of KD because of its effectiveness in preventing coronary artery abnormalities and decreasing the frequency of coronary artery aneurysm development. Timely diagnosis and treatment are critical for the clinical outcome, but despite a general consensus on immunoglobulin as first-line treatment, the optimal timing, with or without adjunctive therapy, is still debated, and immune globulin resistance is a matter of concern because of its correlation with earlier therapy (within 4 days of disease) and with coronary artery aneurysm development, as shown in a large review and meta-analysis. Several studies have aimed to determine the optimal window for IVIG therapy and, although some controversies remain, starting within 7 days of illness seems to be best.

Acute myocarditis
Pediatric myocarditis is a challenging inflammatory disease because of the wide spectrum of clinical signs and symptoms, the multiple etiologies, and the complications and sequelae, which range from hemodynamic instability, ventricular dysfunction and dilated cardiomyopathy to life-threatening arrhythmias and sudden cardiac death. Despite improved understanding of its pathogenesis and several studies and attempts at meta-analysis, optimal treatment remains controversial and debated because of small sample sizes and the limited quality of the studies. In addition to standard supportive care for heart failure and arrhythmias, current therapeutic strategies look for etiologically oriented treatment. Anti-inflammatory and immune-response-modulating agents have been considered beneficial, in particular corticosteroids and IVIG for their broad and overlapping effects. Although no treatment has demonstrated a significant reduction in the risk of mortality, corticosteroids seem to produce significant effects on left ventricular ejection, as shown in a meta-analysis, even if treatment effects are difficult to ascertain since ventricular function improves fully in many patients.

Prevention of respiratory syncytial virus infection in infants with congenital heart disease
Respiratory syncytial virus (RSV) bronchiolitis is the leading cause of hospitalization in infants and children under 2 years of age. Patients with hemodynamically significant congenital heart disease have higher rates of hospitalization, intensive care and ventilator support. Passive immune prophylaxis with palivizumab has been shown to be effective against RSV infections, reducing RSV-related hospitalization rates, morbidity and mortality and avoiding delays in interventional and surgical procedures in this category of patients. Although its cost-effectiveness is still debated, it may impact healthcare resource availability and utilization.
Diabetic ketoacidosis
Rates of diabetic ketoacidosis (DKA) at diagnosis vary from 11 to 80% depending on the region, even in developed countries. The risk of DKA in children after diagnosis of type 1 diabetes mellitus (T1D) is 1–10/100 person-years. DKA is usually provoked by intentional or inadvertent insulin omission, sometimes associated with intercurrent illness and increased insulin requirement.
In Croatia, it has been shown that more than 50% of diabetic children were treated in the pediatric intensive care unit (PICU) because of DKA. Passanisi et al. showed that 51.5% of 103 children and adolescents with a new diagnosis of T1D had DKA, and 10 subjects with T1D onset needed to be treated in the PICU for severe clinical manifestations; four of these children were younger than 5 years of age. Acute kidney injury was the most common complication of DKA, followed by cerebral oedema, papilledema and acute esophageal necrosis. The authors emphasized that public awareness campaigns should be promoted to facilitate recognition of the early symptoms of diabetes and to reduce the morbidity and mortality associated with DKA.

Vitamin D
Galeazzi et al. report that in children aged between 5 and 10 years, living in a coastal area of Central Italy (Ancona) and screened for celiac disease, blood values of 25-hydroxyvitamin D (25(OH)D) were sufficient in 36% of subjects according to the classification proposed by a recent Italian Consensus, which considers sufficient values to be ≥ 30 ng/ml (≥ 75 nmol/L); 21% had values classifiable as deficient (10–20 ng/ml) and 6% as severely deficient (< 10 ng/ml). It should be remembered that, in general, values < 12 ng/ml are considered to carry a risk of rickets, as confirmed by an extensive review of children under 4 years of age with radiological signs of rickets, in which over 60% had values below this limit. The prevalence data found in the study are substantially in line with epidemiological studies carried out in various European countries regardless of latitude. This confirms that factors other than sun exposure, which differs across latitudes, can play a significant role; in particular, socioeconomic conditions, lifestyles and eating habits must be taken into account. Furthermore, the study reports a higher percentage of deficient values in subjects of non-Caucasian ethnicity and in obese subjects. The latter have been, and still are, the subject of various studies aimed at explaining this phenomenon. The causes are not yet fully clarified, and several factors are thought to contribute, such as less sun exposure in this group due to decreased outdoor activity, vitamin D sequestration in or uptake by adipose tissue, and impaired hepatic vitamin D synthesis in the fatty liver of severely obese subjects. The high prevalence of children with deficient vitamin D levels is stimulating a broad discussion on the modalities of a prophylaxis that is today recommended on an empirical basis, owing to the absence of strong scientific evidence, but which, ideally, should be personalized at the individual level or for at-risk subpopulations, taking into account specific individual needs and the type of pathology, especially extra-skeletal, that one wants to prevent.
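As a purely illustrative aid, the sketch below encodes the 25(OH)D cut-offs quoted above (sufficiency ≥ 30 ng/ml, deficiency 10–20 ng/ml, severe deficiency < 10 ng/ml, with 30 ng/ml corresponding to 75 nmol/L). The intermediate 20–30 ng/ml "insufficient" band is an assumption added to make the ranges contiguous; it is implied by, but not spelled out in, the classification reported here.

# Minimal sketch of the 25(OH)D classification quoted above. The 20-30 ng/ml
# "insufficient" band is an assumption added to keep the ranges contiguous.
NMOL_L_PER_NG_ML = 2.5  # 30 ng/ml corresponds to 75 nmol/L

def classify_25ohd(value, unit="ng/ml"):
    """Return a qualitative vitamin D status for a serum 25(OH)D value."""
    ng_ml = value / NMOL_L_PER_NG_ML if unit == "nmol/L" else value
    if ng_ml >= 30:
        return "sufficient"
    if ng_ml >= 20:
        return "insufficient (assumed band)"
    if ng_ml >= 10:
        return "deficient"
    return "severely deficient"

for value, unit in [(32, "ng/ml"), (75, "nmol/L"), (15, "ng/ml"), (8, "ng/ml")]:
    print(value, unit, "->", classify_25ohd(value, unit))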
Treatment and prevention of obesity
Obesity is today considered a chronic disease of primary public health interest, owing to its prevalence, which continues to increase in various populations, and to the known risk of complications such as cardiometabolic and psychosocial comorbidity and premature mortality. In 2023, a joint task force of the Italian Society of Pediatric Endocrinology and Diabetology, the Italian Society of Pediatrics and the Italian Society of Pediatric Surgery developed a consensus position statement on the treatment of obesity in children and adolescents. Lifestyle intervention is the first step in treatment. In children over the age of 12, pharmacotherapy is the second step and bariatric surgery the third, in selected cases. There are new developments in the medical treatment of obesity: in particular, new medicines have demonstrated efficacy and safety and have been approved for use in adolescents. The Food and Drug Administration has approved once-daily liraglutide, orlistat, and phentermine–topiramate for adolescents at least 12 years of age; only liraglutide is approved by the European Medicines Agency. On the other hand, the origin of obesity is multifactorial, and the various therapeutic approaches have proven, especially in the long term, not particularly effective, also because of the various barriers that can oppose them, making therapeutic success less likely. For a long time, therefore, attempts have been made to develop prevention, with a particular focus on pre-school age and the first school cycles. A recent review that considers only randomized controlled trials (RCTs) in the 5–11 age group suggests "that a range of activity interventions, and interventions that combine diet with activity, can have a modest beneficial effect on developing obesity". Very little research has considered the cost/benefit ratio of these interventions even though, as shown by the work of Guarino et al., the economic analysis would be positive in a high percentage of them. However, the great heterogeneity of data and methodological settings makes a correct comparison difficult. There is a need for global approaches (school, family, environment, society, etc.), for agreement on measurable outcomes, and for longitudinal data. It would also be useful to distinguish between true prevention of obesity (in subjects initially not overweight and/or obese) and prevention of the worsening of obesity.
Trisomy 3q syndrome
Serra et al. report on a female preterm newborn with a de novo 3q27.1-q29 duplication. The article provides interesting insights into the clinical and diagnostic management of a newborn carrying a genetic pathology and into individual, multidimensional follow-up strategies. The correlations between the genes involved in the duplication and the phenotypic manifestations are discussed, with a comparative review of previously described patients. The presence of risk factors related to advanced parental age, responsible for potential chromosomal and/or genomic anomalies, and to assisted reproduction techniques (ART), responsible for epigenomic defects, is emphasized. The whole diagnostic pathway leading to the diagnosis of this contiguous gene syndrome (non-invasive prenatal diagnosis, karyotype and array-CGH) is well outlined. In the clinical approach, 3q27.1-q29 duplication should be included in the differential diagnosis of overgrowth syndromes.

Telemedicine for pediatric care
The use of telemedicine for pediatric care is increasing worldwide. According to the recent guidelines issued in 2020 by the Italian Ministry of Health, it has been recognized as an integral part of the services of the National Health Service. The adoption of telemedicine had a significant impact during the COVID-19 pandemic, allowing virtuous clinical care processes to continue and to be implemented, with improvement in the quality of health care and greater accessibility of treatments, diagnostic services and remote medical advice, along with a positive economic impact. Zuccotti et al. report the features of a regional operational center for telemedicine designed to ensure continuity of care in pediatrics. The services included routine pediatric hospital activities and innovative programs, such as early discharge, telecardiology, online supervised exercise training and preventive healthcare.
The proposed telemedicine platform can be a useful model for other experiences in this field.

Thiol disulfide balance and vitamin B12 deficiency
Several studies support a relation between increased use of cell phones and technological devices with high specific absorption rate (SAR) values, fast-food consumption, smoking cigarettes and other tobacco products, and increased oxidative stress levels (OSL). An increase in OSL has been linked to negative functional consequences for the central and peripheral nervous system. Demirtas et al. conducted a case-control observational study on adolescents with symptoms attributable to headache, evaluating oxidation markers and vitamin B12 levels, which were lower in affected subjects. The statistically significant results showed that in the group with vitamin B12 deficiency native thiol levels were lower, while disulfide and homocysteine (HCY) levels were higher. Interestingly, identification of B12 deficiency did not correlate, as in previous studies, with significant differences in MCV or with identifiable macrocytic anemia. Thus, central nervous system findings can be prominent in children with vitamin B12 deficiency who have normal hematological findings.

Bronchiolitis
Bronchiolitis is one of the most worrisome causes of hospitalization in infants under two years of age.
Seasonal bronchiolitis hospitalization mainly correlates with RSV and mostly affects young infants under 3 months of age who are not eligible for the currently available prophylaxis with palivizumab. During the CoronaVirus Disease 2019 (COVID-19) pandemic, a significant decrease in respiratory infections, including bronchiolitis, was reported globally. In line with the decrease in respiratory tract infections, there was also a significant decrease in antibiotic prescriptions, which are too often inappropriate in children because of a lack of readily available tests or in order to limit parental anxiety. Later, in the following season, epidemiological reports described an anticipated peak with an increase in the overall number of cases. They confirmed that most cases of bronchiolitis are caused by RSV and more frequently affect infants under three months of age. According to the "UPDATE – 2022 Italian guidelines on the management of bronchiolitis in infants", the diagnosis is made by anamnestic and clinical evaluation and the management is supportive. Since no specific etiological treatment is available, the authors suggest fluid and/or respiratory support, avoiding salbutamol, glucocorticosteroids and antibiotics. Oxygen therapy should be provided in case of respiratory distress and hypoxemia and may be discontinued when saturation levels are equal to or greater than 93–94%. Recent epidemiological reports highlight that oxygen support, as well as sub-intensive or even intensive care hospitalization, was required more frequently than in previous seasons.

Long COVID-19/post-COVID condition
In the COVID-19 era we encounter a new disease, named "Long COVID", which may affect even children. To meet the diagnostic criteria, young people with a history of confirmed Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection should present with at least one persisting physical symptom for a minimum duration of 12 weeks in the absence of an alternative diagnosis. Symptoms vary and include fatigue, hemicrania, dizziness or disequilibrium, asthenia or weakness, chest pain, cough and respiratory distress on exertion.

Aggregatibacter actinomycetemcomitans infection
Aggregatibacter actinomycetemcomitans is an oral colonizing bacterium which may cause dental caries and periodontitis. In the literature it has been associated with severe extra-oral infections, including endocarditis and soft tissue abscesses and, more rarely, osteomyelitis, brain abscess and pneumonia. Long-term antibiotic treatment is suggested to achieve complete eradication. Nevertheless, the optimal duration of therapy is not known and depends on multiple variables, including the patient's clinical response and the extent of tissue involvement.

Acute otitis media and facial nerve palsy
Acute mastoiditis is the most frequent complication of acute otitis media, while meningitis, subperiosteal and brain abscesses and facial nerve paralysis are more severe but rarely reported during childhood. Since the widespread use of antibiotics, the prognosis of acute otitis media complications is generally good with appropriate therapy, even though residual dysfunction may occur.

Metagenomic next-generation sequencing for the detection of pathogens
Recently, metagenomic next-generation sequencing (mNGS) has begun to be used to detect bacteria, clarify the aetiology of infections and guide anti-infective treatment.
Its benefits are rapid and accurate identification of pathogens, including pathogens not commonly identifiable with conventional technology. Evidence suggests that in most cases treatment may be changed on the basis of mNGS results, with faster clinical improvement.
Vitamin D level and neonatal respiratory distress syndrome
The results of studies on the association between vitamin D levels and respiratory distress are inconsistent, with differences in the several maternal and fetal variables involved and in the cord blood 25(OH)D3 levels considered normal for gestational age. Liu W et al. address the potential relationship between cord blood 25(OH)D3 levels and the onset of neonatal respiratory distress syndrome (NRDS). This retrospective study was conducted on infants (gestational age 28–36 weeks) diagnosed with NRDS, with non-NRDS preterm infants as the control group. A univariate analysis showed a correlation between lower cord blood 25(OH)D3 levels and NRDS. In addition, a multivariate logistic regression analysis identified the following as independent risk factors for NRDS: cord blood 25(OH)D3 levels < 57.69 nmol/L (24 ng/ml), gestational age < 31 weeks, birth weight < 1.86 kg, Apgar score (1 min) < 7 and Apgar score (5 min) < 8. The authors conclude that the 25(OH)D3 level is an independent risk factor for NRDS in preterm infants.
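As a purely illustrative aid, the sketch below turns the reported cut-offs into a simple checklist that lists which of the independent risk factors a given infant meets. The function and variable names are ours and the checklist is not a validated clinical score; only the thresholds are taken from the paragraph above.

# Illustrative sketch only: a checklist of the independent NRDS risk factors
# reported above (Liu W et al.). Not a validated clinical score.

def nrds_risk_factors(cord_25ohd3_nmol_l, gestational_age_weeks,
                      birth_weight_kg, apgar_1min, apgar_5min):
    """Return the subset of reported risk factors that an infant meets."""
    checks = {
        "cord 25(OH)D3 < 57.69 nmol/L": cord_25ohd3_nmol_l < 57.69,
        "gestational age < 31 weeks": gestational_age_weeks < 31,
        "birth weight < 1.86 kg": birth_weight_kg < 1.86,
        "Apgar (1 min) < 7": apgar_1min < 7,
        "Apgar (5 min) < 8": apgar_5min < 8,
    }
    return [name for name, present in checks.items() if present]

# Hypothetical example infant
print(nrds_risk_factors(50.0, 30, 1.5, 6, 8))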
Neurodevelopmental outcomes of very low birth weight preterms
Battajon et al. conducted a single tertiary center prospective cohort study enrolling all infants of less than 30 weeks gestational age (GA) with a birthweight below 1500 g admitted to the NICU over a period of three years. Preterm babies are at risk of neurodevelopmental disorders whose early identification allows targeted treatments. The study adopted up-to-date child development evaluation tools and a valid methodology of statistical analysis, providing a valid model for further research in this field. The developmental evaluations at 2 and 4 years differed in the percentages of subjects with developmental abnormalities, the related risk factors and the developmental areas involved. At two years, the Bayley motor scale was worse in the lowest GA groups (p = 0.0282). No disability was present in 59.6%, a minor disability in 31.1% and a major disability in 9.3%. Risk factors associated with disability were early neonatal sepsis (p = 0.0377), grade ≥ 3 intraventricular hemorrhage (p = 0.0245), BPD (p = 0.0130), ROP (p = 0.0342), late neonatal sepsis (p = 0.0180), and length of hospitalization (p < 0.0001). Assessment at four years, using the WPPSI scale and mABC-2 scores, showed a major disability in 19.7%, a minor one in 47.2%, and no disability in 33.1%. Disability was associated only with BPD (p = 0.0441) and length of hospitalization (p = 0.0077). Progressively worse performance was noted with decreasing GA, while on multivariate analysis only the length of stay was predictive. At both ages there was no difference in the incidence of disabilities between the AGA and SGA groups (p = 0.2689). The analysis of the joint distribution of disability at the ages of two and four years revealed that, among the children without disability at the age of two (62.1%), impairments had developed at the age of four in 58.4% of cases (p < 0.0001), with a significant correlation between processing speed and manual dexterity (Spearman's coefficient = 0.47, p < 0.0001) and between processing speed and aiming and grasping (Spearman's coefficient = 0.27, p < 0.0001). This study demonstrated a clear shift in the incidence of disabilities, since about half of the children completely free from disability at two years of age showed a disability related to fine motor skills that resulted in altered processing speed at four years. The authors suggest that attentional capacity may not be the primary cognitive problem, but rather a motor impairment and a difficulty with oculo-motor coordination. Children with oculo-motor impairment achieve poorer cognitive results, which do not reflect their true cognitive abilities. Therefore, for a proper assessment of school learning problems, it is necessary to conduct careful follow-up of all cognitive, motor and behavioral aspects as early as possible to detect the real problem. This allows intervention with appropriate neuropsychological techniques and thus improves school performance.
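As a purely illustrative aid, the sketch below shows how a Spearman rank correlation such as the ones reported above (0.47 between processing speed and manual dexterity, 0.27 between processing speed and aiming and grasping) is computed; the paired scores are hypothetical placeholders, not data from Battajon et al.

# Illustrative sketch only: computing a Spearman rank correlation like the ones
# reported above. The paired scores are hypothetical placeholders.
from scipy.stats import spearmanr

processing_speed = [85, 92, 78, 105, 88, 110, 95, 82, 100, 90]  # hypothetical index scores
manual_dexterity = [7, 9, 5, 12, 8, 13, 10, 6, 11, 8]           # hypothetical component scores

rho, p_value = spearmanr(processing_speed, manual_dexterity)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.4f}")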
Psycho-emotional distress in relation to COVID-19 confinement
Since its appearance in Wuhan in mid-December 2019, COVID-19 has spread dramatically worldwide. The pandemic forced the population to face unprecedented changes, such as social isolation and the closure of schools and public areas, and significantly impacted the well-being of children and adolescents. Compared with adults, children with COVID-19 usually had a milder or moderate course of the disease, but they were more susceptible to psychological effects, suggesting that the pediatric population is more vulnerable to mental health problems. García-Rodríguez et al. conducted a systematic review to assess the impact of the lockdown measures associated with the COVID-19 pandemic on children (from 2 to 12 years) and adolescents (from 13 to 18 years). The authors felt it was essential to conduct this systematic literature review since both children and adolescents belong to a fragile group at a stage of physical and mental development. The reviewed studies focused on a population of children and adolescents evaluated during COVID-19 and the quarantine period. The main results can be summarized as follows. Lifestyle changes and psycho-emotional manifestations: school closure and social isolation increased the use of screens and technologies, making children and adolescents less capable of social skills and socialization. Psycho-emotional manifestations according to age: in the adolescent population higher levels of stress, depression and anxiety were found, while among children the most common symptoms were irritability, arguments with the rest of the family and rebellious behavior.
Effects of confinement from a cross-cultural approach: focusing on young people from three different countries (Spain, Italy and Portugal), the authors observed that Italian children had the lowest levels of anxiety and fewer nutritional, cognitive and sleep disorders than their Spanish or Portuguese peers; children from Portugal and from Spain reported more mood disturbances and more behavioral disturbances, respectively. Strategies for promoting resilience: the most common and successful strategies included spending a lot of time together in a limited space and improving communication between parents and children. Mental health at pediatric age is a source of constant concern for clinicians. Improving knowledge about the impact the pandemic has had on children will allow clinicians to identify young people who need specialized help and, consequently, to intervene before irremediable repercussions or long-term effects occur.

Children with autism spectrum disorder and their care-givers
Autism Spectrum Disorder (ASD) refers to a group of pervasive neurodevelopmental disorders that involve moderately to severely disrupted functioning in areas such as social skills and socialization, expressive and receptive communication, and repetitive or stereotyped behaviors and interests. Caring for children with ASD is a stressful process that heavily depends on the abilities of caregivers. The stress associated with raising a sick or disabled child creates a burden of care, defined as the physical, psychological, social, or economic reactions experienced by caregivers during the caregiving process. Rasoulpoor et al. designed a descriptive-analytical study to determine the relationship between care burden, coping styles, and resilience among mothers of children with ASD. The authors assessed caregiving burden, coping styles, and mothers' resilience by contacting 80 volunteer mothers of autistic children, who responded to a questionnaire consisting of three parts: (a) the Caregiver Burden Inventory, to measure the objective and subjective burden of care; (b) the Connor-Davidson Resilience Scale, to measure the ability to deal with pressure and threats; and (c) the Coping Strategies Questionnaire, to study how people cope with stress, in addition to providing demographic information. Questionnaires completed correctly and comprehensively by 69 mothers of children with ASD were analyzed. Mothers were recruited among the parents of patients at an Autism Center who met the predefined inclusion criteria (having a child aged between 3 and 15 years with a diagnosis of autism, and the mother's psychological and physical well-being). Data analysis revealed that the average age of the participating mothers was 38.4 ± 7 years. Of these women, 94.2% were married and 50.7% had only one child; 56.5% had received a university education, but only 30.4% were employed. The average age of the children was 3.3 ± 1 years. Cross-referencing the demographic information with the questionnaire results revealed a significant correlation of the burden of care with maternal age, number of children, maternal employment status, child's gender, and economic status: mothers with more than one child, lower economic status, or a daughter with ASD exhibited an increased burden of care. Although the average levels of resilience and coping styles were moderate, the average burden of care of the mothers participating in the study was 95.5 ± 9.1, which shows that the care load is severe.
Additionally, an inverse proportional relationship was observed between caregiving burden and the resilience of mothers with children affected by ASD. Therefore, the findings of this study indicate that mothers of children with autism are burdened with an increased caregiving load and exhibit moderate adaptation capabilities in response to the stress they face, which can be physical, emotional, social, and economic in nature . ADHD in children and adolescents Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder. Main symptoms of ADHD, i.e., lack of attention and concentration, disorganization, difficulty completing tasks, being forgetful, and losing things, usually occur before age 12 years and interfere with daily life activities in more than one setting (home and school, or school and after-school time). ADHD can result in abnormal social interactions, increased risky behaviors, loss of jobs, and difficulties in school performance . Boys are more likely to manifest symptoms and to be diagnosed as having ADHD . The Diagnostic and Statistical Manual of Mental Disorders IV distinguishes among inattentive (ADHD-I), hyperactive–impulsive (ADHD-H), and combined (ADHD-C) subtypes of ADHD. The diagnosis of ADHD-C requires the presence of symptoms across the domains of inattention and hyperactivity–impulsivity. Salari et al. reported a higher prevalence of ADHD in children 3 to 12 years old than in adolescents aged 12 to 18 years (7.6% versus 5.6%, respectively), with more cases among males than females, while previous research pointed out a lower prevalence in young children (2–7% ). In this systematic review, 1167 studies were analyzed and the prevalence of the several forms of ADHD was also measured. Results show that the prevalence of ADHD-I, ADHD-H, and ADHD-C is nearly equal among children. The prevalence of ADHD was higher when using the DSM-V diagnostic criterion than when using other criteria. According to these findings, while ADHD appears less common in childhood than in adulthood, its prevalence is increasing.
Vascular rings Vascular rings (VR) account for < 1% of all congenital cardiac defects. Abnormalities in position and/or branching of the aortic arch can lead to a complete or incomplete VR that encircles and compresses the trachea, the bronchi and/or the oesophagus. Over recent years, there has been an increase in the detection of VR due to the increased rate of fetal diagnosis. Right aortic arch with aberrant left subclavian artery is the most common complete VR, followed by double aortic arch (DAA). Aberrant innominate artery (AIA) compression accounts for 3 to 20% of cases of incomplete VR, followed by left pulmonary artery sling. Respiratory symptoms associated with VR often occur early in life (at age 1–6 months). The severity of clinical manifestations depends on the encroachment on the trachea, bronchus or oesophagus by the abnormal vascular structures. Common symptoms vary from apnoea and cyanosis to stridor, barky cough, wheezing, shortness of breath, and dysphagia for solid food. A history of chronic cough, recurrent bronchopneumonia and fatigue during physical exertion is also frequently reported. Clinical presentation can vary, but disease severity does not appear strictly related to the degree of the anatomical obstruction .
A higher prevalence of severe symptoms, such as reflex apnoea and stridor, has been reported in young children . Computed tomography (CT) with angiography is an important diagnostic tool as it allows a careful simultaneous assessment of vascular abnormalities and airway involvement. Flexible laryngotracheal-bronchoscopy performed under light sedation and spontaneous breathing allows the dynamic evaluation of the tracheobronchial tree, revealing the localization and extension of airway involvement and allowing estimation of the severity of airway malacia. Spirometry is recommended in children aged over 6 years for documenting flow-volume curve shape abnormalities. The exercise challenge test is helpful to reproduce the exercise-induced symptoms frequently reported by patients. As far as treatment is concerned, the evidence of a VR is not in itself an indication for early surgical intervention. Corcione et al. have proposed a management algorithm for patients with suspected AIA based on the evidence from a literature review of 20 original articles on 2166 patients with several vascular anomalies, including 1092 patients with AIA . A rapid clinical improvement in AIA children treated with aortopexy has been reported, supporting the role of AIA-induced tracheal compression in the pathogenesis of recurrent/chronic dry cough . Gardella et al. studied a patient population of 28 AIA children, 16 of whom underwent surgical correction. All patients with a clinical presentation sufficiently severe to justify surgical correction showed tracheal narrowing of 70% or greater at endoscopy, a finding not observed in any of the patients in the conservative management group. Porcaro et al. conducted a review based on 14 articles whose endpoint was symptom management of several VR after treatment. Overall, the reviewed studies showed a positive trend of resolution of patients’ symptoms after surgical correction. Nevertheless, the differences in the percentage of symptom resolution likely reflect discrepancies among the different cohorts in terms of timing of intervention, anatomical variants of the VR, and prevalence of associated lesions. Based on the available literature findings, the authors proposed an algorithm including the investigations required for the diagnosis, the indications for surgical treatment, and the evaluations needed for monitoring both treated and non-treated patients during the follow-up period. Treatment is recommended in all symptomatic patients, particularly in those with DAA or with a marked Kommerell diverticulum, in cases with anterior or posterior tracheal compression greater than 50% of the lumen, or in the presence of concomitant congenital heart disease necessitating surgical repair. Conservative treatment might indeed be reasonable in asymptomatic or mildly symptomatic cases. Bronchopulmonary aspergillosis Aspergillus spp. is a mold that colonizes the airways, provoking a spectrum of clinical syndromes. Invasive pulmonary aspergillosis occurs in immune-compromised subjects. The “gold standard” for diagnosis of invasive pulmonary aspergillosis requires a lung biopsy . It is treated with liposomal amphotericin B in children < 2 years of age and voriconazole in older patients. Voriconazole or itraconazole is used for prophylaxis in children 2–12 years old, and posaconazole in those > 13 years of age . Allergic sinusitis and allergic bronchopulmonary aspergillosis are characterized by allergic asthma, peripheral blood eosinophilia, a positive skin test or elevated IgE to Aspergillus fumigatus, and fungus-specific IgG or precipitins.
Allergic bronchopulmonary aspergillosis affects asthmatics with poor symptom control and/or children with cystic fibrosis. It requires a CT of the chest to assess for bronchiectasis. A prompt diagnosis of chronic pulmonary aspergillosis is difficult but necessary since it may evolve into idiopathic pulmonary fibrosis. In patients with aspergillus-associated hypersensitivity pneumonitis, a reduced pH value in exhaled breath condensate, which is also observed in acute asthma, may be helpful in interpreting the specific inhalation challenge. Other conditions include acute community-acquired aspergillus pneumonia and aspergillus bronchitis. Bronchoscopy and severe pneumonia The use of fiberoptic bronchoscopy (FOB) and bronchoalveolar lavage (BAL) is increasingly prevalent in pediatric settings as an aid in diagnosing numerous pulmonary diseases and as a therapeutic tool in specific conditions, particularly those affecting the small airways . The capability provided by endoscopy to identify the etiology of severe pneumonia at an early stage represents an undeniable advantage in the clinical management and prognosis of the disorder. Wu et al. analyzed 229 patients admitted with severe pneumonia to the Pediatric Intensive Care Unit (PICU) at Xinxiang Hospital, China, between November 2018 and December 2021. Patients were divided into two groups based on the necessity of invasive ventilation (invasive ventilation group and non-invasive ventilation group) and further stratified according to the timing of BAL (early BAL group: received BAL within one day of admission; late BAL group: received BAL two or more days after admission). For each patient, the following information was collected: demographic data, duration of symptoms prior to PICU admission, reason for PICU admission, APACHE II score (reflecting disease severity in the PICU), SOFA score (for evaluating organ failure), and length of hospitalization overall and in the PICU. Additionally, data regarding patients’ clinical presentation, laboratory test results, especially the microbiology of the BAL specimens by PCR and culture, and endoscopic score assessment were evaluated. Notably, the most frequently isolated etiological agent in the study was Mycoplasma pneumoniae (36.67%), followed by Staphylococcus aureus (26.11%), Haemophilus pneumoniae (23.33%), and Streptococcus spp . (16.67%). Viral identification was less frequent, with RSV being the most prevalent (27.22%), followed by Influenza B virus (17.22%) and Influenza A virus (4.44%). A small portion of the pneumonias were due to fungal infections, with Candida albicans identified in 5.56% of cases. Comparison of endoscopic scores revealed a significantly higher score, indicating greater severity, in patients who required invasive ventilation. Moreover, a shorter PICU stay was observed in patients who underwent early BAL compared to those who had BAL two or more days after ICU admission. The study also demonstrated that patients in the invasive ventilation group had higher SOFA and APACHE II scores and a longer PICU stay. Among the patients examined, 9.61% succumbed to their illness, although no statistically significant differences in mortality rates were observed between the various groups and subgroups. Wu et al.
have strengthened the growing body of evidence regarding the role of FOB and BAL in diagnosing and prognostically stratifying patients with pneumopathy, both in acute forms, as seen in the study patients, and in managing pediatric patients with prolonged/recurrent disease forms, such as recurrent pneumonia , and refractory disease forms, where along with CT scan, it represents an indispensable tool for the modern pediatric pulmonologist .
Relevant publications in the field of pediatrics have been provided in the first semester of the last year. Important findings have improved our understanding of the pathogenic mechanisms leading to disease development. New findings on biomarkers may help to link laboratory results with clinical applications. In parallel, several recommendations have shed light on the management of diseases. Finally, interesting and promising results for developing personalized interventions have been reported. We think that the published papers offer something new that may have a significant effect on healthcare practice.
Hydrogen isotope labeling unravels origin of soil-bound organic contaminant residues in biodegradability testing
Next to xenoNERs, type III biogenic NERs (bioNERs) result from the integration of C, N, or H isotope labels of a biodegraded test chemical into the biomass of microorganisms , , , . This third NER type is identical to biomolecules produced naturally in soils and therefore considered as a permanent ‘safe sink’ for organic contaminants , – , – . BioNERs comprise both living biomass and microbial biomass residues at various stages of decay that eventually get stabilized in the soil matrix , , . NERs measured in laboratory tests thus contain both type I and type II xenoNERs as well as type III bioNERs that are weakly to strongly bound depending on the extraction method used, impeding the distinction between these three types of NERs. Although differentiation between xenoNERs (type I and type II) and bioNERs (type III) is vital to assess the hazard posed by organic contaminants in soil, it is not part of routine biodegradability tests with 14 C tracers, which readily quantify only the total 14 C NERs (Fig. ) – , , , . Well-established protocols exist for the identification and rough quantification of bioNERs based on the analysis of stable carbon ( 13 C) or nitrogen ( 15 N) isotope tracers in biomolecules; however, they are very laborious. Both 13 C- and 15 N-labeling are also limited by the scarce supply and high costs of 13 C- or 15 N-labeled compounds, and 15 N is additionally restricted to N-containing molecules , , – . We here show that the stable H isotope – D – can be used as an alternative to existing tracers, being both cheaper and more accessible than 13 C or 15 N due to the common use of deuterated standards in analytical chemistry . Furthermore, hypothesizing minimal retention of H in biomolecules and thus in bioNERs, H-labeling could enable a time-efficient distinction between xenoNERs and bioNERs. Because of different turnover dynamics, H is expected to be much less retained in microbial biomass than C. In a prior one-year incubation study, for example, substrate-derived H in soil and microbial lipids was respectively 6-fold and over 10-fold lower compared to substrate-C . The main reason is that total and bioavailable C is limited in soil, whereas H – due to its presence in soil water – is ~ 6–11 times more abundant considering the stoichiometric requirements to build different biomolecules (see Supplementary Note ). Microorganisms, therefore, constantly recycle C-substrates and necromass (biomass residues) of primary degraders for both the energy contained in C–H bonds (catabolism) and for C-building blocks required for biomass synthesis (anabolism) (Fig. ) – . Substrate- or necromass-derived C can also be directly assimilated as a C-monomer (e.g., amino acid) into macromolecules (e.g., proteins). Moreover, unutilized C in decaying microbial biomass is also eventually stabilized in the soil matrix as bioNERs , , , . C can be slowly released from decaying soil biomass as CO 2 (Fig. ) before re-uptake by newly growing microorganisms during CO 2 fixation , , , (accounting for ~ 4% of released CO 2 from 2,4-D ). Therefore, during microbial degradation of a 13 C-labeled substrate, a high retention of the 13 C tracer in bioNERs is generally observed. In contrast, the direct incorporation of D from a D-labeled organic substrate into microbial biomass is expected to be low. After catabolic C–D bond cleavage, substrate-derived D is released from coenzymes to ambient water within a few minutes (Fig. ), as explained in more detail in Supplementary Fig. .
While this process is fast, the dynamics of enzymatic C–D bond cleavage will vary depending on the biodegradability of the D-substrate. For example, in monomeric biomolecules like glucose or amino acids, 70% of the C–D bonds were broken already within the first 7 days of soil incubation . Besides substrate-H, ambient water provides a highly abundant H source for the de novo formation of C–H bonds in C-monomers during anabolism – . In prior studies with heavy water (D 2 O), up to 79% of the D-water was assimilated into microbial biomass , and incorporation was already visible after 20 min . However, due to the strong dilution of the substrate-derived D with ambient water-H, potentially combined with isotopic fractionation , , , D reuptake into newly synthesized biomass and thus into bioNERs will be low. The dilution of D 2 O with ambient water varies depending on the substrate concentration, biodegradation pathways, and soil water contents but was estimated to result in D concentrations only about 0.001% above its natural abundance (0.015 at%) for substrate concentrations between 10–50 mg kg −1 dry soil (Supplementary Note ). High retention of substrate-D in bioNERs could only occur during the assimilation of D-monomers retaining C–D bonds of the primary D-substrate into macromolecules (e.g., glycine formed from glyphosate; Fig. ). Thus, we hypothesized that D-tracers would be minimally retained in bioNERs. If a bound D-chemical is not degraded by microorganisms, it will be contained in the soil as xenoNERs. Consequently, almost all of the total NERs measured using the D-labeling approach will be xenoNERs. D-labeling could thus allow for the expensive and laborious bioNER assessment to be skipped. To test the feasibility of D-labeling for a time-efficient xenoNER identification, soil incubations oriented towards OECD (Organization for Economic Co-operation and Development) guideline 307 were performed with three compounds of environmental concern using both a D- and 13 C-labeling approach. The 13 C-labeling served for the comparison of biogenic and total NER formation between H- and C-isotope tracers. We selected the herbicides 2,4-dichlorophenoxyacetic acid (2,4-D) and glyphosate (GLP) and the antibiotic sulfamethoxazole (SMX) as model compounds due to their different chemical structures allowing isotope labeling at aliphatic (GLP) and aromatic (2,4-D and SMX) moieties, and their different biodegradability in soils: fast for 2,4-D – , medium for GLP – and low for SMX , . GLP was labeled with two D atoms at two different labeling positions (2-C–D 2-GLP or 3-C–D 2-GLP ) to cover differences between its two key degradation pathways via sarcosine or aminomethylphosphonic acid (AMPA, see Supplementary Figs. , ) . 2,4-D (D 3-2,4-D ) and SMX (D 4-SMX ) were labeled in the aromatic ring with three and four D atoms, respectively. The 13 C-compounds were labeled at similar positions as their D analogs, i.e., GLP had only one 13 C (2- 13 C GLP , 3- 13 C GLP ), whereas all carbons in the aromatic ring of both 2,4-D ( 13 C 6-2,4-D ) and SMX ( 13 C 6-SMX ) were labeled with 13 C. We analyzed the D and 13 C incorporation into total NERs (bioNERs + xenoNERs) and into amino acids (AAs) as a quantitative biomarker for bioNERs in biologically active soils. We also hypothesized that the C–D bonds of all tested compounds are stable against abiotic cleavage, which is essential for the validity of the D-labeling approach.
To this end, we incubated sterile soils and ultra-pure (Milli-Q) water with the same D- and 13 C-labeled compounds. Our findings provide the first proof of concept that D is hardly incorporated into bioNERs compared to 13 C, which could help to improve current biodegradability testing strategies in soil.
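To make the dilution argument above concrete, the back-of-the-envelope sketch below estimates the excess D in the soil-water H pool if every C–D bond of the applied substrate were cleaved and released as D 2 O. All inputs (20% gravimetric water content, glyphosate carrying two D atoms, complete cleavage) are illustrative assumptions for this sketch and are not taken from the study or its Supplementary Note.

```python
# Illustrative upper-bound estimate of D enrichment in soil water after
# complete cleavage of all C-D bonds of a D-labeled substrate.
# Hypothetical inputs: glyphosate carrying 2 D atoms and a soil water
# content of 20% (w/w); these values are assumptions, not study data.

M_GLP = 169.1            # g/mol, glyphosate
D_PER_MOLECULE = 2       # D atoms per molecule (2-C-D2 or 3-C-D2 label)
WATER_G_PER_KG = 200.0   # g water per kg dry soil (assumed 20% w/w)
M_H2O = 18.0             # g/mol
NATURAL_D_ATPCT = 0.015  # natural abundance of D (atom %)

def excess_d_atom_percent(substrate_mg_per_kg: float) -> float:
    """Excess D atom % in the soil-water H pool if every C-D bond is cleaved."""
    mol_substrate = substrate_mg_per_kg * 1e-3 / M_GLP   # mol substrate per kg soil
    mol_d = mol_substrate * D_PER_MOLECULE               # mol D released to water
    mol_h = WATER_G_PER_KG / M_H2O * 2                   # mol H in soil water
    return 100.0 * mol_d / (mol_d + mol_h)               # atom %

for dose in (10, 50):    # mg substrate per kg dry soil
    print(f"{dose} mg/kg: ~{excess_d_atom_percent(dose):.4f} at% excess D "
          f"(natural abundance: {NATURAL_D_ATPCT} at%)")
```

With these assumptions, the excess D stays in the range of roughly 0.0005–0.003 at%, the same order of magnitude as the ~0.001% figure cited above, which illustrates why re-uptake of substrate-derived D into newly synthesized biomass, and hence into bioNERs, is expected to be marginal.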
Total 13 C NERs and D NERs were quantified by elemental analyzer-isotope ratio mass spectrometry (EA-IRMS) as the amount of isotope label remaining in the soil after solvent-extraction of the test compounds. The extraction efficiencies of the tested compounds from the soil directly after spiking were as follows: 98 ± 5% (2,4-D), 93 ± 3% (GLP), and 80 ± 10% (SMX; for details, see Supplementary Note ). However, the 13 C and D NERs measured on sampling day 0 in both sterile (Fig. ) and biologically active soils (Supplementary Note ) suggest a lower extraction efficiency, especially of GLP and SMX. Due to the laborious preparation of all parallel experimental treatments for incubation, it was impossible to perform soil extractions immediately after spiking of the test compounds (Supplementary Note ). Therefore, we cannot exclude abiotic NER formation already in soils sampled on day 0. Moreover, in order to minimize the risk of potential release of the D from C–D bonds, we skipped the final ‘harsh’ extraction step mandated by ECHA, which uses heat or pressure aiming at the extraction of ‘slowly desorbable’ residues . The total 13 C and D NERs from 2,4-D, GLP, and SMX (Figs. , ) thus comprised the sum of bioNERs, xenoNERs type I & II, and possibly also ‘slowly desorbable’ residues which inflate total NER estimates. Although the ‘slowly desorbable’ residues are not considered ‘NERs’ , we kept the NER term for simplicity here. Please note that the current study was not performed for regulatory purposes but for proof of concept, and hence, it did not aim to accurately follow all extraction schemes or soil incubation conditions outlined in OECD guideline 307 . The aim of this study was to compare: (I) both the amounts of 13 C- and D-compounds in ultra-pure water and the 13 C NERs and D NERs (as defined by the employed methodology) in sterile soils (hypothesis 1: stable C–D bonds under abiotic conditions), and (II) the 13 C and D incorporation into bioNERs in biologically active soils (hypothesis 2: minimal retention of D in bioNERs), using the same experimental conditions and extraction protocols for both H and C tracers. Total 13 C- and D-labeled non-extractable residues (NERs) in sterile soils Abiotically formed 13 C NERs and D NERs of each model compound were nearly identical at all sampling dates (Fig. ; for detailed statistics, see Supplementary Note ), proving that D was stable against abiotic breakage of C–D bonds. On the final date, the lowest NERs were observed for 2,4-D ( 13 C NERs : 19 ± 5.1%; D NERs : 29 ± 11%), followed by 2-C GLP ( 13 C NERs : 46 ± 7.4%; D NERs : 46 ± 8.1%) and 3-C GLP ( 13 C NERs : 63 ± 9.3%; D NERs : 53 ± 11%) while the highest NERs were formed by SMX ( 13 C NERs : 88 ± 13%; D NERs : 83 ± 15%). We noticed that the label position of GLP (2-C GLP and 3-C GLP ) slightly affected the 13 C/D NERs , which were somewhat higher for 3-C GLP than for 2-C GLP (Supplementary Note ). However, this divergence was consistent at all sampling dates and might be due to slightly different amounts of added GLP on day 0. Total 13 C- and D-labeled NERs in biologically active soils As hypothesized, for all three model compounds the amount of total 13 C NERs in biologically active soil was higher than that of total D NERs throughout the incubation periods (Fig. ). 2,4-D About 4–5 times higher contents of 13 C NERs from 13 C 6-2,4-D were measured compared to D NERs from D 3-2,4-D , demonstrating much lower retention of D in the total NER pool compared to 13 C (Fig. ). 
The 13 C NERs on both day 16 (20 ± 6.0% of the applied 13 C) and 36 (14 ± 4.7% of the applied 13 C) were significantly higher than the D NERs on day 16 (5.3 ± 2.6% of the applied D; p adjusted = 0.002; Supplementary Table in Supplementary Note ) and day 36 (3.2 ± 2.1% of the applied D; p adjusted = 0.017). The 13 C NERs from 13 C 6-2,4-D were lower than previously reported for 14 C 6-2,4-D (26 ± 0.2%) and 13 C 6-2,4-D (39 ± 2.6%) in soils with similar properties. Both the total 13 C NERs and D NERs in biologically active soils were either comparable ( 13 C: day 16/18 & 36) or lower (D: day 16/18 & 36) than in sterile soils (Fig. ). Although this finding might seem to contradict the statement that ‘if abiotically formed NERs are much lower than biotically formed NERs, this gives a clear indication on bioNER formation’ in OECD guideline 307 , bioNER formation does not necessitate that abiotic NERs are lower than NERs in biologically active soil. This is because, in biologically active soil, abiotic interactions leading to xenoNER formation compete with simultaneously proceeding biodegradation processes, which include both bioNER formation and complete mineralization of the compound. Thus, when a compound is quickly mineralized in biologically active soil, it may be removed before it can form xenobiotic NERs. In sterile soil, no mineralization occurs, and thus total abiotic NERs may be higher than biotic NERs for readily biodegradable compounds. Interestingly, in a previous study by Girardi et al. , the 13 C NERs from 13 C 6-2,4-D in sterile soils (15 ± 1.8%) were lower than in biologically active soil (39 ± 2.6%). However, much less 13 C 6-2,4-D was mineralized in the study by Girardi et al. (46 ± 2.9% after 32 days) than in this study (78 ± 8.8% after 36 days, see Supplementary Table in Supplementary Note ), suggesting that the rapid biodegradation of 2,4-D in our study prevented formation of xenoNERs in the biologically active soil. GLP The total NER formation of GLP depended on both the type of isotope tracer and the labeling position. About 2–5 times higher 13 C NER contents were measured compared to D NERs on day 38 (Fig. ), resulting in moderate to high amounts of 13 C NERs (2- 13 C GLP : 62 ± 4.7%; 3− 13 C GLP : 32 ± 3.8% of the applied 13 C) but only relatively low amounts of D NERs (2-C–D 2-GLP : 12 ± 2.8% of the applied D; 3-C–D 2-GLP : 17 ± 3.4% of the applied D). The differences between 13 C NERs and D NERs on day 38 were significant for both labeling positions (2-C GLP : p adjusted = 0.000004; 3-C GLP : p adjusted = 0.0063; see Supplementary Note ). Notably, the 13 C NERs from 2- 13 C GLP amounted to 66 ± 4.8% on day 4 and remained fairly stable until day 38. In contrast, D NERs from 2-C–D 2-GLP were twice lower already on day 4 (30 ± 5.3%, p adjusted = 0.000004) and decreased by ~ 50% until the final day 38, showing a continuous release of D from the total NER pool. In the case of 3-C GLP , between day 4 and day 38, the 13 C NERs decreased from 45 ± 4.8% to 32 ± 3.8%, and the D NERs from 37 ± 6.8% to 17 ± 3.4%. The differences in NER formation between D and 13 C tracers were thus less pronounced for 3-C GLP . Overall, 13 C NERs from both 2- and 3- 13 C GLP were close to the previously reported 40-50% of initially added 13 C , as well as ~ 40% of initially, added 14 C . The total 13 C NERs from 2- 13 C GLP were higher in biologically active soil than in sterile soil, while the 13 C NERs from 3- 13 C GLP were lower (Fig. ). 
Contrastingly, the D NERs from both D-labeled GLP analogs were lower than the abiotically formed D NERs . Mineralization of GLP was similar between 3- 13 C GLP (50 ± 17%; Supplementary Table ) and 2- 13 C GLP (40 ± 12%), suggesting at first glance a comparable biodegradation of GLP labeled at the 2-C and 3-C positions. However, the mineralization of a compound is not always the only factor dictating the total amounts of 13 C (bio)NERs , as shown for 13 C GLP . The labeling position of GLP played a crucial role here, as detailed in the following sections. SMX SMX formed very high amounts of total NERs (Fig. ). The D NERs (68 ± 5.6% of the applied D) appeared slightly lower than 13 C NERs (90 ± 6.8% of the applied 13 C) on day 18 ( p adjusted = 0.059), but no significant differences in NER contents were found between D and 13 C tracers on any sampling day ( p adjusted > 0.05, Fig. ). The 13 C NERs peaked at 107 ± 12 to 115 ± 9.7% and the D NERs at 88 ± 5.2 to 82 ± 10% on day 36–72. Notably, higher 13 C NER values compared to D NERs might be due to a higher amount of 13 C 6-SMX accidentally added to the soil on day 0 (total 13 C-label recovery on day 0: 138%, Supplementary Table in Supplementary Note ). SMX is a hardly biodegradable antibiotic (only 2.3 ± 0.5% 13 CO 2 after 72 days, Supplementary Table ) expected to form mostly xenoNERs; thus, the 13 C NERs in the biotic treatment should be nearly identical to the 13 C NERs and D NERs measured in sterile soil (Fig. ). The overall high NER contents are comparable to 14 C NER formation from 14 C 6-SMX reported in prior studies , . Proportion of 13 C- and D-biogenic NERs (bioNERs) within the total NERs Total bioNER contents (Fig. ) were estimated from total amino acids (tAAs) hydrolyzed from soil. The 13 C tAAs and D tAAs were multiplied by 2 since tAAs make up roughly 50–55% of microbial biomass and can thus be used as a quantitative biomarker for bioNERs , . Due to the uncertainty of this quantitation method, we additionally calculated 13 C bioNERs for 2,4-D, GLP and SMX based on the released 13 CO 2 (Supplementary Table ) using the microbial turnover to biomass (MTB) approach (Supplementary Note ). 2,4-D Total 13 C-amino acids ( 13 C tAAs ) on day 16 (11 ± 1.0% of the applied 13 C) and 36 (9.8 ± 1.6% of the applied 13 C) were 6–7-fold higher than the total D-amino acids (D tAAs ) on both day 16 (1.6 ± 0.2% of the applied D) and 36 (1.7 ± 0.5% of the applied D; Fig. ). These differences between D tAAs and 13 C tAAs were statistically significant at both time points (day 16: p adjusted = 0.037; day 36: p adjusted = 0.035). As hypothesized, the total amounts of D bioNERs from D 3-2,4-D (tAAs + other bioNERs) were thus lower (six to seven times) than those of the 13 C bioNERs . The 13 C bioNERs were nearly identical on day 16 (22 ± 2.0% of the applied 13 C) and 36 (20 ± 3.2% of the applied 13 C). Also, the amounts of D bioNERs were comparable between day 16 (3.2 ± 0.4% of applied D) and 36 (3.4 ± 1.0% of applied D). Notably, these estimates of total 13 C bioNERs and D bioNERs on both days 16 and 36 were nearly equal to the total measured 13 C NERs and D NERs shown in Fig. . Moreover, the 13 C bioNERs determined as tAAs*2 fell within the range of predicted bioNERs based on the MTB model (13.5–32.6%, see Supplementary Table in Supplementary Note ), showing consistent estimates from both approaches.
On day 36, tAA contents without application of the conversion factor (~ 10% of applied 13 C) already made up the majority of the total 13 C NERs (14 ± 4.7% of applied 13 C on day 36, Fig. ), while the minimum predicted bioNERs based on MTB calculations (bioNER min : 13.5%) were nearly identical to total 13 C NERs . Since 2,4-D is known to be easily biodegradable , it is thus likely that for both 13 C 6-2,4-D and D 3-2,4-D , the total 13 C NERs and D NERs could be completely ascribed to ‘safe sink’ bioNERs on day 36. GLP In the GLP experiment, both 13 C tAAs and 13 C bioNERs were higher than their D-analogs (Fig. ). Furthermore, we observed a notable difference in the labeling pattern of tAAs and bioNERs with 13 C and D between the two GLP labeling positions (2-C GLP and 3-C GLP ). In the case of 2-C GLP , 13 C tAAs on day 4 (10 ± 3.1% of the applied 13 C) and 18 (12 ± 1.6% of the applied 13 C) were about fourfold higher than D tAAs on both day 4 (2.4 ± 0.6% of the applied D; p adjusted = 0.019; Supplementary Note ) and 18 (3.4 ± 0.6% of the applied D; p adjusted = 0.024). The total 13 C bioNERs increased to 24 ± 3.2% of the applied 13 C on day 18 and then remained relatively stable until day 38, whilst D bioNERs were nearly identical at all time points (5–7 ± 1% of the applied D). At the end of the incubation period, the D bioNERs were thus surprisingly only three times lower than the 13 C bioNERs (with no significant differences between D tAA and 13 C tAA contents; p adjusted = 0.137), comprising about 60% of the total D NERs . In comparison, the total 13 C bioNERs made up only about 30% of the total 13 C NERs on day 38. Notably, predicted bioNERs from 2- 13 C GLP based on the MTB model were somewhat lower than those based on tAAs*2 – ranging from 6.7% to 16% for degradation via the sarcosine pathway and 2.8–6% via the AMPA pathway (Supplementary Table ). This can be explained by the monomeric assimilation of 13 C glycine during the biodegradation of 2- 13 C GLP via the sarcosine pathway (for details, see the following section), which is not accounted for in the MTB model . BioNER formation from 3-C GLP was much lower than from 2-C GLP . Significantly higher 13 C tAAs than D tAAs were measured on day 4 ( p adjusted = 0.006; Supplementary Note ) and day 18 ( p adjusted = 0.019). On the final day, a fivefold higher formation of 13 C bioNERs (2.7 ± 1.5% of the applied 13 C) than D bioNERs was observed (0.6 ± 0.1% of the applied D; Fig. ), although this difference was not statistically significant ( 13 C tAAs vs D tAAs : p adjusted = 0.473). The predicted bioNER min using the MTB model was only slightly higher than the tAAs*2 estimate when considering the AMPA pathway (3.5–7.4%; Supplementary Table ) but overestimated when based on the sarcosine pathway (8.2–19.5%). This may be because bioNERs based on tAAs*2 depend on the 13 C-labeling position, whereas the MTB approach considers the turnover of all C atoms of the substrate equally . SMX No 13 C from 13 C 6-SMX or D from D 4-SMX could be detected in the 13 C tAAs and D tAAs , suggesting that the NERs from SMX were almost exclusively xenoNERs. This is in line with low mineralization (2.3 ± 0.5% cumulative 13 CO 2 evolution after 72 days, Supplementary Table ) and negligible amounts of predicted bioNERs based on the MTB approach (min: 0.6%, max: 1.5%).
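As a quick illustration of the amino-acid-based bookkeeping used in this section, the sketch below converts a measured tAA label fraction into a total bioNER estimate via the stated 50–55% protein share of microbial biomass (the factor-of-~2 conversion). The numbers plugged in are the day-36 means for 13 C 6-2,4-D reported above; the sketch is a simplified illustration, not the exact calculation protocol of the study or of the MTB model.

```python
# Convert the label recovered in total hydrolyzable amino acids (tAAs) into a
# rough bioNER estimate, assuming tAAs represent ~50-55% of microbial biomass
# (the basis of the factor-of-~2 conversion described above).

def bioner_from_taa(taa_pct_of_applied: float,
                    taa_share_low: float = 0.50,
                    taa_share_high: float = 0.55) -> tuple:
    """Return a (low, high) bioNER estimate in % of the applied isotope label."""
    # dividing by the larger biomass share gives the lower bioNER bound
    return (taa_pct_of_applied / taa_share_high,
            taa_pct_of_applied / taa_share_low)

# Day-36 means for 13C6-2,4-D reported above: ~9.8% of the applied 13C in tAAs,
# total 13C NERs ~14% of the applied 13C.
low, high = bioner_from_taa(9.8)
print(f"estimated 13C bioNERs: {low:.1f}-{high:.1f}% of applied 13C")
# -> roughly 17.8-19.6%, i.e. on the order of (or above) the measured total
#    13C NERs, consistent with ascribing the 2,4-D NERs essentially to bioNERs.
```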
Labeling position of GLP and its relevance for the 13 C- and D-labeling pattern of bioNERs 2-C-labeling position (2-C GLP ) A closer look at the composition of 13 C and D tAAs revealed that the contents of 13 C glycine (1.7 ± 0.6%– 3.6 ± 1.0% of applied 13 C; Supplementary Fig. ) were comparable to those of D glycine (1.9 ± 0.6%–2.7 ± 1.5% of applied D) on all sampling days ( p adjusted > 0.05, Supplementary Note ), and both were fairly stable between day 4 and day 38. Nearly identical amounts of D glycine and its 13 C-analog suggest that the C–D bonds resisted harsh acid hydrolysis. The D tAAs hydrolyzed from the soil are thus reliable for tracking the D integration into C–D bonds of amino acids during microbial metabolism of a D-compound. With 13 C glycine comprising 16–30% of 13 C tAAs and D glycine 76–81% of D tAAs , glycine was clearly the most predominant AA in the tAA pool (Supplementary Fig. ). These findings are unique to 2-C GLP due to the preservation of the isotope label at the 2-C-position during biodegradation. When 2-C GLP is degraded to sarcosine, which can then be oxidized to glycine in the sarcosine pathway (Supplementary Fig. ), both sarcosine and glycine will retain the D- or 13 C-labels of the parent 2-C GLP , , . Thus, a disproportionally high integration of the isotope label from 2-C GLP into glycine can be expected. 13 C glycine and D glycine from 2- 13 C GLP and 2-C–D 2-GLP were likely assimilated into microbial biomass as a monomeric ‘building block’ for proteins, which is more energy-efficient than the biosynthesis of macromolecules derived from smaller C-precursors like acetyl-groups (Fig. ) , . 13 C glycine and D glycine could then have been partially mineralized to 13 CO 2 and D 2 O. The 13 C from 13 CO 2 and the 13 C incorporated into other biomolecules of microbial degraders (1 st level) was possibly recycled for the synthesis of new biomolecules by the consumers (2 nd , 3 rd , 4 th level, etc.; Fig. ). Due to these 13 C-recycling processes, other 13 C amino acids than 13 C glycine were also enriched in 13 C. Notably, the share of D glycine (76–81%) in the D tAAs was much higher than that of its 13 C analog (16–30%), suggesting that D was only minimally retained in other D amino acids . Unlike 13 C, the D-label is rapidly released as D 2 O after the cleavage of C–D bonds of either 2-C–D 2-GLP or D-biomolecules of the necromass (Fig. ). The D 2 O is then diluted with unlabeled H 2 O, leading to estimated D 2 O concentrations in soil water of only 0.00012–0.00076% for the sarcosine pathway (Supplementary Fig. in Supplementary Note ). Therefore, only low amounts of D-label could have been re-incorporated into the other D amino acids . Overall, based on the shares of the two isotopes in the tAA pool and their recycling processes, we can deduce that GLP is preferentially degraded into the amino acid glycine in the sarcosine pathway, possibly dictating the high bioNER formation for 2-C GLP . Still, a large proportion of the total 13 C NERs from 2- 13 C GLP (70–84%; Fig. ) remained unidentified. We speculate that the 13 C bioNERs for 2- 13 C GLP could have been underestimated due to the uncertainty of the bioNER approximation based on the tAAs*2 and MTB approaches. Notably, 2- 13 C glycine formed from 2- 13 C GLP in the sarcosine pathway (Supplementary Fig. ) is directly incorporated into microbial biomass without the release of 13 CO 2 , as demonstrated by Wang et al. . 
The predicted amounts of 13 C bioNER for 2- 13 C GLP (6.7–16% for the sarcosine pathway) based on the measured 13 CO 2 were thus likely underestimated, showing the limitations of the MTB approach when monomeric substrate utilization occurs (Fig. ). Another good example proving the limitations of both the tAAs*2 and MTB approaches can be taken from a recent degradation study of 13 C 2-glycine by Aslam et al. , where 37% of 13 C 2-glycine was measured in 13 CO 2 , 8.7% in 13 C tAAs , and 34% in total 13 C NERs . Based on the 13 C tAAs *2, about 17.4% of total 13 C bioNERs were formed, which comprised 51% of the total 13 C NERs ; thus, the other 49% of 13 C NERs were unidentified. The total 13 C bioNER amounts estimated for 13 C 2-glycine using the MTB model were underestimated as well (min: 4% and max: 8.8%, Supplementary Table ). Glycine is an easily biodegradable biomolecule ; therefore, it cannot form 13 C xenoNERs , and all of the 13 C NERs from 13 C 2-glycine should be exclusively 13 C bioNERs . 2- 13 C GLP or 2-C–D 2-GLP is degraded to unlabeled AMPA in the AMPA pathway (Supplementary Fig. ). Therefore, the large portion of unidentified 13 C NERs from 2- 13 C GLP could stem either from unidentified 13 C bioNERs not considered in the MTB or tAAs*2 calculation, or from the un-degraded parent molecule which could be the primary source for 13 C xenoNERs . 3-C-labeling position (3-C GLP ) Contrastingly, neither 13 C glycine nor D glycine was the dominant amino acid within the tAA pool of 3-C GLP (Supplementary Fig. ). When 3-C GLP is degraded via the sarcosine pathway, the major resulting degradation product glycine will be unlabeled (Supplementary Fig. ), explaining the much lower amounts of 13 C bioNERs and D bioNERs compared to 2-C GLP . Although the contents of 13 C glycine were not significantly different from those of D glycine for 3-C GLP on any sampling day ( p adjusted > 0.05, see Supplementary Note ), their amounts were at least 15-fold lower than for 2-C GLP . The percentages of both 13 C glycine (1.1–15% of 13 C tAAs ) and D glycine (3.2–25% of D tAAs ) in the tAA pool were also comparatively lower than for 2-C GLP . In contrast to 2-C GLP , 3-C GLP is degraded to labeled AMPA ( 13 C AMPA or D 2-AMPA ) in the AMPA pathway (Supplementary Fig. ) as the third C of GLP is preserved in AMPA. As 13 C bioNERs and D bioNERs were lower than for 2-C GLP , higher portions of 13 C- and D-labeled xenoNERs could have been formed from 3-C GLP . Since similar behavior of the parent compounds is expected, this could be due to abiotic interactions between labeled 13 C AMPA or D 2-AMPA with reactive groups of the soil.
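The position-dependent fate of the GLP label discussed in this section can be condensed into a small lookup sketch. It merely restates the pathway logic above – the sarcosine route preserves the 2-C label in sarcosine/glycine, whereas the AMPA route preserves the 3-C label in AMPA – and is intended as a mnemonic, not as a metabolic model.

```python
# Which product carries the isotope label during GLP degradation, depending on
# the labeled position (2-C vs 3-C) and the degradation pathway taken.
LABEL_FATE = {
    ("2-C", "sarcosine"): "label retained in sarcosine/glycine; prone to "
                          "monomeric assimilation into proteins (high bioNERs)",
    ("2-C", "AMPA"):      "AMPA is formed unlabeled; label lost from this branch",
    ("3-C", "sarcosine"): "glycine is formed unlabeled; little label reaches bioNERs",
    ("3-C", "AMPA"):      "label retained in AMPA; a candidate source of xenoNERs "
                          "via abiotic binding of labeled AMPA to soil",
}

for (position, pathway), fate in LABEL_FATE.items():
    print(f"{position}-labeled GLP via the {pathway} pathway: {fate}")
```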
13 C- and D-labeled non-extractable residues (NERs) in sterile soils

Abiotically formed 13 C NERs and D NERs of each model compound were nearly identical at all sampling dates (Fig. ; for detailed statistics, see Supplementary Note ), proving that D was stable against abiotic breakage of C–D bonds. On the final date, the lowest NERs were observed for 2,4-D ( 13 C NERs : 19 ± 5.1%; D NERs : 29 ± 11%), followed by 2-C GLP ( 13 C NERs : 46 ± 7.4%; D NERs : 46 ± 8.1%) and 3-C GLP ( 13 C NERs : 63 ± 9.3%; D NERs : 53 ± 11%), while the highest NERs were formed by SMX ( 13 C NERs : 88 ± 13%; D NERs : 83 ± 15%). We noticed that the labeling position of GLP (2-C GLP and 3-C GLP ) slightly affected the 13 C/D NERs , which were somewhat higher for 3-C GLP than for 2-C GLP (Supplementary Note ). However, this divergence was consistent at all sampling dates and might simply be due to slightly different amounts of GLP added on day 0.
13 C- and D-labeled NERs in biologically active soils

As hypothesized, for all three model compounds the amount of total 13 C NERs in biologically active soil was higher than that of total D NERs throughout the incubation periods (Fig. ).

2,4-D

About 4–5 times higher contents of 13 C NERs from 13 C 6-2,4-D were measured compared to D NERs from D 3-2,4-D , demonstrating much lower retention of D in the total NER pool compared to 13 C (Fig. ). The 13 C NERs on both day 16 (20 ± 6.0% of the applied 13 C) and day 36 (14 ± 4.7% of the applied 13 C) were significantly higher than the D NERs on day 16 (5.3 ± 2.6% of the applied D; p adjusted = 0.002; Supplementary Table in Supplementary Note ) and day 36 (3.2 ± 2.1% of the applied D; p adjusted = 0.017). The 13 C NERs from 13 C 6-2,4-D were lower than previously reported for 14 C 6-2,4-D (26 ± 0.2%) and 13 C 6-2,4-D (39 ± 2.6%) in soils with similar properties. The total 13 C NERs and D NERs in biologically active soils were either comparable to ( 13 C: days 16/18 and 36) or lower than (D: days 16/18 and 36) those in sterile soils (Fig. ). Although this finding might seem to contradict the statement in OECD guideline 307 that 'if abiotically formed NERs are much lower than biotically formed NERs, this gives a clear indication on bioNER formation', bioNER formation does not require abiotic NERs to be lower than NERs in biologically active soil. This is because, in biologically active soil, abiotic interactions leading to xenoNER formation compete with simultaneously proceeding biodegradation processes, which include both bioNER formation and complete mineralization of the compound. Thus, when a compound is quickly mineralized in biologically active soil, it may be removed before it can form xenobiotic NERs. In sterile soil, no mineralization occurs, and total abiotic NERs may therefore be higher than biotic NERs for readily biodegradable compounds. Interestingly, in a previous study by Girardi et al. , the 13 C NERs from 13 C 6-2,4-D in sterile soils (15 ± 1.8%) were lower than in biologically active soil (39 ± 2.6%). However, much less 13 C 6-2,4-D was mineralized in the study by Girardi et al. (46 ± 2.9% after 32 days) than in this study (78 ± 8.8% after 36 days, see Supplementary Table in Supplementary Note ), suggesting that the rapid biodegradation of 2,4-D in our study prevented the formation of xenoNERs in the biologically active soil.

GLP

The total NER formation of GLP depended on both the type of isotope tracer and the labeling position. About 2–5 times higher 13 C NER contents were measured compared to D NERs on day 38 (Fig. ), resulting in moderate to high amounts of 13 C NERs (2- 13 C GLP : 62 ± 4.7%; 3- 13 C GLP : 32 ± 3.8% of the applied 13 C) but only relatively low amounts of D NERs (2-C–D 2-GLP : 12 ± 2.8% of the applied D; 3-C–D 2-GLP : 17 ± 3.4% of the applied D). The differences between 13 C NERs and D NERs on day 38 were significant for both labeling positions (2-C GLP : p adjusted = 0.000004; 3-C GLP : p adjusted = 0.0063; see Supplementary Note ). Notably, the 13 C NERs from 2- 13 C GLP amounted to 66 ± 4.8% on day 4 and remained fairly stable until day 38. In contrast, the D NERs from 2-C–D 2-GLP were only about half as high already on day 4 (30 ± 5.3%, p adjusted = 0.000004) and decreased by a further ~50% until the final day 38, showing a continuous release of D from the total NER pool. In the case of 3-C GLP , between day 4 and day 38, the 13 C NERs decreased from 45 ± 4.8% to 32 ± 3.8%, and the D NERs from 37 ± 6.8% to 17 ± 3.4%.
The differences in NER formation between the D and 13 C tracers were thus less pronounced for 3-C GLP . Overall, the 13 C NERs from both 2- and 3- 13 C GLP were close to the previously reported 40–50% of initially added 13 C , as well as to the ~40% of initially added 14 C . The total 13 C NERs from 2- 13 C GLP were higher in biologically active soil than in sterile soil, while the 13 C NERs from 3- 13 C GLP were lower (Fig. ). Contrastingly, the D NERs from both D-labeled GLP analogs were lower than the abiotically formed D NERs . Mineralization of GLP was similar between 3- 13 C GLP (50 ± 17%; Supplementary Table ) and 2- 13 C GLP (40 ± 12%), suggesting at first glance a comparable biodegradation of GLP labeled at the 2-C and 3-C positions. However, the mineralization of a compound is not always the only factor dictating the total amounts of 13 C (bio)NERs , as shown for 13 C GLP . The labeling position of GLP played a crucial role here, as detailed in the following sections.

SMX

SMX formed very high amounts of total NERs (Fig. ). The D NERs (68 ± 5.6% of the applied D) appeared slightly lower than the 13 C NERs (90 ± 6.8% of the applied 13 C) on day 18 ( p adjusted = 0.059), but no significant differences in NER contents were found between the D and 13 C tracers on any sampling day ( p adjusted > 0.05, Fig. ). The 13 C NERs peaked at 107 ± 12% to 115 ± 9.7% and the D NERs at 88 ± 5.2% to 82 ± 10% on days 36–72. Notably, the higher 13 C NER values compared to D NERs might be due to a higher amount of 13 C 6-SMX accidentally added to the soil on day 0 (total 13 C-label recovery on day 0: 138%, Supplementary Table in Supplementary Note ). SMX is a poorly biodegradable antibiotic (only 2.3 ± 0.5% 13 CO 2 after 72 days, Supplementary Table ) expected to form mostly xenoNERs; thus, the 13 C NERs in the biotic treatment should be nearly identical to the 13 C NERs and D NERs measured in sterile soil (Fig. ). The overall high NER contents are comparable to the 14 C NER formation from 14 C 6-SMX reported in prior studies , .
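The competition argument made above for 2,4-D (fast mineralization removes the parent compound before it can bind abiotically) can be illustrated with a toy calculation. The sketch below assumes two parallel first-order pathways, mineralization and abiotic xenoNER formation; both the model structure and the rate constants are illustrative assumptions, not the authors' kinetic analysis.

```python
import math

# Toy first-order competition sketch (not the authors' kinetic model): a
# compound in biologically active soil is simultaneously mineralized
# (rate k_min) and bound abiotically as xenoNERs (rate k_xeno). The faster
# the mineralization, the smaller the xenoNER fraction that can accumulate.

def xenoNER_fraction(k_min, k_xeno, t_end=36.0):
    """Fraction of the applied compound ending up as xenoNERs by t_end (days)."""
    k_tot = k_min + k_xeno
    transformed = 1.0 - math.exp(-k_tot * t_end)   # parent compound removed by both pathways
    return k_xeno / k_tot * transformed            # share of the removal routed to xenoNERs

# Hypothetical rate constants (per day): fast mineralization, as observed for
# 2,4-D in this study, vs. slow mineralization in a less active soil.
for k_min in (0.15, 0.02):
    frac = xenoNER_fraction(k_min, k_xeno=0.01)
    print(f"k_min = {k_min:>4} d^-1: xenoNER fraction ≈ {frac:.2f}")
```

With these made-up rates, the rapidly mineralized case ends up with only a few percent xenoNERs, whereas the slowly mineralized case accumulates several times more, which is the qualitative behavior argued for above.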
13 C- and D-biogenic NERs (bioNERs) within the total NERs

Total bioNER contents (Fig. ) were estimated from total amino acids (tAAs) hydrolyzed from soil. The 13 C tAAs and D tAAs were multiplied by 2 since tAAs make up roughly 50–55% of microbial biomass and can thus be used as a quantitative biomarker for bioNERs , . Due to the uncertainty of this quantitation method, we additionally calculated 13 C bioNERs for 2,4-D, GLP and SMX based on the released 13 CO 2 (Supplementary Table ) using the microbial turnover to biomass (MTB) approach (Supplementary Note ).

2,4-D

Total 13 C-amino acids ( 13 C tAAs ) on day 16 (11 ± 1.0% of the applied 13 C) and day 36 (9.8 ± 1.6% of the applied 13 C) were 6–7-fold higher than the total D-amino acids (D tAAs ) on both day 16 (1.6 ± 0.2% of the applied D) and day 36 (1.7 ± 0.5% of the applied D; Fig. ). These differences between D tAAs and 13 C tAAs were statistically significant at both time points (day 16: p adjusted = 0.037; day 36: p adjusted = 0.035). As hypothesized, the total amounts of D bioNERs from D 3-2,4-D (tAAs + other bioNERs) were thus six- to sevenfold lower than those of the 13 C bioNERs . The 13 C bioNERs were nearly identical on day 16 (22 ± 2.0% of the applied 13 C) and day 36 (20 ± 3.2% of the applied 13 C). Likewise, the amounts of D bioNERs were comparable between day 16 (3.2 ± 0.4% of applied D) and day 36 (3.4 ± 1.0% of applied D). Notably, these estimates of total 13 C bioNERs and D bioNERs on both days 16 and 36 were nearly equal to the total measured 13 C NERs and D NERs shown in Fig. . Moreover, the 13 C bioNERs determined as tAAs*2 fell within the range of predicted bioNERs based on the MTB model (13.5–32.6%, see Supplementary Table in Supplementary Note ), showing consistent estimates from both approaches. On day 36, the tAA contents without application of the conversion factor (~10% of applied 13 C) already made up the majority of the total 13 C NERs (14 ± 4.7% of applied 13 C on day 36, Fig. ), while the minimum predicted bioNERs based on MTB calculations (bioNER min : 13.5%) were nearly identical to the total 13 C NERs . Since 2,4-D is known to be easily biodegradable , it is thus likely that for both 13 C 6-2,4-D and D 3-2,4-D , the total 13 C NERs and D NERs could be completely ascribed to 'safe sink' bioNERs on day 36.

GLP

In the GLP experiment, both the 13 C tAAs and the 13 C bioNERs were higher than their D-analogs (Fig. ). Furthermore, we observed a notable difference in the labeling pattern of tAAs and bioNERs with 13 C and D between the two GLP labeling positions (2-C GLP and 3-C GLP ). In the case of 2-C GLP , the 13 C tAAs on day 4 (10 ± 3.1% of the applied 13 C) and day 18 (12 ± 1.6% of the applied 13 C) were about fourfold higher than the D tAAs on both day 4 (2.4 ± 0.6% of the applied D; p adjusted = 0.019; Supplementary Note ) and day 18 (3.4 ± 0.6% of the applied D; p adjusted = 0.024). The total 13 C bioNERs increased to 24 ± 3.2% of the applied 13 C on day 18 and then remained relatively stable until day 38, whilst the D bioNERs were nearly identical at all time points (5–7 ± 1% of the applied D). At the end of the incubation period, the D bioNERs were thus surprisingly only about threefold lower than the 13 C bioNERs (with no significant differences between D tAA and 13 C tAA contents; p adjusted = 0.137), comprising about 60% of the total D NERs . In comparison, the total 13 C bioNERs made up only about 30% of the total 13 C NERs on day 38.
Notably, the predicted bioNERs from 2- 13 C GLP based on the MTB model were somewhat lower than those based on tAAs*2, ranging from 6.7% to 16% for degradation via the sarcosine pathway and from 2.8% to 6% via the AMPA pathway (Supplementary Table ). This can be explained by the monomeric assimilation of 13 C glycine during the biodegradation of 2- 13 C GLP via the sarcosine pathway (for details, see the following section), which is not accounted for in the MTB model . BioNER formation from 3-C GLP was much lower than from 2-C GLP . Significantly higher 13 C tAAs than D tAAs were measured on day 4 ( p adjusted = 0.006; Supplementary Note ) and day 18 ( p adjusted = 0.019). On the final day, a fivefold higher formation of 13 C bioNERs (2.7 ± 1.5% of the applied 13 C) than of D bioNERs (0.6 ± 0.1% of the applied D) was observed (Fig. ), although this difference was not statistically significant ( 13 C tAAs vs D tAAs : p adjusted = 0.473). The predicted bioNER min values from the MTB model were only slightly higher than the tAAs*2 estimates when considering the AMPA pathway (3.5–7.4%; Supplementary Table ) but overestimated when based on the sarcosine pathway (8.2–19.5%). This may be because bioNERs based on tAAs*2 depend on the 13 C-labeling position, whereas the MTB approach considers the turnover of all C atoms of the substrate equally .

SMX

No 13 C from 13 C 6-SMX or D from D 4-SMX could be detected in the 13 C tAAs and D tAAs , suggesting that the NERs from SMX were almost exclusively xenoNERs. This is in line with the low mineralization (2.3 ± 0.5% cumulative 13 CO 2 evolution after 72 days, Supplementary Table ) and the negligible amounts of predicted bioNERs based on the MTB approach (min: 0.6%, max: 1.5%).
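For readers who want to reproduce the bookkeeping behind these numbers, the following minimal sketch shows how bioNERs are approximated from hydrolyzed tAAs (conversion factor 2) and how the unidentified remainder of the total NERs is obtained; the input values are rounded, hypothetical figures in the range reported for 2- 13 C GLP , not the measured data set.

```python
# Minimal sketch of the bioNER bookkeeping described above (hypothetical,
# rounded inputs in the range reported for 2-13C-GLP on day 38): hydrolyzed
# total amino acids (tAAs) are taken as ~50% of microbial biomass, so
# bioNERs ≈ tAAs * 2, and the remainder of the measured total NERs is
# treated as unidentified (potential xenoNERs).

def bioNER_split(tAA_pct, total_NER_pct, conversion_factor=2.0):
    """Inputs and outputs as % of the isotope label applied on day 0."""
    bioNER_pct = tAA_pct * conversion_factor
    unidentified_pct = max(total_NER_pct - bioNER_pct, 0.0)
    return bioNER_pct, unidentified_pct

bio, unidentified = bioNER_split(tAA_pct=10.0, total_NER_pct=62.0)
print(f"bioNERs ≈ {bio:.0f}% of applied 13C, unidentified NERs ≈ {unidentified:.0f}%")
```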
Labeling position of GLP and its relevance for the 13 C- and D-labeling pattern of bioNERs

2-C-labeling position (2-C GLP )

A closer look at the composition of 13 C and D tAAs revealed that the contents of 13 C glycine (1.7 ± 0.6%–3.6 ± 1.0% of applied 13 C; Supplementary Fig. ) were comparable to those of D glycine (1.9 ± 0.6%–2.7 ± 1.5% of applied D) on all sampling days ( p adjusted > 0.05, Supplementary Note ), and both were fairly stable between day 4 and day 38. Nearly identical amounts of D glycine and its 13 C-analog suggest that the C–D bonds resisted harsh acid hydrolysis. The D tAAs hydrolyzed from the soil are thus reliable for tracking the integration of D into C–D bonds of amino acids during microbial metabolism of a D-compound. With 13 C glycine comprising 16–30% of 13 C tAAs and D glycine 76–81% of D tAAs , glycine was clearly the predominant AA in the tAA pool (Supplementary Fig. ). These findings are unique to 2-C GLP due to the preservation of the isotope label at the 2-C position during biodegradation. When 2-C GLP is degraded to sarcosine, which can then be oxidized to glycine in the sarcosine pathway (Supplementary Fig. ), both sarcosine and glycine will retain the D- or 13 C-labels of the parent 2-C GLP , , . Thus, a disproportionately high integration of the isotope label from 2-C GLP into glycine can be expected. 13 C glycine and D glycine from 2- 13 C GLP and 2-C–D 2-GLP were likely assimilated into microbial biomass as a monomeric 'building block' for proteins, which is more energy-efficient than the biosynthesis of macromolecules derived from smaller C-precursors like acetyl groups (Fig. ) , . 13 C glycine and D glycine could then have been partially mineralized to 13 CO 2 and D 2 O. The 13 C from 13 CO 2 and the 13 C incorporated into other biomolecules of the microbial degraders (1st level) was possibly recycled for the synthesis of new biomolecules by the consumers (2nd, 3rd, 4th level, etc.; Fig. ). Due to these 13 C-recycling processes, amino acids other than 13 C glycine were also enriched in 13 C. Notably, the share of D glycine (76–81%) in the D tAAs was much higher than that of its 13 C analog (16–30%), suggesting that D was only minimally retained in other D amino acids . Unlike 13 C, the D-label is rapidly released as D 2 O after the cleavage of C–D bonds of either 2-C–D 2-GLP or D-biomolecules of the necromass (Fig. ). The D 2 O is then diluted with unlabeled H 2 O, leading to estimated D 2 O concentrations in soil water of only 0.00012–0.00076% for the sarcosine pathway (Supplementary Fig. in Supplementary Note ). Therefore, only low amounts of D-label could have been re-incorporated into the other D amino acids . Overall, based on the shares of the two isotopes in the tAA pool and their recycling processes, we can deduce that GLP is preferentially degraded into the amino acid glycine via the sarcosine pathway, possibly dictating the high bioNER formation for 2-C GLP . Still, a large proportion of the total 13 C NERs from 2- 13 C GLP (70–84%; Fig. ) remained unidentified. We speculate that the 13 C bioNERs for 2- 13 C GLP could have been underestimated due to the uncertainty of the bioNER approximation based on the tAAs*2 and MTB approaches. Notably, 2- 13 C glycine formed from 2- 13 C GLP in the sarcosine pathway (Supplementary Fig. ) is directly incorporated into microbial biomass without the release of 13 CO 2 , as demonstrated by Wang et al. .
The predicted amounts of 13 C bioNERs for 2- 13 C GLP (6.7–16% for the sarcosine pathway) based on the measured 13 CO 2 were thus likely underestimated, showing the limitations of the MTB approach when monomeric substrate utilization occurs (Fig. ). Another example illustrating the limitations of both the tAAs*2 and MTB approaches comes from a recent degradation study of 13 C 2-glycine by Aslam et al. , in which 37% of the 13 C 2-glycine was measured in 13 CO 2 , 8.7% in 13 C tAAs , and 34% in total 13 C NERs . Based on the 13 C tAAs *2, about 17.4% of total 13 C bioNERs were formed, which comprised 51% of the total 13 C NERs ; thus, the other 49% of the 13 C NERs were unidentified. The total 13 C bioNER amounts estimated for 13 C 2-glycine using the MTB model were underestimated as well (min: 4% and max: 8.8%, Supplementary Table ). Glycine is an easily biodegradable biomolecule ; it therefore cannot form 13 C xenoNERs , and all of the 13 C NERs from 13 C 2-glycine should be exclusively 13 C bioNERs . 2- 13 C GLP or 2-C–D 2-GLP is degraded to unlabeled AMPA in the AMPA pathway (Supplementary Fig. ). Therefore, the large portion of unidentified 13 C NERs from 2- 13 C GLP could stem either from unidentified 13 C bioNERs not considered in the MTB or tAAs*2 calculation, or from the undegraded parent molecule, which could be the primary source of 13 C xenoNERs .

3-C-labeling position (3-C GLP )

Contrastingly, neither 13 C glycine nor D glycine was the dominant amino acid within the tAA pool of 3-C GLP (Supplementary Fig. ). When 3-C GLP is degraded via the sarcosine pathway, the major resulting degradation product, glycine, will be unlabeled (Supplementary Fig. ), explaining the much lower amounts of 13 C bioNERs and D bioNERs compared to 2-C GLP . Although the contents of 13 C glycine were not significantly different from those of D glycine for 3-C GLP on any sampling day ( p adjusted > 0.05, see Supplementary Note ), their amounts were at least 15-fold lower than for 2-C GLP . The percentages of both 13 C glycine (1.1–15% of 13 C tAAs ) and D glycine (3.2–25% of D tAAs ) in the tAA pool were also comparatively lower than for 2-C GLP . In contrast to 2-C GLP , 3-C GLP is degraded to labeled AMPA ( 13 C AMPA or D 2-AMPA ) in the AMPA pathway (Supplementary Fig. ), as the third C of GLP is preserved in AMPA. As the 13 C bioNERs and D bioNERs were lower than for 2-C GLP , higher portions of 13 C- and D-labeled xenoNERs could have been formed from 3-C GLP . Since the parent compounds are expected to behave similarly, this could be due to abiotic interactions of labeled 13 C AMPA or D 2-AMPA with reactive groups of the soil.
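The dilution argument made above for 2-C GLP can be checked with a back-of-the-envelope calculation. The sketch below estimates the at% D 2 O in soil water if a given fraction of the D applied with 2-C–D 2-GLP were released during degradation; the spiking level and water content are taken from the incubation conditions, while the released fractions are illustrative assumptions, so the result only approximates the 0.00012–0.00076% range derived in the Supplementary Note.

```python
# Back-of-the-envelope sketch of the D2O dilution in soil water (illustrative
# assumptions, see lead-in): if a fraction of the D applied with 2-C-D2-GLP is
# released as D2O during degradation, how much does it raise the at% D of the
# soil water H pool?

M_GLP = 169.1        # g/mol glyphosate
D_PER_MOLECULE = 2   # D atoms in 2-C-D2-GLP
SPIKE = 50e-3        # g GLP per kg dry soil, as used in the incubations
WATER = 0.225        # kg water per kg dry soil (60% of WHCmax = 37.5%)

mol_D_applied = SPIKE / M_GLP * D_PER_MOLECULE        # mol D per kg dry soil
mol_H_in_water = WATER * 1000.0 / 18.0 * 2.0          # mol H per kg dry soil

for released_fraction in (0.05, 0.30):                # assumed share of D released as D2O
    at_percent = mol_D_applied * released_fraction / mol_H_in_water * 100.0
    print(f"released fraction {released_fraction:.0%}: ≈ {at_percent:.5f} at% D in soil water")
```

Even under these rough assumptions, the resulting D 2 O enrichment of the soil water is on the order of 10^-4 at%, which makes substantial re-incorporation of D into other amino acids implausible.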
As hypothesized, the C–D bonds of all tested compounds were stable against abiotic cleavage for the duration of the soil incubations. This was evidenced by comparable D NERs and 13 C NERs in sterile soil (Fig. ) and by the stability of the D-compounds in ultra-pure water solutions (Supplementary Note ). The differences between the D and 13 C tracers in biologically active soil can therefore be ascribed to biological processes.

Compared to 13 C-labeling, the D-labeling approach yielded lower estimates of total NERs for all three model compounds, 2,4-D, GLP, and SMX. For the two biodegradable herbicides 2,4-D and GLP, the D bioNERs were also lower than the 13 C bioNERs , while no bioNERs were measured for the antibiotic SMX. As expected, the D tracer was thus far less retained in microbial biomass than the 13 C tracer. This can be attributed to three factors: (I) the presence of H 2 O as an abundant alternative H source for biosynthesis , , , (II) the release of D 2 O to water during microbial degradation followed by dilution of D with H from soil water, leading to low reuptake of D by microorganisms, and (III) isotopic discrimination against the heavy D , .

Our findings thus suggest that D-labeled compounds generally form low amounts of bioNERs compared to 13 C-labeled compounds. Consequently, for biodegradable compounds, the total D NERs will be lower than the total 13 C NERs and will typically comprise a higher fraction of xenoNERs. Recalcitrant compounds like SMX will form predominantly xenoNERs and thus similar amounts of total D NERs and total 13 C NERs . Biodegradable compounds may also form D xenoNERs when the isotope label is preserved in xenobiotic transformation products; 3-C GLP , for example, could be degraded into labeled AMPA. Although biodegradable compounds like 2,4-D or GLP can also form some D bioNERs , the amounts of total D NERs relevant for the NER hazard assessment will remain lower than the total 13 C NERs , and D-labeling will give a closer estimate of the xenoNER fraction. Notably, monomers released from a biodegraded substrate can also be directly integrated into microbial biomass, as explained in Fig. and shown for 2-C GLP , . This potentially results in higher amounts of D bioNERs than when only D/H incorporation from water occurs. However, the bioNER contents from 2-C–D 2-GLP were still threefold lower than those from 2- 13 C GLP . Even where such monomeric assimilation into bioNERs occurs, the results obtained by D-labeling would thus still be helpful for xenoNER quantification and the related hazard assessment.

Using the D isotope, we have shown that substrate-H is only minimally retained in biomolecules during chemical biodegradability testing; the total NERs, comprising mostly xenoNERs, can thus be readily quantified from the H label remaining unextractable. The H-labeling approach could therefore solve the problem of the uncertainty associated with calculating total bioNERs based on tAAs*2 or the MTB approach. Although this proof-of-concept study demonstrated D-labeling as a powerful tool for time-efficient xenoNER quantification, it has several limitations. Compared to 13 C or 15 N tracers, D-labeling is undoubtedly cheaper in terms of the cost of the labeled compound and less laborious, considering that the bioNER analytics can be entirely skipped (Table ). Yet the measurement of D, just like that of other stable isotopes, requires incubations with multiple controls and laborious isotope analytics.
In addition, the required IRMS instruments may not be widely accessible. Furthermore, the experiments in this proof-of-concept study had to be performed at much higher than environmentally relevant concentrations of the test compound to achieve acceptable detection limits against the natural stable isotope abundances in soil (0.015 at% D and 1.08 at% 13 C). The minimum required spiking concentration for D NER quantification depends on both the number of labeled versus unlabeled H atoms in the test compound and the moisture and total H content of the soil. Multiple-position D-labeling may thus enable lower spiking levels but is also constrained by the number of stable C–H bonds per test compound.

Therefore, to conduct biodegradation studies and NER identification at environmentally relevant concentrations, the radioactive H isotope tritium (T) could be applied as a promising substitute for D. Unlike stable isotope tracers, a radiolabeled compound can typically be mixed with its unlabeled analog to the desired spiking concentration and still give quantifiable signals. T-labeling has, however, not been favored for regulatory testing of chemicals due to concerns about the potential abiotic release of T from T-labeled compounds. Substrate-H bound to O, S, or N is generally considered exchangeable, whereas many C–H bonds are stable unless enzymatically cleaved by microorganisms , . However, the stability of C–H bonds may be compromised depending on their position within a molecule or on ambient conditions. We therefore recommend carefully selecting appropriate C–H label positions that are stable against abiotic cleavage and, if needed, performing complementary stability tests in water or abiotic soil. Our proof-of-concept study demonstrated good stability of the C–D bonds for the three selected model compounds even during acidic hydrolysis conducted under 'harsh' conditions. Thus, C–H bonds should also resist the last 'harsh' step of soil extraction, which remobilizes 'slowly desorbable' residues . As negligible amounts of D from all tested D-compounds were quantified in amino acids, the same is expected for T. T-labeling could thus substitute D-labeling for rapid xenoNER quantification in future biodegradability tests. T-tracing may also be a more accessible approach than stable isotope probing due to the faster and easier quantification of radioactivity in (xeno)NERs using standard liquid scintillation counting (LSC) , , , . Nevertheless, prior to the potential application of T-labeling for future xenoNER quantification, the stability of C–T bonds under 'harsh' extractions still needs to be verified.

Besides stability considerations, H isotopes may also cause cellular toxicity and may be more prone to kinetic isotope effects than C isotopes . Kinetic isotope effects were previously reported for enzyme-mediated metabolism and may be aggravated for H due to the large relative mass difference between D or T and 1 H , . However, they may work in favor of the presented H-labeling approach, as microorganisms were found to discriminate against D (and possibly T) uptake , . This may be due to a higher activation energy required for the synthesis of C–D compared to C–H bonds or to the potential toxicity of very high D 2 O concentrations to microbial cells , . Abiotic processes driving xenoNER formation should, however, not be fundamentally altered by the higher mass of D or T compared to 1 H, so that identical xenoNER quantities are expected for compounds labeled with 13 C/ 14 C or D/T at stable C–H bonds.
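As noted above, the minimum spiking level for D NER quantification scales with the soil's total H pool and with the number of D atoms per molecule. The sketch below turns this dependence into rough numbers; the assumed detection limit of 0.001 at% D excess and the two H-pool sizes are hypothetical round figures, not values reported in this study.

```python
# Back-of-the-envelope sketch of the dependence noted above (all figures are
# hypothetical round numbers for illustration): the larger the H pool that the
# applied D is diluted into, the higher the spiking level needed for the at% D
# excess to clear an assumed detection limit of 0.001 at% D.

def min_spike_mg_per_kg(molar_mass, n_deuterium, mol_H_pool, detectable_excess=0.001):
    """Spike (mg compound per kg dry soil) at which full retention of the
    applied D as NERs would just reach `detectable_excess` at% D."""
    mol_D_needed = detectable_excess / 100.0 * mol_H_pool
    return mol_D_needed / n_deuterium * molar_mass * 1000.0

# 2-C-D2-GLP: molar mass ~169 g/mol, two D atoms per molecule.
for scenario, h_pool in [("moist soil (water plus organic H)", 27.0),
                         ("dried extracted soil (organic H only)", 3.0)]:
    spike = min_spike_mg_per_kg(169.1, 2, h_pool)
    print(f"{scenario}: ≥ {spike:.0f} mg/kg dry soil")
```

The point of the sketch is the scaling, not the absolute values: more D atoms per molecule or a smaller relevant H pool lowers the required spike roughly proportionally.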
Further research is nevertheless still needed to verify the applicability of H isotope tracing across soils with diverse chemical and biological properties. NER formation was shown to vary substantially between soil types , , e.g., as a function of pH, organic carbon, and mineral content affecting the sorption of organic molecules. While the mechanism behind the lower bioNER formation from H tracers is based on enzyme-mediated processes, making it in principle applicable independently of chemical soil composition, extreme pH ranges or a high abundance of transition metals could enhance the catalysis of abiotic H exchange in C–H bonds . Therefore, it may be of interest to compare abiotic NER formation between C and H tracers across a broad range of different soils. Moreover, the abundance and physiology of different types of degraders within the soil microbial community may largely affect the assimilation of H vs. C tracers into bioNERs. For example, CO 2 -fixing autotrophs took up more water-H than heterotrophs but also showed stronger H fractionation . Differences in substrate-H utilization were also observed between different heterotrophic degraders and between favorable and stressful growth conditions. In heterotrophic bacteria, water-H uptake occurred mainly during anabolism, while substrate-H was mainly released to ambient water during catabolism . As a complex interplay between chemical and microbiological factors affects H isotope incorporation into bioNERs, further studies are required to gain a better quantitative understanding of how much bioNER formation may differ between C- and H-labeled compounds.

This proof-of-concept study has shown that D- or T-labeling could become a powerful substitute for, or supplement to, 14 C-labeling for rapid xenoNER quantification. By helping to efficiently identify potentially hazardous, long-lasting organic contaminants, the presented H-isotope labeling approach could contribute to the advancement of green chemistry within chemical safety testing. As long as the H-labels are stably attached, D- or T-labeling could be broadly applied for scientific and regulatory biodegradability testing of a wide range of organic chemicals to reveal the hidden identity of NERs.
Chemicals

All chemicals and reagents were of analytical grade and purchased from VWR (Darmstadt, Germany) or Carl Roth (Karlsruhe, Germany) unless otherwise stated. 2- 13 C GLP , 3- 13 C GLP , and 13 C 6-SMX were obtained from Merck (Darmstadt, Germany), while 13 C 6-2,4-D was purchased from Toronto Research Chemicals (Toronto, Canada). All 13 C-labeled compounds had an isotopic purity of 99 at% 13 C and a chemical purity > 98%. 2-C–D 2-GLP (99 at% D), D 3-2,4-D (99 at% D), and D 4-SMX (97 at% D), each with a chemical purity > 99%, were purchased from Toronto Research Chemicals (Toronto, Canada). 3-C–D 2-GLP (98 at% D) was obtained from Cambridge Isotope Laboratories Inc. (Andover, USA). Unlabeled 2,4-D, GLP, and SMX of analytical grade as well as D-depleted water (at% 2 H ≤ 1 ppm) were obtained from Sigma Aldrich (Munich, Germany).

Reference soil material

The Ap horizon of a Haplic Chernozem was sampled from the long-term agricultural 'Static Fertilization Experiment' located at Bad Lauchstädt, Germany. The soil had received various pesticides (including glyphosate and 2,4-D) for over 30 years and had been amended with 30 t manure ha −1 every second year for over 100 years . The silt loam was composed of 21% clay, 68% silt, and 11% sand and had a total organic carbon content of 2.1% and a total nitrogen content of 0.17%. The soil pH was 6.6, and the maximum water-holding capacity (WHC max ) was 37.5% . The soil was sieved to 2 mm, homogenized, and stored at 4 °C at 7% WHC max for 2 months before the experiments.

Model compounds

Two herbicides, GLP and 2,4-D, and one antibiotic, SMX, were selected to compare the NER and bioNER formation between D and 13 C tracers. The 13 C-compounds were labeled with 13 C at similar positions as the D-compounds (see Fig. ). The C–D bonds of all deuterated model compounds were also proven to be stable against abiotic cleavage in water (for details, see Supplementary Note ).

Incubation experiments

Soil incubations following OECD guideline 307 were performed in a static system consisting of 250 mL Schott flasks filled with 60 g (wet weight) of the reference soil material. Besides the treatments with the D- and 13 C-labeled compounds, two different controls were included: untreated soil and soil spiked with the unlabeled compound. Both controls were performed to obtain background isotopic abundances for the NER and bioNER calculations. For each test compound, analogous sterile experiments were conducted to verify whether D- and 13 C-labeled compounds behave identically under abiotic conditions. All treatments were conducted in triplicate, i.e., three separate flasks were incubated per treatment. Prior to the start of the experiments, the soil was oven-dried at 40 °C over multiple days until reaching a constant weight. After thorough manual mixing, the soil for the sterile controls was separated into 250 mL Schott flasks and autoclaved three times (121 °C, 40 min) on consecutive days, with the last autoclaving cycle on day 0 of the experiments. The remaining soil was stored in an airtight 2 L bottle, and the moisture content was monitored gravimetrically. To minimize a priming effect on microbial degraders, the soil moisture was adjusted to 20% WHC max approximately four hours before starting the incubations, and the soil was again mixed thoroughly by manual stirring. The soil for the biotic treatments was then weighed into 250 mL Schott flasks, and spiking was performed separately for each bottle.
To this end, aqueous solutions of the test compounds (prepared on the same day as the incubation) were added dropwise to the soil, corresponding to final concentrations of 50 mg kg −1 dry soil for GLP, 20 mg kg −1 dry soil for SMX and 10 mg kg −1 dry soil for 2,4-D. These concentrations, much higher than environmentally relevant levels, were selected after prior testing to yield good resolution on the IRMS instruments. Estimated detection limits for the 13 C NERs and D NERs of the three model compounds were derived as described in Supplementary Method . After spiking, the soil moisture was adjusted to 60% WHC max to provide optimal growth conditions for microorganisms . Each treatment was then homogenized by manual stirring for two minutes. In the abiotic treatments, the soil moisture was adjusted separately for each bottle immediately after spiking the test compounds to the dry soil, as no priming effect was expected. Spiking and sampling for the abiotic treatments were conducted under sterile bench conditions. In the biotic treatments with the D-compounds and the unlabeled controls, D-depleted water was used for the spiking solutions and moisture adjustment in order to lower the background D in EA-IRMS measurements of water-extractable D on day 0 (Supplementary Table in Supplementary Note ). The soil incubations were conducted in the dark at 20 °C to prevent photodegradation of the model chemicals. Soil samples were taken from the same bottles on days 4, 16/18, and 36/38 for GLP and 2,4-D and on days 18, 36, and 72 for SMX because of its slower turnover. Each time, ten roughly 0.5–2 g subsamples were taken carefully from different spots within the soil batch to prevent soil disturbance. The sampling did not cause any noticeable disturbance of soil microbial activity, as soil respiration showed a continuous decrease of CO 2 towards the end of the incubation periods (Supplementary Fig. in Supplementary Note ). The soil was then pooled into one composite sample in a 50 mL Falcon tube (day 0: 6 g total, afterward: 18 g total), homogenized by stirring with a spatula, and stored at −20 °C until analysis. During spiking and sampling, treatments were handled in the same order to account for the required processing time.

Total 13 C- and D-labeled NERs

Total NERs were quantified based on the abundance of D or 13 C remaining in the soil after the 'soft' extraction of the model compounds and their transformation products by shaking with water-methanol (2,4-D), water-borate buffer (GLP) and water-dichloromethane (SMX), as detailed in Supplementary Note . Soil samples were air-dried over multiple days after the extraction and ground for homogenization. A 2–4 mg aliquot was then weighed into 3.5 × 5 mm tin cartridges (HEKAtech) for analysis by EA-high-temperature conversion-IRMS (D analysis) or EA-combustion-IRMS ( 13 C analysis). The total H and D content and the isotopic enrichment of D (at% D/ 1 H) of the D NERs were measured on an EA (EuroEA3000, Euro Vector, Milan, Italy) directly connected via an open split system (ConFlo IV, Thermo Fisher Scientific, Germany) to an IRMS (MAT 253, Thermo Fisher Scientific, Germany). The total amount of C and the 13 C/ 12 C isotope ratio of the 13 C NERs were determined using a Flash EA 2000 coupled to a ConFlo IV interface and a Delta Advantage mass spectrometer (Finnigan MAT 253, Thermo Scientific, Bremen, Germany). The temperature of the oxidation reactor was 1020 °C, whereas that of the reduction reactor was 650 °C .
The isotopic enrichment of D or 13 C in NERs was calculated as the excess over the unlabeled control. The detection and quantification limits were estimated at around 3–5% of applied D and 4–5% of applied 13 C (Supplementary Table & in Supplementary Method ).

Total 13 C- and D-labeled amino acids (tAAs) and total 13 C- and D-bioNERs

Total amino acids (tAAs) were extracted from the soil as quantitative and qualitative markers for bioNERs. The tAAs from living and decayed biomass are the most reliable quantitative biomarkers for bioNERs as their turnover is comparatively slow , . The tAA analysis followed the extraction, purification, and derivatization protocol by Nowak et al. , . Briefly, for tAA extraction, a 2 g soil sample was hydrolyzed with 6 M HCl at 105 °C for 22 h and purified over cation resin (Dowex 50W-X8; 50–100 mesh) solid-phase extraction columns. Impurities were removed by consecutive washing with 2.5 M oxalic acid, 0.01 M HCl, and de-ionized water before eluting the AAs with 2.5 M NH 4 OH. The purified extracts were derivatized by iso-propylation of the carboxyl groups and trifluoro-acetylation of the amino groups before measurement with gas chromatography-mass spectrometry (GC-MS, HP 6890, Agilent) operating with a BPX-5 column (30 m × 0.25 mm × 0.25 μm; SGE International, Darmstadt, Germany). Individual AAs were identified from an external AA standard containing alanine, glycine, threonine, serine, valine, leucine, isoleucine, proline, aspartate, glutamate, phenylalanine, tyrosine and lysine. Quantification was based on two internal standards, L-norleucine added after the hydrolysis and 4-aminomethylcyclohexanecarboxylic acid added before derivatization. The recovery of AAs from soil hydrolysates using this purification method was 97 ± 11% . Isotopic enrichment in the tAAs was measured with gas chromatography-isotope ratio-mass spectrometry (GC-IRMS), using a Trace 1310 GC system connected via a GC-IsoLink and a ConFlo IV interface to a MAT 253 (all Thermo Fisher Scientific, Bremen, Germany). Sample separation was achieved with a BPX-5 column (30 m × 0.32 mm × 0.5 µm; Agilent Technology) using helium as the carrier gas. Details on the chromatographic analyses with GC-MS and GC-IRMS are provided in Supplementary Method . The total abundance of the different AAs was measured by GC-MS, and the isotopic enrichment (at% D/ 1 H or at% 13 C/ 12 C) in the respective molecule was determined by GC-IRMS after correcting for the isotopic shift during derivatization (Supplementary Method ). Isotope label integration from the labeled compounds was calculated by subtracting the isotopic enrichment in unlabeled control samples. Because the conditions of hydrolysis with 6 M HCl were harsh, the D integrated into the C–D bonds of the tAAs must have been non-exchangeable, in contrast to D bound to O, N, or S, which is generally considered to be exchangeable . This was proven by the comparable amounts of 13 C glycine and D glycine for the 2-C-labeling position of GLP, as shown in Supplementary Fig. . The estimated LOD for individual AAs ranged from 0.07% to 2.4% of applied 13 C and from 0.01% to 3.1% of applied D for 2,4-D and GLP (Supplementary Table , in Supplementary Method ).

Data analysis

D or 13 C enrichment in NERs and tAAs was calculated as the percentage of the D or 13 C initially applied with the labeled compounds, and values are presented as mean ± standard deviation. The detailed calculation of the 13 C- and D-label incorporation into total NERs and tAAs is explained in Supplementary Method .
The mean isotopic enrichment per treatment group (mean enrichment tNERs/AA ) was calculated as the product of the mean total element or AA abundance in soil (mean µmol tNERs/AA ) and the difference between the mean isotopic enrichment in the labeled treatment (mean at% labeled ) and that in the unlabeled control:

$$\mathrm{mean\ enrichment}_{\mathrm{tNERs/AA}} = \mathrm{mean}\ \mu\mathrm{mol}_{\mathrm{tNERs/AA}} \cdot \left(\mathrm{mean\ at\%}_{\mathrm{labeled}} - \mathrm{mean\ at\%}_{\mathrm{unlabeled}}\right) = \mathrm{mean}\ \mu\mathrm{mol}_{\mathrm{tNERs/AA}} \cdot \mathrm{mean\ at\%}_{\mathrm{enrichment}} \quad (1)$$

Therefore, the uncertainty of the mean D or 13 C enrichment in NERs or individual AAs (SD at% enrichment ) was derived considering Gaussian error propagation as follows:

$$\mathrm{SD}_{\mathrm{at\%\ enrichment}} = \sqrt{\mathrm{SD}_{\mathrm{at\%\ labeled}}^{2} + \mathrm{SD}_{\mathrm{at\%\ unlabeled}}^{2}} \quad (2)$$

where SD at% enrichment , SD at% labeled , and SD at% unlabeled are the standard deviations of the mean isotopic (D or 13 C) enrichment, the mean isotopic abundance in the labeled treatment, and the mean isotopic abundance in the unlabeled treatment, respectively. The total uncertainty in the mean tNER or labeled AA contents (SD tNERs/AA ) was calculated as:

$$\mathrm{SD}_{\mathrm{tNERs/AA}} = \mathrm{mean}\ \mu\mathrm{mol}_{\mathrm{tNERs/AA}} \cdot \sqrt{\left(\frac{\mathrm{SD}_{\mathrm{at\%\ enrichment}}}{\mathrm{mean\ at\%}_{\mathrm{enrichment}}}\right)^{2} + \left(\frac{\mathrm{SD}_{\mu\mathrm{mol\ C,\ H\ or\ AA}}}{\mathrm{mean}\ \mu\mathrm{mol}_{\mathrm{C,\ H\ or\ AA}}}\right)^{2}} \quad (3)$$

Here, mean at% enrichment is the mean isotopic enrichment, and mean µmol C, H or AA and SD µmol C, H or AA are, respectively, the mean and standard deviation of the measured total abundance of C, H or the individual AA in the model soil. The total C and H abundances were measured over all sampling days as they were nearly constant, while individual AA abundances were calculated separately for each sampling point. The uncertainty in the tAA abundance was calculated analogously to Eq. by taking the square root of the sum of all squared standard deviations for the individual AAs, which were calculated according to Eq. . Unidentified NERs were calculated from the difference between the total NERs and the total bioNERs (tAAs*2). The standard deviations for total bioNERs presented in the text were estimated by applying the conversion factor of 2 to the standard deviation of the tAA measurement; the true uncertainty may, however, be larger due to the additional uncertainty of the conversion factor.

Statistics

For the statistical analysis, individual replicate values per treatment group were calculated from the mean isotopic abundance in the unlabeled controls and the mean total H, C, or AA contents in the soil as mean abundance C/H/AA × (at% replicate − mean at% unlabeled ). The statistical tests hence only consider the uncertainty of the isotopic abundance in the labeled treatment, whereas the error bars in Figs. – show the propagated uncertainty, including the error of the unlabeled-control isotopic abundance and of the total H, C, or AA abundance. Each treatment group contained three independent measurements of the isotopic enrichment in the labeled treatment from separately processed samples, except for the tAAs and glycine of 3-C–D 2-GLP on day 38, where only two replicates were analyzed due to sample losses. Measured contents of total NERs and tAAs from 2,4-D and SMX were assessed for significant differences between the D and 13 C tracers at each time point using independent-samples, two-tailed t tests. Welch's degrees-of-freedom correction for heteroscedasticity was applied as variance ratios were > 3 in all cases. Although the assessment of normality is challenging for small samples (here n = 3), Welch tests were shown to be fairly robust against deviations from normality even for very small group sizes . Since four different treatments were used for GLP (D vs. 13 C tracer at two labeling positions each), differences in total NER, tAA, and glycine contents at each time point were assessed by Kruskal-Wallis tests. In case of significant main effects, Conover-Iman tests were employed for post-hoc comparisons. Because of unequal variances, normality was assessed based on quantile-quantile plots of the groupwise standardized residuals (Supplementary Figs. – and Supplementary Note ). Due to light-tailed distributions with data gaps, unequal variances, and very small (and, on day 38, unbalanced) group sizes, nonparametric tests were used. P -values obtained from the Welch tests and Conover-Iman tests were corrected for multiple comparisons using the Holm-Bonferroni method of controlling the family-wise error rate. All analyses were carried out with R Statistical Software (v 4.3.3) using a significance level of α = 0.05.
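As an illustration of the pairwise comparisons described above, the following minimal sketch runs Welch's two-tailed t tests with Holm-Bonferroni correction on made-up triplicate values using Python and scipy; it is a stand-in for the R workflow actually used, not a reproduction of it.

```python
from scipy import stats

# Minimal sketch (hypothetical triplicate values, % of applied label): Welch's
# two-tailed t test comparing 13C vs. D NER contents at each sampling day,
# followed by Holm-Bonferroni correction of the p-values. This mirrors the
# described approach in spirit; the published analysis was carried out in R.

data = {
    "day 16": ([20.3, 14.5, 25.2], [5.1, 2.9, 7.9]),   # (13C NERs, D NERs)
    "day 36": ([12.1, 9.8, 19.4], [3.0, 1.2, 5.4]),
}

raw_p = {}
for day, (c13, d) in data.items():
    # equal_var=False requests Welch's correction for unequal variances
    _, p_value = stats.ttest_ind(c13, d, equal_var=False)
    raw_p[day] = p_value

# Holm-Bonferroni step-down: sort ascending, scale by (m - rank), keep monotone
m = len(raw_p)
adjusted, running_max = {}, 0.0
for rank, (day, p) in enumerate(sorted(raw_p.items(), key=lambda kv: kv[1])):
    running_max = max(running_max, min(1.0, (m - rank) * p))
    adjusted[day] = running_max

for day in data:
    print(f"{day}: p_raw = {raw_p[day]:.4f}, p_adjusted = {adjusted[day]:.4f}")
```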
Further details on the employed R packages and software output, including t and Χ 2 statistics, degrees of freedom, and exact p -values, are provided in Supplementary Note .
All chemicals and reagents were of analytical grade and purchased from VWR (Darmstadt, Germany) or Carl Roth (Karlsruhe, Germany) unless otherwise stated. 2- 13 C and 3- 13 C GLP , and 13 C 6-SMX were obtained from Merck (Darmstadt, Germany), while 13 C 6-2,4-D was purchased from Toronto Research Chemicals (Toronto, Canada). All 13 C-labeled compounds had an isotopic purity of 99 at% 13 C and chemical purity > 98%. 2-C–D 2-GLP (99 at% D), D 3-2,4-D (99 at% D), and D 4-SMX (97 at% D), each with a chemical purity > 99%, were purchased from Toronto Research Chemicals (Toronto, Canada). 3-C–D 2-GLP (98 at% D) was obtained from Cambridge Isotope Laboratories Inc. (Andover, USA). Unlabeled 2,4-D, GLP, and SMX of analytical grade as well as D-depleted water (at% 2 H ≤ 1 ppm) were obtained from Sigma Aldrich (Munich, Germany).
The Ap horizon of a Haplic Chernozem was sampled from the long-term agricultural ‘Static Fertilization Experiment’ located at Bad Lauchstädt, Germany. The soil had received various pesticides (including glyphosate and 2,4-D) for over 30 years and had been amended with 30 t manure ha −1 every second year for over 100 years . The silt loam was composed of 21% clay, 68% silt, and 11% sand and had a total organic carbon content of 2.1% and a total nitrogen content of 0.17%. The soil pH was 6.6, and the maximum water-holding capacity (WHC max ) was 37.5% . The soil was sieved to 2 mm, homogenized, and stored at 4 °C at 7% WHC max for 2 months before the experiments.
Two herbicides, GLP and 2,4-D, and one antibiotic, SMX, were selected to compare the NER and bioNER formation between D and 13 C tracers. The 13 C-compounds were labeled with 13 C at similar positions as the D-compound (see Fig. ). The C–D bonds of all deuterated model compounds were also proven to be stable against abiotic cleavage in water (for details see Supplementary Note ).
Soil incubations following OECD guideline 307 were performed in a static system consisting of 250 mL Schott flasks filled with 60 g (wet weight) of the reference soil material. Besides treatment with the D- and 13 C-labeled compounds, two different controls were included: untreated soil and soil spiked with unlabeled compound. Both controls were performed to obtain background isotopic abundances for the NER and bioNER calculations. For each test compound, analogous sterile experiments were conducted to verify whether D- and 13 C-labeled compounds behave identically under abiotic conditions. All treatments were conducted in triplicate, i.e., three separate flasks were incubated per treatment. Prior to the start of the experiments, the soil was oven-dried at 40 °C over multiple days until reaching a constant weight. After thorough manual mixing, the soil for sterile controls was separated into 250 mL Schott flasks and autoclaved three times (121 °C, 40 min) on consecutive days, with the last autoclaving cycle on day 0 of the experiments. The remaining soil was stored in an airtight 2 L bottle, and the moisture content was monitored gravimetrically. To minimize a priming effect on microbial degraders, the soil moisture was adjusted to 20% WHC max approximately four hours before starting the incubations, and the soil was again mixed thoroughly by manual stirring. The soil for biotic treatments was then weighed into 250 mL Schott flasks, and spiking was performed separately for each bottle. To this end, aqueous solutions of the test compounds (prepared on the same day as incubation) were added dropwise to the soil, corresponding to final concentrations of 50 mg kg −1 dry soil for GLP, 20 mg kg −1 dry soil for SMX, and 10 mg kg −1 dry soil for 2,4-D. These concentrations, much higher than typical environmental levels, were selected after prior testing to yield good resolution on the IRMS instruments. Estimated detection limits for 13 C NERs and D NERs of the three model compounds were derived as described in Supplementary Method . After spiking, the soil moisture was adjusted to 60% WHC max to provide optimal growth conditions for microorganisms . Each treatment was then homogenized by manual stirring for two minutes. In the abiotic treatments, the soil moisture was adjusted separately for each bottle immediately after spiking the test compounds onto the dry soil, as no priming effect was expected. Spiking and sampling for the abiotic treatments were conducted under sterile bench conditions. In biotic treatments with the D-compounds and unlabeled controls, D-depleted water was used for the spiking solutions and moisture adjustment in order to lower the background D in EA-IRMS measurements of water-extractable D on day 0 (Supplementary Table in Supplementary Note ). The soil incubations were conducted in the dark at 20 °C to prevent photodegradation of the model chemicals. Soil samples were taken from the same bottles on days 4, 16/18, and 36/38 for GLP and 2,4-D and on days 18, 36, and 72 for SMX because of its slower turnover. Each time, ten subsamples of roughly 0.5–2 g were taken carefully from different spots within the soil batch to prevent soil disturbance. The sampling did not cause any noticeable disturbance of soil microbial activity, as soil respiration showed a continuous decrease of CO 2 towards the end of the incubation periods (Supplementary Fig. in Supplementary Note ).
The soil was then pooled into one composite sample in a 50 mL Falcon tube (day 0: 6 g total, afterward: 18 g total), homogenized by stirring with a spatula and stored at − 20 °C until analysis. During spiking and sampling, treatments were handled in the same order to account for the required processing time.
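The following is a minimal, hypothetical Python sketch of the moisture and spiking arithmetic implied by the values above (WHC max = 37.5%, 60 g wet soil at 20% WHC max adjusted to 60% WHC max, 50/10 mg kg −1 dry soil spikes). It assumes that WHC max and the %WHC max values are expressed per gram of dry soil, which is not stated explicitly in the text, so treat the numbers as illustrative only.

```python
# Illustrative sketch; the per-dry-mass basis of WHCmax is an assumption.
WHC_MAX = 0.375          # g water per g dry soil at maximum water-holding capacity

def dry_mass(wet_g, whc_fraction):
    """Dry-soil mass contained in a wet aliquot equilibrated at a given %WHCmax."""
    return wet_g / (1.0 + WHC_MAX * whc_fraction)

def water_to_add(dry_g, current_fraction, target_fraction):
    """Grams of water needed to move the soil from current to target %WHCmax."""
    return dry_g * WHC_MAX * (target_fraction - current_fraction)

def spike_amount_ug(dry_g, conc_mg_per_kg):
    """Spike mass in µg: mg kg-1 dry soil is numerically equal to µg g-1 dry soil."""
    return conc_mg_per_kg * dry_g

d = dry_mass(60.0, 0.20)   # roughly 55.8 g dry soil in the 60 g wet aliquot
print(round(water_to_add(d, 0.20, 0.60), 1), "g water to reach 60% WHCmax")
print(round(spike_amount_ug(d, 50)), "µg GLP;", round(spike_amount_ug(d, 10)), "µg 2,4-D")
```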
13 C- and D-labeled NERs Total NERs were quantified based on the abundance of D or 13 C remaining in the soil after the ‘soft’ extraction of model compounds and their transformation products by shaking with water-methanol (2,4-D), water-borate buffer (GLP) and water-dichloromethane (SMX) as detailed in Supplementary Note . Soil samples were air-dried over multiple days after the extraction and ground for homogenization. A 2–4 mg aliquot was then weighed into 3.5 × 5 mm tin cartridges (HEKAtech) for analysis by EA-high-temperature conversion-IRMS (D analysis) or EA-combustion-IRMS ( 13 C analysis). The total H and D content and isotopic enrichment of D (at% D/ 1 H) of the D NERs were measured on an EA (EuroEA3000, Euro Vector, Milan, Italy) directly connected via an open split system (ConFlo IV, Thermo Fisher Scientific, Germany) to IRMS (MAT 253 Thermo Fisher Scientific, Germany). The total amount of C and the 13 C/ 12 C isotope ratio of the 13 C NERs were determined using a Flash EA 2000 coupled to a Conflo IV interface and a Delta Advantage mass spectrometer (Finnigan MAT 253, Thermo Scientific, Bremen, Germany). The temperature of the oxidation reactor was 1020 °C, whereas that of the reduction reactor was 650 °C . The isotopic enrichment of D or 13 C in NERs was calculated as excess over the unlabeled control. The detection and quantification limits were estimated at around 3–5% of applied D and 4–5% of applied 13 C (Supplementary Table & in Supplementary Method ).
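As a small aside to the IRMS measurements described above, the conversion between a heavy/light isotope ratio and atom percent, and the calculation of enrichment as excess over the unlabeled control, can be sketched as below. This is not the authors' code; the VPDB 13C/12C value is an assumed, commonly cited reference ratio and the example numbers are hypothetical.

```python
R_VPDB_13C = 0.011180      # approximate 13C/12C ratio of the VPDB reference (assumed value)

def atom_percent(heavy_to_light_ratio):
    """Convert a heavy/light isotope ratio (13C/12C or D/1H) to atom percent."""
    return 100.0 * heavy_to_light_ratio / (1.0 + heavy_to_light_ratio)

def at_percent_excess(at_labeled, at_unlabeled):
    """Isotopic enrichment expressed as excess over the unlabeled control."""
    return at_labeled - at_unlabeled

print(round(atom_percent(R_VPDB_13C), 3))          # natural-abundance 13C is about 1.1 at%
print(round(at_percent_excess(1.48, 1.11), 2))     # hypothetical labeled sample: 0.37 at% excess
```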
13 C- and D-labeled amino acids (tAAs) and total 13 C- and D-bioNERs Total amino acids (tAAs) were extracted from the soil as quantitative and qualitative markers for bioNERs. The tAAs from living and decayed biomass are the most reliable quantitative biomarkers for bioNERs as their turnover is comparably slow , . The tAA analysis followed the extraction, purification, and derivatization protocol by Nowak et al. , . Briefly, for tAA extraction, a 2 g soil sample was hydrolyzed with 6 M HCl at 105 °C for 22 h and purified over cation resin (Dowex 50W-X8; 50–100 mesh) solid-phase extraction columns. Impurities were removed by consecutive washing with 2.5 M oxalic acid, 0.01 M HCl, and de-ionized water before eluting the AAs with 2.5 M NH 4 OH. The purified extracts were derivatized by iso-propylation of carboxyl groups and trifluoro-acetylation of amino groups before measurement with gas chromatography-mass spectrometry (GC-MS, HP 6890, Agilent) operating with a BPX-5 column (30 m × 0.25 mm × 0.25 μm; SGE International, Darmstadt, Germany). Individual AAs were identified from an external AA standard containing alanine, glycine, threonine, serine, valine, leucine, isoleucine, proline, aspartate, glutamate, phenylalanine, tyrosine and lysine. Quantification was based on two internal standards, L-norleucine added after the hydrolysis and 4-aminomethylcyclohexanecarboxylic acid added before derivatization. The recovery of AAs from soil hydrolysates using this purification method was 97 ± 11% . Isotopic enrichment in tAAs was measured with gas chromatography-isotope ratio-mass spectrometry (GC-IRMS), using a trace 1310 GC system connected via a GC-IsoLink and a ConFlo IV interface to a MAT 253 (all Thermo Fisher Scientific, Bremen, Germany). Sample separation was achieved with a BPX-5 column (30 m × 0.32 mm × 0.5 µm; Agilent Technology) using helium as a carrier gas. Details on the chromatographic analyses with GC-MS and GC-IRMS are provided in Supplementary Method . The total abundance of different AAs was measured by GC-MS, and the isotopic enrichment (at% D/ 1 H or at% 13 C/ 12 C) in the respective molecule was determined by GC-IRMS after correcting for the isotopic shift during derivatization (Supplementary Method ). Isotope label integration from the labeled compounds was calculated by subtracting the isotopic enrichment in unlabeled control samples. Because the conditions of hydrolysis with 6 M HCl were harsh, the D integrated into the C–D bonds of tAAs must have been non-exchangeable, in contrast to D bound to O, N, or S, which is generally considered to be exchangeable . This was proven by the comparable amounts of 13 C glycine and D glycine for the 2-C-labeling position of GLP, as shown in Supplementary Fig. . The estimated LOD for individual AAs ranged between 0.07–2.4% of applied 13 C and 0.01–3.1% of applied D for 2,4-D and GLP (Supplementary Table , in Supplementary Method ).
D or 13 C enrichment in NERs and tAAs was calculated as the percentage of D or 13 C initially applied with the labeled compounds, and values are presented as mean ± standard deviation. The detailed calculation of 13 C- and D-label incorporation into total NERs and tAAs is explained in Supplementary Method . The mean isotopic enrichment per treatment group (mean enrichment tNERs/AA ) was calculated as the product of the mean total element or AA abundance in soil (mean µmol tNERs/AA ) and the difference between the mean isotopic abundance in the labeled treatment (mean at% labeled ) and the unlabeled control:
$$\text{mean enrichment}_{\text{tNERs/AA}} = \text{mean µmol}_{\text{tNERs/AA}} \cdot \left(\text{mean at\%}_{\text{labeled}} - \text{mean at\%}_{\text{unlabeled}}\right) = \text{mean µmol}_{\text{tNERs/AA}} \cdot \text{mean at\%}_{\text{enrichment}} \quad (1)$$
Therefore, the uncertainty of the mean D or 13 C enrichment in NERs or individual AAs (SD at% enrichment ) was derived considering Gaussian error propagation as follows:
$$\text{SD}_{\text{at\% enrichment}} = \sqrt{\text{SD}_{\text{at\% labeled}}^{2} + \text{SD}_{\text{at\% unlabeled}}^{2}} \quad (2)$$
Where SD at% enrichment , SD at% labeled , and SD at% unlabeled are the standard deviations of the mean isotopic (D or 13 C) enrichment, mean isotopic abundance in the labeled treatment, and mean isotopic abundance in the unlabeled treatment, respectively. The total uncertainty in the mean tNERs or labeled AA contents (SD tNERs/AA ) was calculated as:
$$\text{SD}_{\text{tNERs/AA}} = \text{mean µmol}_{\text{tNERs/AA}} \cdot \sqrt{\left(\frac{\text{SD}_{\text{at\% enrichment}}}{\text{mean at\%}_{\text{enrichment}}}\right)^{2} + \left(\frac{\text{SD}_{\text{µmol C, H or AA}}}{\text{mean µmol}_{\text{C, H or AA}}}\right)^{2}} \quad (3)$$
Here, mean at% enrichment is the mean isotopic enrichment, and mean µmol C, H or AA and SD µmol C, H or AA are, respectively, the mean and standard deviation of the measured total abundance of C, H or the individual AA in the model soil. The total C and H abundances were measured over all sampling days as they were nearly constant, while individual AA abundances were calculated separately for each sampling point. The uncertainty in the tAA abundance was calculated analogously to Eq. by taking the square root of the sum of all squared standard deviations for individual AAs that were calculated according to Eq. . Unidentified NERs were calculated from the difference between the total NERs and total bioNERs (tAAs*2). The standard deviations for total bioNERs presented in the text were estimated by applying the conversion factor of 2 to the standard deviation of the tAA measurement; however, it may actually be larger due to the additional uncertainty of the conversion factor.
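A minimal Python sketch of Eqs. (1)–(3), written exactly as stated above, is given below. It is not the authors' code (the published analyses were carried out in R), and the numeric inputs are hypothetical placeholders only meant to show the call pattern.

```python
import math

def mean_enrichment(mean_umol, at_labeled, at_unlabeled):
    """Eq. (1): total element/AA abundance times the at% excess over the control."""
    return mean_umol * (at_labeled - at_unlabeled)

def sd_at_enrichment(sd_at_labeled, sd_at_unlabeled):
    """Eq. (2): Gaussian propagation of the two at% standard deviations."""
    return math.sqrt(sd_at_labeled ** 2 + sd_at_unlabeled ** 2)

def sd_tners_aa(mean_umol, sd_umol, at_enrichment, sd_at_enrich):
    """Eq. (3) as written in the text: relative errors of the at% enrichment and of
    the total C/H/AA abundance combined in quadrature."""
    rel = math.sqrt((sd_at_enrich / at_enrichment) ** 2 + (sd_umol / mean_umol) ** 2)
    return mean_umol * rel

# Hypothetical values: 500 µmol element, 1.48 at% labeled vs 1.11 at% unlabeled.
enr = mean_enrichment(500.0, 1.48, 1.11)
sd_at = sd_at_enrichment(0.02, 0.01)
print(round(enr, 1), round(sd_at, 4), round(sd_tners_aa(500.0, 15.0, 0.37, sd_at), 2))
```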
For statistical analysis, individual replicate values per treatment group were calculated from the mean isotopic abundance in unlabeled controls and the mean total H, C, or AA contents in the soil as mean abundance C/H/AA × (at% replicate − mean at% unlabeled ). Statistical tests hence only consider the uncertainty of the isotopic abundance in the labeled treatment, whereas error bars in Figs. – show the propagated uncertainty, including the error of the unlabeled control isotopic abundance and of the total H, C, or AA abundance. Each treatment group contained three independent measurements of the isotopic enrichment in the labeled treatment from separately processed samples, except for tAAs and glycine of 3-C–D 2-GLP on day 38, where only two replicates were analyzed due to sample losses. Measured contents of total NERs and tAAs from 2,4-D and SMX were assessed for significant differences between D and 13 C tracers at each time point using independent-samples, two-tailed t tests. Welch's degrees of freedom correction for heteroscedasticity was applied, as variance ratios were > 3 in all cases. Although assessment of normality is challenging for small samples (here n = 3), Welch tests have been shown to be fairly robust against deviations from normality even for very small group sizes . Since four different treatments were used for GLP (D vs. 13 C tracer, each with two different labeling positions), differences in total NER, tAA, and glycine contents at each time point were assessed by Kruskal-Wallis tests. In the case of significant main effects, Conover-Iman tests were employed for post-hoc comparison. Because of unequal variances, normality was assessed based on quantile-quantile plots of the groupwise standardized residuals (Supplementary Figs. – and Supplementary Note ). Due to the light-tailed distributions with data gaps, unequal variances, and very small (and, on day 38, additionally unbalanced) group sizes, nonparametric tests were used. P -values obtained from Welch tests and Conover-Iman tests were corrected for multiple comparisons using the Holm-Bonferroni method of controlling the family-wise error rate. All analyses were carried out with R Statistical Software (v 4.3.3) using a significance level of α = 0.05. Further details on the employed R packages and software output, including t and Χ 2 statistics, degrees of freedom, and exact p -values, are provided in Supplementary Note .
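For illustration, the same kind of testing and multiplicity correction can be sketched in Python with SciPy and statsmodels; this is not the R workflow used in the study, the data below are random placeholders, and the Conover-Iman post-hoc step (available, for example, in third-party post-hoc packages) is omitted.

```python
import numpy as np
from scipy.stats import ttest_ind, kruskal
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
d_tracer = rng.normal(40, 4, size=3)     # hypothetical NER contents (% of applied), n = 3
c13_tracer = rng.normal(46, 2, size=3)

# Welch's two-tailed t test (unequal variances), as used for 2,4-D and SMX
t_stat, p_welch = ttest_ind(d_tracer, c13_tracer, equal_var=False)

# Kruskal-Wallis across the four GLP treatments (values again hypothetical)
glp_groups = [rng.normal(m, 3, size=3) for m in (35, 38, 44, 47)]
h_stat, p_kw = kruskal(*glp_groups)

# Holm-Bonferroni control of the family-wise error rate over a set of comparisons
reject, p_adjusted, _, _ = multipletests([p_welch, p_kw, 0.03], method="holm")
print(round(t_stat, 2), round(p_welch, 3), round(h_stat, 2), round(p_kw, 3), p_adjusted)
```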
Supplementary Information Peer Review File
|
Comparisons of deep learning algorithms for diagnosing bacterial keratitis via external eye photographs | 5119f6ac-9929-40a2-b266-85d9e30ee0c9 | 8688438 | Ophthalmology[mh] | Infectious keratitis (IK) is a severe corneal infection that is categorized into viral keratitis (VK), bacterial keratitis (BK), fungal keratitis (FK), and parasitic keratitis (PK) . BK is one of the most common and vision-threatening types of IK , . The most common risk factor for BK is contact lens wear, which is increasingly popular worldwide for purposes such as exercise, cosmesis, and myopia control . Compared with other IKs, BK follows a much more fulminant and painful clinical course. A delayed diagnosis of BK can lead to large-area corneal ulceration, melting, and even perforation. Thus, prompt diagnosis and treatment of BK are critical objectives in the management of IK. However, the supply of ophthalmologists in many rural settings does not meet the demand for rapid BK diagnosis. Convolutional neural networks (CNNs) have been demonstrated to be highly effective deep learning (DL) architectures for image classification , . Following the rapid development of DL algorithms, artificial intelligence (AI) based on image recognition could provide patients with eye pain with a primary screening for BK. Several highly effective DL algorithms, including ResNet , DenseNet , ResNeXt , SENet , and EfficientNet , have the potential to support models for image-based diagnosis of BK and have been demonstrated to be effective in several medical applications – . ResNet brought a breakthrough in deep CNNs for image processing . It proposed the residual block, which can be seen as a set of layers; inside each block, additional connections skip one or more layers like shortcuts that perform an identity transformation. The residual block design makes it possible to build deeper networks without facing the degradation problem, in which deeper structures have been observed to produce higher training errors than shallower, saturated ones. DenseNet is a representative CNN-based method that requires fewer computations and is more effective than ResNet . In DenseNet, the dense blocks can be thought of as an enhanced version of the residual block: instead of one shortcut per block, all layers within a block are connected directly with each other in a feedforward manner. Moreover, DenseNet combines the feature maps learned by different layers through concatenation, increasing the input variation of subsequent layers and improving efficiency. ResNeXt was proposed based on the concept of ResNet . It exploits a split-transform-merge strategy, splitting a module into multiple low-dimensional branches whose transformations are aggregated by summation at the output. For comparison, the shortcut connection in ResNet can be viewed as a two-branch network in which one branch is the identity mapping. This strategy exposes a new factor, cardinality, alongside depth and width, and supports building an effective multi-branch structure while maintaining computational complexity. SENet used the channel-attention idea to make DL models learn the crucial channels during training. Channel attention means that the model considers the relationships between the channels inside the CNN and assigns greater attention, or heavier weights, to the crucial channels learned during training. Moreover, it can be applied to many existing DL methods to boost their performance, such as SE-ResNet .
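To make the residual-shortcut and channel-attention ideas described above concrete, here is a minimal PyTorch sketch of a residual block with an optional squeeze-and-excitation (SE) gate. It is a generic illustration, not the exact blocks used in the published SE-ResNet50 backbone; block widths and the reduction ratio are placeholder choices.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pool, bottleneck MLP, sigmoid channel gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                       # reweight channels by learned importance

class SEResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut, optionally gated by SE."""
    def __init__(self, channels, use_se=True):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.se = SEBlock(channels) if use_se else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.se(self.bn2(self.conv2(out)))
        return self.relu(out + x)                # shortcut: identity mapping

x = torch.randn(2, 64, 56, 56)
print(SEResidualBlock(64)(x).shape)              # torch.Size([2, 64, 56, 56])
```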
EfficientNet was developed using neural architecture search, which was employed to find a baseline architecture (EfficientNet B0) optimized for both accuracy and computation cost. The baseline network was then scaled to generate the other EfficientNets (B1 up to B7) by a compound scaling method that uses a compound coefficient to uniformly scale the network depth, width, and resolution for better performance . Recently, two studies demonstrated DL models based on external eye photos with excellent performance in diagnosing BK , . However, one adopted two kinds of images (external eye photos and fluorescence staining photos) and processed these photos with an image segmentation technique , while the other applied a specific image transformation technique before running the DL diagnostic model . In this study, we aimed to evaluate the true performance of different DL models in diagnosing BK from a single external eye photo. Thus, this study compared DL models at the image level, using a single external eye photo without additional preprocessing techniques such as image transformation or segmentation.
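The compound scaling mentioned above can be illustrated with a few lines of arithmetic. The coefficients below (depth α = 1.2, width β = 1.1, resolution γ = 1.15, with α·β²·γ² ≈ 2) are the values reported in the original EfficientNet paper; the published B1–B7 configurations also adjust dropout and input resolution beyond this simple rule, so this is only a sketch of the idea.

```python
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15   # depth, width, resolution base coefficients

def compound_scale(phi: int):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):                  # roughly corresponds to B0..B3 used in this study
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```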
Study design & subjects We collected external eye photos and reviewed medical records from patients with clinically suspected IK who presented to five Chang Gung Memorial Hospital (CGMH) branches from June 1, 2007 to May 31, 2019. According to the individual standard procedures in CGMH branches, external eye photography was performed by certified ophthalmic technicians using a camera-mounted slit lamp biomicroscope. One photo using white light illumination (no enhancing slit beam) was collected for each patient in the following experiments. The study was approved by the Chang Gung Medical Foundation Institutional Review Board (Ethical approval code: 201901255B0C601) and adhered to the ARVO statement on human subjects and the Declaration of Helsinki. The Chang Gung Medical Foundation Institutional Review Board waived the need for informed consent for patients in this study based on a retrospective design and the privacy protection via delinking personal identification at image and data analysis. The definition of IK from the enrolled patients must meet one of the following criteria: (1) at least one of the following laboratory confirmations, including direct microscopy (Gram or acid-fast stain), culture (blood agar, chocolate agar, Sabouraud dextrose agar, or Löwenstein–Jensen slant) and molecular tests (polymerase chain reaction, or dot hybridization assay) for corneal scraping samples, and pathological examination for corneal biopsy samples – , (2) three experienced corneal specialists (≥ 8 years of qualification in the specialty) made a consensus impression of one specific kind of IK for the same case. The subject was excluded if (1) mixed infections or contaminated organisms such as Staphylococcus epidermidis or Micrococcus spp. were reported by laboratory tests and (2) three experienced corneal specialists could not reach a consensus impression. Via disease code tracking, a total of 1985 photos from 1985 clinically suspected IK patients were initially included, while only 1512 photos from 1512 patients were enrolled after exclusion. Image preprocessing of subjects’ external eye photos The procedure of image preprocessing was similar to our previous report . In brief, the date of photography and identification information footnoted in the photo were pre-cut with a batch processing manner with a specially designed software automatically. The input images were uniformly resized to 224 × 224 pixels, which is a standard-setting for deep learning methods. Each pixel’s RGB values of a photograph were normalized in a range from 0 to 1. Establishment of different DL-based diagnostic models of BK The framework shown in Fig. was the newly established DL models for diagnosing BK via an external eye photo in this study. The training images were used to train a DL model for differentiating BK from non-BK photos (Fig. ), whereas the validation images were used to test the performance of a trained model. After the randomization, each diagnostic model was trained with the respective DL algorithm toward the target with the greatest area under the receiver operating characteristic curve (AUROC). To generate the optimal model, we empirically tuned the hyperparameters of each model, including learning rate, the number of dense blocks, growth rate, and batch size according to the validation results. The Grad-CAM++ was applied for a visual explanation of these DL models . The models were implemented in PyTorch, and all the experiments were performed on NVIDIA GeForce RTX 1080 GPUs. 
Diagnostic validation Five-fold cross-validation was adopted to determine the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of each DL diagnostic model. In brief, the photos were classified as BK group (n = 929) and non-BK group (n = 583), which included FK (n = 383), VK (n = 128), and PK (n = 72). The photos of each group were randomly and equally assigned into five datasets (stratified fivefold cross-validation). There were 185–186 photos of BK & 116–117 photos of non-BK in each dataset. Four of the five datasets were used to train a DL diagnostic model, and the residual one was used to validate the model. Thus, there were five rounds of experiments for the performance validation of DL models. Statistical analysis The average sensitivity, specificity, PPV, NPV, and accuracy of diagnosing BK were compared for different DL models. The 95% Wilson/Brown binomial confidence intervals for the above indices were estimated. The Fisher’s exact test was used for pairwise comparison of the performance index between two different DL models. Moreover, AUROC was alternatively used to compare the performance of various DL models, and the Z score test determined the statistical difference between any two models. A significant difference was set at P < 0.05 and analyzed by GraphPad Prism version 9.2.0 for Windows (GraphPad Software, San Diego, CA).
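The stratified five-fold validation and the per-fold metrics (sensitivity, specificity, PPV, NPV, accuracy, AUROC) can be sketched as follows. The scores below are synthetic stand-ins for model outputs on the 929 BK / 583 non-BK photos; a real run would train a model on each training split and score the held-out fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auroc": roc_auc_score(y_true, y_score),
    }

rng = np.random.default_rng(0)
y = np.array([1] * 929 + [0] * 583)                          # BK vs non-BK labels
scores = np.clip(rng.normal(0.4 + 0.25 * y, 0.25), 0, 1)     # placeholder model scores

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for _, test_idx in cv.split(scores, y):
    fold = binary_metrics(y[test_idx], (scores[test_idx] >= 0.5).astype(int), scores[test_idx])
    print({k: round(v, 3) for k, v in fold.items()})
```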
Performance of the non-EfficientNet DL models for diagnosing BK The average performances over the five cross-validations of the four non-EfficientNet DL models are shown in Table . Among the four DL models, SE-ResNet50 revealed the highest sensitivity (82.4%), PPV (74.4%), NPV (66.5%), and accuracy (71.7%), while ResNeXt50 showed the highest specificity (55.1%). However, none of the performance indices for diagnosing BK differed significantly between any two of the four non-EfficientNet DL models. Performance of the EfficientNet DL models for identifying BK The average diagnostic performances of the four EfficientNet DL models are shown in Table . Among the four DL models, EfficientNet B0 had the highest sensitivity (74.4%), whereas EfficientNet B3 had the highest specificity (64.3%) and PPV (76.8%). EfficientNets B1 and B3 shared the highest NPV (61.1%) and accuracy (70.3%). However, there was no significant performance difference between any two of the four EfficientNet DL models in diagnosing BK. Comparing the EfficientNet models with the non-EfficientNet models in diagnosing BK When comparing the four non-EfficientNet models (Table ) and the EfficientNet models (Table ), all non-EfficientNet models had significantly higher sensitivity than the EfficientNet models (Fig. a). In contrast, all EfficientNet models had significantly higher specificity than the non-EfficientNet models, except for EfficientNet B0, which did not reach significance when compared with ResNeXt50 or SE-ResNet50 (Fig. b). EfficientNets B1, B2, and B3 had significantly higher PPV than ResNet50 (Fig. c), whereas ResNet50 had significantly higher NPV than EfficientNets B0 and B2 (Fig. d). The accuracy and AUROC summarize the above performance indices in diagnosing BK. We found no significant difference among the non-EfficientNet and EfficientNet models in diagnostic accuracy (range 68.8%–71.7%; Fig. a) or AUROC (range 73.4%–76.5%; Fig. b). The receiver operating characteristic curves of the fivefold cross-validation for the four models with the greatest AUROCs (SE-ResNet50, DenseNet121, EfficientNets B1, and B3) are shown in Fig. .
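The pairwise comparisons reported above used Fisher's exact test on the per-model counts. A hedged sketch of such a comparison is shown below; the hit counts are hypothetical reconstructions from the reported percentages (about 82.4% vs 74.4% sensitivity on the same 929 BK images) and are used only to show the call pattern.

```python
from scipy.stats import fisher_exact

# Hypothetical counts derived from reported percentages; not the study's raw data.
hits_a, hits_b, n_bk = 766, 691, 929           # ~82.4% and ~74.4% of 929 BK images
table = [[hits_a, n_bk - hits_a],
         [hits_b, n_bk - hits_b]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```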
BK is the most common IK in subtropical regions , and is a principal cause of corneal scar leading to visual loss worldwide . Recently, some authors reported that DL-based image diagnosis had an excellent diagnostic rate for BK , , . Their results showed that different DL algorithms possessed diverse diagnostic performance, in which DenseNet and ResNet50 revealed the best performance , , . However, these researchers adopted different niches to promote their own DL models in diagnosing BK, which made the actual performance of DL models incomparable among these studies. Therefore, we compared the potential DL algorithms via an external eye photo under the same verification setting without adding a fluorescence staining photo, performing image segmentation, or transforming the images before running a DL model for diagnosing BK. This study found non-EfficientNet models (ResNet50, ResNeXt50, DenseNet121, SE-ResNet50) were more sensitive than EfficientNet models (EfficientNets B0, B1, B2, and B3), while EfficientNet models were more specific than non-EfficientNet models. All the above models had comparable accuracy and AUROC. In this study, not all IK were confirmed by laboratory tests. The confirmation rate of BK, FK, VK, and PK was 54%, 68%, 23%, and 47%, respectively. According to their clinical presentations and treatment histories, some subjects were diagnosed unanimously by the three corneal experts. Most BK and FK patients had typical presentations for these subjects, and they were treated early and successfully with empirical regimens, making laboratory tests unnecessary or unrecovered. Most VK patients were herpes keratitis, and most PK subjects were microsporidial keratitis. The epithelial type of herpes and microsporidia keratitis was usually diagnosed by the pathognomonic signs and response to treatment. Therefore, we must incorporate experts’ consensus diagnosis as the supplementary diagnostic standard to include these typical subjects. Unavoidably, there may be few subjects inherently misclassified into other kinds of IK. However, many DL models adopted experts’ diagnosis or grading as a gold standard – , and a DL system learning from experts’ impressions may decrease the interference from atypical presentations of some IK subjects. Thus, we ultimately decided to include the subjects with consensus diagnosis from three experts in this study. The photographic diagnosis of BK via ophthalmologists was reported with 66–75% sensitivity and 68–90% specificity , . In our study, the sensitivity and specificity of the four non-EfficientNet models were 79–82% and 50–55%, respectively. The sensitivity and specificity of the EfficientNet models were 73–74% and 60–64%, respectively. These DL models had higher sensitivity but lower specificity than those of the ophthalmologists in the image diagnosis of BK. Redd et al. adopted a pre-trained ResNet50 model and used 70 test images, and they found the sensitivity and specificity in diagnosing BK was about 70% and 80%, respectively . Under the same image-level as our study, which had no assistant processing such as segmentation, the accuracies of DL models VGG-16, GoogLeNet-v3, and DenseNet were 48.8%, 53.4%, and 60.5%, respectively . Hung et al. did not show the data but mentioned nearly a 70% accuracy in diagnosing BK via a DenseNet model with a combination of two image types (external eye photos and fluorescence staining photos) under no image segmentation processing . 
In our study, we found the accuracy of candidate models was 68.8–71.7% (Tables and ), which was higher than the best result (60.5%) of Xu et al. in the same image level . We further adopted AUROC as a performance index to compare these potential DL algorithms in diagnosing BK (Fig. b). The top four models based on AUROC were EfficientNet B3, EfficientNet B1, SE-ResNet50, and DenseNet121 in order. SE-ResNet incorporates the channel-attention operation of SENet to booster ResNet by focusing on learning the crucial channels , and DenseNet can lessen the vanishing-gradient problem, fortify feature propagation, encourage feature reuse, and considerably reduce the number of parameters . EfficientNet architectures are upgraded from baseline B0 (lowest computation cost) to highest B7 (highest computation cost with theoretically best accuracy) . Due to the limitation of our computation resource, this study used EfficientNet-B0 to B3 models. Therefore, we can expect a growing performance in diagnosing BK via an external eye photo to be achieved by incorporating different DL models or introducing a more effective DL model. In addition to introducing more powerful DL algorithms, there are other potential methods for promoting a DL model’s performance in diagnosing BK via external eye images. Xu et al. introduced a sequential-level feature learning model by annotating the centroid of the lesion to build a minimum circumscribed area and then partitioning the scaling up circular rings for training . They found that this approach promotes the accuracy of diagnosing BK from 60.5% (image level via DenseNet) to 75.3% (sequential level via random-ordered patches). Hung et al. reported their DL diagnostic system for BK achieved 80% to 96% accuracy . They adopted two images (external eye photos and fluorescence staining photos) and developed an additional image segmentation model for obtaining a cropped cornea before running a DL model. In addition, they narrowed the classification targets, which included only images of BK and FK for identification. Redd et al. adopted a specially designed portable digital camera with a pre-trained ResNet50 model . They used 70 photos (50% were BK and 50% were FK) for testing and found that the diagnostic accuracy for BK was 76%. This study implied pictures from the same photographic system might promote the performance of a DL model. All of the above approaches showed that the image-based DL technology is an up-and-coming tool for diagnosing BK. In this study, we used Grad-CAM++ to generate the heat maps for explaining the results from DL models. In Fig. a, although all DL models correctly classified a BK image, the distribution of the heat maps demonstrated that EfficientNets were more focused on the lesions. The other models may have focused on not only the correct regions but also loci that were out of the lesions. In Fig. b, the classification results of a non-BK image showed that most models focused on the lesions, though ResNet50 and ResNeXt50 covered a more extensive range. In conclusion, it is practical to promote the performance of an AI system for image diagnosis of BK via adopting a robust DL algorithm. SE-ResNet, DenseNet121, EfficientNets B1 and B3 possessed the greatest AUROC, in which DenseNet was recognized as the best DL algorithm in diagnosing not only BK but also FK , , . 
We believe the requirement of an additional fluorescence staining photo, sophisticated image segmentation or transformation, and a specially designed camera will be gradually decreased by introducing a more effective DL model in diagnosing BK based solely on an external eye image. This approach may be more practical and useful in clinical settings without ophthalmological medical personnel.
|
null | 879ffaab-55cb-4d8e-8b25-36d5272db87b | 11837533 | Microbiology[mh] | Fatty acids (FAs) are key components of various lipid types and play important biological roles in life, such as serving as an energy resource, participating in signaling pathways, and stabilizing cell membranes . They are incorporated into lipid structures through ester bonds and can be classified into different groups based on their carbon chain length. Furthermore, they can be classified on their degree of saturation and the location and orientation of double bonds . Polyunsaturated fatty acids (PUFAs), especially omega-3 and omega-6 PUFAs, are essential fatty acids for all higher organisms and have diverse functions, including building the lipid bilayer of plasma membranes . The degree of saturation of the lipid bilayer determines its permeability and fluidity, which can influence cell integrity and metabolism for energy production, growth, and replication, particularly in response to environmental stressors like pressure, temperature, or oxidative stress . Omega-3 ( ω 3) fatty acids have anti-inflammatory functions and oppose the pro-inflammatory effects mediated by omega-6 ( ω 6) fatty acids. The omega-3 and omega-6 PUFAs, especially eicosapentaenoic acid (EPA), arachidonic acid (ARA), and docosahexaenoic acid (DHA), are physiologically important for broad precursors for eicosanoid secondary metabolites . The production of eicosanoids is critical for insects and annelids to modulate physiological processes and immunity . De novo PUFA synthesis by microorganisms PUFAs are synthesized by two known pathways. The aerobic pathway involves elongation and oxygenic desaturation steps which are catalyzed by specific desaturases and elongases until the final PUFA length and unsaturation is reached . The anaerobic pathway synthesizes PUFAs via a multifunctional mega-enzyme, and double bonds are not introduced through oxygenic desaturation but rather by a PUFA synthase . This process, which was fully elucidated in 2001 , is encoded by a type I polyketide-like fatty acid synthase consisting of canonical fatty acid synthase domains , situated among either three genes (protozoa and myxobacteria) or five genes (marine bacteria). While the aerobic pathway is used by higher animals, lower plants, and eukaryotic microorganisms, the anaerobic pathway is limited to microorganisms such as fungi, algae, and bacteria . The final fatty acid product that can be produced depends on the types of elongases and desaturases encoded in the genome and the selective conditions of their activity. A description of the enzymatic pathway and evolutionary considerations of organismal production patterns have been covered in detail, readers are especially referred to references and . To summarize, there is a widespread presence of aerobic PUFA synthase genes among invertebrate taxa, including members of Nematoda, Cnidaria, Rotifera, Mollusca, Arthropoda, and Annelida. These genes putatively confer de novo synthesis capacity, and their functional characterization gives further evidence that their presence is part of an evolutionarily conserved adaptation to meet physiological needs for certain long-chain PUFAs. Even so, major groups of invertebrate producers experience a fitness cost when dietarily preformed sources of PUFAs are absent and exhibit a dramatic reduction in growth and reproduction , suggesting that outsourcing PUFA production to either primary or more basal producers is preferred. 
Earthworms, as part of Annelida, may contain the enzymatic repertoire in their genome to produce long-chain PUFAs , though this has not been specifically explored in Lumbricidae. Terrestrial and freshwater vertebrates are physiologically constrained by the limited availability of long-chain PUFAs and cope by either adjusting dietary foraging behaviors or through adaptive radiations to increase synthesis capacity encoded in the genome . For example, mammals lack Δ12- and Δ15-desaturases, preventing the potential to synthesize linoleic and alpha-linoleic acid, which are precursors to longer-chain FAs . Therefore, EPA, ARA, and DHA are considered essential in the mammalian diet. However, PUFAs accumulate at higher trophic levels, starting from microalgae and bacteria as primary producers . Marine bacteria and microalgae use PUFAs in membrane lipids to maintain sufficient membrane fluidity in high-pressure and low-temperature environments . Linking microbial activity to the food web As PUFAs are important to human and animal development, physiology, and health, it is crucial to better understand PUFA producers and their role in the food web. This knowledge is particularly valuable to the biochemical engineering industry, as it aims to develop biosynthetic methods for PUFA production . In addition, there is a significant goal to increase PUFA yield from natural microorganisms, reducing dependence on marine resources and promoting sustainable production . However, the supply of PUFAs is linked with aquatic organisms, and there is little evidence or expectation for a de-novo PUFA producer in terrestrial habitats, leaving open the question as to how these habitats are adequately supplied. Therefore, understanding how PUFAs are produced in terrestrial ecosystems and supplied to terrestrial food webs has cross-disciplinary interest from fields such as human evolution, ecology, and microbiology. The bacterial contribution to this effect has received relatively far less attention than plant and eukaryotic producer systems, yet there is promising genetic and functional evidence that soil bacteria are capable PUFA producers . One unique study focused on the earthworm Lumbricus terrestris and revealed increasing concentrations of PUFAs, including ARA, EPA, and DHA, from the surrounding soil to the intestinal contents—referred to hereafter as “gut soil”—of L. terrestris . The authors concluded that a variety of unique fatty acids, including the long-chain PUFAs, are produced by microorganisms inside of the earthworm gut, and that these lipids are then transferred from the intestine to the luminal cells and then finally into muscle tissue. The long-chain PUFAs were hypothesized to be used for energy needs of the L. terrestris . Another study analyzed the fatty acid composition of gut soil from the epigeic composting earthworm, Eisenia fetida , in a comparison between bulk soil containing the earthworms, and soil without . The study reported a lower total fatty acid content in soil with E. fetida , along with reduced microbial biomass and diversity. The identified fatty acids ranged from 12 to 20 carbon atoms in length, but no unsaturation was detected among the 20-carbon chain varieties. Overall, these studies report that earthworms modify soil properties and microbial communities by selectively filtering microbial biomass, and additionally that earthworms likely benefit from the absorption of bacterial metabolites, including certain lipids. 
Efforts have been dedicated not only to the exploration of fatty acid production in earthworms but also to the analysis of microbial compositions within diverse earthworm species toward understanding their role in composting and soil revitalization. One study examined the gut microbiome of earthworm species from three genera: Aporrectodea, Allolobophora , and Lumbricus , which occupy two different ecotypes. The Lumbricus are vertically burrowing “anecic” worms, whereas the Aporrectodea and Allolobophora are horizontally burrowing “endogeic” worms . By utilizing 16S rRNA amplicon sequencing, it was discovered that Pseudomonadota was the primary phylum present in both earthworm ecotypes. Furthermore, the genus Arthrobacter and the class of Gammaproteobacteria were indicator taxa for Lumbricus , while at higher taxonomic levels, the Acidobacter and Alphaproteobacteria were indicative of Aporrectodea . Further distinctions between ecotypes included the presence of bacteriovore protists, as well as the richness of bacterial taxa, which is considered to reflect the variation in the feeding behavior of these earthworms. Another study focused on the top-soil and leaf-litter dwelling “epigeic” E. fetida and assessed the microbial composition of its microbiome when fed with different compost types, including brewers spent grain, cow manure, and a 50/50 mixture of both . Analysis of the earthworm castings (feces) revealed that, regardless of the treatment, the dominant phyla remained Proteobacteria and Bacteroidetes. However, a significant increase in Firmicutes and Actinobacteria was observed when E. fetida was fed with either 100% or 50% brewers spent grain. Similar results have been reported when E. fetida was fed vegetable waste . In sum, the effects of feeding behavior and the impact of soil conditions have been well studied in how they can modify the composition of the earthworm gut soil microbiota. These works lend important insights into how the earthworm gut soil profile is distinct but related to the microbial composition of bulk soil. Variations between different types of earthworms appear dependent upon their ecotype rather than species. These previous works suggest that the gut soil microbiome of earthworms may contain PUFAs that are not initially present in the soil that they inhabit, and so, are not taken up from primary consumption. However, the combined study of the gut microbiota and the lipid metabolites from the same sample set is needed to resolve the source of PUFA production and to define the microbial organisms involved in this production. We address this gap with an analysis of the fatty acid and microbial compositions of gut soil, bulk soil, and earthworm castings of E. fetida . We hypothesize that bacteria in terrestrial environments, including those in the guts of terrestrial animals, may be capable of producing PUFAs, but only under certain conditions or with specific microbial neighbors. To test this hypothesis, the gut soil content of E. fetida was analyzed in three parts. First, the fatty acid composition was analyzed using gas-chromatography mass spectrometry (GC-MS). Second, the prokaryotic and eukaryotic constituents were profiled using 16S and 18S rRNA gene amplicon sequencing. Finally, amplicon sequencing was performed on the initiator ketoacyl-synthase (KS) domain on the polyunsaturated fatty acid gene ( pfaA ) to identify microorganisms with the genetic potential to de-novo produce PUFAs using the iterative type-1 polyketide-like synthase system. 
This work uses the earthworm and its gut microbiota as a model for terrestrial ecosystem dynamics to identify potential PUFA producers and to help us understand the origin of PUFAs in terrestrial ecosystems and the diversity of microorganisms that may provide them.
In addition, there is a significant goal to increase PUFA yield from natural microorganisms, reducing dependence on marine resources and promoting sustainable production . However, the supply of PUFAs is linked with aquatic organisms, and there is little evidence or expectation for a de-novo PUFA producer in terrestrial habitats, leaving open the question as to how these habitats are adequately supplied. Therefore, understanding how PUFAs are produced in terrestrial ecosystems and supplied to terrestrial food webs has cross-disciplinary interest from fields such as human evolution, ecology, and microbiology. The bacterial contribution to this effect has received relatively far less attention than plant and eukaryotic producer systems, yet there is promising genetic and functional evidence that soil bacteria are capable PUFA producers . One unique study focused on the earthworm Lumbricus terrestris and revealed increasing concentrations of PUFAs, including ARA, EPA, and DHA, from the surrounding soil to the intestinal contents—referred to hereafter as “gut soil”—of L. terrestris . The authors concluded that a variety of unique fatty acids, including the long-chain PUFAs, are produced by microorganisms inside of the earthworm gut, and that these lipids are then transferred from the intestine to the luminal cells and then finally into muscle tissue. The long-chain PUFAs were hypothesized to be used for energy needs of the L. terrestris . Another study analyzed the fatty acid composition of gut soil from the epigeic composting earthworm, Eisenia fetida , in a comparison between bulk soil containing the earthworms, and soil without . The study reported a lower total fatty acid content in soil with E. fetida , along with reduced microbial biomass and diversity. The identified fatty acids ranged from 12 to 20 carbon atoms in length, but no unsaturation was detected among the 20-carbon chain varieties. Overall, these studies report that earthworms modify soil properties and microbial communities by selectively filtering microbial biomass, and additionally that earthworms likely benefit from the absorption of bacterial metabolites, including certain lipids. Efforts have been dedicated not only to the exploration of fatty acid production in earthworms but also to the analysis of microbial compositions within diverse earthworm species toward understanding their role in composting and soil revitalization. One study examined the gut microbiome of earthworm species from three genera: Aporrectodea, Allolobophora , and Lumbricus , which occupy two different ecotypes. The Lumbricus are vertically burrowing “anecic” worms, whereas the Aporrectodea and Allolobophora are horizontally burrowing “endogeic” worms . By utilizing 16S rRNA amplicon sequencing, it was discovered that Pseudomonadota was the primary phylum present in both earthworm ecotypes. Furthermore, the genus Arthrobacter and the class of Gammaproteobacteria were indicator taxa for Lumbricus , while at higher taxonomic levels, the Acidobacter and Alphaproteobacteria were indicative of Aporrectodea . Further distinctions between ecotypes included the presence of bacteriovore protists, as well as the richness of bacterial taxa, which is considered to reflect the variation in the feeding behavior of these earthworms. Another study focused on the top-soil and leaf-litter dwelling “epigeic” E. 
and assessed the microbial composition of its microbiome when fed with different compost types, including brewers' spent grain, cow manure, and a 50/50 mixture of both . Analysis of the earthworm castings (feces) revealed that, regardless of the treatment, the dominant phyla remained Proteobacteria and Bacteroidetes. However, a significant increase in Firmicutes and Actinobacteria was observed when E. fetida was fed with either 100% or 50% brewers' spent grain. Similar results have been reported when E. fetida was fed vegetable waste . In sum, how feeding behavior and soil conditions can modify the composition of the earthworm gut soil microbiota has been well studied. These works lend important insights into how the earthworm gut soil profile is distinct from, but related to, the microbial composition of bulk soil. Variations between different types of earthworms appear to depend on their ecotype rather than their species. These previous works suggest that the gut soil microbiome of earthworms may contain PUFAs that are not initially present in the soil that they inhabit and thus are not taken up through primary consumption. However, the combined study of the gut microbiota and the lipid metabolites from the same sample set is needed to resolve the source of PUFA production and to define the microbial organisms involved in this production. We address this gap with an analysis of the fatty acid and microbial compositions of gut soil, bulk soil, and earthworm castings of E. fetida . We hypothesize that bacteria in terrestrial environments, including those in the guts of terrestrial animals, may be capable of producing PUFAs, but only under certain conditions or with specific microbial neighbors. To test this hypothesis, the gut soil content of E. fetida was analyzed in three parts. First, the fatty acid composition was analyzed using gas-chromatography mass spectrometry (GC-MS). Second, the prokaryotic and eukaryotic constituents were profiled using 16S and 18S rRNA gene amplicon sequencing. Finally, amplicon sequencing was performed on the initiator ketoacyl-synthase (KS) domain of the polyunsaturated fatty acid gene ( pfaA ) to identify microorganisms with the genetic potential to produce PUFAs de novo using the iterative type-1 polyketide-like synthase system. This work uses the earthworm and its gut microbiota as a model for terrestrial ecosystem dynamics to identify potential PUFA producers and to help us understand the origin of PUFAs in terrestrial ecosystems and the diversity of microorganisms that may provide them. We analyzed the total fatty acid content and microbiome profiles of three sample types: pre-compost (PC), vermicompost (VC), and the gut soil (GS) of the earthworm E. fetida . PC is a mixture of horse manure and thermophilic compost used as feed and applied once per week, while VC is worm castings (feces) that are normally collected via sifting and used as an organic fertilizer. The sampling was conducted over 2 weeks, with samples collected on three days spaced across each week. PC was only sampled on the first day of each weekly "feeding" cycle, as it is laid freshly onto the existing E. fetida compost housing once a week.

Earthworm gut soil harbors a distinct fatty acid profile enriched in PUFAs

The fatty acid profile of samples collected in weeks 1 and 2 was compared to identify any batch effects between sampling weeks based on sample type or sampling days (as shown in Materials and Methods).
Permutational multivariate analysis of variance (PerMANOVA) showed a significant difference ( P = 0.016) between the sampling weeks, but this only explained 0.5% of the variation, while most of the variation (87.6%, P = 0.001) was explained by the sample type . We then examined whether there were any significant differences between the respective days of sampling for each sample type using the Kolmogorov–Smirnov test. No significant differences were identified for all sample types; therefore, the data from weeks 1 and 2 were merged for all subsequent fatty acid composition analyses. According to the analysis of the fatty acid composition of PC presented in , palmitic acid (16:0) was found to be the most abundant fatty acid, constituting on average over 30% of the detected fatty acids, followed by stearic acid (18:0) and oleic acid (18:1 cis-ω 9) at 13% and 14%, respectively. The only PUFA detected in PC was linoleic acid (18:2 ω 6), which accounted for more than 6% of the total fatty acid content and likely derived from bulk plant material in the PC mixture. Linoleic acid is known to be a precursor for the synthesis of the omega-6 series of fatty acids, particularly leading to ARA, whereas α-linolenic acid, which is the precursor to the omega-3 series, including EPA and DHA, was not detected. One curious finding was the presence of trans -fatty acids. Although bacteria do not usually produce trans -fatty acids, they can produce them under specific circumstances, which has been demonstrated in Vibrio and Pseudomonas . In Vibrio , a temperature shift caused cis / trans isomerization, while in Pseudomonas , toxic compounds such as organic hydrocarbons resulted in isomerization . PC partly consists of thermophilic compost which has been heat processed through fermentative activity, and this temperature increase could potentially have caused cis/trans isomerization from bacterially originated FAs. The percentage of PUFAs in GS was found to range from 17% to 20% of all FAs across the three sampling days. ARA (20:4 ω 6) and EPA (20:5 ω 3) were identified as two notable fatty acids, comprising ~7%–9% and 4.5%–5.5%, respectively, of the detected fatty acids. Other major fatty acids included lauric acid (12:0, ~4%–6%), elaidic acid (the trans isomer of oleic acid,18:1 trans-ω 9, ~4%–6%), and gondoic acid (20:1, ~6%–9%). No significant differences in the relative abundance of fatty acids were observed between sampling days ( P > 0.05), except for lauric acid (12:0) (two-way ANOVA with post hoc Tukey, day 1–day 3: P = 0.0280), arachidonic acid (20:4 ω6) (two-way ANOVA with post hoc Tukey, day 1–day 2: P = 0.0451), the overall SFA content (two-way ANOVA with post hoc Tukey, day 1–day 2: P = 0.0002, day 2–day 3: P = 0.0002), and the overall PUFA content (two-way ANOVA with post hoc Tukey, day 1–day 2: P = 0.0002). PUFAs were found to have a lower relative content on day 1 of sampling compared to day 2. Lauric acid content increased steadily from day 1 to day 3, but only the difference between days 1 and 3 was found to be significant. This may be due to the feeding cycle, which occurs on day 1 of the week, and therefore the second sampling point captures the peak nutrient availability and metabolic activity, which then falls off by the end of the week. 
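For illustration, the statistical comparisons described in this section (PerMANOVA across weeks, Kolmogorov-Smirnov tests across days, and two-way ANOVA with post hoc Tukey for individual fatty acids) could be run in R roughly as sketched below; the matrix `fa` (samples by fatty acids, relative abundances), the metadata frame `meta` (columns `week`, `day`, `type`), and the fatty acid column names are hypothetical placeholders, not the study's actual objects.

```r
## Batch-effect and day-to-day comparisons (sketch, not the authors' script)
library(vegan)

# PerMANOVA: how much variation is explained by sampling week vs sample type
d_fa <- vegdist(fa, method = "euclidean")
adonis2(d_fa ~ week + type, data = meta, permutations = 999)

# Kolmogorov-Smirnov test comparing one fatty acid between two sampling days (GS samples)
ks.test(fa[meta$type == "GS" & meta$day == 1, "EPA_20_5_w3"],
        fa[meta$type == "GS" & meta$day == 3, "EPA_20_5_w3"])

# Two-way ANOVA with post hoc Tukey for a single fatty acid across days and weeks
m <- aov(fa[, "lauric_12_0"] ~ factor(meta$day) + factor(meta$week))
TukeyHSD(m)
```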
The unidentified fatty acids (n.i.), which accounted for approximately 23% of the total content, were compounds either structurally similar to fatty acids but unidentifiable with the standard fatty acid methyl-ester (FAME) mix, or compounds with low identification scores in the National Institute of Standards and Technology (NIST) mass spectral library, and were thus grouped as "not identified." Only six fatty acids were identified in VC, all of which were the same as those found in PC samples. Among these, linoleic acid (18:2 ω 6) was the only PUFA, accounting for less than 4% of fatty acids. Palmitic acid (16:0) was one of the major fatty acids found in VC, accounting for 28%–31% of the total content, along with stearic acid (18:0), which accounted for approximately 15%. Palmitic acid showed significant differences over the one-week sampling period (two-way ANOVA with post hoc Tukey, P = 0.0321). We detected oleamide, or oleic acid amide, in all samples, which likely originated from the compost added to PC since oleamide has been detected in mixtures containing starch, sunflower oil, and soy protein . These results show that PUFAs are uniquely concentrated in GS samples compared to the starting compost and the resulting vermicompost . These results are largely consistent with the findings of Sampedro et al. , who reported elevated levels of PUFAs including ARA, EPA, and DHA in the gut of L. terrestris compared to the surrounding soil . In our study with E. fetida , the PC is the nutrient-rich housing substrate that fuels the metabolic activity of gut-resident microorganisms, the products of which are available for host uptake. Since we did not find long-chain PUFAs in either PC (incoming) or VC (outgoing) samples, the PUFAs found in the GS, whether from microbial activity or from the shedding of earthworm epithelial cells, appear to be selectively retained in the earthworm and its microbiome for physiological needs. The diverse range of fatty acids that we recovered in the GS may be indicative of high metabolic activity stimulated in the gut.

Metataxonomic amplicon analysis

To identify potential microbial producers of ARA and EPA in the gut soil samples, we used 16S and 18S rRNA gene sequencing alongside amplicon sequencing of the PfaA-KS domain using primers previously developed to target the PUFA synthase complex (described in the next section) . Using high-level taxonomic binning of the functional gene sequences and then examining members of these taxonomic bins among the sample types, potential producers of ARA and EPA could be identified. To account for the effects of low biomass and interindividual variation, samples of GS from individual E. fetida (denoted IG) were also compared with the pooled GS samples (see for sample metadata). The V4 hypervariable region of the 16S rRNA gene was targeted using the 515F/806R universal primers with slight modification, and the TAReuk454FWD1/TAReukREV3mod universal primers were used to target the eukaryotic 18S rRNA gene for amplification and sequencing on an Illumina MiSeq (see Materials and Methods for primer sequences). After decontamination of 16S rRNA sequences, an average of 8,116 (±4,193 SD) reads were obtained for GS, 5,606 (±4,124 SD) for IG, 8,296 (±5,664 SD) for PC, and 7,648 (±4,295 SD) for VC. A total of 4,238 amplicon sequence variants (ASVs) were identified across all the samples.
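As an illustration of how such an ASV table is typically produced, a condensed DADA2 workflow in R is sketched below (the text later notes a DADA2 workflow for the PfaA-KS reads, and the same style of processing applies to 16S amplicons); the file paths, truncation lengths, and SILVA reference file name are placeholders and are not taken from the study's Materials and Methods.

```r
## Condensed DADA2 ASV inference for paired-end MiSeq reads (sketch)
library(dada2)

fnFs <- sort(list.files("reads", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs <- sort(list.files("reads", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

filterAndTrim(fnFs, filtFs, fnRs, filtRs, truncLen = c(200, 180),
              maxEE = c(2, 2), truncQ = 2, rm.phix = TRUE, multithread = TRUE)

errF <- learnErrors(filtFs, multithread = TRUE)
errR <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

merged <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab <- makeSequenceTable(merged)
seqtab_nochim <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)
taxa <- assignTaxonomy(seqtab_nochim, "silva_train_set.fa.gz", multithread = TRUE)
```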
The 18S sequencing resulted in an average of 30,753 (±15,703 SD) reads for GS, 25,977 (±10,907 SD) for IG, 18,221 (±12,272 SD) for PC, and 20,034 (±14,504 SD) for VC after decontamination, with a total of 886 identified ASVs (see for 16S and 18S ASV tables).

Prokaryote taxonomic sequence profiles

Observed ASVs and Shannon metrics were used to analyze the alpha-diversity of the microbiome, based on a rarefied ASV table with 5,000 reads for 16S rRNA analysis and 10,000 reads for 18S rRNA analysis ( rarefaction curves). No significant difference was found in richness or diversity between GS and IG samples. However, gut soils (GS and IG) showed significant variation compared to compost samples (PC and VC), with VC displaying significantly higher measures of richness compared to both gut soils (Wilcoxon, GS-VC: observed ASVs P = 3.291e-04 and IG-VC: P = 1.842e-05; ). Similar variation was observed in Shannon diversity metrics, with significant differences identified between both gut soil samples compared to VC (Wilcoxon, GS-VC: Shannon P = 0.001636 and IG-VC: P = 9.578e-06; ). This suggests that VC and PC have a higher evenness and contain more ASVs that were only identified once or twice compared to GS samples. Beta-diversity analysis was conducted using the unrarefied ASV table, normalized with a variance stabilizing transformation in DESeq2. Ordination analysis using Euclidean distance on center-log-ratio transformed data indicated distinct sample clusters for PC and VC, while IG and GS clustered together . Based on the analysis of alpha- and beta-diversity, it was found that there were no significant differences in richness or diversity between the pooled and individual GS samples and that they clustered together. In contrast, the PC, VC, and gut soil samples were found to each have distinct microbial community compositions. Clustering of class-level bins, highlighting the relatively enriched or depleted taxa across samples, reveals that GS samples largely cluster together except for a few IG samples, while the compost samples form a separate cluster. This suggests that either the earthworm gut community is truly an endemic microbiome or that the gut environment provides highly selective conditions that promote transient growth and activity in a stable, reproducible manner . The relative abundance of major phyla and families in each sample type provides additional evidence of different microbial community compositions per sample type. Phyla contributing at least 5% and families contributing at least 2% of the total relative abundance were compared. To test for significant differences in relative abundance, a pairwise Wilcoxon analysis with Benjamini-Hochberg correction was conducted between each sample type ( , and see pairwise family level corr ). Comparing IG and GS samples revealed significant differences across sample types at the phylum, class, and family levels; however, these differences were primarily driven by only a few family-level groups within the primary segregating phyla. Specifically, significant differences were found in three of 51 families in Gammaproteobacteria, five of 61 families within Firmicutes, and seven of 37 families within Bacteroidota for gut sample comparisons. On the other hand, pronounced differences were observed in the comparisons between the gut soils and composts, where most families showed significant changes in abundance.
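A compact R sketch of these diversity and family-level tests follows, assuming `asv` is a samples-by-ASV count matrix, `fam` a samples-by-family relative-abundance matrix, and `meta$type` the sample-type factor (GS, IG, PC, VC); all object names are placeholders.

```r
## Alpha diversity and pairwise Wilcoxon comparisons (sketch)
library(vegan)

set.seed(1)
asv_rare <- rrarefy(asv, sample = 5000)          # rarefy to 5,000 reads per sample
obs  <- specnumber(asv_rare)                     # observed ASVs (richness)
shan <- diversity(asv_rare, index = "shannon")   # Shannon diversity

pairwise.wilcox.test(obs,  meta$type, p.adjust.method = "BH")
pairwise.wilcox.test(shan, meta$type, p.adjust.method = "BH")

# Family-level relative abundances: pairwise Wilcoxon with Benjamini-Hochberg correction
fam_tests <- lapply(colnames(fam), function(f)
  pairwise.wilcox.test(fam[, f], meta$type, p.adjust.method = "BH")$p.value)
names(fam_tests) <- colnames(fam)
```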
In general, there was significant variation in relative abundances when gut soils were compared with composts ( relative abundance barplots ). While the significance was less for IG than for GS in most cases, the trend of the mean values was the same, supporting that the features seen in pooled samples can be generalized to individual specimens, and are not driven by stochastic or random features from outliers. Notably, Actinobacteriota and Firmicutes display a higher relative abundance in gut soil samples compared to the compost samples, whereas Bacteroidota shows a higher relative abundance in the compost samples compared to the gut soils. The pairwise comparison of sample types supported the findings from the taxonomic summary regarding Actinobacteriota, Bacteroidota, and Bacillota (formerly Firmicutes). Specifically, Bacteroidota, Gammaproteobacteria, and Chloroflexi had a higher relative abundance in both compost samples compared to each of the gut soil samples. This was also true for Alphaproteobacteria when comparing the gut soil samples to VC. Verrucomicrobiota had a higher relative abundance in the gut soils compared to PC but still lower than in VC. No differences were found in Planctomycetota for all comparisons and in Alphaproteobacteria when comparing the gut soils to PC. In previous works, the major phyla identified in E. fetida when fed with different types of compost were Proteobacteria and Bacteroidota. However, since the compost used in our study differed from that used in the analysis by Budroni et al. , slight differences in the microbial composition of GS are expected . This is supported by previous reports that show changes in microbial composition based on the compost composition, as well as other studies that varied the feeding compost . A study by Sapkota et al. also identified Proteobacteria as a major phylum in their analysis of the earthworm genera Aporrectodea and Lumbricus . Although Proteobacteria and Bacteroidota are still present in GS from our study, they do not account for most of the microbial composition in this sample type. Their lower abundance could be explained by the nutrient-rich PC substrate, applied to the earthworm population as feed and housing matrix. Previous research has shown that Proteobacteria and Bacteroidota are particularly increased in abundance when E. fetida is fed with a nutrient-poor substrate, whereas a nutrient-rich substrate results in higher microbial diversity and a higher abundance of Firmicutes and Actinobacteria .

PfaA-KS for assessment of bacterial PUFA producers

To understand the potential that PUFA lipids concentrating in the gut of the earthworm are derived from bacteria, we used a previously developed universal primer targeting the beta-ketoacyl synthase (KS) region in the single-copy pfaA gene, which amplifies an approximately 502 bp region spanning the PfaA-KS N- and C-terminals . The KS domains from PKS types tend to cluster as a monophyletic group, and within these groups, the KS can approximate the evolutionary phylogeny of the organisms . In addition, the KS sequence phylogeny is distinctive of the type of PKS complex in which they occur, which makes it a suitable candidate marker for targeting the genes involved in the synthesis of a specific metabolite.
We sequenced the amplified KS products using an Illumina MiSeq, resulting in 53 samples with successfully generated, filtered, and PfaA-KS-assigned reads, and final average sequence counts segregating per sample type: 16 (±27 SD) for GS-derived reads, 315 (±526 SD) for PC, and 1,230 (±1,459 SD) for VC. A total of 66 PfaA-KS ASVs were derived using the DADA2 workflow on merged reads. One read (Seq64) was likely a chimera deriving from earthworm DNA and was therefore removed from the data set before proceeding to downstream analyses. The final 65 PfaA-KS ASVs led to the identification of 6 resolved unique bacterial strains within 14 taxonomic bins across 5 phyla . Neither microeukaryotes nor fungi were found among the annotations, although some of these organisms harbor an iterative type I PKS (T1PKS) gene complex and KS in a neighboring clade to bacterial iterative T1PKS . Little is known about the distribution of PUFA-synthase genes and production potential in microorganisms outside of marine contexts, with only very recent investigations beginning to probe the terrestrial biospheres . To date, we are aware of only one primer set that has been developed for specifically targeting the pfa T1PKS KS domain, which was modeled from the sequences of marine Pseudomonadota, especially those belonging to γ-Proteobacteria. Since these bacteria appear to have propagated T1PKS genes via HGT , a single primer pair may work universally within this phylum to capture many pfa KS sequences. However, this specificity toward marine Gammaproteobacteria likely hinders the recovery of bacteria from diverse phyla in terrestrial soil and freshwater ecosystems, such as actinobacteria and cyanobacteria, respectively, where T1PKS appears instead to have evolved from a common ancestor. Indeed, certain soil taxa with known PUFA production competence were surprisingly not found among our annotated sequences, such as some strains within myxobacteria, Actinomycetes, and Desulfobacteria . We checked primer specificity against 427 phylogenetically diverse organisms from NCBI GenBank reference genomes and saw alignments occur across eight deeply divergent lineages, including common soil or environmental bacteria. Gammaproteobacteria ( n = 151) were the dominant lineage, of which Shewanella ( n = 107) genomes were most highly represented. Of 427 queried genomes, 182 matched both forward and reverse primers and indicated a high likelihood of successful amplification (a sketch of this in-silico primer screen is given after this passage). This analysis did confirm that Sorangium cellulosum and Minicystis rosea of Myxococcota would be missed ( Phylogenetic distribution of PfaA-KS primer matches ). Despite this limitation, a surprising number of sequences and diversity were recovered from the compost and earthworm gut soil sample material that we report here. We recovered PfaA-KS sequences that had >95% sequence identity to those found among marine bacterial reference isolates using BLASTn query on the full nucleotide collection (nt). For sequences that were not well resolved using the NCBI RefSeq nt database, a BLASTx translated amino acid sequence was mapped against the non-redundant (nr) database, with most hits achieving >95% homology (see for a full summary of BLAST hit results).
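The in-silico primer screen described above could be approximated with Biostrings as sketched below; the degenerate primer sequences shown are placeholders (the real PfaA-KS primers are listed in the original primer publication and in Materials and Methods), and `genomes.fasta` stands in for the 427 downloaded reference genomes.

```r
## In-silico check of degenerate primer matches against reference genomes (sketch)
library(Biostrings)

genomes <- readDNAStringSet("genomes.fasta")

fwd <- DNAString("GGNGCNTGYGCNAC")                    # hypothetical degenerate forward primer
rev <- reverseComplement(DNAString("CCNGTRTTYTCNGC"))  # hypothetical reverse primer

# fixed = FALSE lets IUPAC ambiguity codes in the primers match the subject sequences
fwd_hits <- vcountPattern(fwd, genomes, max.mismatch = 2, fixed = FALSE)
rev_hits <- vcountPattern(rev, genomes, max.mismatch = 2, fixed = FALSE)

# Genomes matching both primers are predicted to support amplification
amplifiable <- names(genomes)[fwd_hits > 0 & rev_hits > 0]
length(amplifiable)
```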
Phylogenetic inference among recovered sequences

To understand the relationship between the sequences we captured from earthworm gut and compost soil samples, we constructed a phylogenetic inference tree using our 65 bacterial PfaA-KS ASVs plus the 446 PfaA-KS sequences that were published in the original publication for the creation of the PfaA-KS primer set . The relevant difference with our present study is the use of the PfaA-KS primers on terrestrial soil-derived samples, whereas prior research has been restricted to their application on marine water and sediment samples. Using the Escherichia coli fab F gene as an outgroup (representative of a KS subtype involved in the canonical type II FAS system), we see that our sequences surprisingly form clusters among shallow branches of the tree that match their taxonomic assignments, rather than their environmental source. However, known evolutionary relationships that are based on whole genomes are not recapitulated, since more distantly related taxa (such as Pseudomonadota and Bacteroidota) form larger clusters that break up established evolutionary lineages, such as between members of Gammaproteobacteria. There is a clear separation between sequences found within earthworm GS samples and those of the compost samples (denoted by barplots along the outer ring of the tree). All GS sequences are assigned to Shewanella and cluster among a fairly undifferentiated lineage that includes other Vibrio -assigned sequences and that stems from a bifurcation that separates (i) facultatively anaerobic cold-water-marine Gammaproteobacteria ( Vibrio , Colwellia , Moritella , Psychromonas ) and aerobic saprophytic or predatory Bacteroidota ( Flammeovirga , Saprospiraceae); from (ii) subsurface anoxic-dwelling taxa (Legionellales, Candidatus Hydrogenedentes), anaerobic fermenters ( Anaerolineae ), and terrestrial aerobic chemoheterotrophs (Armatimonadota) . The PfaA-KS sequences found within compost samples were spread more widely across the tree among organisms such as Gemmata (Planctomycetota), Cloacimonadota, and other Anaerolineales that are often found among carbon-rich sediments such as peat bogs, waste-water treatments, or the rhizosphere. To date, the activity and product of these putatively competent PUFA synthase complexes remain unconfirmed . The remaining one-third of the tree is wholly dominated by a basal cluster of sequences assigned to Deltaproteobacteria or its sublineage, SAR324, which are distinguished from the other taxonomically and phylogenetically diverse branchings. This suggests that metabolic niche participation has broadly determined the structure of the tree, which separates primarily between marine chemolithoheterotrophs (sulfate reducers) and diverse chemoorganoheterotrophs (carbon/nitrogen utilizers through predatory or saprophytic processes) . This topology further supports the observation that T1PKS are prone to HGT events , and that HGT propagation of the pfa complex helps to explain its environmental and phylogenetically widespread occurrence and lack of strict coherence among species or sub-species level presence/absence patterns .

PfaA-KS sequence distribution and abundance across sample types

To gain insight into which members of the microbial community are concentrated in earthworm gut soil and which may be contributing to the production of PUFAs, we conducted an in-depth investigation of the ASV table abundances, which were first transformed to center-log-ratio (CLR) values .
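A minimal sketch of the center-log-ratio transformation is given below; `pfa_counts` (samples by ASVs) is a placeholder, and the pseudocount of 0.5 used to handle zeros is one common convention, not necessarily the value used in the study.

```r
## Center-log-ratio (CLR) transformation of a sparse count table (sketch)
clr_transform <- function(counts, pseudocount = 0.5) {
  logx <- log(counts + pseudocount)
  sweep(logx, 1, rowMeans(logx), "-")   # subtract each sample's mean log (log geometric mean)
}
pfa_clr <- clr_transform(pfa_counts)
```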
An extremely sparse compositional table rendered normalization practices such as rarefaction and relative abundance inadequate to standardize sequence abundances across samples. We confirmed that there is no apparent bias in the amount of genetic data recovered from the samples based on sample type, starting amount, or concentration, supporting the conclusion that the observed sparsity of the PfaA-KS ASV table is biologically real (see for sample metadata; see for associations between PfaA-KS sequence recovery and sample metadata). Surprisingly, in contrast to the metabolite lipid profiles generated from the same sample material, the recovered PfaA-KS sequences were most diverse and concentrated among compost samples (PC and VC) rather than the GS samples . The PfaA-KS sequences were sparsely distributed at very low abundance in GS samples, both pooled and individual . While the ASV taxonomic assignments suggest that just a handful of unique taxa contribute to the total sequence variation, the sequence variants themselves are quite numerous even within the same putative bacterial strain and are strongly segregated between GS and compost samples . No single ASV is shared across all three sample types. This can be seen most clearly when ASVs are aggregated at the genus level (or otherwise their lowest taxonomic bin) and then hierarchically clustered according to Ward's sum-of-squares method . The GS samples are composed entirely of Shewanella -assigned sequences, while the compost samples contain a mixture of the other seven environmental taxa bins. This suggests that only Shewanella remain in close association with the earthworm host tissue rather than passing through as transients (indicated by the strains found in VC but not in GS), and perhaps are actually co-residents of the earthworm gut .

PfaA-KS taxa within the microbiome profile

To understand the prevalence of putative PUFA-producing taxa among the overall prokaryote microbiome community, the taxonomic bins assigned to the PfaA-KS sequences were sought among the 16S rRNA data set, and the matching ASVs from the abundance tables were subset for further analysis ( prok.pfa abundance barplots ). The subset of PfaA-KS taxa from the 16S data (herein referred to as prok.pfa ) was normalized by the total number of ASVs and then transformed using CLR to compare with the PfaA-KS sequence amplicon data set (herein referred to as pfa ). First, Procrustes analysis was used to infer the correspondence of the ASV-level ordinations of the prok.pfa and pfa data using Euclidean distance. Using the function "protest" with 999 permutations, the data sets were found to have a modest but significant correlation (m12 squared = 0.758, corr. = 0.492, P = 0.001), and their ordinations show similar separation of sample types along the first and second principal components . Next, the ASV counts were binned at the genus level (or lowest taxonomic assignment), resulting in seven PfaA-KS taxonomic assignments: Anaerolineales, Armatimonadota, Gemmataceae , Hydrogenedentes, Moritella , Phycisphaera, and Shewanella . The correspondence of these binned taxa vectors within the pfa and prok.pfa abundance tables was calculated using a Mantel test on the Euclidean distance matrices, again yielding a low correlation (Spearman's rho = 0.194, P = 0.007, mantel test ), suggesting that the abundance of the sequences captured using the PfaA-KS primers cannot be related to the microbiome profile reconstructed from 16S rRNA amplification.
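The Procrustes and Mantel comparisons described above can be sketched in R with vegan as below; `pfa_clr`, `prok_pfa_clr`, and their genus-binned counterparts are placeholder CLR-transformed tables with matching sample rows.

```r
## Procrustes (protest) and Mantel comparisons of the pfa and prok.pfa tables (sketch)
library(vegan)

pca_pfa  <- rda(pfa_clr)        # unconstrained PCA on CLR data (Euclidean geometry)
pca_prok <- rda(prok_pfa_clr)

protest(pca_pfa, pca_prok, permutations = 999)   # Procrustes correlation with permutation test

# Mantel test on Euclidean distance matrices of the genus-level bins
mantel(dist(pfa_genus_clr), dist(prok_genus_clr),
       method = "spearman", permutations = 999)
```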
Correlation and visualization of the abundance distribution of each taxonomic bin show that only for Shewanella abundance is there coherence between the PfaA-KS and 16S rRNA abundances (Pearson's rho: PC = 0.465, GS = 0.662; ; TaxaLinearAssoc-pfaclrM2; & Histograms ), supporting a hypothesized situation in which Shewanella proliferate in the earthworm gut relative to the overall community, during which they could contribute PUFAs nutritionally to the earthworm and are not just transient dead organic matter. Truly correlating the amplicon data sets requires quantification of the amplified gene regions, and more targeted research in the future should incorporate a digital PCR or quantitative PCR step in the amplicon preparation workflow. This would verify the coherence in the presence of the PfaA-KS and the 16S gene regions and the bacterial taxa from which they derive. Using effect size as a measure of explanatory power for the difference in ASV abundance in the prok.pfa table also identified Shewanella -assigned ASVs as highly indicative of differences between vermicompost (earthworm castings) and pre-compost (compost soil applied as feed but not containing earthworms) ( ; Effect size aldex2 results ); a short sketch of this ALDEx2 effect-size screen is given after this passage. The dissociation between pfa and prok.pfa abundances for the remaining taxa is most likely because the taxa that contain a putative PUFA gene complex are extremely low-abundant members of the total microbial community (especially in soil or compost), and so are negatively biased during 16S rRNA primer annealing and amplification.

Eukaryote taxonomic assessment for other soil microbial or microfaunal PUFA producers

Earthworms ingest a variety of non-bacterial microbiota from the soil and can even be selective foragers. Phospholipid fatty acid (PLFA) biomarkers indicate the concentration of microeukaryotes and fungi within the earthworm gut relative to the bulk soil , and a robust immune response has evolved to entrap and expel parasitic nematodes . Prior research on the lipid content of earthworm GS attributed the accumulation of PUFA in GS and somatic tissue to the metabolic activity of microeukaryotes such as fungi and protozoa. This was in accordance with the understanding that certain FAs are lipid biomarkers for the presence of organismal groups, on account of the FA composition of their phospholipid membranes . According to this pattern of observations, algae, yeast, and other fungi are typically associated with the synthesis of long-chain polyunsaturated acyl-moieties, including 20C + fatty acids and sphingolipids, from both de novo and precursor 18C fatty acids . Common terrestrial bacteria, on the other hand, are associated with medium-chain, straight or branched, odd-numbered saturated or even-numbered monounsaturated fatty acids , and so they have not been considered candidate producers of PUFAs found in a terrestrial sediment context. Examples of soil-living microeukaryote organisms with notable concentrations of PUFA include invertebrate microfauna such as nematodes (Chromadorea) and macroinvertebrate collembola (Ellipura), fungi such as Mortierellaceae , and protozoan microalgae such as Labyrinthulomycetes. However, the origin of earthworm gut PUFAs from a microeukaryotic source has not been fully investigated. Therefore, we used 18S rRNA amplification to catalog the presence of putative PUFA-producing eukaryotes and determine whether their pattern of distribution across sample types may help explain PUFA accumulation in the earthworm gut environment.
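Before turning to the eukaryote results, the ALDEx2 effect-size screen mentioned above might look roughly as follows; `prok_pfa_counts` (taxa as rows, PC and VC samples as columns) and the condition vector `conds` are placeholders.

```r
## Effect-size screen for VC vs PC differences with ALDEx2 (sketch)
library(ALDEx2)

# `prok_pfa_counts`: integer count table (taxa as rows, PC and VC samples as columns)
# `conds`: character vector of "PC"/"VC" labels, one per column of the count table
res <- aldex(prok_pfa_counts, conds, mc.samples = 128, test = "t", effect = TRUE)

# An absolute effect size above ~1 is a common threshold for a meaningful difference
rownames(res)[abs(res$effect) > 1]
```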
Eukaryote ASVs that mapped to the subclass Oligochaeta (aquatic and terrestrial worms) were first removed to eliminate the host signal that otherwise obliterated our ability to see representative taxa. For GS samples, this unfortunately resulted in a loss of >90% of reads. Filtered samples were then rarefied to 1,000 reads, and ASVs were binned at the "class/order/family" level for visualization and assessment ( ; Eukaryote taxa assignments barplot ); a minimal sketch of this filtering and binning step is given at the end of this passage. Of the resulting 46 taxonomic bins achieving at least 1% abundance in 5% of samples, four are identified as containing known PUFA producers: Chromadorea (nematodes), Ellipura (collembola), Mortierellaceae (soil fungi), and Thraustochytriaceae (saprotrophic protists). The Chromadorea and Ellipura percent relative abundances in GS samples were approximately 1.88% and 0.02% of rarefied reads, respectively, while significantly greater proportions of these two taxa were found in VC samples (Wilcoxon: P < 0.0001). Mortierellaceae were most prevalent in PC (7.55%) while <1% in VC and GS, and Thraustochytriaceae were <1% for all sample types . Nematodes contain all desaturase and elongase enzymes to de novo produce up to 20C length FAs, including 20:5 ω3 EPA and 20:4 ω6 ARA, which were the uniquely enriched PUFAs found among our GS samples . Therefore, we cannot exclude that the prevalence of nematodes passing through the earthworm intestinal tract and expelled with the castings may explain the detection of PUFAs in the GS. However, since PC samples also contain a smaller but detectable nematode 18S rRNA signal and yet are devoid of any detectable long-chain PUFA, it is unclear whether the nematodes in the earthworm composting system where we sampled are sequestering PUFAs. Previous studies have shown that Caenorhabditis elegans may contain anywhere from 1% to 20% of total lipid PUFA, depending on growth temperature . These are largely due to an abundance of 18C FAs, namely a signature peak at 18:1 ω7 ( cis -vaccenic acid) in all lipid fractions (phosphatidylcholine, phosphatidylethanolamine), but to a lesser degree also EPA and ARA . However, none of our sample peaks were annotated for 18:1 ω7, which would be expected if nematode lipids (or otherwise aerobic bacteria) were contributing to the pool of measured fatty acids in our samples. Furthermore, earthworms respond to nematode infestations that occur in the coelomic cavity and near the nephridia by encapsulating the tiny worms in cysts called "brown bodies" and destroying them using reactive oxygen species . Alternatively, the earthworm itself may be able to produce PUFAs, as recent genomic analysis has revealed genes encoding "methyl"-end desaturase homologs to the Δ12 (ω6) and Δ15 (ω3) desaturases in Oligochaeta . Little is known about the ability of invertebrates to produce long-chain PUFA, especially the amount and rate at which this might occur. Evidence from genomic analysis attests to the widespread occurrence of elongation and desaturation enzymes, but only a handful of earlier publications have indicated some limited de novo production such as in Collembola (springtails), Daphnia (water fleas), and nematodes . Still, Arthropoda exhibit poor fecundity and decline in fitness when their dietary supply of pre-formed PUFA is absent or restricted, despite encoding the competent enzymes for de novo production .
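The host-read filtering, rarefaction, and taxonomic binning described at the start of this passage might be sketched with phyloseq as below; the phyloseq object `ps18s` and the taxonomy rank names are placeholders.

```r
## Removing host (Oligochaeta) reads, rarefying, and binning the 18S data (sketch)
library(phyloseq)

ps_no_host <- subset_taxa(ps18s, Subclass != "Oligochaeta" | is.na(Subclass))
ps_rare    <- rarefy_even_depth(ps_no_host, sample.size = 1000, rngseed = 42)
ps_binned  <- tax_glom(ps_rare, taxrank = "Family")   # approximate class/order/family binning
rel_abund  <- transform_sample_counts(ps_binned, function(x) 100 * x / sum(x))
```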
In earthworms, the gradient of increasing PUFA concentration from bulk soil to gut soil, finally peaking in muscle tissue, which was interpreted as trophic transfer from soil microorganisms to the earthworm , may well run in the opposite direction. The earthworm may be producing PUFAs within its own tissue, perhaps as precursors to eicosanoids or for cellular membrane permeability, and as the epithelial cells shed into the gut, PUFAs would appear to concentrate in gut soil. However, in this prior study, the entire unbroken intestinal tract was surgically isolated from the body, which challenges the notion that somatic cell metabolites leaked into the gut soil and rather strengthens the original interpretation. To definitively rule out earthworm-derived PUFAs, further work should isolate metabolic activity using approaches such as stable isotope probing with heavy isotopes of carbon and hydrogen.

Conclusions

In sum, we found that the earthworm gut soil ecosystem is distinct from the input compost and output castings according to the type and abundance of organisms present, as well as in the fatty acid metabolite profiles. Amplicon sequencing of the KS functional domain on the pfaA gene as part of the PUFA synthase complex illuminated the sparse and phylogenetically narrow distribution of putative PUFA-producing bacterial taxa, especially the ASVs from GS samples, which all unambiguously belonged to strains of Shewanella . Most surprising was that while the 20C PUFAs were found among the total lipid pool of the GS samples and absent in both compost sample types, the PfaA-KS sequence abundance pattern was exactly the contrary. The samples in which the largest number and diversity of PfaA-KS sequences were recovered were VC, which are the castings or feces of the earthworms. These samples contained taxa also found among the GS samples which were implicated as PUFA competent based on PfaA-KS presence, but VC samples also contained other PfaA-KS-positive taxa known to be widespread free-living organisms. Based on 16S and 18S rRNA taxonomic profiling, the GS samples diverged from the compost samples based on the presence and abundance of ASVs, suggestive of a unique ecosystem that is maintained by the conditions of the earthworm gut. A eukaryotic origin of the PUFAs is possible but not well supported, since organisms such as nematodes and collembola failed to show other signature 18C fatty acid biomarkers and were not well represented among the amplicon sequence pool, especially in the GS samples. The overall picture appears to be that earthworms may aggregate the ARA and EPA PUFAs stemming from bacterial metabolism, or they may produce these lipids using their own genomically encoded enzymes. Conditions in the intestinal tract may activate PUFA-producing taxa like Shewanella that proliferate in the earthworm gut ecosystem during the passage of environmental soil and bacteria.
No significant differences were identified for all sample types; therefore, the data from weeks 1 and 2 were merged for all subsequent fatty acid composition analyses. According to the analysis of the fatty acid composition of PC presented in , palmitic acid (16:0) was found to be the most abundant fatty acid, constituting on average over 30% of the detected fatty acids, followed by stearic acid (18:0) and oleic acid (18:1 cis-ω 9) at 13% and 14%, respectively. The only PUFA detected in PC was linoleic acid (18:2 ω 6), which accounted for more than 6% of the total fatty acid content and likely derived from bulk plant material in the PC mixture. Linoleic acid is known to be a precursor for the synthesis of the omega-6 series of fatty acids, particularly leading to ARA, whereas α-linolenic acid, which is the precursor to the omega-3 series, including EPA and DHA, was not detected. One curious finding was the presence of trans -fatty acids. Although bacteria do not usually produce trans -fatty acids, they can produce them under specific circumstances, which has been demonstrated in Vibrio and Pseudomonas . In Vibrio , a temperature shift caused cis / trans isomerization, while in Pseudomonas , toxic compounds such as organic hydrocarbons resulted in isomerization . PC partly consists of thermophilic compost which has been heat processed through fermentative activity, and this temperature increase could potentially have caused cis/trans isomerization from bacterially originated FAs. The percentage of PUFAs in GS was found to range from 17% to 20% of all FAs across the three sampling days. ARA (20:4 ω 6) and EPA (20:5 ω 3) were identified as two notable fatty acids, comprising ~7%–9% and 4.5%–5.5%, respectively, of the detected fatty acids. Other major fatty acids included lauric acid (12:0, ~4%–6%), elaidic acid (the trans isomer of oleic acid,18:1 trans-ω 9, ~4%–6%), and gondoic acid (20:1, ~6%–9%). No significant differences in the relative abundance of fatty acids were observed between sampling days ( P > 0.05), except for lauric acid (12:0) (two-way ANOVA with post hoc Tukey, day 1–day 3: P = 0.0280), arachidonic acid (20:4 ω6) (two-way ANOVA with post hoc Tukey, day 1–day 2: P = 0.0451), the overall SFA content (two-way ANOVA with post hoc Tukey, day 1–day 2: P = 0.0002, day 2–day 3: P = 0.0002), and the overall PUFA content (two-way ANOVA with post hoc Tukey, day 1–day 2: P = 0.0002). PUFAs were found to have a lower relative content on day 1 of sampling compared to day 2. Lauric acid content increased steadily from day 1 to day 3, but only the difference between days 1 and 3 was found to be significant. This may be due to the feeding cycle, which occurs on day 1 of the week, and therefore the second sampling point captures the peak nutrient availability and metabolic activity, which then falls off by the end of the week. The high percentage of unidentified fatty acids (n.i.), which accounted for approximately 23% of the total content, were compounds either structurally similar to fatty acids but unidentifiable with the standard fatty acid methyl-ester (FAME) mix, or compounds with low identification scores in the national institute of standards and technology (NIST) mass spectral library and thus grouped as “not identified.” Only six fatty acids were identified in VC, all of which were the same as those found in PC samples. Among these, linoleic acid (18:2 ω 6) was the only PUFA, accounting for less than 4% of fatty acids. 
Palmitic acid (16:0) was one of the major fatty acids found in VC, accounting for 28%–31% of the total content, along with stearic acid (18:0), which accounted for approximately 15%. Palmitic acid showed significant differences over the one-week sampling period (two-way ANOVA with post hoc Tukey, P = 0.0321). We detected oleamide, or oleic acid amide, in all samples, which likely originated from the compost added to PC since oleamide has been detected in mixtures containing starch, sunflower oil, and soy protein . These results show that PUFAs are uniquely concentrated in GS samples compared to the starting compost and the resulting vermicompost . These results are largely consistent with the findings of Sampedro et al. , who reported elevated levels of PUFAs including ARA, EPA, and DHA in the gut of L. terrestris compared to the surrounding soil . In our study with E. fetida , the PC is the nutrient-rich housing soil for the metabolic activity of gut-resident microorganisms, the products of which are available for host uptake. Since we did not find PUFAs in PC (incoming) nor VC (outgoing) samples, then PUFAs found in the GS, either from microbial activity or from the shedding of earthworm epithelial cells, appear to be selectively retained in the earthworm and its microbiome for physiological needs. The diverse range of fatty acids that we recovered in the GS may be indicative of high metabolic activity stimulated in the gut. To identify potential microbial producers of ARA and EPA in the gut soil samples, we used 16S and 18S rRNA gene sequencing alongside amplicon sequencing of the PfaA-KS domain using primers previously developed to target the PUFA synthase complex (described in the next section) . Using high-level taxonomic binning of the functional gene sequences and then examining members of these taxonomic bins among the sample types, potential producers of ARA and EPA could be identified. To account for the effects of low biomass and interindividual variation, samples of GS from individual E. fetida (denoted IG) were also compared with the pooled GS samples (see for sample metadata). The V4 hypervariable region of the 16S rRNA was targeted using the 515F/806R universal primers with slight modification, and the TAReuk454FWD1/TAReukREV3mod universal primers were used to target the eukaryotic 18S rRNA gene for amplification and sequencing on an Illumina MiSeq (see Materials and Methods for primer sequences). After decontamination of 16S rRNA sequences, an average of 8,116 (±4,193 SD) reads were obtained for GS, 5,606 (±4,124 SD) for IG, 8,296 (±5,664 SD) for PC, and 7,648 (±4,295 SD) for VC. A total of 4,238 amplicon sequence variants (ASVs) were identified across all the samples. The 18S sequencing resulted in an average of 30,753 (±15,703 SD) reads for GS, 25,977 (±10,907 SD) for IG, 18,221 (±12,272 SD) for PC and 20,034 (±14,504 SD) for VC after decontamination with a total of 886 identified ASVs (see for 16S and 18S ASV tables). Prokaryote taxonomic sequence profiles Observed ASVs and Shannon metrics were used to analyze the alpha-diversity of the microbiome, based on a rarefied ASV table with 5,000 reads for 16S rRNA analysis and 10,000 reads for 18S rRNA analysis ( rarefaction curves). No significant difference was found in richness or diversity between GS and IG samples. 
However, gut soils (GS and IG) showed significant variation compared to compost samples (PC and VC), with VC displaying significantly higher measures of richness compared to both gut soils (Wilcoxon, GS-VC: observed ASVs P = 3.291e-04 and IG-VC: P = 1.842e-05; ). Similar variation was observed in Shannon diversity metrics, with significant differences identified between both gut soil samples compared to VC (Wilcoxon, GS-VC: Shannon P = 0.001636 and IG-VC: P = 9.578e-06; ). This suggests that VC and PC have a higher evenness and contain more ASVs that were only identified once or twice compared to GS samples. Beta-diversity analysis was conducted using the unrarefied ASV table but was normalized using a variance stabilizing transformation with DESeq2. Ordination analysis using Euclidean distance on center-log-ratio transformed data indicated distinct sample clusters for PC and VC, while IG and GS clustered together . Based on the analysis of alpha- and beta-diversity, it was found that there were no significant differences in richness or diversity between the pooled and individual GS samples and that they clustered together. In contrast, the PC, VC, and gut soil samples were found to each have distinct microbial community compositions. shows the clustering of class-level bins and the relatively enriched or depleted taxa across samples and reveals that GS samples largely cluster together except for a few IG samples, while the compost samples form a separate cluster. This suggests that either the earthworm gut community is truly an endemic microbiome or that the gut environment provides highly selective conditions that promote transient growth and activity in a stable reproducible manner . The relative abundance of major phyla and families in each sample type provides additional evidence of different microbial community compositions per sample type. Phyla that contribute at least 5% and families that contribute at least 2% to the total relative abundance are compared for relative abundance. To test for significant differences in relative abundance, a pairwise Wilcoxon analysis with post-Benjamini-Hochberg was conducted between each sample type ( , and see pairwise family level corr ). Comparing IG and GS samples revealed significant differences across sample types at the phylum, class, and family levels; however, these differences were primarily driven by only a few family-level groups within the primary segregating phyla. Specifically, significant differences were found in three of 51 families in Gammaproteobacteria, five of 61 families within Firmicutes, and seven of 37 families within Bacteroidota for gut sample comparisons. On the other hand, pronounced differences were observed in the comparisons between the gut soils and composts, where most families showed significant changes in abundance. In general, there was significant variation in relative abundances when gut soils were compared with composts ( relative abundance barplots ). While the significance was less for IG than for GS in most cases, the trend of the mean values was the same, supporting that the features seen in pooled samples can be generalized to individual specimens, and are not driven by stochastic or random features from outliers. Notably, Actinobacteriota and Firmicutes display a higher relative abundance in gut soil samples compared to the compost samples, whereas Bacteroidota shows a higher relative abundance in the compost samples compared to the gut soils. 
The pairwise comparison of sample types supported the findings from the taxonomic summary regarding Actinobacteriota, Bacteroidota, and Bacillota (formerly Firmicutes). Specifically, Bacteroidota, Gammaproteobacteria, and Chloroflexi had a higher relative abundance in both compost samples compared to each of the gut soil samples. This was also true for Alphaproteobacteria when comparing the gut soil samples to VC. Verrucomicrobiota had a higher relative abundance in the gut soils compared to PC but still lower than in VC. No differences were found in Planctomycetota for all comparisons and in Alphaproteobacteria when comparing the gut soils to PC. In previous works, the major phyla identified in E. fetida when fed with different types of compost were Proteobacteria and Bacteroidota. However, since the compost used in our study differed from that used in the analysis by Budroni et al. , slight differences in the microbial composition of GS are expected . This is supported by previous reports that show changes in microbial composition based on the compost composition, as well as other studies that varied the feeding compost . A study by Sapkota et al. also identified Proteobacteria as a major phylum in their analysis of the earthworm genera Aporrectodea and Lumbricus . Although Proteobacteria and Bacteroidota are still present in GS from our study, they do not account for most of the microbial composition in this sample type. Their lower abundance could be explained by the nutrient-rich PC substrate, applied to the earthworm population as feed and housing matrix. Previous research has shown that Proteobacteria and Bacteroidota are particularly increased in abundance when E. fetida is fed with a nutrient-poor substrate, whereas a nutrient-rich substrate results in higher microbial diversity and a higher abundance of Firmicutes and Actinobacteria . PfaA-KS for assessment of bacterial PUFA producers To understand the potential that PUFA lipids concentrating in the gut of the earthworm are derived from bacteria, we used a previously developed universal primer targeting the beta-ketoacyl synthase (KS) region in the single-copy pfaA gene, which amplifies an approximately 502 bp region spanning the PfaA-KS N- and C-terminals . The KS domains from PKS types tend to cluster as a monophyletic group, and within these groups, the KS can approximate the evolutionary phylogeny of the organisms . In addition, the KS sequence phylogeny is distinctive of the type of PKS complex in which they occur, which makes it a suitable candidate marker for targeting the genes involved in the synthesis of a specific metabolite. We sequenced the amplified KS products using an Illumina MiSeq, resulting in 53 samples with successfully generated, filtered, and PfaA-KS-assigned reads, and final average sequence counts segregating per sample type: 16 (±27 SD) for GS-derived reads, 315 (±526 SD) for PC, and 1,230 (±1,459 SD) for VC. A total of 66 PfaA-KS ASVs were derived using the DADA2 workflow on merged reads. One read (Seq64) was probably a chimera from earthworm DNA and therefore removed from the data set before proceeding to downstream analyses. The final 65 PfaA-KS ASVs led to the identification of 6 resolved unique bacterial strains within 14 taxonomic bins across 5 phyla . Neither microeukaryotes nor fungi were found among the annotations, although some of these organisms harbor an iterative type I PKS (T1PKS) gene complex and KS in a neighboring clade to bacterial iterative T1PKS . 
Little is known about the distribution of PUFA-synthase genes and production potential in microorganisms outside of marine contexts, with only very recent investigations beginning to probe the terrestrial biospheres . To date, we are aware of only one primer set that has been developed for specifically targeting the pfa T1PKS KS domain, which was modeled from the sequences of marine Pseudomonadota, especially those belonging to γ-Proteobacteria. Since these bacteria appear to have propagated T1PKS genes via HGT , a single primer pair may work universally within this phylum to capture many pfa KS sequences. However, this specificity toward marine Gammproteobacteria likely hinders the recovery of bacteria in diverse phyla deriving from terrestrial soil and freshwater ecosystems such as actinobacteria and cyanobacteria, respectively, where T1PKS appears to have evolved rather from a common ancestor. Indeed, certain soil taxa with known PUFA production competence were surprisingly not found among our annotated sequences, such as some strains within myxobacteria, Actinomycetes, and Desulfobacteria . We checked primer specificity to 427 phylogenetically diverse organisms from NCBI GenBank reference genomes and saw alignments occur across eight deeply divergent lineages, including common soil or environmental bacteria. Gammaproteobacteria ( n = 151) were the dominant lineage, of which Shewanella ( n = 107) genomes were most highly represented. Of 427 queried genomes, 182 matched both forward and reverse primers and indicated a high likelihood of successful amplification. This analysis did confirm that Sorangium cellulosum and Minicystis rosea of Myxococcota would be missed ( Phylogenetic distribution of PfaA-KS primer matches ). Despite this limitation, a surprising number of sequences and diversity were recovered from the compost and earthworm gut soil sample material that we report here. We recovered PfaA-KS sequences that had >95% sequence identity to those found among marine bacterial reference isolates using BLASTn query on the full nucleotide collection (nt). For sequences that were not well resolved using the NCBI RefSeq nt database, a BLASTx translated amino acid sequence was mapped against the non-redundant (nr) database, with most hits achieving >95% homology (see for a full summary of BLAST hit results). Phylogenetic inference among recovered sequences To understand the relationship between the sequences we captured from earthworm gut and compost soil samples, we constructed a phylogenetic inference tree using our 65 bacterial PfaA-KS ASVs plus the 446 PfaA-KS sequences that were published in the original publication for the creation of the PfaA-KS primer set . The relevant difference with our present study is the use of the PfaA-KS primers on terrestrial soil-derived samples, whereas prior research has been restricted to their application on marine water and sediment samples. Using the Escherichia coli fab F gene as an outgroup (representative of a KS subtype involved in the canonical type II FAS system), we see that our sequences surprisingly form clusters among shallow branches of the tree which match their taxonomic assignments, rather than their environmental source. However, known evolutionary relationships that are based on whole genomes are not recapitulated since more distantly related taxa (such as Pseudomonadota and Bacteroidota) form larger clusters that break up established evolutionary lineages, such as between members of Gammproteobacteria. 
There is a clear separation between sequences found within earthworm GS samples and those of the compost samples (denoted by barplots along the outer ring of the tree). All GS sequences are assigned to Shewanella and cluster among a fairly undifferentiated lineage that includes other Vibrio -assigned sequences, and which stem from a bifurcation that separates (i) facultatively anaerobic cold-water-marine Gammaproteobacteria ( Vibrio , Colwellia , Moritella , Psychromonas ) and aerobic saprophytic or predatory Bacteroidota ( Flammeovirga , Saprospirace); from (ii) subsurface anoxic-dwelling taxa (Legionellales, Candidatus Hydrogenedentes), anaerobic fermenters ( Anaerolineae ), and terrestrial aerobic chemoheterotrophs (Armatimonadota) . The PfaA-KS sequences found within compost samples were spread more widely across the tree among organisms such as Gemmata (Planctomycetota), Cloacimonadota, and other Anaerolineales that are often found among carbon-rich sediments such as peat bogs, waste-water treatments, or the rhizosphere. To date, the activity and product of these putatively competent PUFA synthase complexes remain unconfirmed . The remaining one-third of the tree is wholly dominated by a basal cluster of sequences assigned to Deltaproteobacteria or its sublineage, SAR324, which are distinguished from the other taxonomically and phylogenetically diverse branchings. This suggests that metabolic niche participation has broadly determined the structure of the tree, which separates primarily between marine chemolithoheterotrophs (sulfate reducers) and diverse chemoorganoheterotrophs (carbon/nitrogen utilizers through predatory or saprophytic processes) . This topology further supports the observation that T1PKS are prone to HGT events , and that HGT propagation of the pfa complex helps to explain its environmental and phylogenetically widespread occurrence and lack of strict coherence among species or sub-species level presence/absence patterns . PfaA-KS sequence distribution and abundance across sample types To gain insight into which members of the microbial community that are concentrated in earthworm gut soil and which may be contributing to the production of PUFA, we conducted an in-depth investigation on the ASV table abundances, which was first transformed to center-log-ratio (CLR) values . An extremely sparse compositional table rendered normalization practices such as rarefaction and relative abundance inadequate to standardize sequence abundances across samples. We confirmed that there is no apparent bias in the amount of genetic data recovered from the samples based on sample type, starting amount, or concentration, supporting the conclusion that the observed sparsity of the PfaA-KS ASV table is biologically real (see for sample metadata; see for associations between PfaA-KS sequence recovery and sample metadata). Surprisingly, in contrast to the metabolite lipid profiles generated from the same sample material, the recovered PfaA-KS sequences were most diverse and concentrated among compost samples (PC and VC) rather than the GS samples . The PfaA-KS sequences were sparsely distributed at very low abundance in GS samples, both pooled and individual . While the ASV taxonomic assignments suggest that just a handful of unique taxa contribute to the total sequence variation, the sequence variants themselves are quite numerous even within the same putative bacterial strain and are strongly segregated between GS and compost samples . 
No single ASV is shared across all three sample types. This can be seen most clearly when ASVs are aggregated at the genus level, or otherwise their lowest taxonomic bin, and then hierarchically clustered according to Ward's sum of squares method. The GS samples are composed entirely of Shewanella-assigned sequences, while the compost samples contain a mixture of the other seven environmental taxa bins. This suggests that only Shewanella remain in close association with the earthworm host tissue rather than passing through as transients (indicated by the strains found in VC but not in GS), and perhaps actually are co-residents of the earthworm gut.

PfaA-KS taxa within the microbiome profile

To understand the prevalence of putative PUFA-producing taxa among the overall prokaryote microbiome community, the taxonomic bins assigned to the PfaA-KS sequences were sought among the 16S rRNA data set, and the matching ASVs from the abundance tables were subset for further analysis (prok.pfa abundance barplots). The subset of PfaA-KS taxa from the 16S data (herein referred to as prok.pfa) was normalized by the total number of ASVs and then transformed using CLR to compare with the PfaA-KS sequence amplicon data set (herein referred to as pfa). First, Procrustes analysis was used to infer the correspondence of the ASV-level ordinations of the prok.pfa and pfa data using Euclidean distance. Using the function "protest" with 999 permutations, the data sets were found to have a modest but significant correlation (m12 squared = 0.758, corr. = 0.492, P = 0.001), and their ordinations show similar separation of sample types along the first and second principal components. Next, the ASV counts were binned at the genus level (or lowest taxonomic assignment), resulting in seven PfaA-KS taxonomic assignments: Anaerolineales, Armatimonadota, Gemmataceae, Hydrogenedentes, Moritella, Phycisphaera, and Shewanella. The correspondence of these binned taxa vectors within the pfa and prok.pfa abundance tables was calculated using a Mantel test on the Euclidean distance matrices, again yielding low correlation (Spearman's rho = 0.194, P = 0.007, mantel test), suggesting that the abundance of the sequences captured using the PfaA-KS primers cannot be related to the microbiome profile reconstructed from 16S rRNA amplification. Correlation and visualization of the abundance distribution of each taxonomic bin show that only for Shewanella abundance is there coherence between the PfaA-KS and 16S rRNA abundances (Pearson's rho: PC = 0.465, GS = 0.662; ; TaxaLinearAssoc-pfaclrM2; & Histograms), supporting the hypothesis that Shewanella proliferate in the earthworm gut relative to the overall community, where they could contribute PUFA nutritionally to the earthworm rather than merely passing through as transient dead organic matter. Truly correlating the amplicon data sets would require quantification of the amplified gene regions, and more targeted research in the future should incorporate a digital PCR or quantitative PCR step in the amplicon preparation workflow. This would verify the coherence between the presence of the PfaA-KS and 16S rRNA gene regions and the bacterial taxa from which they derive.
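The Procrustes and Mantel comparisons reported above map onto standard vegan calls; the sketch below assumes pfa_clr and prokpfa_clr are CLR-transformed sample-by-taxon matrices with matching sample rows, and pfa_genus_clr/prokpfa_genus_clr are their genus-binned counterparts (all placeholder names).

```r
library(vegan)

pca_pfa  <- rda(pfa_clr)        # unconstrained PCA, i.e. Euclidean (Aitchison) ordination
pca_prok <- rda(prokpfa_clr)

# Procrustes rotation with a permutation test ("protest")
protest(pca_pfa, pca_prok, permutations = 999)

# Mantel test on Euclidean distance matrices of the genus-binned tables
mantel(dist(pfa_genus_clr), dist(prokpfa_genus_clr),
       method = "spearman", permutations = 999)
```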
Using effect size as a measure of explanatory power for the differences in ASV abundance in the prok.pfa table also identified Shewanella-assigned ASVs as highly indicative of differences between vermicompost (earthworm castings) and pre-compost (compost soil applied as feed but not containing earthworms) ( ; Effect size aldex2 results). The dissociation between pfa and prok.pfa abundances for the remaining taxa is most likely because the taxa that contain a putative PUFA gene complex are extremely low-abundant members of the total microbial community (especially in soil or compost), and so are negatively biased during 16S rRNA primer annealing and amplification.

Eukaryote taxonomic assessment for other soil microbial or microfaunal PUFA producers

Earthworms ingest a variety of non-bacterial microbiota from the soil and can even be selective foragers. Phospholipid fatty acid (PLFA) biomarkers indicate the concentration of microeukaryotes and fungi within the earthworm gut relative to the bulk soil, and a robust immune response has evolved to entrap and expel parasitic nematodes. Prior research on the lipid content of earthworm GS attributed the accumulation of PUFA in GS and somatic tissue to the metabolic activity of microeukaryotes such as fungi and protozoa. This was in accordance with the understanding that certain FAs are lipid biomarkers for the presence of organismal groups, on account of the FA composition of their phospholipid membranes. According to the pattern of observations, algae, yeast, and other fungi are typically associated with the synthesis of long-chain polyunsaturated acyl moieties, including 20C+ fatty acids and sphingolipids, from both de novo and precursor 18C fatty acids. Common terrestrial bacteria, on the other hand, are attributed with medium-chain, straight or branched, odd-numbered saturated or even-numbered monounsaturated fatty acids, and so they have not been considered candidate producers of PUFAs found in a terrestrial sediment context. Examples of soil-living microeukaryote organisms with notable concentrations of PUFA include invertebrate microfauna such as nematodes (Chromadorea), macroinvertebrate collembola (Ellipura), fungi such as Mortierellaceae, and protozoan microalgae such as Labyrinthulomycetes. However, the origin of earthworm gut PUFAs from a microeukaryotic source has not been fully investigated. Therefore, we used 18S rRNA amplification to catalog the presence of putative PUFA-producing eukaryotes and determine whether their pattern of distribution across sample types may help explain PUFA accumulation in the earthworm gut environment. Eukaryote ASVs that mapped to the Oligochaeta subclass (aquatic and terrestrial worms) were first removed to eliminate the host signal that otherwise obliterated our ability to see representative taxa. For GS samples, this unfortunately resulted in a loss of >90% of reads. Filtered samples were then rarefied to 1,000 reads and ASVs were binned at the "class/order/family" level for visualization and assessment ( ; Eukaryote taxa assignments barplot).
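The host-removal, rarefaction, and binning steps for the 18S data could be expressed with phyloseq roughly as follows; ps18s is an assumed phyloseq object, and the rank used to flag the host ("Class" set to "Oligochaeta") depends on how the SILVA taxonomy was parsed, so both are placeholders.

```r
library(phyloseq)

# drop host (earthworm) ASVs, keeping taxa with missing rank information
ps_no_host <- subset_taxa(ps18s, Class != "Oligochaeta" | is.na(Class))

# rarefy to 1,000 reads per sample, then bin at a fixed rank for visualization
ps_rare <- rarefy_even_depth(ps_no_host, sample.size = 1000, rngseed = 42)
ps_bins <- tax_glom(ps_rare, taxrank = "Family", NArm = FALSE)

# percent relative abundance per bin, e.g. to inspect Chromadorea or Mortierellaceae
ps_rel <- transform_sample_counts(ps_bins, function(x) 100 * x / sum(x))
```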
The Chromadorea and Ellipura relative abundances in GS samples were approximately 1.88% and 0.02% of rarefied reads, respectively, while significantly greater proportions of these two organisms were found in VC samples (castings) (Wilcoxon: P < 0.0001). Mortierellaceae were most prevalent in PC (7.55%) while <1% in VC and GS, and Thraustochytriaceae were <1% for all sample types. Nematodes contain all desaturase and elongase enzymes to de novo produce up to 20C-length FAs, including 20:5 ω3 EPA and 20:4n6 ARA, which were the uniquely enriched PUFAs found among our GS samples. Therefore, we cannot exclude that the prevalence of nematodes passing through the earthworm intestinal tract and expelled with the castings may explain the detection of PUFAs in the GS. However, since PC samples also contain a smaller but detectable amount of nematode 18S rRNA signal, and yet are devoid of any detectable PUFA, it is unclear whether the nematodes in the earthworm composting system where we sampled are sequestering PUFA. Previous studies have shown that Caenorhabditis elegans may contain anywhere from 1% to 20% of total lipid PUFA, depending on growth temperature. This is largely due to an abundance of 18C FAs, namely a signature peak at 18:1 ω7 (cis-vaccenic acid) in all lipid fractions (phosphatidylcholine, phosphatidylethanolamine), but to a lesser degree also EPA and ARA. However, none of our sample peaks were annotated as 18:1 ω7, which would be expected if nematode lipids (or otherwise aerobic bacteria) were contributing to the pool of measured fatty acids in our samples. Furthermore, earthworms respond to nematode infestations that occur in the coelomic cavity and near the nephridia by encapsulating the tiny worms in cysts called "brown bodies" and destroying them using reactive oxygen species. Alternatively, the earthworm itself may be able to produce PUFAs, as recent genomic analysis has revealed genes encoding "methyl"-end desaturase homologs to the Δ12 (ω6) and Δ15 (ω3) desaturases in Oligochaeta. Little is known about the ability of invertebrates to produce long-chain PUFA, especially the amount and rate at which this might occur. Evidence from genomic analysis attests to the widespread occurrence of elongation and desaturation enzymes, but only a handful of earlier publications have indicated some limited de novo production, such as in Collembola (springtails), Daphnia (water fleas), and nematodes. Still, Arthropoda exhibit poor fecundity and decline in fitness when their dietary supply of pre-formed PUFA is absent or restricted, despite encoding the competent enzymes for de novo production. In earthworms, the increasing concentration of PUFAs from bulk soil to gut soil, finally peaking in muscle tissue, was interpreted as trophic transfer from soil microorganisms to the earthworm, but the flow may well work in the opposite direction. The earthworm may be producing PUFA within its own tissue, perhaps as precursors to eicosanoids or for cellular membrane permeability, and as the epithelial cells shed into the gut, PUFAs would appear to concentrate in gut soil. However, in this prior study, the entire unbroken intestinal tract was surgically isolated from the body, which challenges any notion that somatic cell metabolites should leak into the gut soil and rather strengthens the original interpretation.
To definitively rule out earthworm-derived PUFAs, further work should isolate metabolic activity using approaches such as stable isotope probing with heavy isotopes of carbon and hydrogen.

Observed ASVs and Shannon metrics were used to analyze the alpha-diversity of the microbiome, based on a rarefied ASV table with 5,000 reads for 16S rRNA analysis and 10,000 reads for 18S rRNA analysis (rarefaction curves). No significant difference was found in richness or diversity between GS and IG samples. However, gut soils (GS and IG) showed significant variation compared to compost samples (PC and VC), with VC displaying significantly higher measures of richness compared to both gut soils (Wilcoxon, GS-VC: observed ASVs P = 3.291e-04 and IG-VC: P = 1.842e-05). Similar variation was observed in the Shannon diversity metrics, with significant differences identified between both gut soil sample types and VC (Wilcoxon, GS-VC: Shannon P = 0.001636 and IG-VC: P = 9.578e-06). This suggests that VC and PC have a higher evenness and contain more ASVs that were identified only once or twice compared to GS samples. Beta-diversity analysis was conducted using the unrarefied ASV table, which was normalized using a variance stabilizing transformation with DESeq2. Ordination analysis using Euclidean distance on center-log-ratio transformed data indicated distinct sample clusters for PC and VC, while IG and GS clustered together. Based on the analysis of alpha- and beta-diversity, it was found that there were no significant differences in richness or diversity between the pooled and individual GS samples and that they clustered together. In contrast, the PC, VC, and gut soil samples were found to each have distinct microbial community compositions. Clustering of class-level bins and of the relatively enriched or depleted taxa across samples reveals that GS samples largely cluster together, except for a few IG samples, while the compost samples form a separate cluster. This suggests that either the earthworm gut community is truly an endemic microbiome or that the gut environment provides highly selective conditions that promote transient growth and activity in a stable, reproducible manner. The relative abundance of major phyla and families in each sample type provides additional evidence of different microbial community compositions per sample type. Phyla contributing at least 5% and families contributing at least 2% of the total relative abundance were compared. To test for significant differences in relative abundance, a pairwise Wilcoxon analysis with post hoc Benjamini-Hochberg correction was conducted between each pair of sample types ( , and see pairwise family level corr). Comparing IG and GS samples revealed significant differences across sample types at the phylum, class, and family levels; however, these differences were primarily driven by only a few family-level groups within the primary segregating phyla. Specifically, significant differences were found in three of 51 families in Gammaproteobacteria, five of 61 families within Firmicutes, and seven of 37 families within Bacteroidota for gut sample comparisons. On the other hand, pronounced differences were observed in the comparisons between the gut soils and composts, where most families showed significant changes in abundance. In general, there was significant variation in relative abundances when gut soils were compared with composts (relative abundance barplots).
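A hedged sketch of these diversity calculations is given below; ps16s is an assumed phyloseq object whose sample_data contains a column named type (PC, VC, GS, IG), and the poscounts size-factor estimator is an assumption added to cope with sparse counts.

```r
library(phyloseq)
library(DESeq2)

# alpha diversity on a table rarefied to 5,000 reads per sample
ps_rare <- rarefy_even_depth(ps16s, sample.size = 5000, rngseed = 42)
alpha   <- estimate_richness(ps_rare, measures = c("Observed", "Shannon"))
pairwise.wilcox.test(alpha$Shannon, sample_data(ps_rare)$type, p.adjust.method = "BH")

# beta diversity on the unrarefied table after variance stabilizing transformation
dds <- phyloseq_to_deseq2(ps16s, ~ type)
dds <- estimateSizeFactors(dds, type = "poscounts")
vst <- assay(varianceStabilizingTransformation(dds, blind = TRUE))
ord <- prcomp(t(vst))   # Euclidean ordination (PCA) of the transformed counts
```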
While the significance was less for IG than for GS in most cases, the trend of the mean values was the same, supporting that the features seen in pooled samples can be generalized to individual specimens, and are not driven by stochastic or random features from outliers. Notably, Actinobacteriota and Firmicutes display a higher relative abundance in gut soil samples compared to the compost samples, whereas Bacteroidota shows a higher relative abundance in the compost samples compared to the gut soils. The pairwise comparison of sample types supported the findings from the taxonomic summary regarding Actinobacteriota, Bacteroidota, and Bacillota (formerly Firmicutes). Specifically, Bacteroidota, Gammaproteobacteria, and Chloroflexi had a higher relative abundance in both compost samples compared to each of the gut soil samples. This was also true for Alphaproteobacteria when comparing the gut soil samples to VC. Verrucomicrobiota had a higher relative abundance in the gut soils compared to PC but still lower than in VC. No differences were found in Planctomycetota for all comparisons and in Alphaproteobacteria when comparing the gut soils to PC. In previous works, the major phyla identified in E. fetida when fed with different types of compost were Proteobacteria and Bacteroidota. However, since the compost used in our study differed from that used in the analysis by Budroni et al. , slight differences in the microbial composition of GS are expected . This is supported by previous reports that show changes in microbial composition based on the compost composition, as well as other studies that varied the feeding compost . A study by Sapkota et al. also identified Proteobacteria as a major phylum in their analysis of the earthworm genera Aporrectodea and Lumbricus . Although Proteobacteria and Bacteroidota are still present in GS from our study, they do not account for most of the microbial composition in this sample type. Their lower abundance could be explained by the nutrient-rich PC substrate, applied to the earthworm population as feed and housing matrix. Previous research has shown that Proteobacteria and Bacteroidota are particularly increased in abundance when E. fetida is fed with a nutrient-poor substrate, whereas a nutrient-rich substrate results in higher microbial diversity and a higher abundance of Firmicutes and Actinobacteria . To understand the potential that PUFA lipids concentrating in the gut of the earthworm are derived from bacteria, we used a previously developed universal primer targeting the beta-ketoacyl synthase (KS) region in the single-copy pfaA gene, which amplifies an approximately 502 bp region spanning the PfaA-KS N- and C-terminals . The KS domains from PKS types tend to cluster as a monophyletic group, and within these groups, the KS can approximate the evolutionary phylogeny of the organisms . In addition, the KS sequence phylogeny is distinctive of the type of PKS complex in which they occur, which makes it a suitable candidate marker for targeting the genes involved in the synthesis of a specific metabolite. We sequenced the amplified KS products using an Illumina MiSeq, resulting in 53 samples with successfully generated, filtered, and PfaA-KS-assigned reads, and final average sequence counts segregating per sample type: 16 (±27 SD) for GS-derived reads, 315 (±526 SD) for PC, and 1,230 (±1,459 SD) for VC. A total of 66 PfaA-KS ASVs were derived using the DADA2 workflow on merged reads. 
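As a small illustration of how the per-sample-type read summaries quoted above (mean ± SD) can be produced, the following sketch assumes a PfaA-KS count table pfa_counts (samples in rows) and a metadata column meta$type; both names are hypothetical.

```r
reads_per_sample <- rowSums(pfa_counts)

# mean and standard deviation of PfaA-KS reads per sample type (GS, PC, VC)
aggregate(reads_per_sample, by = list(type = meta$type),
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```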
One read (Seq64) was probably a chimera from earthworm DNA and therefore removed from the data set before proceeding to downstream analyses. The final 65 PfaA-KS ASVs led to the identification of 6 resolved unique bacterial strains within 14 taxonomic bins across 5 phyla . Neither microeukaryotes nor fungi were found among the annotations, although some of these organisms harbor an iterative type I PKS (T1PKS) gene complex and KS in a neighboring clade to bacterial iterative T1PKS .
In sum, we found that the earthworm gut soil ecosystem is distinct from the input compost and output castings according to the type and abundance of organisms present, as well as in the fatty acid metabolite profiles. Amplicon sequencing of the KS functional domain on the pfaA gene as part of the PUFA synthase complex illuminated the sparse and phylogenetically narrow distribution of putative PUFA-producing bacterial taxa, especially the ASVs from GS samples, which all unambiguously belonged to strains of Shewanella. Most surprising was that while the 20C PUFAs were found among the total lipid pool of the GS samples and absent in both compost sample types, the PfaA-KS sequence abundance pattern was exactly the contrary. The samples in which the largest number and diversity of PfaA-KS sequences were recovered were VC, which are the castings or feces of the earthworms. These samples contained taxa also found among the GS samples which were implicated as PUFA competent based on PfaA-KS presence, but VC samples also contained other PfaA-KS-positive taxa known to be widespread free-living organisms. Based on 16S and 18S rRNA taxonomic profiling, the GS samples diverged from the compost samples based on the presence and abundance of ASVs, suggestive of a unique ecosystem that is maintained by the conditions of the earthworm gut. A eukaryotic origin of the PUFAs is possible but not well supported, since organisms such as nematodes and collembola failed to show other signature 18C fatty acid biomarkers and were not well represented among the amplicon sequence pool, especially of the GS samples. The overall picture appears to be that earthworms may aggregate the ARA and EPA PUFAs stemming from bacterial metabolism, or they produce these lipids from their own genomically encoded enzymes. Conditions in the intestinal tract may activate PUFA-producing taxa like Shewanella that proliferate in the earthworm gut ecosystem during the passage of environmental soil and bacteria.

Materials

For molecular isolations, including lipid extraction and nucleic acid isolation, the reagents and equipment used are listed in , respectively.

Experimental design and sample collection

Adult Eisenia fetida were collected from Vermigrand GmbH in Absdorf, Austria, together with different compost samples, designated Pre-Compost and Vermicompost. Pre-Compost (PC) is compost consisting of horse manure and thermophilic compost; it is laid freshly onto the existing compost every week to feed the worms. Vermicompost (VC) consists of worm castings and is used as an organic fertilizer. Samples were collected on three specific days within a week, for two separate weeks. This sampling scheme was chosen to capture the possible effects within a week of composting, since new Pre-Compost is laid onto the existing soil once a week. Collected worms were kept in their natural soil composition until dissection, which took place no more than 2 hours later. For dissection, the worms were put into a petri dish and cleaned of residual soil before being euthanized in hot water (70°C) for 20 s.
Worms were then dissected aseptically under a stereomicroscope to recover soil from the intestinal tract (called "Gut soil" or GS). Gut soil from 10 to 15 worms was collected to recover at least 275 mg of wet gut soil per replicate. To collect castings of E. fetida, worms were cleaned externally with deionized water and then separated into individual culture boxes with ventilation for 48 hours. The castings produced by the worms were collected, weighed, and then pooled to recover at least 275 mg of wet weight. Samples were frozen immediately after collection, freeze-dried on the next day, and stored at −20°C until further use. Every sample was given a unique identifier for tracking the sample during the analysis, as shown below:

Lipid extraction

The Folch extraction method was carried out following the procedure described by Folch et al.: 70 mg of dry sample (PC, VC, and GS) was extracted with 1.4 mL of chloroform:methanol (2:1), vortexing for 2 minutes. The mixture was centrifuged at 4,000 rpm for 10 minutes before collecting the supernatant. The extraction process was carried out three times on the same sample and the extracts were pooled. The collected organic layers were purified by washing with 1 mL dH2O and a spatula tip of NaCl. The mixture was vortexed for 1 minute and centrifuged at 4,000 rpm for 10 minutes. The chloroform layer that contains the extracted lipids was recovered and evaporated in a Labconco CentriVap Benchtop Vacuum concentrator under reduced pressure at 10°C. The lipid content was determined gravimetrically and calculated as the weight percentage of dry biomass. The lipid extracts obtained were directly used for derivatization.

Basic derivatization

Basic derivatization was carried out by resuspending 10 mg of lipid extract in 100 µL of hexane. After vortexing to completely dissolve the extract, 50 µL of 2 M KOH solution in methanol was added and vortexed for 1 minute. The sample was incubated for 5 minutes at room temperature. 125 mg of sodium bisulfate was added to the sample, vortexed, and centrifuged at 4,000 rpm for 5 minutes. 100 µL of supernatant was collected, mixed with 400 µL of hexane, and filtered through a 0.22 µm PTFE filter. The filter was washed with an additional 100 µL of hexane.

GC-MS method

FAMEs were separated using an HP-5ms Ultra Inert Column (30 m × 250 μm × 0.25 µm) (Agilent, CA, USA). 1 µL of sample was injected in splitless mode with an injector temperature of 280°C. The initial oven temperature was set at 150°C for 1 minute, and the temperature was gradually raised to 220°C at 3°C/min with a final increase to 300°C for 3 minutes. Helium was used as carrier gas at a constant column flow rate of 2.52 mL/min. The GC-to-MS interface temperature was fixed at 280°C, and the MS was operated with an electron ionization source in scan mode. The mass range evaluated was 50–600 m/z, and the MS quad and source temperatures were maintained at 150°C and 230°C, respectively. To search and identify each fatty acid, the NIST-MS Library (2.2) was used. The relative percentage of each compound was calculated from the relative peak areas of the total ion chromatogram. In addition, a commercial standard (Supelco 37 Component FAME Mix cat.#47885) was used to identify fatty acid peaks by comparing retention times to those of the known fatty acids in the standard.

Statistical analysis of identified fatty acids

All extractions were performed in biological triplicate for each time point within a week.
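For clarity, the conversion from integrated peak areas to relative fatty acid percentages can be expressed in a few lines of R; the data frame peaks (columns sample_id, fatty_acid, area, sample_type) is a hypothetical structure, not the study's actual export format.

```r
# percent of the total ion chromatogram area, computed within each sample
peaks$rel_percent <- 100 * peaks$area / ave(peaks$area, peaks$sample_id, FUN = sum)

# mean ± SD of each fatty acid across the biological triplicates of a sample type
aggregate(rel_percent ~ fatty_acid + sample_type, data = peaks,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```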
The results were expressed as mean ± standard deviation, while the effect of the extraction method and the extraction conditions on lipid recovery was analyzed using two-way ANOVA followed by a Tukey post hoc test (statistical significance at P < 0.05). Statistical analysis was performed with GraphPad Prism 8 (8.0.1) (GraphPad Software, San Diego, CA, USA). To test differences between the two sampling weeks and to discriminate differences in fatty acid composition between samples, the data were transformed using a center-log ratio and then evaluated using a Kolmogorov–Smirnov test and Euclidean distance-based ordination, respectively. Between-group differences were assessed by permutational ANOVA, explaining community distances by sampling week interacting with sampling day and sample type (Response = adonis2(community data ~ sampling week * sampling day * sample type)). Every analysis was performed with R version 4.1.1 using the "vegan" package (v2.6-4).

Nucleic acid extraction and sequencing

DNA and RNA extraction was carried out using the ZymoBIOMICS MagBead DNA/RNA Kit. Samples were weighed at roughly equivalent starting biomass (approximately 20 mg dry weight when sample quantity allowed) into prefilled tube strips (8 × 12) containing 750 µL DNA/RNA Shield (ZymoBIOMICS) and 40–400 µm glass beads (Macherey-Nagel) and arranged on a 96-well rack. Homogenization was conducted on an MP Fastprep-96 with 1 minute at maximum speed and 5 minutes of rest, repeated five times. DNA and RNA were purified from the same input material following the kit protocol, yielding two separate eluates for DNA and RNA, respectively. The concentrations were measured with the Qubit 1× dsDNA HS Assay Kit and RNA HS Assay Kit. The detailed protocol for amplicon generation and sequencing is published by Pjevac et al. In brief, a two-step PCR was performed for amplicon generation and to add linkers as well as barcodes. Amplicons were generated for the V4 region of the 16S (515F: GTG YCA GCM GCC GCG GTA A; 806R: GGA CTA CNV GGG TWT CTA AT) and 18S (TAReuk454FWD1: CCA GCA SCY GCG GTA ATT CC; TAReukREV3mod: ACT TTC GTT CTT GAT YRA TGA) rRNA genes, as well as the PfaA-KS domain region of the pfa gene (PfaA-KS-Fw: TGG GAA GAR AWT TCC C; PfaA-KS-Rv: GTR CCN GTR CNG CTT C), part of the Pfa-synthase complex. Linker sequences from primer to barcode were forward GCT ATG CGC GAG CTG C and reverse TAG CGC ACA CCT GGT A. Cycling conditions for the 16S and 18S rRNA amplicons were as follows: initial denaturation at 94°C for 3 minutes, then denaturation at 95°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 60 s. Cycling conditions for the PfaA-KS amplicons were as follows: initial denaturation at 94°C for 3 minutes, then denaturation at 94°C for 45 s, annealing at 50.3°C for 30 s, and extension at 72°C for 90 s. The first PCR for all three primer sets was done with 30 cycles and 0.25 µM primer concentration, while the second PCR was done with seven cycles for the 16S and 18S rRNA amplicons and 15 cycles for the PfaA-KS amplicons using 0.8 µM primer concentration. Sequencing was performed by the Joint Microbiome Facility (JMF) on an Illumina MiSeq using 2 × 300 base pair sequencing.

Analysis of amplicon sequencing data

In-house generated data

Input data were filtered for PhiX contamination with BBDuk. Demultiplexing was performed with the Python package demultiplex, allowing one mismatch for barcodes and two mismatches for linkers.
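The PERMANOVA outlined above translates directly into a vegan call; in this sketch, fa_clr is the CLR-transformed fatty acid table and meta holds columns sampling_week, sampling_day, and sample_type, all of which are assumed names.

```r
library(vegan)

adonis2(fa_clr ~ sampling_week * sampling_day * sample_type,
        data = meta, method = "euclidean", permutations = 999)
```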
Primers were verified with the Python package demultiplex, allowing two mismatches each for the forward and reverse primers. Barcodes, linkers, and primers were trimmed off using BBDuk, with 47 and 48 bases being left-trimmed for F.1/R.2 and F.2/R.1, respectively, and a minimum length of 220 base pairs. Around 80% of reads across all samples passed all trimming, filtering, and denoising criteria.

ASV determination

ASVs for 16S and 18S sequences were generated by an in-house sequencing facility using a standard workflow with DADA2. The PfaA-KS sequences were handled separately according to the following steps: (i) forward- and reverse-orientated sequences were kept separate based on primer (F.1/F.2 and R.1/R.2) and merged using PEAR with a minimum overlap (-o) of 5, minimum trim length (-t) of 300, maximum assembly length (-m) of 600, minimum quality score (-q) of 30, and maximum uncalled bases allowed (-u) of 0; (ii) the forward-orientated FASTQ reads starting with the reverse primer (R: merged from R1/R2) were reverse complemented, and then all merged sequences were concatenated per sample and used for ASV calling with DADA2; (iii) filtering was conducted based on quality profile plots: length truncated to 475 nt, maxN = 0, maxEE = 5, truncQ = 3, rm.phix = TRUE. After merging and filtering, 11 samples were discarded (01, 16, 24, 27, 40, 41, 42, 50, 52, 63, and 64) because no reads passed filtering; (iv) the remaining 54 samples were used for error modeling and ASV calling, yielding 66 unique biological PfaA-KS sequence ASVs. The sample ASV table and ASV sequences were exported for further analysis.

Taxonomy assignments

SSU rRNA ASVs were classified using SINA version 1.6.1 and the SILVA database SSU Ref NR 99 release 138.1. The PfaA-KS sequences were classified using BLASTn and BLASTx against the nucleotide (nt) and non-redundant protein (nr) databases, respectively, and the top hit was extracted for putative assignment. The results from BLASTn were primarily used unless the identity and query coverage fell below 80%, in which case the BLASTx results were consulted. Sixteen samples remained with ambiguous assignments ("uncultured organism"), and for these, the BLASTx (nr) search was narrowed to bacterial references only, which resolved the assignments with a minimum percent identity of 81% and full query coverage (see for all BLAST output). Top hits for each amplicon variant were used for the final assignment, and the full taxonomic lineage was retrieved from the Genome Taxonomy Database, release 202 metadata.

Alignment and tree-building

The PfaA-KS sequence variants from the original study were accessed from the NCBI GenBank PopSet 303162115 (446 total entries) and reannotated following the same procedures with BLASTn and BLASTx as for our in-house generated PfaA-KS ASVs. These sequences were combined with our in-house generated PfaA-KS ASVs. The Escherichia coli fabF gene was used as an outgroup to root the phylogeny. Sequences were aligned with MAFFT (v7.520) using the --localpair option and --maxiterate 1000. Phylogenetic inference was made using IQ-TREE (multicore version 1.6.12), first running model selection and then using the best-fit model with the following parameters: -m TIM2e+R5 -bb 1000 -nt 4 -redo. The resulting tree was visualized using iTOL.
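The PfaA-KS filtering and denoising parameters listed in step (iii) above map onto the following DADA2 calls; the directory names and the explicit dereplication step are assumptions for a self-contained sketch, and the input files are taken to be the PEAR-merged, re-oriented reads.

```r
library(dada2)

merged_fastqs <- list.files("merged", pattern = "\\.fastq\\.gz$", full.names = TRUE)
filt <- file.path("filtered", basename(merged_fastqs))

# quality filtering with the parameters reported in the text
filterAndTrim(merged_fastqs, filt, truncLen = 475, maxN = 0, maxEE = 5,
              truncQ = 3, rm.phix = TRUE, multithread = TRUE)

# error modeling and ASV calling on the surviving samples
err    <- learnErrors(filt, multithread = TRUE)
drp    <- derepFastq(filt)
dd     <- dada(drp, err = err, multithread = TRUE)
seqtab <- makeSequenceTable(dd)   # in the study's run: 54 samples x 66 ASVs
```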
Genomic mining of PfaA-KS primer matches

To validate the use of universal primers designed for marine bacteria using a region on the ketoacyl-synthase domain of the pfaA gene (Shulse & Allen 2010), all available whole genomes of archaea and bacteria were downloaded from the NCBI GenBank archive. Putative PUFA synthase regions were found using a sequence-naïve approach by searching for tandem repeats of the phosphopantetheine attachment site for the acyl-carrier-protein domain. The genomic region containing the full putative PfaA-KS domain was extracted from reference genomes of positive hits. The blastn command line tool was used to map all forward and reverse sequence possibilities of the degenerate primers, setting the word size to 7 and qcov_hsp_perc (percent query coverage per high-scoring pair) to 50 to accommodate loose configurations of primer matches. The matches were annotated as to whether or not the last three bases on the 3′ end aligned to the reference, and the tabular (outfmt 6) output of all hits was compiled into a table, one entry per reference, with the final header: qseqid, sseqid, length, mismatch, frames, qstart, qend, sstart, send, sseq, plus custom columns recording the 3′-end matches for the forward and reverse primers, respectively. The originally extracted putative PfaA-KS domains from the reference genomes were aligned using MAFFT, and then phylogenetic inference was made using IQ-TREE, with the resulting tree file exported and visualized in iTOL. The entries were colored according to taxonomic lineage. The leaves were annotated as to their primer pair match, 3′-end coverage, and the predicted insert size to visualize amplicons that could theoretically be obtained from across the entire phylogenetic space.

Analysis and statistics

All analyses on the ASV abundance tables were handled in R version 4.1.1, including the following packages: {decontam} (v1.14.0) was used for removing contaminants; sampling depth was normalized with {DESeq2} (v1.36.0) using a variance stabilizing transformation; compositional transformations using center-log-ratio (CLR) and estimation of effect sizes were handled with {propr}, {vegan}, and {ALDEx2}. An effect size >1 was considered explanatory. The base R {stats} package was used for clustering samples as well as for pairwise comparisons with the Wilcoxon test. In addition, to reduce the rate of type-I errors, multiple testing corrections were done with the Benjamini-Hochberg method. A false discovery rate (FDR) < 0.05 was considered statistically significant. {phyloseq} (v1.40.0) was used for alpha- and beta-diversity calculations, while rarefaction for the alpha diversity calculation was done with {vegan} (v2.6-4). Heat plots were generated with {made4} (v1.68.0), {massageR}, and {gplots}; data frame transformation and reformatting were handled with {reshape2}, {dplyr} (v1.0.10), and {tidyr} (v1.2.1), and plotting was conducted with {ggplot2} using {wesanderson} color palettes.
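A command-line sketch of the primer-mapping step, wrapped in R for consistency with the rest of the analysis, is shown below; the input FASTA names are assumptions, while the word size, coverage cut-off, and output columns follow the description above.

```r
cols <- "qseqid sseqid length mismatch frames qstart qend sstart send sseq"

system2("blastn", args = c(
  "-task", "blastn-short",                    # tuned for very short queries such as primers
  "-query", "primer_variants.fna",            # all forward/reverse possibilities of the degenerate primers
  "-subject", "putative_pfaKS_regions.fna",   # regions extracted from the reference genomes
  "-word_size", "7",
  "-qcov_hsp_perc", "50",
  "-outfmt", shQuote(paste("6", cols)),
  "-out", "primer_hits.tsv"
))

hits <- read.table("primer_hits.tsv", sep = "\t",
                   col.names = strsplit(cols, " ")[[1]])
```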
This sampling scheme was chosen to capture the possible effects within a week of composting since new Pre-compost is laid onto existing soil once a week. Collected worms were kept in their natural soil composition until the dissection, taking place not more than 2 hours later. For dissection, the worms were put into a petri dish and cleaned from residual soil before being euthanized in hot water (70°C) for 20 s. Worms were then dissected aseptically under a stereomicroscope to recover soil from the intestinal tract (called “Gut soil” or GS). Gut soil from 10 to 15 worms was collected to recover at least 275 mg of wet gut soil per replicate. To collect castings of E. fetida , worms were cleaned externally with deionized water and then separated into individual culture boxes with ventilation for 48 hours. The castings produced by the worms were collected, weighed, and then pooled to recover at least 275 mg of wet weight. Samples were frozen immediately after collection, freeze-dried on the next day, and stored at −20°C until further use. Every sample was given a unique identifier for tracking the sample during the analysis as shown below : The Folch extraction method was carried out following the procedure described by Folch et al. : 70 mg dry sample (PC, VC, and GS) was extracted with 1.4 mL of chloroform:methanol (2:1) vortexing for 2 minutes. The mixture was centrifuged at 4,000 rpm for 10 minutes before collecting the supernatant. The extraction process was carried out three times on the same sample and pooled. The collected organic layers were purified by washing with 1 mL dH 2 O and a spatula tip of NaCl. The mixture was vortexed for 1 minute and centrifuged at 4,000 rpm for 10 minutes. The chloroform layer that contains the extracted lipids was recovered and evaporated in a Labconco CentriVap Benchtop Vacuum concentrator under reduced pressure at 10°C. The lipid content was determined gravimetrically and calculated as the weight percentage of dry biomass. Lipid extracts obtained were directly used for derivatization. Basic derivatization Basic derivatization was carried out by resuspending 10 mg of lipid extracts in 100 µL of hexane. After vortexing, to completely dissolve the extract, 50 µL of 2 M KOH solution in methanol was added and vortexed for 1 minute. The sample was incubated for 5 minutes at room temperature. 125 mg of sodium bisulfate was added to the sample, vortexed, and centrifuged at 4,000 rpm for 5 minutes. 100 µL of supernatant was collected, mixed with 400 µL of hexane, and filtered through a 0.22 µm PTFE filter. The filter was washed with an additional 100 µL of hexane. GC-MS method FAMEs were separated using a HP-5ms Ultra Inert Column (30 m × 250 μm × 0.25 µm) (Agilent, CA, USA). 1 µL sample was injected in spitless mode and an injector temperature of 280°C. The initial oven temperature was set at 150°C for 1 minute and the temperature was gradually raised to 220°C at 3°C/min with a final increase to 300°C for 3 minutes. Helium was used as carrier gas at a constant column flow rate of 2.52 mL/min. The GC to MS interface temperature was fixed at 280°C and an electron ionization system was set on the MS in scan mode. The mass range evaluated was 50–600 m/z, where MS quad and source temperatures were maintained at 150°C and 230°C, respectively. To search and identify each fatty acid, the NIST-MS Library (2.2) was used. It was also used to measure the relative percentage of each compound and relative peak areas of the total ionic chromatogram were used. 
In addition, a commercial standard (Supelco 37 Component FAME Mix cat.#47885) was used to identify peaks of the fatty acids by comparing retention times to those of known fatty acids in the commercial standard. Statistical analysis of identified fatty acids All extractions were performed in biological triplicate for each time point within a week. The results were expressed as mean ± standard deviation while the effect of the extraction method and the extraction conditions on lipid recovery was analyzed using two-way ANOVA followed by Tukey post hoc test (statistical significance at P < 0.05). Statistical analysis was performed with GraphPad Prism 8 (8.0.1) (GraphPad Software, San Diego, CA, USA). To test differences between the two sampling weeks and to discriminate differences in fatty acid composition between samples, the data were transformed by using a center-log ratio and then evaluated using a Kolmogorov–Smirnov t-test and Euclidean distance-based ordination, respectively. Between-group differences were assessed by permutational ANOVA by explaining community distances based on sampling week interacting with sampling day and sample type: (Response = adonis2 (community data ~ sampling week * sampling day * sample type)). Every analysis was performed with R version 4.1.1 using the “vegan” package (v2.6–4) . Basic derivatization was carried out by resuspending 10 mg of lipid extracts in 100 µL of hexane. After vortexing, to completely dissolve the extract, 50 µL of 2 M KOH solution in methanol was added and vortexed for 1 minute. The sample was incubated for 5 minutes at room temperature. 125 mg of sodium bisulfate was added to the sample, vortexed, and centrifuged at 4,000 rpm for 5 minutes. 100 µL of supernatant was collected, mixed with 400 µL of hexane, and filtered through a 0.22 µm PTFE filter. The filter was washed with an additional 100 µL of hexane. FAMEs were separated using a HP-5ms Ultra Inert Column (30 m × 250 μm × 0.25 µm) (Agilent, CA, USA). 1 µL sample was injected in spitless mode and an injector temperature of 280°C. The initial oven temperature was set at 150°C for 1 minute and the temperature was gradually raised to 220°C at 3°C/min with a final increase to 300°C for 3 minutes. Helium was used as carrier gas at a constant column flow rate of 2.52 mL/min. The GC to MS interface temperature was fixed at 280°C and an electron ionization system was set on the MS in scan mode. The mass range evaluated was 50–600 m/z, where MS quad and source temperatures were maintained at 150°C and 230°C, respectively. To search and identify each fatty acid, the NIST-MS Library (2.2) was used. It was also used to measure the relative percentage of each compound and relative peak areas of the total ionic chromatogram were used. In addition, a commercial standard (Supelco 37 Component FAME Mix cat.#47885) was used to identify peaks of the fatty acids by comparing retention times to those of known fatty acids in the commercial standard. All extractions were performed in biological triplicate for each time point within a week. The results were expressed as mean ± standard deviation while the effect of the extraction method and the extraction conditions on lipid recovery was analyzed using two-way ANOVA followed by Tukey post hoc test (statistical significance at P < 0.05). Statistical analysis was performed with GraphPad Prism 8 (8.0.1) (GraphPad Software, San Diego, CA, USA). 
DNA and RNA extraction was carried out using the ZymoBIOMICS MagBead DNA/RNA Kit. Samples were weighed at roughly equivalent starting biomass (approximately 20 mg dry weight when sample quantity allowed) into prefilled tube strips (8 × 12) containing 750 µL DNA/RNA Shield (ZymoBIOMICS) and 40–400 µm glass beads (Macherey-Nagel) and arranged on a 96-well rack. Homogenization was conducted on an MP FastPrep-96 with 1 minute at maximum speed and 5 minutes rest, repeated five times. DNA and RNA were purified from the same input material following the kit protocol, yielding two separate eluates for DNA and RNA, respectively. The concentrations were measured with the Qubit 1× dsDNA HS Assay Kit and RNA HS Assay Kit. The detailed protocol for amplicon generation and sequencing is published by Pjevac et al. In brief, a two-step PCR was performed to generate amplicons and to add linkers as well as barcodes. Amplicons were generated for the V4 region of the 16S (515F: GTG YCA GCM GCC GCG GTA A; 806R: GGA CTA CNV GGG TWT CTA AT) and 18S (TAReuk454FWD1: CCA GCA SCY GCG GTA ATT CC; TAReukREV3mod: ACT TTC GTT CTT GAT YRA TGA) rRNA genes, as well as the PfaA-KS domain region of the pfa gene (PfaA-KS-Fw: TGG GAA GAR AWT TCC C; PfaA-KS-Rv: GTR CCN GTR CNG CTT C), part of the Pfa-Synthase complex. Linker sequences from primer to barcode were forward GCT ATG CGC GAG CTG C and reverse TAG CGC ACA CCT GGT A. Cycling conditions for the 16S and 18S rRNA amplicons were as follows: initial denaturation at 94°C for 3 minutes, then denaturation at 95°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 60 s. Cycling conditions for the PfaA-KS amplicons were as follows: initial denaturation at 94°C for 3 minutes, then denaturation at 94°C for 45 s, annealing at 50.3°C for 30 s, and extension at 72°C for 90 s. The first PCR for all three primer sets was run for 30 cycles with 0.25 µM primer concentration, while the second PCR was run for seven cycles for the 16S and 18S rRNA amplicons and 15 cycles for the PfaA-KS amplicons using 0.8 µM primer concentration. Sequencing was performed by the Joint Microbiome Facility (JMF) on an Illumina MiSeq using 2 × 300 base pair sequencing.
In-house generated data
Input data were filtered for PhiX contamination with BBDuk. Demultiplexing was performed with the Python package demultiplex, allowing one mismatch for barcodes and two mismatches for linkers. Primers were verified with the same package, allowing 2 mismatches each for forward and reverse primers. Barcodes, linkers, and primers were trimmed off using BBDuk, with 47 and 48 bases left-trimmed for F.1/R.2 and F.2/R.1, respectively, requiring a minimum length of 220 base pairs. Around 80% of reads across all samples passed all trimming, filtering, and denoising criteria.
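To illustrate the mismatch-tolerant primer verification described above, here is a small standalone Python sketch. The actual step used the demultiplex package; the IUPAC handling and the example read are illustrative assumptions.

```python
# Standalone sketch of mismatch-tolerant primer matching with IUPAC degenerate bases
# (the actual step was performed with the 'demultiplex' package; this only shows the idea).
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def primer_mismatches(primer: str, read_prefix: str) -> int:
    """Count mismatches between a degenerate primer and the start of a read."""
    return sum(base not in IUPAC[p] for p, base in zip(primer, read_prefix))

primer_515f = "GTGYCAGCMGCCGCGGTAA"       # 515F primer from the protocol above
read = "GTGTCAGCAGCCGCGGTAAACGTACGTTT"    # hypothetical read
n_mm = primer_mismatches(primer_515f, read[:len(primer_515f)])
print(n_mm <= 2)                          # True: the primer is accepted with <= 2 mismatches
```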
ASV determination
ASVs for 16S and 18S sequences were generated by an in-house sequencing facility using a standard workflow with DADA2. The PfaA-KS sequences were handled separately according to the following steps: (i) forward- and reverse-oriented sequences were kept separate based on primer (F.1/F.2 and R.1/R.2) and merged using PEAR with a minimum overlap (-o) of 5, minimum trim length (-t) of 300, maximum assembly length (-m) of 600, minimum quality score (-q) of 30, and maximum uncalled bases allowed (-u) of 0; (ii) the forward-oriented FASTQ reads starting with the reverse primer (R: merged from R1/R2) were reverse complemented, and then all merged sequences were concatenated per sample and used for ASV calling with DADA2; (iii) filtering was conducted based on quality profile plots: length truncated to 475 nt, maxN = 0, maxEE = 5, truncQ = 3, rm.phix = TRUE. After merging and filtering, 11 samples were discarded (01, 16, 24, 27, 40, 41, 42, 50, 52, 63, and 64) because no reads passed filtering; (iv) the remaining 54 samples were used for error modeling and ASV calling, yielding 66 unique biological PfaA-KS sequence ASVs. The sample ASV table and ASV sequences were exported for further analysis.
Taxonomy assignments
SSU rRNA ASVs were classified using SINA version 1.6.1 and the SILVA database SSU Ref NR 99 release 138.1. The PfaA-KS sequences were classified using BLASTn and BLASTx against the nucleotide (nt) and non-redundant protein (nr) databases, respectively, and the top hit was extracted for putative assignment. The results from BLASTn were used primarily, unless the identity and query coverage fell below 80%, in which case the BLASTx results were consulted. Sixteen samples remained with ambiguous assignments (“uncultured organism”); for these, the BLASTx (nr) search was narrowed to bacterial references only, which resolved the assignments with a minimum percent identity of 81% and full query coverage (see for all BLAST output). Top hits for each amplicon variant were used for the final assignment, and the full taxonomic lineage was retrieved from the Genome Taxonomy Database, release 202 metadata.
Alignment and tree-building
The PfaA-KS sequence variants from the original study were accessed from the NCBI GenBank PopSet 303162115 (446 total entries) and reannotated following the same BLASTn and BLASTx procedures as for our in-house generated PfaA-KS ASVs. These sequences were combined with our in-house generated PfaA-KS ASVs. The Escherichia coli fabF gene was used as an outgroup to root the phylogeny. Sequences were aligned with MAFFT (v7.520) using the --localpair option and --maxiterate 1000. Phylogenetic inference was made using IQ-TREE (multicore version 1.6.12), first running model selection and then using the best-fit model with the following parameters: -m TIM2e+R5 -bb 1000 -nt 4 -redo. The resulting tree was visualized using iTOL.
Genomic mining of PfaA-KS primer matches
To validate the use of universal primers designed for marine bacteria targeting a region of the ketoacyl-synthase domain of the pfaA gene (Shulse & Allen 2010), all available whole genomes of archaea and bacteria were downloaded from the NCBI GenBank archive. Putative PUFA synthase regions were found using a sequence-naïve approach by searching for tandem repeats of the phosphopantetheine attachment site of the acyl-carrier-protein domain. The genomic region containing the full putative PfaA-KS domain was extracted from reference genomes of positive hits.
The blastn command line tool was used to map all forward and reverse sequence possibilities of the degenerate primers, setting the word size to 7 and qcov_hsp_perc (percent query coverage per high-scoring pair) to 50 to accommodate loose configurations of primer matches. The matches were annotated according to whether or not the last three bases on the 3′ end aligned to the reference, and the tabular (outfmt 6) output of all hits was compiled into a table, one entry per reference, with the final header: qseqid, sseqid, length, mismatch, frames, qstart, qend, sstart, send, sseq, and custom columns for 3′-codon matches for forward and reverse, respectively. The originally extracted putative PfaA-KS domains from the reference genomes were aligned using MAFFT, and phylogenetic inference was then made using IQ-TREE, with the resulting tree file exported and visualized in iTOL. The entries were colored according to the taxonomic lineage. The leaves were annotated with their primer pair match, 3′-end coverage, and the predicted insert size to visualize the amplicons that could theoretically be obtained from across the entire phylogenetic space.
Analysis and statistics
All analyses on the ASV abundance tables were handled in R version 4.1.1 with the following packages: {decontam} (v1.14.0) was used for removing contaminants; sampling depth was normalized with {DESeq2} (v1.36.0) using a variance stabilizing transformation; compositional transformations using the centered log-ratio (CLR) and estimation of effect sizes were handled with {propr}, {vegan}, and {ALDEx2}. An effect size >1 was considered explanatory. Base R {stats} was used for clustering samples as well as for pairwise comparisons with the Wilcoxon test. In addition, to reduce the rate of type-I errors, multiple testing corrections were done with the Benjamini-Hochberg method. A false discovery rate (FDR) < 0.05 was considered statistically significant. {phyloseq} (v1.40.0) was used for alpha- and beta-diversity calculations, while rarefaction for the alpha diversity calculation was done with {vegan} (2.6–4). Heat plots were generated with {made4} (v1.68.0), {massageR}, and {gplots}; data frame transformation and reformatting were handled with {reshape2}, {dplyr} (v1.0.10), and {tidyr} (v1.2.1), and plotting was conducted with {ggplot2} using {wesanderson} color palettes.
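For clarity, a minimal sketch of the Benjamini-Hochberg adjustment used for the multiple-testing correction is shown below. The original analysis was performed in R; this NumPy version is an illustrative equivalent, not the authors' code.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values; compare against the 0.05 FDR threshold."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downwards and cap at 1
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1].clip(max=1.0)
    out = np.empty(n)
    out[order] = adjusted
    return out

pvals = [0.001, 0.02, 0.04, 0.30, 0.80]
print(benjamini_hochberg(pvals))   # significant where the adjusted p-value is < 0.05
```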
MEArec: A Fast and Customizable Testbench Simulator for Ground-truth Extracellular Spiking Activity | 1cc01083-cba3-4b09-ad97-a9212855e639 | 7782412 | Physiology[mh] | Extracellular neural electrophysiology is one of the most used and important techniques to study brain function. It consists of measuring the electrical activity of neurons from electrodes in the extracellular space, that pick up the electrical activity of surrounding neurons. To communicate with each other, neurons generate action potentials, which can be identified in the recorded signals as fast potential transients called spikes . Since electrodes can record the extracellular activity of several surrounding neurons, a processing step called spike sorting is needed. Historically this has required manual curation of the data, which in addition to being time consuming also introduces human bias to data interpretations. In recent years, several automated spike sorters have been developed to alleviate this problems. Spike sorting algorithms (Rey et al. ; Lefebvre et al. ) attempt to separate spike trains of different neurons (units) from the extracellular mixture of signals using a variety of different approaches. After a pre-processing step that usually involves high-pass filtering and re-referencing of the signals to reduce noise, some algorithms first detect putative spikes above a detection threshold and then cluster the extracted and aligned waveforms in a lower-dimensional space (Quiroga et al. ; Rossant et al. ; Chung et al. ; Hilgen et al. ; Jun et al. ). Another approach consists of finding spike templates, using clustering methods, and then matching the templates recursively to the recordings to find when a certain spike has occurred. The general term for these approaches is template-matching (Pachitariu et al. ; Yger et al. ; Diggelmann et al. ). Other approaches have been explored, including the use of independent component analysis (Jäckel et al. ; Buccino et al. ) and semi-supervised approaches (Lee et al. ). The recent development of high-density silicon probes both for in vitro (Berdondini et al. ; Frey et al. ) and in vivo applications (Neto et al. ; Jun et al. ) poses new challenges for spike sorting (Steinmetz et al. ). The high electrode count calls for fully automatic spike sorting algorithms, as the process of manually curating hundreds or thousands of channels becomes more time consuming and less manageable. Therefore, spike sorting algorithms need to be be capable of dealing with a large number of units and dense probes. To address these requirements, the latest developments in spike sorting software have attempted to make algorithms scalable and hardware-accelerated (Pachitariu et al. ; Jun et al. ; Yger et al. ; Pachitariu et al. ). The evaluation of spike sorting performance is also not trivial. Spike sorting is unsupervised by definition, as the recorded signals are only measured extracellularly with no knowledge of the underlying spiking activity. A few attempts to provide ground-truth datasets, for example by combining extracellular and patch-clamp or juxtacellular recordings (Henze et al. ; Harris et al. ; Neto et al. ; Yger et al. ; Marques-Smith et al. ; Allen et al. ) exist, but the main limitation of this approach is that only one or a few cells can be patched at the same time, providing very limited ground-truth information with respect to the number of neurons that can be recorded simultaneously from extracellular probes. 
An alternative method consists of adding artificial or previously sorted and well-isolated spikes to the recordings (hybrid method) (Rossant et al. ; Wouters et al. ). The hybrid approach is convenient as all the characteristics of the underlying recording are kept. However, only a few hybrid units can be added at a time, and this limits the validation capability of this method. Biophysically detailed simulated data provide a powerful alternative and complementary approach to spike sorting validation (Einevoll et al. ). In simulations, recordings can be built from known ground-truth data for all neurons, which allows one to precisely evaluate the performance of spike sorters. Simulators of extracellular activity should be able to replicate important aspects of spiking activity that can be challenging for spike sorting algorithms, including bursting modulation, spatio-temporal overlap of spikes, unit drifts over time, as well as realistic noise models. Moreover, they should allow users to have full control over these features, and they should be efficient and fast. While simulated recordings provide ground-truth information for many units at once, it is an open question how realistically they can reproduce real recordings. In recent years, there have been a few projects aiming to develop neural simulators for benchmarking spike sorting methods (Camuñas-Mesa and Quiroga ; Hagen et al. ; Mondragón-González and Burguière ): Camunas et al. developed NeuroCube (Camuñas-Mesa and Quiroga ), a MATLAB-based simulator which combines biophysically detailed cell models and synthetic spike trains to simulate the activity of neurons close to a recording probe, while noise is simulated by the activity of distant neurons. NeuroCube is very easy to use, with a simple and intuitive graphical user interface (GUI). The user has direct control over parameters such as the rate of active neurons, their firing rate properties, and the duration of the recordings. The cell models are shipped with the software, and recordings can be simulated on a single electrode or a tetrode. It is relatively fast, but the cell models (using NEURON (Carnevale and Hines )) are re-simulated for every recording. Hagen et al. developed ViSAPy (Hagen et al. ), a Python-based simulator that uses multi-compartment simulations of single neurons to generate spikes, network modeling of point-neurons in NEST (Diesmann and Gewaltig ) to generate synaptic inputs onto the spiking neurons, and experimentally fitted noise. ViSAPy runs a full network simulation in NEURON (Carnevale and Hines ) and computes the extracellular potentials using LFPy (Lindén et al. ; Hagen et al. ). ViSAPy implements a Python application programming interface (API) which allows the user to set multiple parameters for the network simulation providing the synaptic input, the probe design, and the noise model generator. Cell models can be freely chosen and loaded using the LFPy package. Further, 1-dimensional drift can be incorporated in the simulations by shifting the electrodes over time (Franke et al. ). Learning to use the software and, in particular, tailoring the specific properties of the resulting spike trains, for example burstiness, requires some effort by the user. As running NEURON simulations with biophysically detailed neurons can be computationally expensive, generating long-duration spike-sorting benchmarking data with ViSAPy benefits from access to powerful computers. Mondragon et al.
developed a Neural Benchmark Simulator (NBS) (Mondragón-González and Burguière ) extending the NeuroCube software. NBS extends the capability of NeuroCube to use user-specific probes, and it combines the spiking activity signals (from NeuroCube) with low-frequency activity signals and artifact libraries shipped with the code. The user can set different weight parameters to assemble the spiking, low-frequency, and artifact signals, but these three signal types are not modifiable. Despite the existence of such tools for generating benchmarking data, their use in the spike sorting literature has until now been limited, making the benchmarking and validation of spike sorting algorithms non-standardized and unsystematic. A natural question to ask is thus how to best stimulate the use of such benchmarking tools in the spike sorting community. From a spike sorting developer perspective, we argue that an ideal extracellular simulator should be i) fast, ii) controllable, iii) biophysically detailed, and iv) easy to use. A fast simulator would enable spike sorter developers to generate a large and varied set of recordings to test their algorithms against and to improve their spike sorting methods. Controllability refers to having direct control over the features of the simulated recordings. The ideal extracellular spike simulator should make it possible to use different cell models and types, to decide the firing properties of the neurons, to control the rate of spatio-temporal spike collisions, to generate recordings on different probe models, and to have full reproducibility of the simulated recordings. A biophysically detailed simulator should be capable of reproducing key physiological aspects of the recordings, including, but not limited to, bursting spikes, drifts between the electrodes and the neurons, and realistic noise profiles. Finally, to maximize ease of use, the ideal extracellular simulator should be designed as an accessible and easy-to-learn software package. Preferably, the tool should be implemented with a graphical user interface (GUI), a command line interface (CLI), or a simple application programming interface (API). With these principles in mind, we present here MEArec, an open-source Python-based simulator. MEArec provides a fast, highly controllable, biophysically detailed, and easy-to-use framework to generate simulated extracellular recordings. In addition to producing benchmark datasets, we developed MEArec as a powerful tool that can serve as a testbench for optimizing existing and novel spike sorting methods. To facilitate this goal, MEArec allows users to explore how several aspects of recordings affect spike sorting, with full control of challenging features such as bursting activity, drifting, spatio-temporal synchrony, and noise effects, so that spike sorter developers can use it to guide their algorithm design. The source code for MEArec is on GitHub ( https://github.com/alejoe91/MEArec ) and the Python package is on PyPI ( https://pypi.org/project/MEArec/ ). Extensive documentation is available ( https://mearec.readthedocs.io/ ), and the code is tested with a continuous integration platform ( https://travis-ci.org/ ). Moreover, all the datasets generated for this article and used to make figures are available on Zenodo (10.5281/zenodo.3696926). The article is organized as follows: in Section “ ” we introduce the principles of MEArec and we show how to run simulations with the CLI and Python API.
In Section “ ” we explain the different features available in MEArec, including the capability of simulating recordings for MEAs, reproducing bursting behavior, controlling spatio-temporal overlaps, reproducing drifts, and replicating biological noise characteristics. In Section “ ” we present the use of MEArec as a testbench for spike sorting development, and its integration with the SpikeInterface framework (Buccino et al. ). In Section “ ” we document the simulation outputs and how to save and load them with the MEArec API. Finally, in Section we discuss the presented software and contextualize it with respect to the state-of-the-art.
We start by describing the principle of the MEArec simulator and showing examples of how to get started with the simulations. The simulation is split into two phases: templates generation (Fig. ) and recordings generation (Fig. ). Templates (or extracellular action potentials - EAPs) are generated using biophysically realistic cell models which are positioned in the surroundings of a probe model. The templates generation phase is further divided into an intracellular and an extracellular simulation. During the intracellular simulation, each cell model is stimulated with a constant current and the transmembrane currents of action potentials are computed (using NEURON (Carnevale and Hines )) and stored to disk (the intracellular simulation is the most time consuming part, and storing its output to disk enables one to run it only once). The extracellular simulation uses the LFPy package (Lindén et al. ; Hagen et al. ) to compute the extracellular potentials generated at the electrodes’ locations using the well-established line-source approximation (see Supplementary Methods – Templates generation – for details). In particular, the cell morphology is loaded and shifted to a random position around the probe. Additionally, the user can add different rotations to the models. When the cell model is shifted and rotated, the previously computed and stored transmembrane currents are loaded and the EAP is computed. This step is repeated several times for each cell model, for different positions and rotations. The templates generation phase outputs a library with a large variety of extracellular templates, which can then be used to build the recordings. The templates generation phase is the most time consuming, but the same template library can be used to generate multiple recordings. It is therefore recommended to simulate many more templates than needed by a single recording, so that the same template library can be used to simulate a virtually infinite number of recordings. MEArec, at installation, comes with 13 layer 5 cortical cell models from the Neocortical Microcircuit Portal (Ramaswamy et al. ). This enables the user to dive into simulations without the need to download and compile cell models. On the other hand, the initial set of cell models can be easily extended as outlined in the Supplementary Methods – Templates generation . To generate 30 extracellular spikes (also referred to as templates) per cell model recorded on a shank tetrode probe, the user can run the gen-templates command of the MEArec CLI (an equivalent Python API call is sketched at the end of this subsection). The -prb option allows for choosing the probe model, -n controls the number of templates per cell model to generate, and the --seed option is used to ensure reproducibility; if it is not provided, a random seed is chosen. In both cases, the seed is saved in the HDF5 file, so that the same templates can be replicated. Recordings are then generated by combining templates selected with user-defined rules (based on minimum distance between neurons, amplitudes, spatial overlaps, and cell types) and by simulating spike trains (see Supplementary Methods – Recordings generation – for details on spike train generation and template selection). Selected templates and spike trains are assembled using a customized (or modulated) convolution, which can replicate interesting features of spiking activity such as bursting and drift. After convolution, additive noise is generated and added to the recordings. Finally, the output recordings can be optionally filtered with a band-pass or a high-pass filter.
Note that filtering the recordings will affect the shape and amplitude of the spike waveforms, but this is a common procedure in spike sorting to remove lower frequency components. Recordings can then be generated with the CLI using the gen-recordings command, which combines the selected templates from 4 excitatory cells (-ne 4) and 2 inhibitory cells (-ni 2), which usually have a narrower spike waveform and a higher firing rate, with randomly generated spike trains. The duration of the output recordings is 30 seconds (-d 30). In this case, four random seeds control the spike train generation (--st-seed 0), the template selection (--temp-seed 1), the noise generation (--noise-seed 2), and the convolution process (--conv-seed 3). Figure shows one second of the generated recordings (A), the extracted waveforms and the mean waveforms for each unit on the electrode with the largest peak (B), and the principal component analysis (PCA) projections of the waveforms on the tetrode channels. MEArec also implements a convenient Python API, which is run internally by the CLI commands; the same template and recording generation steps can be scripted with a few API calls, as sketched below. Moreover, the Python API implements plotting functions to visually inspect the simulated templates and recordings. For example, the Fig. panels were generated using the plot_recordings (A), plot_waveforms (B), and plot_pca_map (C) functions. MEArec is designed to allow for full customization, transparency, and reproducibility of the simulated recordings. Parameters for the templates and recordings generation are accessible by the user and documented, so that different aspects of the simulated signals can be finely tuned (see Supplementary Methods for a list of parameters and their explanation). Moreover, the implemented command line interface (CLI) and simple Python API enable the user to easily modify parameters, customize, and run simulations. Finally, MEArec allows the user to manually set the random seeds used by the simulator, making recordings fully reproducible. This feature also enables one to study how separate characteristics of the recordings affect the spike sorting performance. As an example, we will show in the next sections how to simulate recordings sharing all parameters, hence with exactly the same spiking activity, but with different noise levels or drifting velocities.
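The following is a minimal sketch of these Python API calls for the tetrode example above. It follows the MEArec documentation, but the parameter dictionary keys, the probe identifier, and the file names are assumptions and may differ from the exact published snippet.

```python
import MEArec as mr

# 1) Template library: 30 templates per cell model on a tetrode probe.
#    Parameter keys and the probe identifier follow the MEArec documentation,
#    but are assumptions here and may differ between versions.
temp_params = mr.get_default_templates_params()
temp_params['probe'] = 'tetrode'
temp_params['n'] = 30
temp_params['seed'] = 0
tempgen = mr.gen_templates(mr.get_default_cell_models_folder(), params=temp_params)
mr.save_template_generator(tempgen, filename='templates_tetrode.h5')

# 2) Recording: 4 excitatory and 2 inhibitory units, 30 s duration, with all
#    random seeds fixed so that the exact same recording can be regenerated.
rec_params = mr.get_default_recordings_params()
rec_params['spiketrains'].update({'n_exc': 4, 'n_inh': 2, 'duration': 30})
rec_params['seeds'] = {'spiketrains': 0, 'templates': 1, 'noise': 2, 'convolution': 3}
recgen = mr.gen_recordings(templates='templates_tetrode.h5', params=rec_params)
mr.save_recording_generator(recgen, filename='recordings_tetrode.h5')
```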
Generation of realistic Multi-Electrode Array recordings
The recent development of Multi-Electrode Arrays (MEAs) enables researchers to record extracellular activity at very high spatio-temporal density both for in vitro (Berdondini et al. ; Frey et al. ) and in vivo applications (Neto et al. ; Jun et al. ). The large number of electrodes and their high density can result in challenges for spike sorting algorithms. It is therefore important to be able to simulate recordings from these kinds of neural probes. To deal with different probe designs, MEArec uses another Python package (MEAutility - https://meautility.readthedocs.io/ ), which allows users to easily import several available probe models and to define custom probe designs. Among others, MEAutility includes Neuropixels probes (Jun et al. ), Neuronexus commercial probes ( http://neuronexus.com/products/neural-probes/ ), and a wide variety of square MEA designs with different contact densities (the list of available probes can be found using the mearec available-probes command). Similarly to the tetrode example, we first have to generate templates for the probes. With analogous gen-templates and gen-recordings commands, we generated templates and recordings for a Neuropixels design with 128 electrodes (Neuropixels-128), containing 60 neurons (48 excitatory and 12 inhibitory), for a Neuronexus probe with 32 channels (A1x32-Poly3-5mm-25s-177-CM32 - Neuronexus-32) with 20 cells (16 excitatory and 4 inhibitory), and for a square 10x10 MEA with 15 μm inter-electrode distance (SqMEA-10-15) with 50 cells (40 excitatory and 10 inhibitory). Figure shows the three above-mentioned probes (A), a sample template for each probe design (B), and one-second snippets of the three recordings (C-D-E), with zoomed-in windows to highlight spiking activity. While all the recordings shown so far have been simulated with default parameters, several aspects of the spiking activity are critical for spike sorting. In the next sections, we will show how these features, including bursting, spatio-temporal overlapping spikes, drift, and noise assumptions, can be explored with MEArec simulations.
Bursting modulation of spike amplitude and shape
Bursting activity is one of the most complicated features of spiking activity that can compromise the performance of spike sorting algorithms. When a neuron bursts, i.e., fires a rapid train of action potentials with very short inter-spike intervals, the dynamics underlying the generation of the spikes change over the bursting period (Hay et al. ). While the bursting mechanism has been largely studied with patch-clamp experiments, combined extracellular-juxtacellular recordings (Allen et al. ) and computational studies (Hagen et al. ) suggest that during bursting, extracellular spikes become lower in amplitude and wider in shape. In order to simulate this property of the extracellular waveforms in a fast and efficient manner, templates can be modulated both in amplitude and shape during the convolution operation, depending on the spiking history. To demonstrate how bursting is mimicked, we built a toy example with a constant spike train with a 10 ms inter-spike interval (Fig. ). A modulation value is computed for each spike and is used to modulate the waveform for that event by scaling its amplitude and optionally stretching its shape.
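The core idea of this modulated convolution can be written in a few lines; the sketch below is a conceptual NumPy illustration (toy template, hypothetical spike times and modulation values), not MEArec's actual implementation.

```python
import numpy as np

def modulated_convolution(n_samples, spike_indices, modulation, template):
    """Add an amplitude-scaled copy of `template` centered at each spike index."""
    trace = np.zeros(n_samples)
    half = len(template) // 2
    for idx, mod in zip(spike_indices, modulation):
        start, stop = idx - half, idx - half + len(template)
        if start >= 0 and stop <= n_samples:
            trace[start:stop] += mod * template
    return trace

template = -np.exp(-0.5 * ((np.arange(60) - 30) / 3.0) ** 2)   # toy spike shape
spikes = np.array([200, 500, 510, 520, 900])                   # a short burst around sample 500
mods = np.array([1.0, 1.0, 0.8, 0.6, 1.0])                     # amplitude decreases within the burst
trace = modulated_convolution(1000, spikes, mods, template)
```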
The blue dots show the default modulation (bursting disabled), in which the modulation values are drawn from a Gaussian distribution with unitary mean to add some physiological variation to the spike waveforms. When bursting is enabled (by setting the bursting parameter to true), the modulation values are computed based on the spike history, and they depend on the number of consecutive spikes in a bursting event and their average inter-spike intervals (see Supplementary Methods – Recordings generation - Modulated convolution – for details on the calculation of the modulation values). Bursting events can be controlled either by the maximum number of spikes making a burst (orange dots - 5 spikes per burst; green dots - 10 spikes per burst) or by setting a maximum bursting duration (red dots - maximum 75 ms). Note that in Fig. the spike train is constant just to illustrate the computation of the modulation values. In actual simulations, instead, the modulation values will depend on the firing rate and the timing between spikes. By default, spikes are only modulated in amplitude. The user can also enable shape modulation by setting the shape_mod parameter to true. The modulation value, computed for each spike, controls both the amplitude scaling and the shape modulation of the spike event. For amplitude modulation, the amplitude of the spike is simply multiplied by the modulation value. Additionally, when shape modulation is enabled, the waveform of each spike is also stretched. The shape_stretch parameter controls the overall amount of stretch, but the actual stretch of single waveforms depends on the modulation value computed for each spike. In Fig. , examples of bursting templates are shown. The blue traces display templates only modulated in amplitude, i.e., the amplitude is scaled by the modulation value. The orange and green traces, instead, also present shape modulation, with different values of the shape_stretch parameter (the higher the shape_stretch, the more stretched the waveforms will be). We refer to the Supplementary Methods – Recordings generation - Modulated convolution – for further details on amplitude and shape modulation. Figure shows a one-second snippet of the tetrode recording shown previously after bursting modulation is activated. The top panel shows the spike events, the middle one displays the modulation values computed for each spike, and the bottom panel shows the output of the modulated convolution between one of the templates (on the electrode with the largest amplitude) and the spike train. Figures and e show the waveform projections on the first principal component of each channel for the tetrode recording shown in Section with and without bursting enabled, respectively. In this case all neurons are bursting units, and this causes a stretch in the PCA space, which is a clear complication for spike sorting algorithms. Note that shape modulation does not affect all neurons by the same amount, since it depends on the spike history and therefore on the firing rate.
Controlling spatio-temporal overlaps
Another complicated aspect of extracellular spiking activity that can influence spike sorting performance is the occurrence of overlapping spikes. While temporal overlap of events at spatially separated locations can be solved with feature masking (Rossant et al. ), spatio-temporal overlap can cause a distortion of the detected waveform, due to the superposition of separate spikes.
Some spike sorting approaches, based on template-matching, are designed to tackle this problem (Pachitariu et al. ; Yger et al. ; Diggelmann et al. ). In order to evaluate to what extent spatio-temporal overlap affects spike sorting, MEArec allows the user to set the number of spatially overlapping templates and to modify the synchrony rate of their spike trains. In Fig. we show an example of this on a Neuronexus-32 probe (see Fig. A). The recording was constructed with two spatially overlapping excitatory neurons, whose templates are shown in Fig. (see Supplementary Methods – Recordings generation - Overlapping spikes and spatio-temporal synchrony – for details on the spatial overlap definition). The spike synchrony rate can be controlled with the sync_rate parameter. If this parameter is not set (Fig. - left), some spatio-temporally overlapping spikes are present (red events). If the synchrony rate is set to 0, those spikes are removed from the spike trains (Fig. - middle). If it is set to 0.05, i.e., 5% of the spikes are spatio-temporal collisions, events are added to the spike trains to reach the specified synchrony rate of spatio-temporal overlap. As shown in Fig. , the occurrence of spatio-temporally overlapping events affects the recorded extracellular waveform: the waveforms of the neurons get summed and, when the spikes are overlapping, might be mistaken for a separate unit by spike sorting algorithms. The possibility of reproducing and controlling this feature of extracellular recordings within MEArec could aid the development of spike sorters which are robust to spatio-temporal collisions.
Generating drifting recordings
When extracellular probes are inserted in the brain, especially for acute experiments, the neural tissue might move with respect to the electrodes. This phenomenon is known as drift. Drift can be due to a slow relaxation of the tissue (slow drift) or to fast re-adjustments of the tissue, for example due to an abrupt motion of the tissue (fast drift). These two types of drift can also be observed in tandem (Pachitariu et al. ). Drifting units are particularly critical for spike sorting, as the waveform shapes change over time due to the relative movement between the neurons and the probe. New spike sorting algorithms have been developed to specifically tackle the drifting problem (e.g. Kilosort2 (Pachitariu et al. ), IronClust (Jun et al. )). In order to simulate drift in the recordings, we first need to generate drifting templates. Drifting templates are generated by choosing an initial and a final soma position with user-defined rules (see Supplementary Methods – Template generation - Drifting templates – for details) and by moving the cell along the line connecting the two positions for a defined number of constant drifting steps that span the segment connecting the initial and final positions (30 steps by default). An example of a drifting template is depicted in Fig. , alongside the drifting neuron’s soma locations for the different drifting steps. Once a library of drifting templates is generated, drifting recordings can be simulated. MEArec allows users to simulate recordings with three drift modes: slow, fast, and slow+fast. When slow drift is selected, the drifting template is selected over time depending on the initial position and the drifting velocity (5 μm/min by default). If the final drifting position is reached, the drift direction is reversed.
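The mapping from elapsed time to the slow-drift template index can be illustrated with a small helper. This is a conceptual sketch of a back-and-forth sweep along the drift direction, with an assumed step spacing; it is not MEArec's internal code.

```python
def slow_drift_step(t_sec, n_steps=30, step_um=2.0, velocity_um_per_min=5.0):
    """Index of the drifting template to use at time t; the direction reverses at the ends."""
    distance_um = velocity_um_per_min / 60.0 * t_sec
    step = int(distance_um // step_um)
    period = 2 * (n_steps - 1)
    phase = step % period
    return phase if phase < n_steps else period - phase

# 30 drift steps (default), an assumed 2 um spacing, and the default 5 um/min velocity
print([slow_drift_step(t) for t in (0, 60, 600, 2400)])   # [0, 2, 25, 16]
```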
For fast drifts, the position of a drifting neuron is shifted abruptly with a user-defined period (every 20 s by default). The new position is chosen so that the difference in waveform amplitude of the drifting neuron on its current maximum channel remains within user-defined limits (5-20 μV by default), in order to prevent the neuron from moving too far from its previous position. The slow+fast mode combines the slow and fast mechanisms. In Fig. and c we show examples of slow drift and fast drift, respectively. In the top panel the recordings are displayed, with superimposed drifting templates on the electrode with the largest peak. Note that the maximum channel can change over time due to drift. In the bottom panels, instead, the amplitude of the waveforms on the channels with the initial largest peak for each neuron is shown over time. Slow drift causes the amplitude to vary slowly, while for fast drifts we observe more abrupt changes when a fast drift event occurs. In the slow+fast drift mode, these two effects are combined.
Modeling experimental noise
Spike sorting performance can be greatly affected by noise in the recordings. Many algorithms first use a spike detection step to identify putative spikes. The threshold for spike detection is usually set depending on the noise standard deviation or median absolute deviation (Quiroga et al. ). Clearly, recordings with larger noise levels will result in higher spike detection thresholds, making it harder to robustly detect lower amplitude spiking activity. In addition to the noise amplitude, other noise features can affect spike sorting performance: some clustering algorithms, for example, assume that clusters have a Gaussian shape, due to the assumption of additive normal noise in the recordings. Moreover, the noise generated by biological sources can produce spatial correlations in the noise profiles among different channels, and it can be modulated in frequency (Camuñas-Mesa and Quiroga ; Rey et al. ). To investigate how the above-mentioned assumptions on noise can affect spike sorting performance, MEArec can generate recordings with several noise models. Figure shows 5-second spiking-free recordings of a tetrode probe for five different noise profiles that can be generated (A - recordings, B - spectrum, C - channel covariance, D - amplitude distribution). The first column shows uncorrelated Gaussian noise, which presents a flat spectrum, a diagonal covariance matrix, and a symmetrical noise amplitude distribution. In the recording in the second column, spatially correlated noise was generated as multivariate Gaussian noise with a covariance matrix depending on the channel distance. Also in this case, the spectrum (B) presents a flat profile and the amplitude distribution is symmetrical (D), but the covariance matrix shows a correlation depending on the inter-electrode distance. As previous studies have shown (Camuñas-Mesa and Quiroga ; Rey et al. ), the frequency content of extracellular noise is not flat, but its spectrum is affected by the spiking activity of distant neurons, which appear in the recordings as below-threshold biological noise. To reproduce the spectrum profile that is observed in experimental data, MEArec allows coloring the noise spectrum of Gaussian noise with a second-order infinite impulse response (IIR) filter (see Supplementary Methods – Recordings generation - Noise models and post-processing – for details).
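A generic way to obtain such a colored spectrum is to pass white Gaussian noise through a low-order IIR filter. The SciPy sketch below illustrates the approach; the filter type, cutoff, and scaling are assumptions and not the coefficients used internally by MEArec.

```python
import numpy as np
from scipy import signal

fs = 32000                                  # sampling frequency in Hz (illustrative)
rng = np.random.default_rng(0)
white = rng.standard_normal(10 * fs)        # 10 s of flat-spectrum Gaussian noise

# A second-order low-pass IIR filter tilts the flat spectrum towards low frequencies;
# the cutoff and coefficients are illustrative, not those used by MEArec.
b, a = signal.butter(N=2, Wn=500, btype='low', fs=fs)
colored = signal.lfilter(b, a, white)

# Rescale so that the colored noise has the desired standard deviation (e.g., 10 uV).
colored *= 10.0 / colored.std()
```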
Colored noise represents an efficient way of obtaining the desired spectrum, as shown in the third and fourth columns of Fig. , panel B. Distance correlation is maintained (panel C - fourth column), and the distribution of the noise amplitudes is symmetrical. Finally, the last noise model generates the activity of distant neurons. In this case, noise is built by convolving the spike trains of many neurons (300 by default) whose template amplitudes are below an amplitude threshold (10 μV by default). A Gaussian noise floor is then added to the resulting noise, which is scaled to match the user-defined noise level. The far-neurons noise profile is shown in the last column of Fig. . While the spectrum and spatial correlation of this noise profile are similar to those of the colored, distance-correlated noise (fourth column), the shape of the noise distribution is skewed towards negative values (panel D), mainly due to the negative contribution of the action potentials. The capability of MEArec to simulate several noise models enables spike sorter developers to assess how different noise profiles affect their algorithms and to modify their methods to be insensitive to specific noise assumptions.
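The spatially correlated noise model described above can be illustrated with a short NumPy sketch that draws multivariate Gaussian samples whose channel covariance decays with inter-electrode distance; the exponential decay and its length constant are assumptions made purely for illustration.

```python
import numpy as np

def distance_correlated_noise(positions, n_samples, noise_level=10.0, decay_um=30.0, seed=0):
    """Multivariate Gaussian noise whose channel covariance decays with electrode distance."""
    positions = np.asarray(positions, dtype=float)
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    cov = noise_level ** 2 * np.exp(-d / decay_um)     # assumed exponential decay
    return rng.multivariate_normal(np.zeros(len(positions)), cov, size=n_samples).T

# Four tetrode-like contacts (x, y coordinates in um)
pos = np.array([[0.0, 0.0], [0.0, 25.0], [25.0, 0.0], [25.0, 25.0]])
noise = distance_correlated_noise(pos, n_samples=32000)
print(np.corrcoef(noise).round(2))   # nearby channels show higher correlation
```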
The recent development of Multi-Electrode Arrays (MEAs) enables researchers to record extracellular activity at very high spatio-temporal density both for in vitro (Berdondini et al. ; Frey et al. ) and in vivo applications (Neto et al. ; Jun et al. ). The large number of electrodes and their high density can result in challenges for spike sorting algorithms. It is therefore important to be able to simulate recordings from these kind of neural probes. To deal with different probe designs, MEArec uses another Python package (MEAutility - https://meautility.readthedocs.io/ ), that allows users to easily import several available probe models and to define custom probe designs. Among others, MEAutility include Neuropixels probes (Jun et al. ), Neuronexus commercial probes ( http://neuronexus.com/products/neural-probes/ ), and a wide variety of square MEA designs with different contact densities (the list of available probes can be found using the mearec available-probes command). Similarly to the tetrode example, we first have to generate templates for the probes. These are the commands to generate templates and recordings for a Neuropixels design with 128 electrodes (Neuropixels-128). The recordings contain 60 neurons, 48 excitatory and 12 inhibitory. With similar commands, we generated templates and recordings for a Neuronexus probe with 32 channels (A1x32-Poly3-5mm-25s-177-CM32 - Neuronexus-32) with 20 cells (16 excitatory and 4 inhibitory), and a square 10x10 MEA with 15 μ m inter-electrode-distance (SqMEA-10-15) and 50 cells (40 excitatory and 10 inhibitory). Figure shows the three above-mentioned probes (A), a sample template for each probe design (B), and one-second snippets of the three recordings (C-D-E), with zoomed in windows to highlight spiking activity. While all the recordings shown so far have been simulated with default parameters, several aspects of the spiking activity are critical for spike sorting. In the next sections, we will show how these features, including bursting, spatio-temporal overlapping spikes, drift, and noise assumptions can be explored with MEArec simulations.
Bursting activity is one of the most complicated features of spiking activity that can compromise the performance of spike sorting algorithms. When a neuron bursts, i.e., fires a rapid train of action potentials with very short inter-spike intervals, the dynamics underlying the generation of the spikes changes over the bursting period (Hay et al. ). While the bursting mechanism has been largely studied with patch-clamp experiments, combined extracellular-juxtacellular recordings (Allen et al. ) and computational studies (Hagen et al. ) suggest that during bursting, extracellular spikes become lower in amplitude and wider in shape. In order to simulate this property of the extracellular waveforms in a fast and efficient manner, templates can be modulated both in amplitude and shape during the convolution operation, depending on the spiking history. To demonstrate how bursting is mimicked, we built a toy example with a constant spike train with 10 ms inter-spike-interval (Fig. ). A modulation value is computed for each spike and it is used to modulate the waveform for that event by scaling its amplitude, and optionally stretching its shape. The blue dots show the default modulation (bursting disabled), in which the modulation values are drawn from a Gaussian distribution with unitary mean to add some physiological variation to the spike waveforms. When bursting is enabled (by setting the bursting parameter to true), the modulation values are computed based on the spike history, and it depends on the number of consecutive spikes in a bursting event and their average inter-spike-intervals (see Supplementary Methods – Recordings generation - Modulated convolution – for details on the modulation values calculation). Bursting events can be either controlled by the maximum number of spikes making a burst (orange dots - 5 spikes per burst; green dots - 10 spikes per burst) or by setting a maximum bursting duration (red dots - maximum 75 ms ). Note that in Fig. the spike train is constant just to illustrate the computation of the modulation values. In actual simulations, instead, the modulation values will depend on the firing rate and the timing between spikes. By default, spikes are only modulated in amplitude. The user can also enable shape modulation by setting the shape_mod parameter to true. The modulation value, computed for each spike, controls both the amplitude scaling and shape modulation of the spike event. For amplitude modulation, the amplitude of the spike is simply multiplied by the modulation value. Additionally, when shape modulation is enabled, the waveform of each spike is also stretched. The shape_stretch parameter controls the overall amount of stretch, but the actual stretch of single waveforms depends on the modulation value computed for each spike. In Fig. , examples of bursting templates are shown. The blue traces display templates only modulated in amplitude, i.e., the amplitude is scaled by the modulation value. The orange and green traces, instead, also present shape modulation, with different values of the shape_stretch parameter (the higher the shape_stretch, the more stretched waveforms will be). We refer to the Supplementary Methods – Recordings generation - Modulated convolution – for further details on amplitude and shape modulation. Figure shows a one-second snippet of the tetrode recording shown previously after bursting modulation is activated. 
The top panel shows the spike events, the middle one displays the modulation values computed for each spike, and the bottom panel shows the output of the modulated convolution between one of the templates (on the electrode with the largest amplitude) and the spike train. Figures and e show the waveform projections on the first principal component of each channel for the tetrode recording shown in Section with and without bursting enabled, respectively. In this case all neurons are bursting units and this causes a stretch in the PCA space, which is a clear complication for spike sorting algorithms. Note that shape modulation does not affect all neurons by the same amount, since it depends on the spike history and therefore on the firing rate.
Another complicated aspect of extracellular spiking activity that can influence spike sorting performance is the occurrence of overlapping spikes. While temporal overlapping of events on spatially separated locations can be solved with feature masking (Rossant et al. ), spatio-temporal overlapping can cause a distortion of the detected waveform, due to the superposition of separate spikes. Some spike sorting approaches, based on template-matching, are designed to tackle this problem (Pachitariu et al. ; Yger et al. ; Diggelmann et al. ). In order to evaluate to what extent spatio-temporal overlap affects spike sorting, MEArec allows the user to set the number of spatially overlapping templates and to modify the synchrony rate of their spike trains. In Fig. we show an example of this on a Neuronexus-32 probe (see Fig. A). The recording was constructed with two excitatory and spatially overlapping neurons, whose templates are shown in Fig. (see Supplementary Methods – Recordings generation - Overlapping spikes and spatio-temporal synchrony – for details on the spatial overlap definition). The spike synchrony rate can be controlled with the sync_rate parameter. If this parameter is not set (Fig. - left), some spatio-temporal overlapping spikes are present (red events). If the synchrony rate is set to 0, those spikes are removed from the spike trains (Fig. - middle). If set to 0.05, i.e., 5% of the spikes will be spatio-temporal collisions, events are added to the spike trains to reach the specified synchrony rate value of spatio-temporal overlap. As shown in Fig. , the occurrence of spatio-temporal overlapping events affects the recorded extracellular waveform: the waveforms of the neurons, in fact, get summed and might be mistaken for a separate unit by spike sorting algorithms when the spikes are overlapping. The possibility of reproducing and controlling this feature of extracellular recordings within MEArec could aid in the development of spike sorters which are robust to spatio-temporal collisions.
When extracellular probes are inserted in the brain, especially for acute experiments, the neural tissue might move with respect to the electrodes. This phenomenon is known as drift. Drift can be due to a slow relaxation of the tissue (slow drift) or to fast re-adjustments of the tissue, for example due to an abrupt motion of the tissue (fast drift). These two types of drifts can also be observed in tandem (Pachitariu et al. ). Drifting units are particularly critical for spike sorting, as the waveform shapes change over time due to the relative movement between the neurons and the probe. New spike sorting algorithms have been developed to specifically tackle the drifting problem (e.g. Kilosort2 (Pachitariu et al. ), IronClust (Jun et al. )). In order to simulate drift in the recordings, we first need to generate drifting templates: Drifting templates are generated by choosing an initial and final soma position with user-defined rules (see Supplementary Methods – Template generation - Drifing templates – for details) and by moving the cell along the line connecting the two positions for a defined number of constant drifting steps that span the segment connecting the initial and final positions (30 steps by default). An example of a drifting template is depicted in Fig. , alongside with the drifting neuron’s soma locations for the different drifting steps. Once a library of drifting templates is generated, drifting recordings can be simulated. MEArec allows users to simulate recordings with three types of drift modes: slow , fast , and slow+fast . When slow drift is selected, the drifting template is selected over time depending on the initial position and the drifting velocity (5 μ / m i n by default). If the final drifting position is reached, the drift direction is reversed. For fast drifts, the position of a drifting neuron is shifted abruptly with a user-defined period (every 20 s by default). The new position is chosen so that the difference in waveform amplitude of the drifting neuron on its current maximum channel remains within user-defined limits (5-20 μ V by default), in order to prevent from moving the neuron too far from its previous position. The slow+fast mode combines the slow and fast mechanisms. In Fig. and c we show examples of slow drift and fast drift, respectively. In the top panel the recordings are displayed, with superimposed drifting templates on the electrode with the largest peak. Note that the maximum channel can change over time due to drift. In the bottom panels, instead, the amplitude of the waveforms on the channels with the initial largest peak for each neuron are shown over time. Slow drift causes the amplitude to slowly vary, while for fast drifts we observe more abrupt changes when a fast drift event occurs. In the slow+fast drift mode, these two effects are combined.
Spike sorting performance can be greatly affected by noise in the recordings. Many algorithms first use a spike detection step to identify putative spikes. The threshold for spike detection is usually set depending on the noise standard deviation or median absolute deviation (Quiroga et al. ). Clearly, recordings with larger noise levels will result in higher spike detection thresholds, hence making it harder to robustly detect lower amplitude spiking activity. In addition to the noise amplitude, other noise features can affect spike sorting performance: some clustering algorithms, for example, assume that clusters have Gaussian shape, due to the assumption of an additive normal noise to the recordings. Moreover, the noise generated by biological sources can produce spatial correlations in the noise profiles among different channels and it can be modulated in frequency (Camuñas-Mesa and Quiroga ; Rey et al. ). To investigate how the above-mentioned assumptions on noise can affect spike sorting performance, MEArec can generate recordings with several noise models. Figure shows 5-second spiking-free recordings of a tetrode probe for five different noise profiles that can be generated (A - recordings, B - spectrum, C - channel covariance, D - amplitude distribution). The first column shows uncorrelated Gaussian noise, which presents a flat spectrum, a diagonal covariance matrix, and a symmetrical noise amplitude distribution. In the recording in the second column, spatially correlated noise was generated as a multivariate Gaussian noise with a covariance matrix depending on the channel distance. Also in this case, the spectrum (B) presents a flat profile and the amplitude distribution is symmetrical (D), but the covariance matrix shows a correlation depending on the inter-electrode distance. As previous studies showed (Camuñas-Mesa and Quiroga ; Rey et al. ), the frequency content of extracellular noise is not flat, but its spectrum is affected by the spiking activity of distant neurons, which appear in the recordings as below-threshold biological noise. To reproduce the spectrum profile that is observed in experimental data, MEArec allows coloring the noise spectrum of Gaussian noise with a second order infinite impulse response (IIR) filter (see Supplementary Methods – Recordings generation - Noise models and post-processing – for details). Colored noise represents an efficient way of obtaining the desired spectrum, as shown in the third and fourth columns of Fig. , panel B. Distance correlation is maintained (panel C - fourth column), and the distribution of the noise amplitudes is symmetrical. Finally, a last noise model enables one to generate activity of distant neurons. In this case, noise is built as the convolution between many neurons (300 by default) whose template amplitudes are below an amplitude threshold (10 μ V by default). A Gaussian noise floor is then added to the resulting noise, which is scaled to match the user-defined noise level. The far-neurons noise profile is shown in the last column of Fig. . While the spectrum and spatial correlation of this noise profile are similar to the ones generated with a colored, distance-correlated noise (4th column), the shape of the noise distribution is skewed towards negative values (panel D), mainly due to the negative contribution of the action potentials. 
The capability of MEArec to simulate several noise models enables spike sorter developers to assess how different noise profiles affect their algorithms and to modify their methods to be insensitive to specific noise assumptions.
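As an illustration, the noise profiles described above could be selected through the Python API roughly as follows (a sketch assuming the default parameter layout; the mode strings and key names are assumptions based on the description above and may differ between versions):

```python
import MEArec as mr

params = mr.get_default_recordings_params()
params["recordings"]["noise_level"] = 10               # target noise standard deviation (μV)

# Uncorrelated Gaussian noise (first column of the figure)
params["recordings"]["noise_mode"] = "uncorrelated"

# Distance-correlated Gaussian noise, optionally colored with the IIR filter
# params["recordings"]["noise_mode"] = "distance-correlated"
# params["recordings"]["noise_color"] = True

# "Far neurons" noise: sub-threshold activity of many distant cells plus a Gaussian floor
# params["recordings"]["noise_mode"] = "far-neurons"
# params["recordings"]["far_neurons_n"] = 300          # number of distant neurons

recgen = mr.gen_recordings(templates="templates_tetrode.h5", params=params)
```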
In the previous sections, we have shown several examples of how MEArec can reproduce, in a fully reproducible way, aspects of extracellular recordings that are critical for spike sorting performance. The proposed design and its integration with a spike sorting evaluation framework called SpikeInterface (Buccino et al. ) enable developers to actively include customized simulations in the spike sorting development phase. Due to its speed and controllability, we see MEArec as a testbench rather than a benchmark tool. We provide here a couple of examples. In Fig. , we show a one-second section of recordings simulated on a Neuronexus-32 probe with fixed parameters and random seeds regarding template selection and spike train generation, but with four different levels of additive Gaussian noise, with standard deviations of 5, 10, 20, and 30 μV. The traces show the same underlying spiking activity, so the only variability in spike sorting performance will be due to the varying noise levels. Similarly, in Fig. , 1-minute drifting recordings were simulated with three different drifting velocities. The recordings show that for low drifting speeds the waveform changes are barely visible (green traces), while for faster drifts (orange and blue traces) the waveform changes over time become more pronounced. The capability of MEArec to reproduce such behaviors in a highly controlled manner could aid in the design of specific tests for measuring and quantifying the ability of spike sorting software to deal with specific complexities in extracellular recordings. Other examples include simulating a recording with increasing levels of bursting in order to measure to what extent bursting units are correctly clustered, or changing the synchrony rate of spatially overlapping units to assess how much spatio-temporal collisions affect performance.
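A sketch of the controlled noise-level sweep described above, assuming the default parameter layout including a seeds section (key names are assumptions and may differ between versions):

```python
import MEArec as mr

params = mr.get_default_recordings_params()
params["seeds"]["spiketrains"] = 0    # fix spike times across runs
params["seeds"]["templates"] = 0      # fix template selection across runs
params["seeds"]["convolution"] = 0    # fix amplitude modulation/jitter across runs
params["spiketrains"]["duration"] = 10

recordings = {}
for noise_level in (5, 10, 20, 30):   # μV, as in the example above
    params["recordings"]["noise_level"] = noise_level
    recordings[noise_level] = mr.gen_recordings(
        templates="templates_Neuronexus-32.h5", params=params
    )
# The four recordings share identical ground-truth spiking activity and differ only in noise level.
```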
Integration with SpikeInterface

We have recently developed SpikeInterface (Buccino et al. ), a Python-based framework for running several spike sorting algorithms, comparing, and validating their results. MEArec can be easily interfaced to SpikeInterface so that simulated recordings can be loaded, spike sorted, and benchmarked with a few lines of code. In the following example, a MEArec recording is loaded, spike sorted with Mountainsort4 (Chung et al. ) and Kilosort2 (Pachitariu et al. ), and benchmarked with respect to the ground-truth spike times available from the MEArec simulation (a sketch of this workflow is shown below). The get_performance function returns the accuracy, precision, and recall for all the ground-truth units in the MEArec recording. For further details on these metrics and a more extensive characterization of the comparison we refer to the SpikeInterface documentation and article (Buccino et al. ). The combination of MEArec and SpikeInterface represents a powerful tool for systematically testing and comparing spike sorter performances with respect to several complications of extracellular recordings. MEArec simulations, in combination with SpikeInterface, are already being used to benchmark and compare spike sorting algorithms within the SpikeForest project (Magland et al. ).
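The original code listing is not reproduced here; a minimal sketch of an equivalent workflow, assuming a recent SpikeInterface release in which the extractor, sorter, and comparison modules are organized as shown (module and function names may differ slightly between versions), could look like this:

```python
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc

# Load the MEArec simulation: extracellular traces and ground-truth spike trains
recording = se.MEArecRecordingExtractor("recordings_Neuronexus-32.h5")
gt_sorting = se.MEArecSortingExtractor("recordings_Neuronexus-32.h5")

# Run two spike sorters on the same simulated recording
sorting_ms4 = ss.run_sorter("mountainsort4", recording, output_folder="ms4_output")
sorting_ks2 = ss.run_sorter("kilosort2", recording, output_folder="ks2_output")

# Compare each output to the ground truth and print per-unit metrics
for name, sorting in [("Mountainsort4", sorting_ms4), ("Kilosort2", sorting_ks2)]:
    comparison = sc.compare_sorter_to_ground_truth(gt_sorting, sorting)
    print(name)
    print(comparison.get_performance())   # accuracy, precision, and recall per ground-truth unit
```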
Performance considerations

As a testbench tool, speed has been one of the main design principles of MEArec. In order to achieve high speed, most parts of the simulation process are fully parallelized. As shown in Fig. , the simulations are split into templates and recordings generation. The templates generation phase is the most time consuming, but the same template library can be used to generate several recordings. This phase is further split into two sub-phases: the intracellular and extracellular simulations. The former only needs to be run once, as it generates a set of cell model-specific spikes that are stored and then used for the extracellular simulations, which are instead probe-specific. We present here run times for the different phases of the templates generation and for the recordings generation. All simulations were run on an Ubuntu 18.04 Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz, with 16 GB of RAM. The intracellular simulation run time for the 13 cell models shipped with the software was ∼130 seconds (∼10 seconds per cell model). Run times for extracellular simulations for several probe types, number of templates in the library, and drifting templates are shown in the Templates generation section of Table . The run times for this phase mainly depend on the number of templates to be generated (N templates column), on the minimum amplitude of accepted templates (Min. amplitude column, see Supplementary Methods – Templates generation - Extracellular simulation – for further details), and especially on drift (Drifting column). When simulating drifting templates, in fact, the number of actual extracellular spikes for each cell model is N templates times N drift steps. Note that in order to generate the far-neurons noise model, the minimum amplitude should be set to 0, so that low-amplitude templates are not discarded. The number of templates available in the template library will be the specified number of templates (N templates) times the number of cell models (13 by default). Recordings are then generated using the simulated template libraries. In Table , the Recordings generation section shows run times for several recordings with different probes, durations, numbers of cells, bursting, and drifting options. The main parameter that affects simulation times is the number of cells, as it increases the number of modulated convolutions. Bursting and drifting behavior also increase the run time of the simulations, because of the extra processing required in the convolution step. The simulation run times, however, range from a few seconds to a few minutes. Therefore, the speed of MEArec enables users to generate numerous recordings with different parameters for testing spike sorter performances. Moreover, the software internally uses memory maps to reduce RAM usage, and the simulations can be chunked in time. These features enable users to simulate long recordings on probes with several hundreds of electrodes (e.g. Neuropixels probes) without the need for large-memory nodes or high-performance computing platforms.
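As a rough illustration of this split, the sketch below times the two phases separately and reuses one template library for several recordings (it assumes the default parameter helpers shown and that the cell models and their simulation dependencies are installed; file names and parameter keys are illustrative):

```python
import time
import MEArec as mr

# Templates: the expensive, probe-specific phase, run once per probe
cell_folder = mr.get_default_cell_models_folder()   # the 13 cell models shipped with MEArec
temp_params = mr.get_default_templates_params()
temp_params["probe"] = "Neuronexus-32"
temp_params["n"] = 30                                # templates per cell model

t0 = time.perf_counter()
tempgen = mr.gen_templates(cell_folder, params=temp_params)
print(f"templates generated in {time.perf_counter() - t0:.0f} s")
mr.save_template_generator(tempgen, filename="templates_Neuronexus-32.h5")

# Recordings: the cheap phase, repeated as many times as needed from the same library
rec_params = mr.get_default_recordings_params()
for i in range(3):
    t0 = time.perf_counter()
    recgen = mr.gen_recordings(templates="templates_Neuronexus-32.h5", params=rec_params)
    print(f"recording {i} generated in {time.perf_counter() - t0:.1f} s")
```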
The templates generation outputs a TemplateGenerator object, containing the following fields:

templates contains the generated templates – array with shape (n_templates, n_electrodes, n_points) for non-drifting templates or (n_templates, n_drift_steps, n_electrodes, n_points) for drifting ones
locations contains the 3D soma locations of the templates – array with shape (n_templates, 3) for non-drifting templates or (n_templates, n_drift_steps, 3) for drifting templates
rotations contains the 3D rotations applied to the cell model before computing the template – array with shape (n_templates, 3) (for drifting templates the rotation is fixed)
celltypes contains the cell types of the generated templates – array of strings with length (n_templates)
info contains a dictionary with the parameters used for the simulation (params key) and information about the probe (electrodes key)

The recordings generation outputs a RecordingGenerator object, containing the following fields:

recordings contains the generated recordings – array with shape (n_electrodes, n_samples)
spiketrains contains the spike trains – list of (n_neurons) neo.Spiketrain objects (Garcia et al. )
templates contains the selected templates – array with shape (n_neurons, n_jitters, n_electrodes, n_template_samples) for non-drifting recordings or (n_neurons, n_drift_steps, n_jitters, n_electrodes, n_template_samples) for drifting ones
templates_celltypes contains the cell type of the selected templates – array of strings with length (n_neurons)
templates_locations contains the 3D soma locations of the selected templates – array with shape (n_neurons, 3) for non-drifting recordings or (n_neurons, n_drift_steps, 3) for drifting ones
templates_rotations contains the 3D rotations applied to the selected templates – array with shape (n_neurons, 3)
channel_positions contains the 3D positions of the probe electrodes – array with shape (n_electrodes, 3)
timestamps contains the timestamps in seconds – array with length (n_samples)
voltage_peaks contains the average voltage peaks of the templates on each electrode – array with shape (n_neurons, n_electrodes)
spike_traces contains a clean spike trace for each neuron (generated by a clean convolution between the spike train and the template on the electrode with the largest peak) – array with shape (n_neurons, n_samples)
info contains a dictionary with the parameters used for the simulation

When simulating with the Python API, the returned TemplateGenerator and RecordingGenerator objects can be saved as .h5 files, and previously saved templates and recordings can be loaded back into Python as TemplateGenerator and RecordingGenerator objects (a sketch is shown below). The generation using the CLI saves templates and recordings directly.
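A minimal sketch of the save/load round trip mentioned above, using the MEArec convenience functions (file names are illustrative):

```python
import MEArec as mr

# Generate a recording from a previously simulated template library
recgen = mr.gen_recordings(templates="templates_Neuronexus-32.h5")

# Save the RecordingGenerator returned by the Python API to HDF5
# (mr.save_template_generator works analogously for TemplateGenerator objects)
mr.save_recording_generator(recgen, filename="recordings_Neuronexus-32.h5")

# Load templates and recordings back as TemplateGenerator / RecordingGenerator objects
tempgen = mr.load_templates("templates_Neuronexus-32.h5")
recgen = mr.load_recordings("recordings_Neuronexus-32.h5")

print(recgen.recordings.shape)    # (n_electrodes, n_samples)
print(len(recgen.spiketrains))    # n_neurons
```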
In this paper we have presented MEArec, a Python package for simulating extracellular recordings for spike sorting development and validation. We first gave an overview of how the software works, which consists of separating template generation from recording generation to improve efficiency and simulation speed. We then showed the ease of use of the software, whose command line interface and simple Python API enable users to simulate extracellular recordings with a couple of commands or a few lines of code. We explored the capability of reproducing and controlling several aspects of extracellular recordings which can be critical for spike sorting algorithms, including spikes in a burst with varying spike shapes, spatio-temporal overlaps, drifting units, and noise assumptions. We illustrated two examples of using MEArec, in combination with SpikeInterface (Buccino et al. ), as a testbench platform for developing spike sorting algorithms. Finally, we benchmarked the speed performance of MEArec (Table ). Investigating the validation section of several recently developed spike sorting algorithms (Rossant et al. ; Pachitariu et al. ; Jun et al. ; Hilgen et al. ; Jun et al. ; Lee et al. ; Yger et al. ), it is clear that the neuroscientific community needs a standardized validation framework for spike sorting performance. Some spike sorters are validated using a so-called hybrid approach, in which well-identified units from previous experimental recordings are artificially injected into the recordings and used to compute performance metrics (Rossant et al. ; Pachitariu et al. ; Wouters et al. ). The use of templates extracted from previously sorted datasets poses some questions regarding the accuracy of the initial sorting, as well as the complexity of the well-identified units. Alternatively, other spike sorters are validated on experimental paired ground-truth recordings (Chung et al. ; Yger et al. ). While these valuable datasets (Harris et al. ; Henze et al. ; Neto et al. ; Marques-Smith et al. ) can certainly provide useful information, the low count of ground-truth units makes the validation incomplete and could result in biases (for example, algorithm-specific parameters could be tuned to reach a higher performance for the recorded ground-truth units). A third validation method consists of using simulated ground-truth recordings (Einevoll et al. ). While this approach is promising, in combination with experimental paired recordings, the currently available simulators (Camuñas-Mesa and Quiroga ; Hagen et al. ; Mondragón-González and Burguière ) present some limitations in terms of biological realism, controllability, speed, and/or ease of use (see Introduction). We therefore introduced MEArec, a software package which is computationally efficient, easy to use, highly controllable, and capable of reproducing critical characteristics of extracellular recordings relevant to spike sorting, including bursting modulation, spatio-temporal overlaps, drift of units over time, and various noise profiles. The capability of MEArec to replicate complexities of extracellular recordings that are usually either ignored or not controlled in other simulators permits the user to include tailored simulations in the spike sorting implementation process, using the simulator as a testbench platform for algorithm development.
MEArec simulations can be used not only to test the final product; specific simulations can also help in implementing algorithms that are able to cope with drift, bursting, and spatio-temporal overlap, which are regarded as the most complex aspects for spike sorting performance (Rey et al. ; Yger et al. ). In MEArec, in order to generate extracellular templates, we used a well-established modeling framework for solving the single neuron dynamics (Carnevale and Hines ), and for calculating extracellular fields generated by transmembrane currents (Lindén et al. ; Hagen et al. ). These models have some assumptions that, where warranted, could be addressed with more sophisticated methods, such as finite element methods (FEM). In a recent work (Buccino et al. ), we used FEM simulations and showed that the extracellular probes, especially MEAs, affect the amplitude of the recorded signals. While this finding is definitely interesting for accurately modeling and understanding how the extracellular potential is generated and recorded, it is unclear how it would affect the spike sorting performance. Moreover, when modeling signals on MEAs, we used the method of images (Ness et al. ; Buccino et al. ), which models the probe as an infinite insulating plane and better describes the recorded potentials for large MEA probes (Buccino et al. ). Secondly, during templates generation, the neuron models were randomly moved around and rotated with physiologically acceptable values (Buccino et al. ). In this phase, some dendritic trees might unnaturally cross the probes. We decided not to modify the cell models and to allow for this behavior for the sake of efficiency of the simulator. The modification of the dendritic trees for each extracellular spike generation would in fact be too computationally intense. However, since the templates generation phase is only run once for each probe, in the future we plan both to include the probe effect in the simulations and to carefully modify the dendritic positions so that they do not cross the probes' plane. Another limitation of the proposed modeling approach is in the replication of bursting behavior. We implemented a simplified bursting modulation that attempts to capture the features recorded from extracellular electrodes by modifying the template amplitude and shape depending on the spiking history. However, more advanced aspects of waveform modulation caused by bursting, including morphology-dependent variation of spike shapes, cannot be modeled with the proposed approach, and their replication requires a full multi-compartment simulation (Hagen et al. ). Nevertheless, the suggested simplified model of bursting could be a valuable tool for testing the capability of spike sorters to deal with this phenomenon. Finally, the current version of MEArec only supports cell models from the Neocortical Microcircuit Portal (Markram et al. ; Ramaswamy et al. ), which includes models from juvenile rat somatosensory cortex. The same cell model format is also being used to build a full hippocampus model (Migliore et al. ) and other brain regions, and therefore the integration of new models should be straightforward. However, we also provide a mechanism to use custom cell models. For example, cell models from the Allen Brain Institute database (Gouwens et al.
), which contains models from mice and humans, can be easily used to simulate templates and recordings, as documented in this notebook: https://github.com/alejoe91/MEArec/blob/master/notebooks/generate_recordings_with_allen_models.ipynb . Other cell models can be used with the same approach. The use of fully-simulated recordings can raise questions about how well the simulations replicate real extracellular recordings. For example, recordings in freely moving animals present several motion artifacts that are complicated to model and incorporate into simulators. For these reasons, we believe that spike sorting validation cannot be solely limited to simulated recordings. In a recent effort for spike sorting validation, named SpikeForest (Magland et al. ), the authors have gathered more than 650 ground-truth recordings belonging to different categories: paired recordings, simulated synthetic recordings (including MEArec-generated datasets), hybrid recordings, and manually sorted data. We think that a systematic benchmark of spike sorting tools will benefit from this larger collection of diverse ground-truth recordings, and in this light, MEArec can provide high-quality simulated datasets to aid this purpose. In conclusion, we introduced MEArec, a Python-based simulation framework for extracellular recordings. Thanks to its speed and controllability, we see MEArec aiding both the development and validation of spike sorting algorithms, helping to understand the limitations of current methods, to improve their performance, and to generate new software tools for the hard and still partially unsolved spike sorting problem.
The presented software package is available at https://github.com/alejoe91/MEArec and https://github.com/alejoe91/MEAutility (used for probe handling). The packages are also available on PyPI: https://pypi.org/project/MEArec/ and https://pypi.org/project/MEAutility/ . All the datasets generated for the paper and used to make the figures are available on Zenodo at 10.5281/zenodo.3696926, where instructions to generate the figures are also provided.
Perioperative Echocardiography During the Coronavirus Crisis: Considerations in Pediatrics and Congenital Heart Disease | dfeb8bda-648a-4cf2-8adf-133b6e186b08 | 7165086 | Pediatrics[mh] | Pediatric echocardiography, including transthoracic, transesophageal, and fetal imaging, has established indications and procedures. , Based on published appropriate-use criteria for pediatric echocardiography, the indication for an echocardiographic examination is considered appropriate when the expected incremental information, combined with clinical judgment, exceed the expected risks to an acceptable and reasonable degree. , Furthermore, an indication for echocardiographic imaging has been classified into 1 of the following 3 categories: generally appropriate (as reflected by a median panel score of 7-9), may be appropriate (as reflected by a median panel score of 4-6), and rarely appropriate (as reflected by a median panel score of 1-3). , , The goal of this scale of appropriate-use criteria has been to minimize echocardiography examinations in pediatric practice for rarely appropriate criteria. , , With the advent of the coronavirus crisis and the potentially life-threatening risks of infection, the imaging indication in pediatric echocardiography should be screened carefully, with a preference for delaying examinations that are either elective or rarely appropriate in accordance with institutional practice. , , , , Emergency examinations in pediatric echocardiography with strong indications therefore have a high priority to proceed. Given that the intensity of the coronavirus crisis is variable and dynamic, the triage of echocardiography examinations must remain agile and responsive to local conditions. This management process also should focus on strict infection control. , , Fetal echocardiography also should be triaged based on published levels of risk. A fetal echocardiogram for a low-risk patient typically will have a low-risk referral indication in the setting of a normal cardiac screening examination and as such has a low priority for additional consideration during the peak of the coronavirus crisis. , A fetal echocardiogram for a moderate-risk patient typically will be indicated by a moderate-to-high risk referral indication with a gestational age greater than 24 weeks or by confirmed congenital heart disease with a gestational age less than 34 weeks. , These examinations typically can be rescheduled after the peak of the crisis has passed. A fetal echocardiogram for high-risk patients typically will include an urgent clinical indication, or a moderate-to-high risk referral indication with a gestational age less than 24 weeks, or confirmed congenital heart disease with a gestational age more than 34 weeks. , The examinations in this category should be scheduled as soon as possible. The details of this management process have been fully covered in the provided references and are beyond the scope of this editorial. , , , , , Transesophageal imaging is considered high risk because it is associated with viral aerosolization and consequent increased risk of transmission. , Consequently, the threshold for this imaging modality in pediatric practice should be high during the coronavirus crisis. These examinations have low priority in the setting of a weak indication, borderline clinical effect, or if an alternative imaging modality could be diagnostic, according to published consensus and guidelines. , , , ,
Echocardiographic examinations may be possible at the point of care by the clinicians already taking care of these children with suspected or proven coronavirus infection. , , This approach is advantageous not only for patient convenience but also for infection control. The final location for an echocardiographic examination often will require thoughtful consideration of the following: risk of viral transmission, including pregnant women; monitoring capabilities; and staffing requirements. , , , , A complicating factor in pediatric practice is that children with this infection often may be asymptomatic. , In certain circumstances, such as the peak of the coronavirus crisis, it may be reasonable to test new pediatric hospital admissions for this infection to guide the choice of appropriate measures, including infection control. , , In the operating room environment, transesophageal echocardiography often is performed in the setting of a secure airway. This approach to airway management can minimize aerosolization of viral particles and contain viral spread. , The conduct of transesophageal imaging in the setting of pediatric coronavirus infection should consider current recommendations indexed to institutional practice and the intensity of the coronavirus crisis. , , , , There may be dedicated probes and machines in this pediatric setting, depending on local factors. , , , ,
The conduct of the echocardiographic examination in children with suspected or confirmed coronavirus infection should be tailored to address the clinical question. , , , , The cardiac manifestations of COVID-19, such as pericarditis and myocarditis, should be considered during this focused examination. Prolonged echocardiographic examinations should be minimized to limit exposure, given that infectious risks likely are present in asymptomatic children during the crisis phase of COVID-19. , , , Consequently, an experienced practitioner should complete the examination in a focused, time-efficient but comprehensive fashion. , , , , Even though this strategy may erode the educational environment, the safety of learners and trainees is more important, as outlined clearly by the Accreditation Council for Graduate Medical Education (full details available at www.acgme.org/covid-19 ). Apart from the imaging protocol, the conduct of the pediatric echocardiographic examination should take place according to institutional standards for infection control during the crisis, including adequate barrier techniques. , , , , The degree of personal protective equipment will depend on level of infectious risk as defined by specific testing, institutional protocol, and the level of the pandemic at a given hospital. , , Clinical symptoms in infected children often may be absent, prompting interim strategies such as testing all hospitalized children as needed or raising the index of suspicion for active infection. Airborne precautions against viral droplet infection include N95 and N99 masks and powered air purifying respirators. , Transesophageal imaging in suspected or confirmed coronavirus-infected patients carries a heightened risk of viral transmission because of the increased load from viral aerosolization. , , , , It may be reasonable during the height of the crisis to assume that all children who require transesophageal examinations are positive for the infection. In the setting of a protocol for disease testing, a documented negative test within 48- to- 72 hours may be considered adequate at some institutions to conduct the examination with standard precautions such as eye protection, mask, and gloves rather than the enhanced standards with full personal protective equipment. In pediatric patients for whom testing results are unknown and who have an endotracheal tube before arrival in the operating room or interventional suite, the risk of viral transmission from aerosolization is considered low. , , , , How should the risks of viral aerosolization be managed in asymptomatic untested children who have not undergone tracheal intubation and who require transesophageal imaging in the operating room or interventional suite? In this scenario, it is reasonable to expect that the infectious risk is high, assuming that these children may be positive for infection and that endotracheal intubation generates a high load of aerosolized viral particles. In this setting, airway management and probe placement likely should proceed with maximal barrier precautions, including personal protective equipment and consideration for air turnover in the given space. It also is reasonable that the transesophageal probe be placed and positioned by the airway team during aerosol precautions to minimize operator and infectious risks. 
, , , , In the setting of children with known positive infection, full isolation and aerosol precautions should apply not only for the conduct of the echocardiographic examination but also the overall care of those pediatric patients. , , , The intensity of the coronavirus crisis at a given institution challenges in many ways not only the imaging protocols but also the infectious control procedures for pediatric perioperative echocardiography. An additional consideration for infection control concerns the appropriate care of echocardiographic equipment (the “hardware”) to minimize the risks of viral transmission. , , , , The relevant probes and machine consoles may be covered with disposable plastic. Depending on institutional circumstances, certain hardware can be specifically designated for imaging of suspected or confirmed pediatric cases of coronavirus infection. , , , , Although most disinfectant solutions are virucidal, all echocardiographic equipment should be processed thoroughly for the goals of viral clearance and hardware functionality with maximal protection of patients and ultrasound providers. , , Despite the variations in sanitation protocols, these standards should comply with the recommendations from the American Institute for Ultrasound in Medicine to balance infectious risks with imaging performance. , , , , Education and teaching in pediatric echocardiography are important. , During the coronavirus crisis, however, learner well-being has a higher priority. In this stressful clinical learning environment, it is reasonable to cancel elective rotations and to restrict trainee exposure. Furthermore, education in echocardiography can be transitioned to distance-based learning, including remote conferencing technology. The protection of echocardiography personnel can be enhanced further by thoughtful assignments for staff with risk factors for severe infection such as advanced age, chronic conditions, immunosuppression, and pregnancy.
The coronavirus pandemic has significantly affected the conduct of pediatric echocardiography in the perioperative setting. Careful consideration of the indications, venues, and approaches for echocardiographic imaging will both optimize patient care and infection control during the crisis.
At the threshold of viability: to resuscitate or not to resuscitate – the perspectives of Israeli neonatologists | efa1cf62-2e35-4d7e-9e18-18fefcf34cd6 | 11097872 | Pediatrics[mh] | Managing deliveries at the limit of viability (broadly defined as 22 0/7 weeks through 24 6/7 weeks gestation) remains one of the most challenging issues faced by neonatologists. Guidelines regarding the treatment for infants born at the threshold of viability may be confusing and lead to various courses of action. Significant variation was observed among Israeli neonatologists regarding delivery room management of extremely premature infants born at 22–24 weeks gestation, with a notable emphasis on respecting parents’ wishes. Country’s guidelines should reflect the existing range of opinions, possibly through a broad survey of caregivers before setting the guidelines and recommendations. Birth at a very immature stage of intrauterine development imposes a high risk of death or severe long-term neurological disability. This can generate medical, ethical, and legal controversies, challenges, and opportunities. It is questionable whether initiating resuscitation after birth in these extremely preterm infants could be considered in their best interests. However, how to translate this concern into clinical action may be unclear. To this, the large gaps in the law regarding treatment of infants born in the grey zone of viability should be added. The counselling and management of deliveries at the limit of viability (broadly defined as 22 0/7 weeks through 24 6/7 weeks gestation) remains one of the most challenging issues faced by neonatologists. Physicians and parents make complex and challenging decisions. Those rely, as in many other ethical dilemmas, on prognostic data. Multiple factors are associated with the outcomes of extremely prematurity in addition to gestational age (GA) at birth. These include non-modifiable factors (including gender, birth weight and plurality) but also potentially modifiable antepartum factors (the location of the delivery (country, hospital), administration of antenatal corticosteroids and magnesium sulfate) and of course, the decision whether to start or withhold intensive care after delivery. While there is a clear trend of improvement in the survival of extremely premature infants in recent years, a significant variation in outcomes exists between countries and even between hospitals in the same country. Deliveries occurring between 22 and 23 weeks gestation are associated with the most complicated dilemmas. In countries such as Japan, Sweden, the UK, the USA and Canada, full intensive care is sometimes provided and neurodevelopmental outcomes are assessed even after deliveries at these GAs. The data from these countries suggest that survival without moderate to severe neurodevelopmental impairment is a possibility even in-preterm infants born at these very premature range of GAs. Guidelines regarding treatment for infants born at the threshold of viability may be confusing and lead to various courses of action. A position paper published by the Israeli Neonatal and Obstetrics and Gynecology (OBGYN) societies 2020 serves as a guideline for managing threatened deliveries on the verge of viability. According to this statement, intensive care should not be given to infants born between 22.0 and 22.6 weeks gestation while those born at or after 24.0 weeks should get full intensive care by default. 
For infants born between 23.0 and 23.6 weeks, the decision on whether to provide intensive care should depend on the parents’ preferences and the newborn’s medical condition and initial response to treatment. The health system in Israel is a National Health Insurance system, and Neonatal Intensive Care Unit (NICU) stay is subsidised and freely accessible to all. In Israel, a rich mosaic of religions and ethnicities comes together, complicating the formulation of generalised guidelines for ethical questions. In this study, we examined the attitudes of neonatologists in Israel regarding resuscitation at the threshold of viability. In addition, we examined whether the guidelines, set in Israel as in the rest of the world by a small group of physicians, reflect the opinion of most neonatologists. We hypothesised that we would find diversity in attitudes and results, with some deviation from the current Israeli guidelines for managing births on the verge of viability. Aims and design The research aims were to investigate Israeli neonatologists’ views and attitudes regarding resuscitation of newborns at 22–24 weeks and their responses to parents’ requests to resuscitate or not resuscitate these premature infants. It also seeks to explore the additional factors that influence physicians’ decisions and how their approaches correspond with the Israeli clinical guidelines. This was a descriptive and correlative study that used a 47-question online questionnaire developed by the researchers and sent to all Israeli neonatologists. The study population Following a pilot test by five neonatologists, a final online questionnaire was developed and distributed as URL link using an existing email distribution list of all Israeli neonatologists, who are registered in the Israeli Neonatal Society, 127 physicians altogether. The email was sent weekly, five times between 13 April 2020 and 11 May 2020. The questionnaire We created the questionnaire (see supplementation) with input from a team of expert neonatologists and conducted preliminary pilot testing involving 10 neonatologists to assess internal consistency and inter-rater reliability using Cronbach’s alpha. The participants were presented with a scenario where they had to consider the best interests of premature infants born at 22, 23 or 24 weeks gestation. The following questions were designed to identify the main factors that affect decisions regarding postpartum treatments. We used Likert scales and multiple-choice questions. In the following items, respondents had to choose one of five postpartum treatments in one of the three following situations applying to deliveries at 22, 23 and 24 weeks (three situations per each week): (1) parents seek to avoid any treatment following birth, (2) parents’ wish is unknown and (3) parents seek full treatment. The alternative treatments included (1) no resuscitation, compassionate care only; (2) ‘non-invasive’ resuscitation procedures only (ie, bag and mask ventilation only, no intubation, no chest compressions, no medications); (3) intubation and positive pressure ventilation only, and only if the newborn is vital (ie, had body movements and/or breathing effort); (4) full resuscitation as needed only if the newborn is vital and (5) full resuscitation as needed in any case . Afterwards, participants selected statements that they believed accurately reflected the legal status and professional guidelines related to deliveries at 22–24 weeks. 
The following questions assessed the participant’s opinions on managing conflicts between the treating physician and the parents regarding postpartum treatment after delivery at weeks 22–23. Additionally, we asked about the participant’s inclination towards administering steroids in the case of a clinical indication of early threatened delivery at 22 or 23 weeks gestation. Data analysis We used descriptive statistics to analyse the sociodemographic characteristics, views and attitudes towards resuscitation and postpartum care for premature infants born at 22–24 weeks gestation. We also examined the relationships between these factors, using various statistical methods depending on the variable types, including χ 2 test for independence or Fisher’s exact test (for nominal data), Wilcoxon tests, Kruskal-Wallis tests and Spearman correlations (for ordinal data) and t-tests (for continuous data). Our analysis was performed according to GA. We assessed the internal consistency of attitudes towards resuscitation and postpartum care according to gestation using Cronbach’s alpha coefficient, conditional on three possible parental preferences: full care, no treatment or unknown. The research aims were to investigate Israeli neonatologists’ views and attitudes regarding resuscitation of newborns at 22–24 weeks and their responses to parents’ requests to resuscitate or not resuscitate these premature infants. It also seeks to explore the additional factors that influence physicians’ decisions and how their approaches correspond with the Israeli clinical guidelines. This was a descriptive and correlative study that used a 47-question online questionnaire developed by the researchers and sent to all Israeli neonatologists. Following a pilot test by five neonatologists, a final online questionnaire was developed and distributed as URL link using an existing email distribution list of all Israeli neonatologists, who are registered in the Israeli Neonatal Society, 127 physicians altogether. The email was sent weekly, five times between 13 April 2020 and 11 May 2020. We created the questionnaire (see supplementation) with input from a team of expert neonatologists and conducted preliminary pilot testing involving 10 neonatologists to assess internal consistency and inter-rater reliability using Cronbach’s alpha. The participants were presented with a scenario where they had to consider the best interests of premature infants born at 22, 23 or 24 weeks gestation. The following questions were designed to identify the main factors that affect decisions regarding postpartum treatments. We used Likert scales and multiple-choice questions. In the following items, respondents had to choose one of five postpartum treatments in one of the three following situations applying to deliveries at 22, 23 and 24 weeks (three situations per each week): (1) parents seek to avoid any treatment following birth, (2) parents’ wish is unknown and (3) parents seek full treatment. The alternative treatments included (1) no resuscitation, compassionate care only; (2) ‘non-invasive’ resuscitation procedures only (ie, bag and mask ventilation only, no intubation, no chest compressions, no medications); (3) intubation and positive pressure ventilation only, and only if the newborn is vital (ie, had body movements and/or breathing effort); (4) full resuscitation as needed only if the newborn is vital and (5) full resuscitation as needed in any case . 
Afterwards, participants selected statements that they believed accurately reflected the legal status and professional guidelines related to deliveries at 22–24 weeks. The following questions assessed the participant’s opinions on managing conflicts between the treating physician and the parents regarding postpartum treatment after delivery at weeks 22–23. Additionally, we asked about the participant’s inclination towards administering steroids in the case of a clinical indication of early threatened delivery at 22 or 23 weeks gestation. We used descriptive statistics to analyse the sociodemographic characteristics, views and attitudes towards resuscitation and postpartum care for premature infants born at 22–24 weeks gestation. We also examined the relationships between these factors, using various statistical methods depending on the variable types, including χ 2 test for independence or Fisher’s exact test (for nominal data), Wilcoxon tests, Kruskal-Wallis tests and Spearman correlations (for ordinal data) and t-tests (for continuous data). Our analysis was performed according to GA. We assessed the internal consistency of attitudes towards resuscitation and postpartum care according to gestation using Cronbach’s alpha coefficient, conditional on three possible parental preferences: full care, no treatment or unknown. 90 (71%) questionnaires were correctly and fully completed and were thus analysed for this research. The characteristics of the participants are presented in . General attitudes Overall, 74%, 50% and 16% of respondents believe that resuscitation and full treatment at birth is contrary to the best interests of infants born at 22, 23 and 24 weeks gestation, respectively . The principal factor influencing most (62%) of the respondents’ treatment decisions was their knowledge regarding the infant survival without severe impairment after discharge. The importance ascribed to the sanctity of life was very scarce among respondents (3%). The answers regarding the respondents’ preferred resuscitation decisions in weeks 22, 23 and 24 of gestations, in the different scenarios of parents’ wishes (against, asking for full resuscitation or unknown to the attending staff) are shown in . The highest consistency was found when parents requested full care (α=0.65) while lower consistency was observed when parents wanted to avoid treatment (α=0.55). The lowest consistency was detected when parental preferences were unknown (α=0.34), indicating an inconsistency in the doctor’s position across different preterm birthdates in the absence of parental preferences. Respondents’ views on whether resuscitation is in the best interest of premature infants born at 22, 23 and 24 weeks gestation were linked to their willingness to offer intensive/non-intensive care in the scenario that parents’ wishes were unknown or when parents seeked to withhold treatment (p<0.001, p<0.001 and p=0.045, respectively). At 22 weeks 3 days delivery, such a relationship was also significant when parents seek full care immediately after birth (p=0.013). Attitudes regarding the legal position 26% of responders believe that at 22 weeks, there is no legal obligation to provide postpartum treatment, even if requested by the parents. For infants born at 23 weeks, most respondents (73%) believe that there is no legal obligation to resuscitate the premature infant but it may be done if requested by the parents. 
Attitudes and knowledge regarding the clinical guidelines In this study, respondents’ replies did not always correspond with the clinical guidelines on the management of deliveries at the border of viability. Hence, per delivery at 22 weeks, while most of the respondents understand that according to the clinical guidelines, resuscitation should not be offered and management of deliveries should be made in accordance with maternal indications, 25% of them hold that resuscitation can be offered following parents’ request and 5% of them believe that resuscitation is at the full discretion of physicians or that there is a clinical recommendation to offer it even if this is contrary to parents’ wishes. This latter position significantly increases and is prevalent among 19% of respondents when asked about deliveries at 23 weeks, although clinical guidelines do not hold that. Clinical guidelines do not address conflicts between parents and physicians regarding the resuscitation of infants born at 22 weeks. Respondents’ opinions are divided: 30% support neonatologists’ views, 24% prioritise parents’ views and 45% consider the infant’s medical status, specifically vitality, as the deciding factor. For infants born at 23 weeks, there is less division: 3% support physicians’ views while 44% believe parents’ wishes should prevail. Respondents who believe resuscitation contradicts the best interests of the infant tend to provide less intensive care. Conversely, respondents who think the guidelines grant discretion to physicians or recommend resuscitation despite parents’ objections are more willing to offer intensive care in such situations (χ 2 (4)=16.81, p=0.002). Most respondents (56%) think that providing full care to infants born at 22 and 23 weeks should be avoided, even if it could enhance neonatal care and survival rates for larger infants born at 24–25 weeks. 11% do not believe that such a contribution would be significant. However, approximately 23% of the respondents argue that all efforts should be made to improve viability at 24–25 weeks. 53% of the respondents stated that they agree/very much agree that every living creature has the right to live, even with severe disability. However, most (86%) of the respondents believed that the quality of life of an infant born at 22 or 23 weeks and his/her chances of survival are more important than their mere living existence. Respondents were divided as to whether neonatologists have (43%) or do not have (57%) a moral right to determine if the life of a premature infant born at 22 or 23 weeks is worth living. Over half of respondents (54%) believe the neonatologist’s legal risk affects decision-making for premature infants at 22 or 23 weeks. A higher percentage (89%) think the personal values of the physician influence these decisions while a lower percentage (23%) see financial considerations as influential. The influence of physicians’ biographical characteristics on care decision-making At 22 weeks, male and non-Jewish physicians tend to offer more intensive treatment when parents wish to withhold care (p=0.049 and p=0.009, respectively). At 23 weeks, male physicians tend to provide more intensive treatment when parents seek full care (p=0.031) while non-Jewish or non-secular Jewish physicians offer more intensive treatment when parents wish to withhold care (p=0.014 and p=0.038, respectively). 
The more experienced the physician, the more he or she tends to offer intensive treatment, even when the infant’s parents seek to withhold treatment (r=0.239, p=0.036). At 24 weeks, female, foreign-born or religious physicians offer more intensive treatment when parents want to withhold care (p=0.018, p=0.013 and p=0.039, respectively). Otherwise, no significant relationship has been observed between respondents’ biographical characteristics, type and size of healthcare organisation or work experience and respondents’ preference as to postpartum treatments offered to infants born at 22–24 weeks. When parents’ wishes are unknown, the more experienced the physician, the more he or she is likely to offer intensive treatment (r=0.247, p=0.030). Overall, 74%, 50% and 16% of respondents believe that resuscitation and full treatment at birth is contrary to the best interests of infants born at 22, 23 and 24 weeks gestation, respectively . The principal factor influencing most (62%) of the respondents’ treatment decisions was their knowledge regarding the infant survival without severe impairment after discharge. The importance ascribed to the sanctity of life was very scarce among respondents (3%). The answers regarding the respondents’ preferred resuscitation decisions in weeks 22, 23 and 24 of gestations, in the different scenarios of parents’ wishes (against, asking for full resuscitation or unknown to the attending staff) are shown in . The highest consistency was found when parents requested full care (α=0.65) while lower consistency was observed when parents wanted to avoid treatment (α=0.55). The lowest consistency was detected when parental preferences were unknown (α=0.34), indicating an inconsistency in the doctor’s position across different preterm birthdates in the absence of parental preferences. Respondents’ views on whether resuscitation is in the best interest of premature infants born at 22, 23 and 24 weeks gestation were linked to their willingness to offer intensive/non-intensive care in the scenario that parents’ wishes were unknown or when parents seeked to withhold treatment (p<0.001, p<0.001 and p=0.045, respectively). At 22 weeks 3 days delivery, such a relationship was also significant when parents seek full care immediately after birth (p=0.013). 26% of responders believe that at 22 weeks, there is no legal obligation to provide postpartum treatment, even if requested by the parents. For infants born at 23 weeks, most respondents (73%) believe that there is no legal obligation to resuscitate the premature infant but it may be done if requested by the parents. In this study, respondents’ replies did not always correspond with the clinical guidelines on the management of deliveries at the border of viability. Hence, per delivery at 22 weeks, while most of the respondents understand that according to the clinical guidelines, resuscitation should not be offered and management of deliveries should be made in accordance with maternal indications, 25% of them hold that resuscitation can be offered following parents’ request and 5% of them believe that resuscitation is at the full discretion of physicians or that there is a clinical recommendation to offer it even if this is contrary to parents’ wishes. This latter position significantly increases and is prevalent among 19% of respondents when asked about deliveries at 23 weeks, although clinical guidelines do not hold that. 
In our national survey, we examined neonatologists’ attitudes towards resuscitation at the verge of viability, specifically the attitude regarding the infant’s best interest at 22, 23 and 24 weeks gestation.
We asked about the resuscitation decisions during these weeks and the basis for these decisions, and assessed how they correspond with the published national guidelines. Overall, the physicians demonstrated diversity and occasional discrepancies with the national guidelines concerning resuscitation at the border of viability. Israel is a melting pot of religions and ethnicities, and this variation could inform policy-makers and the health fraternity on the best ways to handle a question that has no single answer. When asked about resuscitation preferences according to parents’ wish, at 22 weeks, 14% answered that they would perform some resuscitation actions even if the parents wished to avoid it. If the parents’ wish was unknown, almost half preferred some resuscitation effort, especially if the newborn was vital. If the parents desired full treatment, over 70% would resuscitate the newborn, regardless of vitality. This variability in the approach regarding resuscitation is inconsistent with the recommendations of the National Neonatology Association that supports compassionate care only and does not correspond to the fact that nearly 75% thought that resuscitation is not in the best interest of the preterm newborn at this gestation. At 23 weeks gestation, most physicians aligned with parents’ wishes and national guidelines, choosing not to resuscitate if the parents were against it or fully resuscitate if the parents wanted it. However, 25% of physicians would initiate some resuscitation, especially if the newborn was vital, even against parents’ wishes. If parents desired full treatment, all physicians tended to provide care but were often limited to intubation. Interestingly, if the parents’ wish is unknown, only 16% would provide compassionate care, despite 50% declaring previously that resuscitation is not in the newborn’s best interest at 23 weeks. Overall, our findings reveal a gap between the neonatologists’ perception as to what is or is not in the best interest of the newborn and their pragmatic view, which is mostly affected by parents’ wishes but is also related to deeper personal attitudes and beliefs that may contradict each other. Physicians tend to provide resuscitation when attending birth at 24 weeks gestation. However, even in such cases, medical discretion is exercised. Hence, when the parents’ wish is unknown, almost half will resuscitate only if the infant is vital, and when the parents are against providing care, more than half will do so. When the parents are against care, 17% will choose compassionate care only. In general, participants’ attitudes regarding resuscitation at 24 weeks of gestation were variable, but in line with the 2020 national guidelines. Our findings show that neonatologists’ personal beliefs as to whether providing full, intensive care immediately after birth is or is not in the best interests of the infant are mostly expressed in two scenarios: when parents’ wishes are unknown, and when parents seek to withhold care. However, when parents seek full care, such personal views are less powerful in determining the course of treatment. Despite religious and cultural diversity in Israel, and similar to another study, which surveyed Israeli neonatologists’ views on life and death issues, our study also reveals that Israeli neonatologists’ ethnic, religious or religiosity levels have little impact on their decision of whether to resuscitate a premature child.
Instead, they refer mostly to considerations such as the child’s chances of survival, caring for a handicapped child and respecting parents’ wishes. Around the globe, neonatologists acknowledge the significance of including parents in the decision-making process, but their approach varies depending on the infant’s GA. Belgian neonatologists noted the existence of a grey zone, placed at 23–24 weeks gestation, where parents were perceived as the primary decision-makers due to the significant clinical ambiguity. Outside this grey zone, that is, below 23 weeks and above 24 weeks gestation, physicians were considered the main decision-makers, and while parents’ desires were considered, counselling became more authoritative and the physician made the ultimate decision. In their study, Tan et al showed differences between clinicians and parents when deciding on resuscitation or neonatal intensive care treatment. Parents appeared to be more tolerant of a higher mortality risk and more averse to disability risks compared with clinicians. However, parents do not approach these decisions from one common perspective. In addition, there is significant variation among neonatal professionals’ assessments of survival and severe disability rates for extremely premature infants, which can further affect the precision of informed shared decision-making. Accordingly, Haward et al suggested moving from doctor-driven to parent-personalised discussions when counselling at the grey zone of viability. The findings in this study reveal that neonatologists’ views regarding resuscitation at 22 weeks, and in some circumstances at 23 weeks as well, do not correspond to the national guidelines. Resuscitation guidelines at the threshold of viability vary among different countries, but they generally recommend that infants born at or beyond 23 weeks gestation should be considered for active resuscitation while those born earlier will receive comfort care or should be managed according to individual circumstances. Decisions about resuscitation take into account factors such as GA, birth weight, parental preferences and the infant’s overall condition. In Canada and the UK, palliative care is suggested when there is high risk for mortality or severe neurodevelopmental disability, which includes, for example, all infants born at 22 weeks GA, or birth weight <400 g irrespective of additional risk factors, and intensive care and palliative care are both usual care options for infants at 23 weeks. Based on survival rate without major impairment, in Australia and New Zealand, guidelines suggest that for infants born at 23 weeks, decisions about the baby’s best interests should be made in partnership with parents and can be flexible while those born at 22 weeks gestation will usually receive comfort care. Infants born at 24 weeks will usually receive full resuscitation and care. In Belgium, resuscitation is mandatory from 24 weeks; before 24 weeks, resuscitation is generally not recommended, but exceptions are considered. In the USA, the guidance by the American College of Obstetricians and Gynecologists and the American Academy of Pediatrics (AAP) is to consider resuscitation at 22 and 23 weeks and recommend it at 24 and 25 weeks. As mentioned, the Israeli guidelines state that no intensive care should be provided at 22.0–22.6 weeks gestation, and that providing intensive care to preterm infants born at 24.0 weeks gestation and higher is the default.
At 23.0–23.6 weeks gestation, treatment should be in accordance with the parents’ wishes and the newborn’s clinical status and response to intensive care after birth. Although many guidelines resemble the Israeli guidelines, in some countries, a more proactive approach is common even at 22 weeks. Outcomes of infants delivered at 22–24 weeks of gestation vary significantly between countries and even between centres. The data on survival of extremely premature infants in Israel show practically no survival at 22.0–22.6 weeks gestation, around 17% survival for preterm infants born at 23.0–23.6 weeks gestation, and 50%–60% at 24.0–24.6 weeks . Among the possible explanations for the low survival rate in Israel, which is considered a modern, developed country with good medical capabilities, one can argue for a self-fulfilling prophecy. Accordingly, if neonatologists in Israel believe that survival is extremely rare at 22–23 weeks gestation, they will refrain from providing intensive care to newborns born at these weeks. Following this argument, it is theoretically possible that if neonatologists offer more intensive care at 23 and even at 22 weeks gestation, the survival rate may increase. Similar to our research, other studies have shown that the approach of medical staff to resuscitation at the threshold of viability varies and does not always adhere to published guidelines and frameworks. One possible cause is that the prognosis of premature birth at the threshold of viability is not solely dependent on GA and is more complex. To better reflect the views of medical professionals, guidelines should take into consideration additional factors that affect the survival and survival without impairment of these newborns. This may result in guidelines that more accurately represent the diversity of opinions. Even where guidelines are more detailed and consider factors beyond GA when determining whether resuscitation should be recommended or avoided, medical staff still hold their own attitudes and make decisions that deviate from these guidelines. In the UK, neonatal professionals’ interpretation and subsequent management decisions do not always follow the guideline framework’s recommendations. LoRe et al found that physicians’ views of extremely early newborns’ future quality of life correlated with self-reported resuscitation preferences and varied by specialty and level of training. Varying approaches used by midwives, obstetricians, neonatologists and nurses who provide perinatal counselling to parents at extremely low GAs lead to conflicting advice, particularly when opinions regarding treatment decisions diverge. In the USA, Boghossian et al demonstrated a significant regional disparity in perinatal interventions for the care of neonates at 22 and 23 weeks gestation. Regional and racial-ethnic differences can also influence perinatal interventions. Thus, for example, in the Northeast and West regions of the USA, neonates from minority backgrounds at 22 and 23 weeks gestation received a greater amount of postnatal life support. As suggested by Williams et al , a plausible solution to bridge the gap between the viewpoints of healthcare providers and the guidelines would be to create guidelines based on a comprehensive and extensive survey of medical professionals from various specialties who manage premature infants. This would enable the creation of guidelines that reflect a diverse range of accepted perspectives. Our study has limitations.
We acknowledge the potential controversy surrounding the strategy of resuscitating if the baby is deemed ‘vital’ (as outlined in , strategies 3 and 4). It is noted that Apgar scores and heart rates at 1 and 5 min may not reliably predict survival or intact neurological survival. Nevertheless, similar to the consideration of other treatment options, neonatologists contributed suggestions regarding these options during the construction of the questionnaire, and all of these options were chosen at least occasionally in the survey itself. The 71% response rate, while good, may be considered moderate for such an important topic, given the study’s descriptive nature. Non-responders’ characteristics were similar to those of responders (data not shown). Additionally, this is a survey, and there might be a gap between what neonatologists say they would do and their actual practices. Further studies should compare the results of the survey to actual data regarding resuscitation and survival rates in various neonatal deliveries. Our survey revealed significant variability in delivery room management decisions at 22–24 weeks gestation among Israeli neonatologists, with a majority (but not all and not in every scenario) placing emphasis on respecting parents’ wishes. National guidelines, developed by selected neonatologists, do not fully capture this diversity. Given the uncertainty of infants’ outcomes at the viability threshold, it is reasonable that management would be individualised and family-centred, considering fetal and maternal conditions, risk factors, and parental beliefs. Each country’s guidelines should incorporate a wide range of opinions, possibly through surveys of caregivers, including nurses and parents, to inform their formulation. Regardless of guidelines, promoting optimal decision-making in delivery room management should involve joint discussions between parents and neonatal care providers whenever possible.
Lymphovascular invasion and p16 expression are independent prognostic factors in stage I vulvar squamous cell carcinoma

Vulvar cancer is a rare malignancy, with 45,240 cases diagnosed globally in 2020 and 17,427 cancer-related deaths in that year, both constituting 0.2% of the entire cancer burden . The majority (> 90%) of vulvar cancers are squamous cell carcinomas (vSqCC), consisting of HPV-associated and HPV-independent tumors. The former have high-grade vulvar intraepithelial neoplasia (HG-VIN) as their precursor, while the latter are associated with differentiated VIN (dVIN) and lichen sclerosus (LS). HPV16 is the most common type found in HPV-associated carcinomas, whereas TP53 mutations are common in HPV-independent carcinomas. Immunostaining for p16 and p53 has been applied as a surrogate marker for HPV-associated and HPV-independent carcinomas, respectively. Surgery is the mainstay of treatment for vulvar cancer, with adjuvant radiotherapy applied in some cases. Five-year survival is estimated at 50–70%, and is worse for patients with HPV-independent tumors . Clinicopathologic parameters that have been assessed for potential association with survival include the type of precursor lesion present (HG-VIN vs. dVIN), presence of LS, histological grade, tumor size, depth of invasion, stromal changes, pattern of invasion, lymphovascular space invasion (LVSI), perineural invasion, tumor focality, resection margin status, lymph node metastasis, HPV status/p16 expression, patient age and tumor stage. Results have been variable, with tumor stage and lymph node status shown most consistently to be prognosticators in this disease [reviewed in ]. The objective of the present study was to analyze the role of clinicopathologic parameters in a Norwegian cohort of patients diagnosed with stage I vSqCC.
Study population

The study cohort consisted of 126 patients diagnosed with stage I vSqCC without clinical, radiological, or cytological evidence of groin lymph node metastasis at the time of primary diagnosis, who were treated at the Department of Gynecological Oncology, Oslo University Hospital – Norwegian Radium Hospital between 01.01.2006 and 31.12.2016. All patients underwent radical wide local excision of the primary tumor, in combination with unilateral or bilateral sentinel lymph node dissection (SLND) or inguinofemoral lymphadenectomy. In unilateral tumors, only ipsilateral lymphadenectomy was performed. Since 2009, SLND has been the standard of care for unifocal tumors smaller than 4 cm at our institution. All specimens underwent histopathological review by a surgical pathologist specialized in gynecologic pathology (BD), who confirmed the diagnosis and assessed resection margin status and the presence of lichen sclerosus (LS) and lymphovascular space invasion (LVSI). In agreement with the recommendations from the International Collaboration on Cancer Reporting (ICCR) , tumors were not graded. Patients with a positive sentinel node, a positive surgical margin, or a close tumor-free margin underwent re-operation or, if re-operation was not possible, received adjuvant radiotherapy at a dose of up to 70 Gy, with or without concomitant weekly cisplatin. Insufficient pathological tumor-free margin distance was defined as < 8 mm . When re-excision was performed, the closest margin after re-excision was assessed. After primary treatment, patients were examined routinely every three months during the first two years, every six months during the third to fifth year, and annually thereafter. Information on baseline clinicopathologic characteristics, treatment, recurrence and follow-up was obtained from the patients’ electronic records. Individual survival data were available through linkage to Statistics Norway. In the present study, the following 10 clinicopathologic factors were analyzed as possible risk factors for tumor recurrence: age at diagnosis, tumor size, resection margin status, the presence of LS or LVSI, p53 status, p16 expression, the presence of HPV, groin lymph node metastasis and administration of adjuvant radiotherapy.

Immunohistochemistry (IHC)

Formalin-fixed, paraffin-embedded sections from 116 tumors with an available block were analyzed for p16 and p53 protein expression using the Dako EnVision Flex + System (K8012; Dako, Glostrup, Denmark). The p16 antibody was a mouse monoclonal antibody purchased from NeoMarkers/Thermo Fisher Scientific Inc. (cat # MS-887-P; clone 16P04; Fremont, CA), applied at a 1:500 dilution following antigen retrieval in LpH buffer (pH 6.0). The p53 antibody was a mouse monoclonal antibody purchased from Santa Cruz Biotechnology (cat # sc-126; clone DO-1; Santa Cruz, CA), applied at a 1:500 dilution following antigen retrieval in HpH buffer (pH 9.0). Following deparaffinization, sections were treated with EnVision™ Flex + mouse linker (15 min) and EnVision™ Flex/HRP enzyme (30 min) and stained for 10 min with 3′3-diaminobenzidine tetrahydrochloride (DAB), counterstained with hematoxylin, dehydrated and mounted in Toluene-Free Mounting Medium (Dako). Positive controls for p16 and p53 consisted of high-grade serous carcinoma and colon carcinoma, respectively. In negative controls, the primary antibody was replaced with isotype-specific mouse myeloma protein diluted to the same concentration as the primary antibody.
IHC scoring

Staining was scored by a gyn-pathologist (BD). p16 expression was scored as diffuse, patchy or absent, the latter two grouped as negative. p53 expression was scored as wild-type vs. aberrant (mutation-type) based on recent recommendations .

HPV typing

HPV status was analyzed in 116 tumors with an available block. Tumor tissue was highlighted on the H&E slide by the study pathologist prior to DNA extraction. Four 10 µm-thick sections were cut for DNA extraction. Between each patient block, the microtome was cleaned with RNase Zap (decontamination solution), distilled water, and 70% ethanol; gloves and the microtome blade were changed, and a control block, containing only paraffin, was sectioned. Total DNA was extracted using the QIAamp DNA FFPE Tissue Kit (Qiagen GmbH, Hilden, Germany). Tumor tissue corresponding to the marked H&E sections was scraped and paraffin was removed in 160 µL deparaffinization solution (Qiagen). Tissue was then lysed at 56 °C overnight with 20 µL proteinase K. DNA was subsequently purified in several wash steps according to the manufacturer’s guidelines, eluted in 90 µL ATE buffer, and stored at 4 °C until further use. HPV DNA detection was performed with the Anyplex™ II HPV28 Detection Kit (Seegene, Seoul, South Korea) as described by the manufacturer. The assay detects 28 different HPV genotypes, including high-risk (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58 and 59), probably or possibly high-risk (68, 26, 53, 66, 67, 70, 73 and 82) and low-risk genotypes (6, 11, 40, 42, 43, 44, 54 and 61), based on current guidelines .

Statistical analysis

Statistical analysis was performed using the SPSS-PC package (Version 28). A p value of < 0.05 was considered statistically significant. For progression-free survival (PFS), follow-up time was calculated from the date of diagnosis until the date of relapse, date of death from any cause or end of follow-up (15.02.20). For overall survival (OS), follow-up time was calculated from the date of diagnosis until the date of death from any cause or end of follow-up, whichever occurred first. Survival curves were plotted with the Kaplan–Meier method. The log rank test was used to compare survival between the groups. Multivariate survival analysis was performed using Cox proportional hazard models.
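As a concrete illustration of the survival workflow just described (Kaplan–Meier curves compared with the log rank test, followed by a multivariate Cox proportional hazards model), the sketch below shows how such an analysis could be reproduced in Python with the lifelines package. This is not the analysis actually performed in the study, which used SPSS, and the file and column names ("cohort.csv", "months", "event", "lvsi", "p16_positive") are hypothetical placeholders.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    # Hypothetical cohort table: one row per patient with follow-up time in months,
    # an event indicator (1 = relapse/death, 0 = censored) and candidate covariates.
    df = pd.read_csv("cohort.csv")

    # Kaplan-Meier curves stratified by LVSI status
    kmf = KaplanMeierFitter()
    for label, grp in df.groupby("lvsi"):
        kmf.fit(grp["months"], event_observed=grp["event"], label=f"LVSI={label}")
        kmf.plot_survival_function()

    # Univariate comparison between the two LVSI groups (log rank test)
    pos, neg = df[df["lvsi"] == 1], df[df["lvsi"] == 0]
    lr = logrank_test(pos["months"], neg["months"],
                      event_observed_A=pos["event"], event_observed_B=neg["event"])
    print("log rank p =", lr.p_value)

    # Multivariate Cox proportional hazards model
    cph = CoxPHFitter()
    cph.fit(df[["months", "event", "lvsi", "p16_positive"]],
            duration_col="months", event_col="event")
    cph.print_summary()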
Clinicopathologic data, as well as IHC and HPV typing results, are summarized in Table . p16 and p53 immunostaining is illustrated in Fig. . HPV16 was found in 54 of the 66 (82%) HPV-positive tumors, including 48 tumors in which it was the single virus type and 6 in which mixed infection was found. The remaining cases harbored HPV33 (7 tumors), HPV18 (2 tumors), HPV51 (1 tumor), HPV73 (1 tumor) and mixed infection with HPV types 42, 61 and 6 (1 case). Positive p16 expression and aberrant p53 were mutually exclusive in the majority (93/116; 80%) of cases (42 p16-positive tumors and 51 p53-aberrant tumors), whereas in the remaining tumors both analyses were positive (7 cases) or negative (12 cases). In 4 tumors assessment was impossible due to equivocal p16 expression. HPV status and p16 expression correlated in 91/115 (79%) tumors. The remaining tumors consisted of 18 p16-negative, HPV-positive and 2 p16-positive, HPV-negative cases, as well as the 4 above-mentioned cases in which p16 staining could not be classified with certainty. The follow-up period ranged from 5 to 165 months (mean = 71 months, median = 67 months). Relapse was diagnosed in 35/126 (28%) patients, and 23 (18%) died of disease. In univariate analysis of OS, tumor diameter > 4 cm (Fig. A; p = 0.013), LVSI (Fig. B; p < 0.001), the presence of LS (Fig. C; p = 0.019), negative p16 expression (Fig. D; p = 0.007), aberrant p53 expression (Fig. E; p = 0.012), negative HPV status (Fig. F; p = 0.021), lymph node metastasis (yes vs. no and 0 vs. 1 vs. > 1, Fig. G and H, respectively; both p < 0.001) and post-operative radiotherapy (Fig. I; p < 0.001) were significantly related to shorter OS. Perinodal involvement was additionally significantly associated with shorter OS ( p = 0.029), though the number of events (= 5) was deemed too small for meaningful analysis (data not shown). In univariate analysis of PFS, tumor diameter > 4 cm (Fig. A; p = 0.038), LVSI (Fig. B; p = 0.003), the presence of LS (Fig. C; p = 0.004), negative p16 expression (Fig. D; p = 0.004), negative HPV status (Fig. E; p = 0.039), lymph node metastasis (yes vs. no and 0 vs. 1 vs. > 1, Fig. F and G, respectively; both p < 0.001) and post-operative radiotherapy (Fig. H; p < 0.001) were significantly related to shorter PFS. p53 expression showed only a trend toward shorter PFS ( p = 0.058). Age, BMI and surgical resection margin involvement by VIN or SqCC were not significantly associated with OS or PFS ( p > 0.05). In Cox multivariate analysis, LVSI and p16 expression were independent prognosticators of OS ( p < 0.001 and p = 0.02, respectively) and PFS ( p = 0.018 and p = 0.037, respectively). HPV status was not independently related to OS or PFS, a finding that was unchanged even when p16 was removed from the Cox analysis ( p > 0.05; data not shown). Data are summarized in Tables and . Univariate survival analysis was additionally performed for patients with HPV-negative and HPV-positive tumors, with the aim of comparing the prognostic role of LVSI in each group. In this analysis, as in the analysis of the entire cohort, LVSI was associated with poor OS and PFS, with a stronger association for patients with HPV-negative tumors ( p = 0.003 and p = 0.007 for OS and PFS, respectively; Supplementary Figs. 1-A, 1-B) compared to those with HPV-positive tumors ( p = 0.023 and p = 0.02 for OS and PFS, respectively; Supplementary Figs. 1-C, 1-D).
In the comparative analysis of the entire cohort, no significant differences were observed between patients with limited LVSI (< 4 vessels) and those with extensive LVSI (≥ 4 vessels); survival was in fact shorter for patients with fewer involved vessels ( p = 0.278 and p = 0.259 for OS and PFS, respectively; Supplementary Figs. 1-E, 1-F).
The present study analyzed the presence and prognostic role of clinicopathologic parameters and biomarkers in a cohort of patients diagnosed with vSqCC and treated at a tertiary cancer center. Numerous studies have assessed the prognostic role of clinicopathologic parameters in vulvar cancer [reviewed in ], and the present discussion focuses on those that have additionally analyzed p16, p53 and/or HPV status. These studies have reported widely discrepant results, reflecting differences in cohorts owing to epidemiology, patient age and case selection . The parameters analyzed also differ across studies, as does the cut-off for some parameters (e.g., tumor diameter cut-off at 2 vs. 4 cm). Finally, the use of p16 expression as a surrogate for HPV infection vs. molecular analysis of HPV status is another feature rendering comparisons difficult. Among clinical parameters, lymph node metastasis and post-operative radiotherapy were strong prognosticators in univariate analysis of both OS and PFS, with a less robust, though still significant, association observed for tumor diameter, and no such role for age or BMI. With the exception of age, the present data are in agreement with previous reports . Older age has been reported to be associated with poor outcome in the majority , though not all , studies. Of note, the cut-off applied for grouping patients in this category differs across studies, being 78 years in the Alonso series , 70 years in the Kortekaas series and 65 years in the Barlow series , compared to 60 years in the present study, a fact that may explain the discrepancy. Among morphological parameters, the presence of LVSI and LS, but not surgical resection margin involvement by VIN or carcinoma, were related to OS and PFS. The finding regarding LVSI is in agreement with several previous reports , though it was not observed in the Barlow series . The prognostic role of LS has not been assessed in the studies discussed here, but LS has been reported to be associated with shorter PFS in several studies [reviewed in ]. Resection margin involvement by VIN and/or carcinoma has been reported to be associated with survival in some , but not all, studies. HPV detection rates in vSqCC have been reported to be as low as 19.4% or 23% , and as high as 68.8% . The data in the present study, in which 57% of tumors harbored HPV DNA, are within this range, but closer to the values in the latter series. As in other series , HPV16 was the most frequently detected type, followed by HPV33 and HPV18. Our data are well in agreement with a recent large Danish study, in which HPV was detected in 52% of 1,308 vSqCC, of which the majority harbored high-risk HPV types, predominantly, in that order, types 16, 33 and 18 . Good, but not full, agreement was observed between HPV status and p16 expression, supporting previous observations that while p16 may inform on HPV infection in the majority of cases , it cannot fully replace HPV typing and may be best suited to resource-limited settings. In the present study, p16 and p53 protein expression and HPV status were all significantly related to OS, whereas only p16 and HPV status were associated with PFS in univariate analysis. Multivariate analysis identified p16 expression and LVSI as independent prognosticators of both OS and PFS. Few studies have assessed the relative prognostic value of p16, p53 and HPV in vSqCC. Our data are fully discordant with those of Alonso et al. .
In the latter study, p16, p53 and HPV were unrelated to OS or DFS, and comparable survival was observed for patients with HPV-associated and HPV-independent carcinomas. However, with only 19.4% of 98 tumors being HPV-positive, this study may have been underpowered to investigate this question. Data documenting better survival for patients with HPV-associated carcinomas have since been published by other groups . Several papers have reported a significant association between p16 expression and DFS/PFS, OS or both, though HPV analysis has not been performed in these studies . A prognostic role for p16 and p53 separately or for the combination p16/p53 was reported in other studies. An independent prognostic role for p16 and LVSI was identified in the present study in analysis of both PFS and OS. p16 was an independent prognostic marker of OS in the Dong series , and the combination p16/p53 was independently related to RFS in the Kortekaas series . However, our data are in best agreement with the results of Gadducci et al. , who recently reported on an association between p16 expression and LVSI in multivariate analysis, in a series of 78 vSqCC. The lack of an independent prognostic role for parameters that performed strongly in univariate analysis, such as lymph node metastasis and administration of radiotherapy, may be related to a relatively small number of events, particularly since only cases with complete data for all variables were included in the multivariate analysis ( n = 105 in the present series). Of note, LVSI was associated with poor OS and PFS also in separate analysis of patients with HPV-negative and HPV-positive tumors, despite the small number of events in each group ( n = 8), with a stronger association in the HPV-negative group. Conversely, the number of vessels involved does not appear to be informative of outcome in this cohort, based on analysis of the entire group. In conclusion, analysis of 126 vSqCC highlighted the role of p16 IHC and LVSI in predicting outcome in this disease. Whether the role of p16 as an independent prognosticator is reproducible in series with different proportions of HPV-negative and HPV-positive carcinomas remains to be established in other cohorts. However, despite the fact that p16 staining is not informative of all HPV-positive carcinomas, inclusion of these 2 tests for prognostic purposes in everyday practice appears mandated based on our series.
Below is the link to the electronic supplementary material. Supplementary Fig. 1. Additional analyses of LVSI . A: Kaplan-Meier survival curve showing the association between the presence of LVSI and OS for 48 vulvar SqCC patients with HPV-negative tumors (1 patient with no LVSI data). Patients with LVSI ( n = 8; red line) had mean OS of 44 months compared to 115 months for patients whose tumors did not have LVSI ( n = 40, blue line; p = 0.003). B: Kaplan-Meier survival curve showing the association between the presence of LVSI and PFS for 48 vulvar SqCC patients with HPV-negative tumors (1 patient with no LVSI data). Patients with LVSI ( n = 8; red line) had mean PFS of 41 months compared to 114 months for patients whose tumors did not have LVSI ( n = 40, blue line; p = 0.007). C: Kaplan-Meier survival curve showing the association between the presence of LVSI and OS for 66 vulvar SqCC patients with HPV-positive tumors. Patients with LVSI ( n = 8; red line) had mean OS of 107 months compared to 151 months for patients whose tumors did not have LVSI ( n = 58, blue line; p = 0.023). D: Kaplan-Meier survival curve showing the association between the presence of LVSI and PFS for 66 vulvar SqCC patients with HPV-positive tumors. Patients with LVSI ( n = 8; red line) had mean PFS of 102 months compared to 141 months for patients whose tumors did not have LVSI ( n = 58, blue line; p = 0.02). E: Kaplan-Meier survival curve showing the association between the number of vessels involved by SqCC and OS for 17 patients with LVSI (8 with HPV-negative tumor, 8 with HPV-positive tumor, 1 patient with inconclusive HPV status). Patients with tumors that had involvement of ≥4 vessels ( n = 8; red line) had mean OS of 99 months compared to 54 months for patients whose tumors had involvement of <4 vessels ( n = 9, blue line; p = 0.278). F: Kaplan-Meier survival curve showing the association between the number of vessels involved by SqCC and PFS for 17 patients with LVSI (8 with HPV-negative tumor, 8 with HPV-positive tumor, 1 patient with inconclusive HPV status). Patients with tumors that had involvement of ≥4 vessels ( n = 8; red line) had mean PFS of 95 months compared to 50 months for patients whose tumors had involvement of <4 vessels ( n = 9, blue line; p = 0.259). (PPTX 7536 KB)
Links Between Microvascular and AD‐related pathology in the Medial Temporal Lobe
Tight junction protein changes in irritable bowel syndrome: the relation of age and disease severity

Irritable bowel syndrome (IBS) is a common functional gastrointestinal (GI) disorder characterized by chronic abdominal pain and altered bowel habits. It reduces quality of life and places a significant economic burden on the medical system and society . The pathophysiology of IBS is multifactorial, involving visceral hypersensitivity and alterations in GI motility, mucosa-associated immunity, the microbiome, and intestinal permeability . Recent studies have highlighted the role of intestinal barrier dysfunction in IBS. Several previous studies have suggested that intestinal permeability is increased in patients with IBS with predominant diarrhea (IBS-D) or in those with post-infection IBS . The intestinal epithelium forms a crucial barrier that regulates the selective passage of nutrients and prevents the infiltration of harmful pathogens and toxins into the underlying tissues . Tight junction (TJ) proteins, which serve as molecular gatekeepers that control paracellular permeability, are central to the integrity of this barrier. TJs consist of various proteins, including claudins, occludin, and zonula occludens proteins, all of which play pivotal roles in maintaining the structural and functional integrity of the intestinal epithelial barrier . Alterations in TJ proteins may be a key factor contributing to the disrupted gut barrier observed in patients with IBS. The dysregulation of these proteins can lead to increased intestinal permeability, thereby allowing luminal antigens to translocate into the underlying mucosa. This, in turn, can trigger immune responses, low-grade inflammation, and an array of GI symptoms characteristic of IBS . Among the proteins composing TJs, the following three have been most studied: the transmembrane proteins occludin and claudin-1, and the cytosolic protein zonula occludens-1 (ZO-1) . However, in patients with IBS, the role of TJ proteins has been poorly investigated. Recent studies have shown an increase in colonic paracellular permeability in patients with IBS associated with a decrease in ZO-1 mRNA level . Other studies have reported that occludin mRNA expression remained unchanged . However, a recent study reported a decrease in occludin expression in the colonic mucosa of patients with IBS . While several studies have investigated the association between TJ protein levels and IBS, the exact nature of these changes, their mechanisms, and their clinical implications remain a subject of ongoing research and debate. Although studies have identified changes in TJ proteins in patients with IBS, the relationship between these changes and the aging process has not been well elucidated. Therefore, we aimed to investigate the changes in TJ proteins associated with aging and disease severity in patients with IBS, thereby contributing to a deeper understanding of the pathophysiological mechanisms of IBS.

Participants

This study was conducted prospectively at a tertiary care center from 2018 to 2022. All participants were recruited from Korea, with the control group comprising individuals who underwent colonoscopies as part of a health checkup. Under the guidance of a well-trained interviewer, participants received colonoscopies and completed questionnaires regarding IBS symptoms.
We excluded patients with any macroscopic or histologic abnormalities of the colonic mucosa, a history of GI surgery, neuromuscular diseases, GI malignancies, or any other organic bowel diseases. Participants were categorized into control or IBS groups, with only IBS-D patients enrolled according to the Rome IV criteria, as in previous studies . IBS-D is known to be the more prevalent subtype, and TJ protein expression was expected to be more closely associated with IBS-D. The control group consisted of individuals with normal colonic mucosa and no evidence of organic or functional bowel disease.

Colonoscopy biopsy

All participants underwent a comprehensive colonoscopy, during which biopsies were taken from the sigmoid colon. Using standard biopsy forceps, we collected six tissue samples of the colon mucosa. These samples were then analyzed using quantitative reverse transcription polymerase chain reaction (qRT-PCR), western blotting, and immunohistochemistry (IHC). Specifically, two colon mucosal biopsy samples were allocated to each analysis method: qRT-PCR, western blot analysis, and IHC.

Questionnaire

Thirty-six patients with IBS fulfilling the Rome IV criteria and twenty-four controls were included in this study. IBS symptom severity was assessed using the IBS Severity Scoring System . The overall IBS score ranged from 0 to 500, and patients were divided into 3 groups according to score, as follows: mild IBS, < 175; moderate IBS, 175–300; and severe IBS, > 300 . The IBS and control groups were classified according to age. Further, both groups were categorized into the old- and young-aged groups. To clarify differences between ages, participants aged 65 years or older were categorized into the old-aged group, whereas those aged 20–59 years were categorized into the young-aged group.

qRT-PCR

Total RNA was extracted from human sigmoid colon tissues using QIAzol reagent (Qiagen, Venlo, The Netherlands), according to the manufacturer's instructions. Using 2 μg of the total RNA as a template, first-strand cDNA was synthesized using the RevertAid First Strand cDNA synthesis kit (Thermo Fisher Scientific, Waltham, MA, USA). The ABI ViiA7 Real-time PCR system (Applied Biosystems, Waltham, MA, USA) was used for quantitative real-time PCR (qPCR) amplification and detection. qPCR was run in triplicate, each reaction in a 10-μL volume, using Power SYBR® Green PCR Master mix (ABI), following the manufacturer's instructions. Thermal cycling was performed under the following conditions: 50°C, 2 minutes; 95°C, 10 minutes; and 40 cycles of 95°C, 15 seconds and 60°C, 1 minute. The primers used are described in . All expression levels were normalized to the GAPDH level of the same sample. The quantity of target gene expression relative to the housekeeping gene was measured using the comparative Ct method .

Western blot analysis

Based on the qRT-PCR results, we selected TJ proteins for western blot analysis. Claudin-1, -2, occludin, and ZO-1 were selected because the PCR analysis suggested they were likely to be related to disease severity and age. Total protein was extracted from sigmoid colon tissues using RIPA buffer (Cell Signaling Technology, Danvers, MA, USA) and PMSF (Sigma Aldrich, St. Louis, MO, USA). Following electrophoresis, proteins were transferred to polyvinylidene difluoride membranes.
Primary antibodies used were claudin-1 (rabbit, 1:250, Invitrogen, Carlsbad, CA, USA, 51–9,000), claudin-2 (rabbit, 1:250, Invitrogen, 51–6,100), occludin (rabbit, 1:250, Invitrogen, 40–4,700), ZO-1 (rabbit, 1:500, Invitrogen, 40–2,200), and β-actin (rabbit, 1:200, Cell Signaling Technology, 4,970). Rabbit-HRP (1:1,000, 7074S, Cell Signaling Technology) was used as the secondary antibody . We performed densitometric quantification using the ImageJ software (National Institutes of Health, Bethesda, MD, USA) and removed irrelevant or blank lanes from blot images to present our data in a streamlined way.

Immunohistochemical analysis

Based on the qRT-PCR and western blot results, we selected ZO-1 as the TJ protein for IHC analysis. Two professionally trained pathologists performed the IHC analysis. The immunohistochemical experiment was commissioned to SuperBioChips (SuperBioChips Laboratories, Seoul, Korea). The primary antibody used was ZO-1 (rabbit, 1:20, ATLAS, Stockholm, Sweden). Samples were analyzed using a BX43F microscope (Olympus, Tokyo, Japan). The expression percentage was assessed as the ratio of the number of ZO-1-stained cells to the total number of cells counted in a section. ZO-1 expression was quantified using the ImageJ software (IHC score: diffuse + strong = 3; focal + strong or diffuse + weak = 2; focal + weak = 1; negative = 0) .

Statistical methods

All statistical analyses were performed using the Statistical Package for the Social Sciences (version 18.0; SPSS Inc., Chicago, IL, USA) and GraphPad Prism 9 (GraphPad Software, LLC, Boston, MA, USA). Data were expressed as means ± standard error of the mean. The IBS and control groups were analyzed by age, sex, and body mass index (BMI); each classification criterion was explained in the questionnaire. One-way analysis of variance and the unpaired t-test were used to compare data between groups. When the number of samples was greater than 10 and less than 30, a normality test was applied before analysis, and the Mann–Whitney test and Kruskal–Wallis test were used for data with fewer than 10 samples. p values of < 0.05 were considered statistically significant.

Ethics statement

The study's protocol received approval from the Institutional Review Board (IRB) of Kyungpook National University Hospital (IRB 2019-06-020), and written informed consent was secured from all participants.
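To make the relative-quantification step in the qRT-PCR methods above concrete, the sketch below illustrates the comparative Ct (2^-ΔΔCt) calculation in Python: the target Ct is normalized to GAPDH and then expressed relative to the mean ΔCt of the control group. The Ct values shown are invented for illustration and are not study data.

    import numpy as np

    def relative_expression(ct_target, ct_gapdh, mean_dct_control):
        """Return the 2^-ddCt relative expression for one sample."""
        dct = ct_target - ct_gapdh          # normalize to the housekeeping gene (GAPDH)
        ddct = dct - mean_dct_control       # compare with the control group
        return 2.0 ** (-ddct)

    # Hypothetical ZO-1 Ct values for three control samples: (target Ct, GAPDH Ct)
    controls = [(25.1, 18.0), (25.4, 18.2), (24.9, 17.9)]
    mean_dct_control = np.mean([t - g for t, g in controls])

    # Hypothetical IBS-D sample
    print(relative_expression(ct_target=26.8, ct_gapdh=18.1,
                              mean_dct_control=mean_dct_control))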
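Similarly, the group comparisons described in the statistical methods above (parametric t-test/one-way ANOVA and non-parametric Mann–Whitney/Kruskal–Wallis tests, chosen according to sample size and normality) can be sketched as follows. The study itself used SPSS and GraphPad Prism; the values below are invented purely for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical relative ZO-1 protein levels
    control    = np.array([1.00, 0.92, 1.10, 0.85, 0.97])
    mild       = np.array([0.86, 0.78, 0.90, 0.81])
    mod_severe = np.array([0.60, 0.52, 0.66, 0.58])
    ibs_all    = np.concatenate([mild, mod_severe])

    # Two-group comparisons
    t_stat, p_t = stats.ttest_ind(control, ibs_all)      # unpaired t-test (parametric)
    u_stat, p_u = stats.mannwhitneyu(control, ibs_all)   # Mann-Whitney U (non-parametric)

    # Three-group comparisons (control vs. mild vs. moderate-to-severe IBS)
    f_stat, p_anova = stats.f_oneway(control, mild, mod_severe)  # one-way ANOVA
    h_stat, p_kw    = stats.kruskal(control, mild, mod_severe)   # Kruskal-Wallis

    print(p_t, p_u, p_anova, p_kw)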
This study included 36 participants with IBS and 24 controls. The baseline characteristics of the participants are presented in . Twenty-three and thirteen patients were included in the IBS group's young- and old-aged groups, respectively. The control group consisted of 24 participants, of whom 12 were included in the young-aged group and 12 in the old-aged group. The mild and moderate-to-severe IBS groups comprised 16 and 20 participants, respectively. The average symptom duration in patients with IBS was 34.3 weeks, and all groups had more than 32 weeks of symptom duration. The average and standard deviation of symptom duration in the IBS subgroups are described in .

qRT-PCR analysis of TJ mRNA

Compared with the control group, claudin-1, -2, -4, junctional adhesion molecule A (JAM-A), occludin, and ZO-1 mRNA levels all showed a tendency to decrease in the IBS group. However, only claudin-1 and -2 showed a statistically significant decrease ( p < 0.05) . In the subgroup analysis, the mild IBS group tended to have lower mRNA levels than the control group; however, only claudin-1 showed a statistically significant decrease ( p < 0.05), and no significant difference was noted in claudin-2, -4, JAM-A, occludin, and ZO-1 . Claudin-1, -2, and ZO-1 showed a tendency to decrease in the old-aged and IBS groups, whereas claudin-4, JAM-A, and occludin did not show any such tendency or statistical significance. Claudin-1 and -2 were statistically significantly reduced in the old-aged IBS group compared to the young-aged IBS group ( p < 0.05) . In the subgroup analysis by sex, the decrease in claudin-1, claudin-2, and ZO-1 mRNA levels in IBS patients was more pronounced in women than in men, although this difference was not statistically significant . In the analysis by BMI, TJ mRNA levels tended to increase in cases of obesity, but this trend also did not reach statistical significance .

Western blot analysis of TJ proteins

Claudin-1, -2, occludin, and ZO-1 protein levels were measured by western blot.
Compared with the control group, claudin-1, -2, occludin, and ZO-1 protein levels all showed a tendency to decrease in the IBS group. Furthermore, claudin-1 ( p = 0.040) and ZO-1 ( p < 0.0001) protein levels showed a statistically significant decrease in the IBS group . In the subgroup analysis, claudin-1 ( p = 0.304), -2 ( p = 0.832), and occludin ( p = 0.690) protein levels showed no significant difference by IBS severity; however, only ZO-1 showed a statistically significant decrease in both the mild ( p = 0.031) and moderate-to-severe IBS groups ( p = 0.006) . Regarding claudin-1, -2, occludin, and ZO-1 protein levels, a tendency to decrease was observed in the old-aged groups compared with the young-aged group. However, claudin-1 ( p = 0.452), -2 ( p = 0.291), and occludin ( p = 0.883) showed no statistical significance. Compared with the young- and old-aged control groups, the old-aged IBS group showed a significantly decreased ZO-1 level ( p = 0.047) . IHC analysis of ZO-1 In the IHC analysis, patients with IBS showed a significantly lower ZO-1 expression than controls ( p < 0.001) . In the subgroup analysis, both the mild and moderate-to-severe IBS groups tended to have lower ZO-1 expression levels than the control group. However, the mild IBS group did not reach statistical significance ( p = 0.229), whereas the moderate-to-severe IBS group had a significantly lower ZO-1 expression level than the control group ( p < 0.001) . The young-aged group tended to have a higher ZO-1 expression level. Older age and moderate to severe symptoms tended to be associated with lower levels of ZO-1 expression. In the subtype analysis according to age, no difference was observed between the young- versus old-aged control groups ( p = 0.506) or the young- versus old-aged IBS groups ( p = 0.515). The old-aged IBS group showed a statistically significantly lower ZO-1 expression than the young-aged control group ( p < 0.05) .
Our study elucidates the complex interplay between TJ protein expression, aging, and disease severity in patients with IBS-D. The differential expression of TJ mRNA or proteins, notably claudin-1, -2, occludin, and ZO-1, underscores the multifaceted nature of IBS-D pathophysiology, particularly in the context of intestinal barrier function. The statistically significant decrease in claudin-1 and -2 mRNA levels in the IBS group compared to controls ( p < 0.05) highlights the potential role of these gene expressions in contributing to the compromised intestinal barrier integrity observed in IBS-D. Claudin-1 and -2 are crucial for maintaining the selectivity and permeability of the TJ barrier, and their decreased expression may facilitate increased intestinal permeability, a hallmark feature observed in many IBS-D patients . This phenomenon could exacerbate the translocation of luminal contents, thereby triggering inflammatory responses and symptomatology associated with IBS-D . Notably, the consistency of ZO-1’s significant decrease across different analyses (western blot and IHC) further supports its critical role in TJ integrity and the pathogenesis of IBS-D. The reduction in ZO-1 protein, particularly pronounced in the old-aged IBS group and the moderate-to-severe IBS subgroup, aligns with existing theories suggesting a degradation of intestinal barrier function with aging and increased disease severity. The paracellular permeability of the intestinal barrier is regulated by a complex protein system that constitutes TJs. Transcellular or paracellular pathways can be described in three ways. The first is the pore pathway, which is a high-capacity size- and charge-selective pathway regulated by the claudin family . The second is the leak pathway, which is a nonselective low-capacity pathway primarily regulated by ZO-1, occludin, and myosin light chain kinase .
Finally, unrestricted pathways are opened owing to the loss of TJ complexes, usually as a result of cell death, apoptosis, or mucosal damage. This pathway can allow the passage of large macromolecules and even microorganisms across the epithelium . Decreased membrane protein expression triggers an inflammatory response in the intestinal wall, thereby leading to changes in intestinal sensitivity and motility alongside disruption of intestinal barrier permeability . Coëffier et al. have linked epithelial barrier disruption and increased permeability to both the downregulation of the TJ scaffolding protein ZO-1 and proteasome-mediated occludin degradation in the colonic mucosa of patients with IBS. Several studies have shown increased intestinal barrier permeability and decreased TJ protein expression in patients with IBS-D. ZO-1 and occludin protein expressions were significantly lower in the colonic mucosa of patients with IBS compared with those of the controls. Zeng et al. have reported that the mRNA expressions of ZO-1 and occludin were significantly reduced in IBS-D. Camilleri et al. have reported that claudin-1 expression, measured by RNA sequencing, was reduced in IBS-D. Zhen et al. have stated that patients with IBS-D had significantly reduced occludin production compared with the controls. Moreover, Lee et al. reported that ZO-1 gene expression was decreased in females, and no difference in claudin-1 and occludin was observed. Western blot showed a decrease in ZO-1 . However, in some studies, no differences were noted. Ishimoto et al. and Camilleri et al. have shown that the mRNA levels of ZO-1, occludin, and claudin-1 were similar between each IBS subtype and controls. The expression of TJ proteins thus differed between studies, and no clear data were available for subgroups . In our study, western blot analysis revealed that patients with IBS exhibited significantly decreased levels of claudin-1 and ZO-1 proteins ( p < 0.05). Subgroup analysis showed that ZO-1 protein levels were significantly lower in both the mild and moderate-to-severe IBS groups ( p < 0.05). In the older IBS group, ZO-1 protein levels were significantly lower than those in the younger IBS group ( p < 0.05). The IHC analysis showed that patients with IBS had significantly lower ZO-1 expression compared to those in the control group ( p < 0.001). The moderate-to-severe IBS group had significantly lower ZO-1 expression levels than the control group ( p < 0.001). The older IBS group showed statistically significantly lower ZO-1 expression compared to the control group ( p = 0.039). The ZO-1 expression level decreased with age. Further, it was lower in individuals with more severe IBS symptoms. In a rat model, age-induced downregulation of mRNA expression and decreased ZO-1 and occludin protein expression were observed in the ileum . Previous studies on the expression of TJ proteins related to age and severity lacked sufficient data in humans. Our study may provide evidence for changes in TJ proteins according to aging and disease severity. Our findings reveal a nuanced relationship between aging, disease severity, and TJ protein expression, especially for the ZO-1 protein. While the decrease in TJ proteins was more pronounced in the old-aged IBS group, it is particularly interesting that age itself did not significantly affect ZO-1 expression levels when comparing the young and old control groups.
This suggests that the observed alterations in TJ protein expression are more directly associated with the presence and severity of IBS-D than with aging alone. The significant decrease in ZO-1 expression in the old-aged IBS group compared to young-aged controls, and its statistical significance across different degrees of IBS severity, underscores the potential exacerbating effect of aging on the disease’s impact on intestinal barrier function. These findings align with the hypothesis that IBS-D’s pathophysiology may be partly due to a compromised barrier function, which is further impacted by aging. In this study, the expression levels of mRNA and protein were significantly decreased in the IBS-D group compared with the control group. However, there was a tendency for these levels to decrease even more in the mild IBS group, contrary to what would be expected from disease severity. Although the expression levels of mRNA and protein did not correlate with disease severity, it should be noted that disease severity in IBS is determined from subjective symptoms reported by the patient using the IBS-SSS questionnaire. Because symptom severity is not measured on a pathological or biological basis, the decrease in TJ mRNA and protein may not align with the symptoms. These changes in TJ proteins may be related to symptoms through alterations in mucosal permeability and the resulting increase in immune cells and inflammatory mediators caused by a compromised epithelial barrier . Further studies are needed to confirm the association with symptoms. This study had several limitations. First, the relatively small sample size and the different number of samples per group limited the generalizability of the findings and the ability to conduct appropriate subtype analyses. Second, biopsy samples were taken from the sigmoid colon, and TJ protein expression may vary depending on the location in the GI tract, making it difficult to characterize the entire intestine. In addition, most previous studies have been conducted in the sigmoid colon, and there is a lack of research on other regions of the GI tract. The expression of TJ proteins is not consistent across different sites . Third, this study focused on patients with IBS-D; therefore, results may vary for different subtypes. Fourth, IBS is a multifactorial disorder wherein dietary habits, lifestyle, and other potential contributing factors may be present, and these factors may not have been fully considered or controlled. Finally, to evaluate TJ proteins, we utilized various techniques (qRT-PCR, western blot, and IHC). Variability in methodologies and potential technical limitations may introduce bias or affect the consistency of results. Additionally, there may be limitations related to the specific population studied, the cross-sectional design, and clinical relevance. Our study demonstrates that IBS-D is associated with significant alterations in the expression of specific TJ mRNA and proteins, notably claudin-1, -2, and ZO-1. These changes are further influenced by the severity of the disease and the age of the patients, suggesting that interventions aimed at restoring TJ protein expression could be beneficial, especially in older patients or those with moderate-to-severe IBS-D. Future research should focus on elucidating the mechanisms underlying these alterations and exploring therapeutic strategies to enhance intestinal barrier function in IBS-D. 1. Age and disease severity are associated with alterations in tight junction proteins, which may contribute to increased intestinal permeability in IBS patients. 2.
Tight junction protein ZO-1 expression is significantly decreased in patients with IBS, especially in older patients and those with more severe symptoms. 3. This study suggests that ZO-1 is the prominent tight junction protein involved in the pathogenesis of IBS, with potential implications for understanding the disease mechanisms in older populations. |
Accessibility of basic paediatric emergency care in Malawi: analysis of a national facility census | 42298c75-afd8-4857-92ab-d21b75ca0411 | 7315502 | Pediatrics[mh] | Emergency care is among the weakest parts of health systems in low-income countries . Sub-Saharan African facilities experience higher patient loads and mortality than other regions particularly for paediatric emergency patients . Quality care is often impeded by adverse case management factors and emergency care is frequently poorly organised and lacking essential supplies . Evidence suggests the quality of emergency care can be improved without huge investments if it is better organised and staff are trained to handle emergency situations in structured ways . At the same time, lower-level facilities may have particularly limited possibilities to manage critically ill patients and must rely on adequate capacity to refer patients to hospitals or higher-level care as required. Yet time-sensitive care is challenged by potential delays during initial care-seeking to first-level facilities compounded by referral delays, which adds to the poor prognosis of already very sick patients . A maximum of 2 h travel time to facilities providing emergency care has been proposed as an international goal for surgical emergencies . A similar target-setting process for accessibility of basic paediatric emergency care is desirable. During the 1990s, the World Health Organisation (WHO) developed the Emergency Triage Assessment and Treatment (ETAT) guidelines to support hospitals in resource-poor settings to care for critically ill children . These guidelines were field tested across multiple countries in the late-1990s showing reduced in-hospital paediatric deaths . An evaluation in one Malawi hospital identified lack of staff and inadequate anaemia/malaria treatment as main limitations in delivering care for critically ill children . The main causes of death in that hospital were malaria and malaria-related illnesses, pneumonia and malnutrition. A more recent study from a separate Malawi hospital found that nearly half (43%) of paediatric deaths were among infants, and leading causes were sepsis, lower respiratory infections, gastroenteritis, meningitis and malaria. Some governments, including Kenya and Malawi, have recently updated ETAT guidelines to provide more detailed instructions on equipment and processes needed for implementation . This extended version of ETAT is known as ETAT Plus (ETAT+), and includes a hospital audit tool. While ETAT was previously implemented in Malawi hospitals, the Malawi Ministry of Health is currently developing its own ETAT+ manual to strengthen quality emergency care across its health system. Yet, to date, there has been no comprehensive assessment of Malawi hospital capacity to implement ETAT+ nor an understanding of commonly lacking equipment or supplies that could pose particular barriers to future implementation. While other research found suboptimal accessibility to surgery and emergency care in sub-Saharan Africa, these studies were based on population distance to hospitals with no assessment of hospital readiness to provide such care . In this study, we used a national facility census with comprehensive facility audits to analyse Malawi hospital readiness to care for critically ill children according to the ETAT+ hospital audit tool. We further estimated travel times to emergency-equipped hospitals from non-equipped facilities and relative to Malawi’s population distribution.
Study setting Malawi is a low-income country in sub-Saharan Africa with approximately 18 million people in 2016 including three million children under 5 years. Malawi’s health system is primarily comprised of government-run facilities and publicly-supported facilities run by the Christian Health Association of Malawi (CHAM). There are three main health system tiers: central hospitals, district hospitals, and peripheral facilities that include health centres, clinics, posts, maternities and dispensaries in our analysis. These facilities typically provide basic essential services including family planning, antenatal services, and outpatient care. The next level is the district hospital that are referral facilities providing inpatient care, laboratory diagnostics, and maternity wards. The highest level are central hospitals that are teaching and research centres with specialized medical services. To date, Malawi has targeted ETAT implementation to hospitals but going forward ETAT+ implementation will target both hospitals and health centres. Survey methods The Malawi Service Provision Assessment (SPA) was conducted in June 2013–February 2014 by the Ministry of Health and The Demographic and Health Survey (DHS) Program, which includes facility audits, observed consultations, patient exit interviews and health worker interviews. All audited facilities were geocoded to allow for spatial analyses. Survey methods are described elsewhere including procedures for obtaining ethical approval and participant consent . Briefly, Malawi SPA 2013–2014 was designed as a census of all formal public and private facilities in the country, which makes it distinct from other facility-based surveys to allow for the current investigation. An inventory questionnaire assessed facility readiness on the interview date to provide various services including: infrastructure, resources, and systems; maternal, new born, and child health; family planning; HIV/AIDS, malaria, and tuberculosis; minor surgery; and non-communicable diseases. There was no specific audit of emergency departments that may be found in larger hospitals with divided organisations of care, nor were there questions about emergency care organisation, training or triage practices. All audited facilities were included in this research. Emergency-equipped definition The emergency-equipped definition in this study was based on components specified in the latest available ETAT+ hospital audit tool from the Kenyan Ministry of Health (Table ) . We assessed emergency readiness among hospitals since those facilities have been targeted for ETAT implementation to date. A hospital was considered emergency-equipped if all items were observed or reported available on the interview date. This definition reflects staff, transport, equipment, diagnostics, medications, fluids, feeds and consumables that underpin care for critically ill children. It does not include training, management or organisation of emergency care that are also important but were not assessed during the audit. Any component with a missing value was considered unavailable. Data analysis Geocoded facility locations were used to create maps characterizing travel times to any facility and emergency-equipped hospital based on methods from Weiss et al. 2018 . Briefly, this approach applies a least cost path algorithm to facility points geolocated with a friction surface (e.g. 2D grid wherein each cell value is an estimate of the time it takes to move one meter within that cell). 
The friction surface accounts for surface travel through transportation networks (e.g. roads, railroads, and navigable waterways) and overland by identifying the fastest route between any two geolocated points. The road network datasets used in this analysis combined data from OpenStreetMap and Google, which creates the largest and most complete global road network datasets available. There are two key assumptions of the travel time model. First, individuals will always travel by the fastest transport means, such as a vehicle on a road rather than walking. Second, travel times are static and do not account for elements such as traffic congestion, public transit delays or infrastructure changes (e.g. flooded roads). Based on these assumptions and available datasets, maps were produced such that each pixel value refers to the number of minutes required to reach the closest facility. We tabulated travel times from every non-equipped facility to its nearest emergency-equipped hospital as well as relative to population distributions by applying gridded population data from the WorldPop project to the travel time maps . National point estimates were tabulated using weights to account for unequal probabilities of selection due to facility non-response in this national facility census. We used Pearson’s chi-squared tests to determine whether median travel times to emergency-equipped hospitals differed across sub-national regions or rural/urban areas. The level of statistical significance was set to 0.05. Stata 13.1 (Stata Corp., College Station, TX) was used for this analysis.
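As a rough, simplified illustration of the least cost path step described above, the sketch below derives a cumulative travel-time surface from a friction raster and a set of facility grid locations using scikit-image; the friction values, grid size, and facility coordinates are toy inputs invented for illustration, not the road-network or WorldPop datasets used in the study.

import numpy as np
from skimage.graph import MCP_Geometric

# Toy friction surface: minutes needed to cross one grid cell (not the real friction raster)
friction = np.full((200, 200), 2.0)   # off-road cells are slow to cross
friction[100, :] = 0.2                # a single fast "road" running across the grid

# Hypothetical grid positions (row, column) of emergency-equipped hospitals
facilities = [(100, 20), (160, 180)]

# Accumulate the cheapest travel cost from any facility to every cell in the grid
mcp = MCP_Geometric(friction)
travel_time, _ = mcp.find_costs(starts=facilities)

# travel_time[r, c] now approximates the minutes required to reach the nearest facility
print("Median travel time (min):", round(float(np.median(travel_time)), 1))
print("Share of cells within 120 min:", round(float(np.mean(travel_time <= 120)), 2))

In the actual analysis, the resulting travel-time grid would additionally be overlaid with gridded population counts so that population shares, rather than cell shares, are reported.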
The Malawi SPA 2013–2014 included 977 facilities of 1060 on the national facility list with non-response due to refusal, closure, inaccessibility or other issue (Table ). Among these 977 facilities, 116 (12%) were hospitals while 861 (88%) were lower-level facilities including health centres, maternities, dispensaries, clinics, or health posts. 478 (49%) were government-run facilities, 160 (16%) were CHAM and 339 (35%) were managed by other authorities such as non-governmental organisations or private companies. A total of 167 (17%) facilities including 23 hospitals were located in the North region where approximately 2.2 million people resided in 2016; 364 (37%) facilities including 43 hospitals were in the Central region with approximately 7.3 million people; and 446 (46%) facilities including 50 hospitals were in the South region with about 7.8 million people. Emergency-equipped facilities Four (3.5%; 95% CI: 1.3–8.9) Malawi facilities had all 25 components available on the interview date to fulfil study criteria for an emergency-equipped facility (Table ). Among these four emergency-equipped facilities, three were hospitals and one was a health centre despite current ETAT implementation targeted only to hospitals. One was in the Central region while two and one were in the North and South, respectively. Two were in rural areas and two in urban districts. One government and three CHAM facilities were emergency-equipped. Figure presents the distribution of Malawi hospitals according to the number of ETAT+ items observed or reported available on the audit date. While only 3 hospitals (plus one health centre) had all 25 items available to fulfil the emergency-equipped criteria, 74 (of 116 hospitals) had between 20 and 24 items available. Table shows availability of each component of the emergency-equipped definition among hospitals and all facilities. Least available items were nasogastric tubes in 34.5% (95% CI: 26.4–43.6) of hospitals followed by blood typing services (40.4, 95% CI: 31.9–49.6). Injectable hydrocortisone, micro nebulizers or spacers for inhalers, and radiology were found in about half of Malawi hospitals. Across all facilities, 33.0% (95% CI: 30.1–36.0) had a functional ambulance to transport critically ill patients as required. Estimated travel times between facilities Among the 973 non-equipped facilities, the median travel time was 73 min (95% CI: 67–77 min) to the nearest emergency-equipped hospital with a range of 1–507 min. In the North, the median travel time was 77 min (95% CI: 67–86 min), and in the South, the median travel time was 80 min (95% CI: 76–87 min). The median travel time in the Central region was 60 min (95% CI: 54–65 min), which was significantly lower than both other regions ( p < 0.001). In urban areas, the median travel time was 44 min (95% CI: 42–46 min) to the nearest emergency-equipped facility, which was significantly lower than in rural areas with a median travel time of 87 min (95% CI: 82–90 min) ( p < 0.001). Estimated travel times by population distributions We estimated that 45% of the Malawi population had no more than a 10-min travel time to any facility while 83 and 95% of the population lived within a 30- and 60-min travel time to any facility, respectively (Fig. ). In contrast, only 4% of the Malawi population lived within a 10-min travel time of an emergency-equipped facility while 11% lived within 30-min travel time and approximately one-third (34%) of the population lived within 60 min. 
More than one-quarter (27%) of Malawi’s population, or approximately 4.7 million people, must travel over 120 min to an emergency-equipped hospital. There were also significant regional differences in estimated travel times to an emergency-equipped hospital (Fig. ). The percentage of the population that must travel over 120 min to an emergency-equipped hospital was 16% in the Central region compared to 35 and 38% in the South and North regions, respectively ( p < 0.001).
Overall, only four Malawi hospitals were fully equipped to provide basic paediatric emergency care according to the study definition. More than one-quarter (27%) of Malawi’s population – or approximately 4.7 million people – must travel more than 120 min to reach an emergency-equipped hospital with significant regional differences in travel times. Our findings are more pessimistic than previous studies that estimated only 7% of Malawi’s population live more than 2 h from a public hospital with emergency care . Another recent study found that 92.5% of the sub-Saharan African population lived within 2 h of a major hospital for surgical procedures . However, actual services provided by each hospital were not known in the aforementioned studies and those results likely overestimate health system capacity to provide emergency care. The current study is the first to our knowledge to map travel times to emergency care for critically ill children in a low-income country by linking a national facility census that combined comprehensive inventory audits with global road network datasets to more accurately estimate population accessibility both nationally and at sub-national levels. Insufficiencies of equipment and supplies for emergency care within hospitals in sub-Saharan Africa is a well-described problem, and constitutes a major barrier to the provision of quality care and adherence to international guidelines . Lack of basic equipment with associated failures to provide the care needed may create a vicious cycle whereby patients’ experienced and perceived poor quality care at healthcare settings can further contribute to treatment failures . A hospital’s ability to provide the care required by visiting clients is important since it may lead to improved and more timely utilization of services and thus better patient outcomes. While our study showed great improvements in hospital readiness to manage severe malaria compared to previous ETAT evaluations in Malawi, other deficiencies were found in these results. Specifically, least available items were nasogastric tubes, blood typing services, injectable hydrocortisone, micro nebulizers or spacer inhalers and radiology. These deficiencies are especially disconcerting considering that low- and middle-income countries bear the greatest burden of death from lung disease, and paediatric anaemia . Insufficiencies in the health infrastructure reduces both quantity and safety of blood supplies in low- and middle-income countries, and improved capacity to provide blood transfusions would be life-saving since mortality from anaemia remains high . Our study also found nasogastric tubes commonly lacking in hospitals, which may pose a barrier to delivering quality care for dehydrated and malnourished children. However, this result could partly reflect data collection issues since only specific sizes of nasogastric tubes were audited with common child sizes (6 and 8G) not assessed. Paediatric asthma is commonly underdiagnosed in low-income settings with associated high mortality rates, and most Malawi facilities lack equipment and medications needed to manage an asthma exacerbation. While oxygen was commonly available in Malawi hospitals, only 15% of lower-level facilities reported oxygen availability despite its inclusion in the WHO essential medicines list . The cost effectiveness of an oxygen system strategy has been shown to compare favourably with other child survival interventions . 
An implementation effectiveness trial of sustainable and renewable oxygen and power systems in remote areas of low-income countries is underway and could provide promising results . The recent WHO report on quality of care recommends timely referral for every child with conditions that cannot be managed effectively at first-level facilities . This is an obvious challenge in the Malawi health system where only one-third of facilities have a functional ambulance. Minimum standards for paediatric emergency care in remote and resource-poor settings are not well-defined, but pre-referral management using simple equipment and supplies could improve survival chances of very sick patients . Recent studies show that improved pre-hospital care was achieved by training commercial taxi/minibus drivers to provide basic emergency care . Implementation of motorcycle ambulances has also reduced referral times for obstetric care in Malawi . Other innovations to address referral challenges should be explored. While our results indicate that 73% of Malawi’s population live within 120 min travel time to an emergency-equipped hospital, this apparently high figure must be considered in light of related issues. First, there are significant in-country regional differences in access to basic paediatric emergency care, with worse population accessibility in the North and South regions than in the Central region. Similar inequities have been demonstrated in paediatric pneumonia assessment practices . Second, the purpose of emergency care is to provide urgent medical interventions for time-critical health problems, making prompt care essential for entire populations. Third, while travel times are estimated based on road networks, other barriers likely impede travel to hospitals such as financial and physical availability of transport and additional difficulties of transporting severely sick children . Malawi roads may also not be fully developed, or there could be other road difficulties that further increase travel times or require people to walk rather than using other transport means. It is thus expected that actual travel times are longer than the ones presented here. Indeed, while travel times are useful in a relative sense (e.g. distinguishing highly accessible areas from remote ones), they provide best-case-scenario values that cannot be considered universally applicable. Malawi has achieved impressive reductions in child mortality and met Millennium Development Goal Four (MDG4) by 2013 . This progress has mainly been explained by high and equitable coverage with high-impact preventive interventions including malaria bed net distribution and reductions in malnutrition. While preventive efforts must be sustained, further decreases in child mortality in Malawi and other low-income countries will require substantial investments to expand emergency care to better manage critically ill children at highest mortality risk. Methodological limitations There are a number of methodological considerations in this study. First, equipment and supplies were assessed through audits of general outpatient departments and other service delivery sites. SPA did not specifically audit emergency departments although minor surgery sites were assessed in every facility. This should mainly affect larger hospitals more likely to have a divided organisation of care. Training, management and organisation of emergency care within each facility were not assessed, nor were quality of care or service utilization outcomes.
Second, the emergency-equipped definition included items that were either observed or reported available as well as either functioning or not functioning/don’t know on the interview date. This definition may misclassify some facilities as emergency-equipped if items were reported available/functioning but were not. Third, only Malawi facilities were included in the analysis and some populations may have shorter travel times to facilities in neighbouring countries. Fourth, emergencies requiring surgical and/or orthopaedic interventions were not included and involve additional management challenges that should be considered in developing emergency care capacity of a health system. Fifth, data were collected during 2013–2014 and may not reflect current ETAT+ readiness in Malawi hospitals. However, it is not expected that paediatric emergency medicine has substantially improved since this time given the lack of major sector-specific investments. Finally, and as previously discussed, travel time estimates are best-case scenario values that indicate relative distances to emergency care for different populations and geographic areas within Malawi. These estimates are not universally applicable such that individual access to emergency care depends on use (or not) of motorised transport, as one example, that would greatly facilitate or impede individual travel times to emergency care.
Based on a national census of 977 facilities in Malawi in 2013–2014, study findings indicate severely limited accessibility to hospitals equipped to manage critically ill children with particularly deficient capacity to treat childhood anaemia and respiratory illnesses. Population access to emergency-equipped hospitals was unevenly distributed across regions and urban/rural areas that should be considered in future health system planning and investments. There is an urgent need to strengthen Malawi health system capacity to manage basic paediatric emergencies including reliable supply chains for essential drugs and commodities and time-sensitive referral for transporting patients to higher-level care as required. While Malawi and other low-income countries have made significant progress in reducing child mortality over the past few decades, further gains will require substantial investments to expand quality emergency care to improve survival chances of critically ill children at highest mortality risk.
|
PD-L1 as a Biomarker in Gastric Cancer Immunotherapy | 2b924b9c-2748-4e54-8b8d-faf222cb84b5 | 11739645 | Anatomy[mh] | Immunotherapy has significantly improved the efficacy of gastric cancer (GC) treatment. Immune checkpoint inhibitors (ICIs), particularly those targeting programmed death-1 (PD-1) or programmed death-ligand 1 (PD-L1), have shown long-term efficacy in a subset of patients with GC. Currently, PD-1 inhibitors combined with chemotherapy are the standard first-line treatment for both human epidermal growth factor 2 (HER2)-negative and -positive locally advanced or metastatic GC . PD-L1 expression, assessed using immunohistochemistry (IHC), serves as a predictive biomarker for immunotherapy in several tumors, including GC, and functions as a companion or complementary diagnostic test for immunotherapy in patients with GC . The combined positive score (CPS) is used to evaluate PD-L1 expression in GC and offers the advantage of a comprehensive assessment of PD-L1 expression in both tumor and immune cells in a single reading . However, several challenges remain regarding the use of PD-L1 as a biomarker in IHC. In this review, we present the current practices of immunotherapy and the associated PD-L1 assays in patients with GC. We provide a detailed overview of the guidelines for interpreting PD-L1 IHC results and discuss the related clinicopathological factors. In addition, we discuss the clinical challenges associated with PD-L1 assays and outline future considerations for PD-L1 as a biomarker and an alternative prospective biomarker of immunotherapy responses in patients with GC. Recent phase III clinical trials have shown the effectiveness of immunotherapy in combination with chemotherapy as first-line treatment for advanced or metastatic GC . The efficacy of incorporating PD-1 antibodies as a first-line therapy for GC was first established in the CHECKMATE 649 trial . Nivolumab (a PD-1 inhibitor) treatment combined with chemotherapy significantly improved overall survival (OS) and progression-free survival in patients with PD-L1 CPS ≥5, as assessed using the 28-8 pharmDx assay . Additional results showed therapeutic effects in patients with PD-L1 CPS ≥1 and in all randomly assigned patients , leading to geographical variations in regulatory approvals and international guidelines regarding the addition of nivolumab to chemotherapy . The U.S. Food and Drug Administration (FDA) approved the use of nivolumab without restrictions based on the PD-L1 CPS . In contrast, the European Medicines Agency (EMA) restricted approval to patients with PD-L1 CPS ≥5 . The Korean Ministry of Food and Drug Safety (MFDS) also approved nivolumab without PD-L1 CPS restriction; however, nivolumab reimbursement for GC treatment is limited to patients with PD-L1 CPS ≥ 5. After the initial disappointing results of the KEYNOTE-062 trial, pembrolizumab (a PD-1 inhibitor) may become a new front-line treatment option for GC . The KEYNOTE-859 trial demonstrated the efficacy of pembrolizumab in combination with chemotherapy as a first-line treatment for patients with locally advanced or metastatic HER2-negative GCs, irrespective of the PD-L1 results . However, the treatment effects were enhanced in patients with CPS ≥1 or ≥10, as assessed using the 22C3 pharmDx assay . The U.S. FDA approved pembrolizumab for HER2-negative GC without PD-L1 CPS restriction, whereas the EMA recommended pembrolizumab for the PD-L1 CPS ≥1 population . The Korean MFDS approved pembrolizumab without PD-L1 restrictions. 
For HER2-positive GCs, the KEYNOTE-811 study indicated that pembrolizumab in combination with first-line trastuzumab and chemotherapy significantly improved progression-free survival in patients with PD-L1 CPS ≥1 . Currently, the U.S. FDA, EMA, and Korean MFDS have approved pembrolizumab with trastuzumab and chemotherapy as a first-line treatment for HER2-positive GC with PD-L1 CPS ≥1, as assessed using the 22C3 pharmDx assay . In China, tislelizumab (a PD-1 inhibitor) in combination with chemotherapy was approved for patients with locally advanced/metastatic HER2-negative GC with PD-L1 tumor area positivity (TAP) ≥5%, as assessed using the SP263 assay, based on the RATIONALE 305 trial . In addition, sintilimab (a PD-1 inhibitor) in combination with chemotherapy was approved for patients with locally advanced or metastatic HER2-negative GC, irrespective of PD-L1 status, based on the ORIENT-16 trial . The treatment effect was more pronounced in patients with CPS ≥5, as assessed using the 22C3 pharmDx assay . However, these agents have not been approved outside China. To guide treatment plans, PD-L1 testing is considered in patients with locally advanced, recurrent, or metastatic GC who are candidates for PD-1 inhibitor therapy. A companion or complementary diagnostic test should be performed on formalin-fixed paraffin-embedded tissues . Currently, three standardized PD-L1 IHC assays (22C3 pharmDx, 28-8 pharmDx, and SP263) are used to specifically predict responses to pembrolizumab, nivolumab, and tislelizumab . For adequate evaluation, a minimum of 100 tumor cells must be present on PD-L1-stained slides . Accurate assessment of the PD-L1 CPS is crucial for reporting the exact CPS score or specifying clinically meaningful intervals (that is, CPS <1; 1–4; 5–9; ≥10), which helps in selecting the best therapeutic option for patients based on their specific needs . The report should also specify the type of assay performed . In addition, it is recommended that information regarding the control tissue and sample adequacy also be included. Currently, two approaches are used for evaluating PD-L1 expression: CPS and TAP . The 22C3 pharmDx and 28-8 pharmDx assays use the CPS scoring system, which is calculated using the following equation: CPS = (number of PD-L1-positive tumor cells and immune cells / number of viable tumor cells) × 100. For tumor cells, convincing partial linear or complete membrane staining is considered PD-L1 positive, irrespective of staining intensity . Tumor cells exhibiting only cytoplasmic staining or membranous staining of the apical surfaces within the glands are not considered positive . For immune cells, membrane and/or cytoplasmic staining of lymphocytes and macrophages within tumor nests and adjacent stroma is counted irrespective of the staining intensity . In some instances, macrophages within the gland lumen are highly positive with no staining in tumor cells . However, this is not generally considered a positive result . For the CPS, a 20× field-of-view rule is applied to define tumor-associated areas . Only immune cells within the 20× magnification field and areas directly related to the tumor response are scored . Notably, other stromal cells such as fibroblasts, neutrophils, plasma cells, and necrotic cells are excluded . If the calculation result exceeds 100, it is presented as a maximum score of 100 . If PD-L1 staining is heterogeneous, the final CPS is estimated by calculating the CPS results for each area within the entire tumor .
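To make the arithmetic of the CPS concrete, the minimal sketch below applies the formula above to hypothetical cell counts, including the minimum of 100 viable tumor cells and the cap at 100; the counts and the function itself are illustrative only and are not part of any assay's scoring software.

def combined_positive_score(positive_tumor_cells, positive_immune_cells, viable_tumor_cells):
    # CPS = (PD-L1-positive tumor cells + PD-L1-positive lymphocytes/macrophages)
    #       divided by the number of viable tumor cells, multiplied by 100 and capped at 100
    if viable_tumor_cells < 100:
        raise ValueError("At least 100 viable tumor cells are required for an adequate evaluation")
    raw_score = (positive_tumor_cells + positive_immune_cells) / viable_tumor_cells * 100
    return min(raw_score, 100)

# Hypothetical counts: 40 positive tumor cells and 25 positive immune cells among 800 viable tumor cells
cps = combined_positive_score(40, 25, 800)
print(f"CPS = {cps:.1f}")                       # 8.1, i.e. within the clinically reported 5-9 interval
for cutoff in (1, 5, 10):
    print(f"CPS >= {cutoff}: {cps >= cutoff}")  # meets the 1 and 5 cutoffs but not 10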
This counting method for CPS is challenging and time-consuming . The responses recorded in a recent PD-L1 quality assessment survey conducted by the College of American Pathologists (2021B) indicated that fewer than 3% of pathologists attempt to count each cell and calculate a score , preferring visual evaluation. Therefore, TAP assessment through visual evaluation has been suggested . In the RATIONALE 305 trial for tislelizumab, TAP using the SP263 assay was used instead of CPS . TAP is a simple visual method for scoring tumor and immune cells together . It uses the percentage of PD-L1 expression in tumor and immune cells, evaluated as the proportion of the tumor area occupied by all viable tumor cells and the tumor-associated stroma containing tumor-associated immune cells . The following equation applies: TAP = (PD-L1-positive tumor and immune cells / tumor area) × 100%. The method of counting only membranous staining for tumor cells, as well as membranous and/or cytoplasmic staining for immune cells, is the same as that used in CPS . However, the TAP method involves all types of immune cells, including neutrophils, which reduces the need to confirm the cell types at high magnification . Another distinction from the CPS is the application of a 10× rule to define the tumor area . Liu et al. compared TAP and CPS in GC and esophageal squamous cell carcinoma (ESCC) samples using the SP263 assay to assess concordance and time efficiency. The agreement between TAP and CPS was ≥85%, with a TAP score at the 5% cutoff showing improved concordance with CPS 1 compared to a TAP score at the 1% cutoff . These findings suggested that TAP and CPS can potentially be used to identify the same patient population . In addition, the average scoring time for TAP was 5 min compared to 30 min for CPS, suggesting that TAP is less time-consuming . The agreement rate for TAP among pathologists was also high . Although TAP and CPS appear to be largely similar, further studies are required to validate the use of TAP in GC. The positivity of PD-L1 in GC varies with different assays, specimen types, and other factors. In recent clinical trials, PD-L1 positivity at a cutoff of CPS ≥1 was reported to be >70% . In the CHECKMATE 649 trial for HER2-negative GC, the positivity for CPS ≥1 was 82.0% (1,296/1,581), whereas the positivity for CPS ≥5 was 60.4% (955/1,581) . In the KEYNOTE 859 trial for HER2-negative GC, the positivity for CPS ≥1 was 78.2% (1,235/1,579), and the positivity for CPS ≥10 was 34.9% (551/1,579) . In the KEYNOTE 811 trial for HER2-positive GC, the positivity rate at a CPS cut-off of 1 was 85.1% (594/698) . Several studies reported that PD-L1 positivity was associated with Epstein-Barr virus (EBV) positivity and microsatellite instability (MSI)-high status . In particular, high PD-L1 expression (CPS ≥10 or ≥50) is frequently observed in EBV-positive and MSI-high GCs . One meta-analysis indicated that PD-L1 expression showed no correlation with sex, age, cancer location, differentiation, or tumor stage . Another notable clinical issue is the overlap of PD-L1 expression with that of other biomarkers, which is crucial for optimal treatment planning . Notably, zolbetuximab, an anti-Claudin 18.2 monoclonal antibody, has recently been approved as a first-line treatment for GC and is expected to soon be integrated into routine practice. Kubota et al. reported a numerically lower rate of PD-L1 CPS ≥5 among Claudin 18.2-positive patients, although this finding was not statistically significant.
In contrast, Kwak et al. recently reported that Claudin 18.2 positivity was higher in patients with GC with PD-L1 CPS ≥5. Future studies correlating these findings with treatment outcomes are required. CPS cutoff The CPS cutoff value for predicting patients who will benefit the most from immunotherapy remains a topic of debate. These cutoff values are subject to change as results from new clinical trials and studies become available. Moreover, the CPS cutoff values for the approval of immunotherapy vary by country or approval agency. However, the benefit of additional immunotherapy in populations with low PD-L1 expression requires further investigation. Intra-tumoral heterogeneity Heterogeneous PD-L1 expression across areas within a tumor is an inherent issue that can influence its role as a predictive biomarker. Heterogeneity of PD-L1 expression within and between tumor sites has been described in other solid tumors. Colarossi et al. reported a change in PD-L1 status in only one out of 53 cases of GC, suggesting consistency in PD-L1 tumor expression between primary and metastatic tumors. However, Gao et al. observed that PD-L1 positivity was significantly higher in metastatic lymph nodes (45.4%) than in primary gastric tumors (38.7%). Zhou et al. reported marked spatial heterogeneity between primary gastric and metastatic tumors (61% concordance). In the same study, temporal heterogeneity in PD-L1 expression was noted between tumors before and after chemotherapy (63% concordance). Elevated PD-L1 expression after neoadjuvant chemotherapy has been reported in various solid tumors, including GC. The mechanism by which chemotherapy alters PD-L1 expression in GC has not been fully elucidated. In addition, the question of whether a small biopsy is representative of the PD-L1 status of the entire tumor remains to be addressed. Ye et al. reported that PD-L1 expression in tissue microarray samples showed varying degrees of concordance with the corresponding surgical specimens, suggesting that at least five biopsies are required to accurately assess PD-L1 status. Inter-observer concordance Inter-observer variability among pathologists in PD-L1 assessment poses a challenge when using PD-L1 as a biomarker in GC. summarizes the studies that evaluated the inter-observer concordance of PD-L1 evaluation in GC. Previous studies have reported excellent interobserver agreement, with overall percentage agreements (OPAs) >95% among pathologists. Recently, Kim et al. evaluated the inter-observer variability of the CPS in 143 clinical GC samples. Inter-observer variability, as represented by the intra-class correlation coefficient (ICC), was 0.89 and 0.88 for the 28-8 pharmDx assay and the 22C3 antibody concentrate, respectively. In contrast, two recent studies reported high interobserver variability among pathologists for CPS in cases of GC. Fernandez et al. evaluated the concordance of PD-L1 CPS among 14 pathologists using 112 biopsy samples stained with the 22C3 pharmDx assay. At a CPS cutoff of 1, the OPA reached only 31.48%, and the ICC was 0.484. Higher cutoffs performed better than a CPS cutoff of 1. Robert et al. evaluated the interobserver agreement of 12 pathologists using 100 biopsies stained with the PD-L1 28-8 and 22C3 pharmDx assays. Inter-observer agreement for CPS across the 100 biopsies was poor to fair both pre-training (ICC range, 0.45–0.55) and post-training (ICC range, 0.56–0.57) for both assays.
Next, they evaluated the inter-observer agreement for the elements comprising the CPS in 35 biopsy fragments. Poor or fair agreement was observed for the number of PD-L1-positive immune cells (ICC, 0.19), PD-L1-positive tumor cells (ICC, 0.54), total number of viable tumor cells (ICC, 0.09), and the calculated CPS (ICC, 0.14), whereas the calculated tumor cell score showed excellent agreement (ICC, 0.82). This aligns with results from other tumors indicating that interobserver concordance for immune cells is considerably lower than that for tumor cells. This high interobserver variability raises questions about PD-L1 CPS as a biomarker for GC. However, because of the increasing reliance on specific PD-L1 CPS cutoffs for clinical indications, efforts such as education and standardized guidelines are required to address this issue. Inter-assay concordance Discordant results from different PD-L1 assays pose challenges in the assessment of PD-L1 expression. Additionally, not all assays are available in all areas. Notably, only limited samples, such as endoscopic or peritoneal biopsies, are available for certain patients with GC. Therefore, there is an increasing need to harmonize PD-L1 assays for GC. summarizes comparative studies on PD-L1 assays in patients with GC. Ahn and Kim reported that the 22C3 and 28-8 pharmDx assays were highly comparable at various CPS cutoffs. The OPA was 96.4% at the CPS 1 cutoff, with higher OPAs at the CPS 10 and 50 cutoffs. However, nonspecific staining is often observed with the 28-8 pharmDx assay, which requires caution in interpretation. Park et al. reported high concordance between the 22C3 pharmDx and SP263 assays for CPS evaluation, with OPA >90% at various CPS cutoffs. In a recent study, Klempner et al. compared the 22C3 pharmDx, 28-8 pharmDx, and SP263 assays using both the CPS and TAP scoring systems. Moderate to strong ICCs (≥0.70) were reported in pairwise assay comparisons for both scoring algorithms, highlighting analytical concordance among the three major PD-L1 assays when either TAP or CPS is used. However, whether inter-assay results can be compared between the 22C3 and 28-8 assays is still unclear because discordant results have also been reported. In a recent study, Kim et al. reported suboptimal agreement between the 28-8 pharmDx assay and the 22C3 antibody concentrate (not pharmDx), with OPA and Cohen's kappa values between the two assays of 78.3% and 0.56 at the CPS 1 cutoff, 81.8% and 0.60 at the CPS 5 cutoff, and 88.8% and 0.66 at the CPS 10 cutoff. This variability will need to be addressed in future studies.
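Concordance figures such as OPA and Cohen's kappa ultimately reduce to comparing binarized calls at a given cutoff. The short Python sketch below uses invented CPS values (not data from the studies cited above) to show how these agreement statistics can be computed for two assays or two readers; the same logic applies when comparing, for example, TAP ≥5% calls against CPS ≥1 calls.

```python
def binary_calls(scores, cutoff):
    """Dichotomize continuous scores (e.g., CPS or TAP %) at a clinical cutoff."""
    return [s >= cutoff for s in scores]

def overall_percent_agreement(calls_a, calls_b):
    return 100.0 * sum(a == b for a, b in zip(calls_a, calls_b)) / len(calls_a)

def cohens_kappa(calls_a, calls_b):
    # Chance-corrected agreement; assumes the expected agreement is < 1.
    n = len(calls_a)
    observed = sum(a == b for a, b in zip(calls_a, calls_b)) / n
    pos_a, pos_b = sum(calls_a) / n, sum(calls_b) / n
    expected = pos_a * pos_b + (1 - pos_a) * (1 - pos_b)
    return (observed - expected) / (1 - expected)

# Made-up CPS values for the same 8 samples scored with two assays (or by two readers)
assay_a = [0, 2, 6, 12, 1, 0, 45, 4]
assay_b = [0, 1, 4, 15, 0, 2, 50, 6]
for cutoff in (1, 5, 10):
    a, b = binary_calls(assay_a, cutoff), binary_calls(assay_b, cutoff)
    print(f"CPS >= {cutoff}: OPA = {overall_percent_agreement(a, b):.1f}%, "
          f"kappa = {cohens_kappa(a, b):.2f}")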
Novel immunotherapy and biomarkers The phase II EDGE-Gastric trial evaluated several novel immunotherapy-based regimens in GC patients, some of which target both PD-1 and the T-cell immunoreceptor with immunoglobulin and ITIM domains (TIGIT). Preliminary data showed promising outcomes in treatment-naïve patients who received zimberelimab (a PD-1 inhibitor) and domvanalimab (a TIGIT inhibitor) in combination with chemotherapy, especially in those with TAP ≥5% as assessed using the SP263 assay. A phase III trial comparing zimberelimab plus domvanalimab plus chemotherapy with nivolumab plus chemotherapy is ongoing. DKN-01, an anti-DKK1 (Dickkopf-1) monoclonal antibody, exhibits immunomodulatory activity, stimulates a pro-inflammatory tumor microenvironment (TME), and upregulates PD-L1 levels. A phase II study (DisTinGuish) of DKN-01 in combination with tislelizumab and chemotherapy as first-line therapy demonstrated prolonged progression-free survival and OS, especially in patients with low PD-L1 expression. In this study, DKK1 expression was assessed by central laboratories using RNAscope, along with PD-L1 expression. Image analysis-assisted PD-L1 interpretation A major methodological difficulty in incorporating PD-L1 expression as a biomarker for immunotherapy is the interpretation of PD-L1 staining. Efforts have been made to overcome this difficulty, and the application of computer image analysis algorithms is one such endeavor. Several previous studies have identified image analysis algorithms as potential tools for improving the accuracy and reproducibility of PD-L1 scoring by pathologists in other solid tumors. Kim et al. generated PD-L1 CPS scores for 39 cases of GC using the Aperio IHC membrane image analysis algorithm (ScanScope; Aperio Technologies, Vista, CA, USA) with additional input from manual annotation and computation, showing that PD-L1 CPS scores supported by image analysis were concordant with manual scoring performed by pathologists. Notably, PD-L1 scores derived from image analysis were comparable to manual scoring in predicting patient responses to pembrolizumab.
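Image-analysis-assisted scoring ultimately reduces to aggregating per-cell classifications into a slide-level score. The sketch below is a simplified illustration of that aggregation step under assumed inputs (a list of hypothetical cell detections with a predicted class and PD-L1 status); it does not reproduce the Aperio algorithm used by Kim et al.

```python
from collections import Counter

# Hypothetical per-cell output from an image-analysis pipeline: each detection
# carries a predicted cell class and a PD-L1 staining call.
template = [
    {"cell_class": "tumor",      "pd_l1_positive": True},
    {"cell_class": "tumor",      "pd_l1_positive": False},
    {"cell_class": "tumor",      "pd_l1_positive": False},
    {"cell_class": "lymphocyte", "pd_l1_positive": True},
    {"cell_class": "macrophage", "pd_l1_positive": False},
    {"cell_class": "fibroblast", "pd_l1_positive": True},   # stromal cell: excluded from CPS
]
detections = template * 50  # pretend the slide yields 300 detections

IMMUNE_CLASSES = {"lymphocyte", "macrophage"}

viable_tumor = sum(d["cell_class"] == "tumor" for d in detections)
positive_tumor = sum(d["cell_class"] == "tumor" and d["pd_l1_positive"] for d in detections)
positive_immune = sum(d["cell_class"] in IMMUNE_CLASSES and d["pd_l1_positive"] for d in detections)

cps = min((positive_tumor + positive_immune) / viable_tumor * 100, 100)
print(Counter(d["cell_class"] for d in detections), f"CPS = {cps:.1f}")
```

In a real pipeline the per-cell calls would come from a trained detector rather than a hand-written list, but the aggregation into a CPS follows the same counting rules described earlier.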
In a recent study, an artificial intelligence (AI)-aided PD-L1 image analysis algorithm demonstrated clinical efficacy as a diagnostic aid in other tumors, such as lung cancer. However, AI-aided PD-L1 assessment has not yet been attempted for GC, although developments in this area are expected. Other predictive biomarkers for immunotherapy in GC Another potential limitation of PD-L1 IHC is the controversy regarding its efficacy as an accurate biomarker of immunotherapy response. Several other biomarkers predicting immunotherapy outcomes have also been identified. Microsatellite instability-high or deficient mismatch repair (MSI-H/dMMR) status has emerged as a significant biomarker for predicting response to immunotherapy in various types of solid tumors. Several clinical studies have examined the response of patients with MSI-H/dMMR GC to immunotherapy. Among patients with MSI-high GC, 57.1% experienced an objective response, whereas only 9.0% of patients with non-MSI-high GC achieved an objective response. In a meta-analysis of clinical trials, including the KEYNOTE-062, CheckMate-649, JAVELIN Gastric 100, and KEYNOTE-061 trials, the hazard ratio for OS benefit with anti-PD-1-based treatment was 0.34 for patients with MSI-H GC, compared with 0.85 for patients with non-MSI-H GC. EBV-associated GC accounts for approximately 10% of all cases of GC. This subtype typically exhibits distinct histological features and is characterized by significant immune cell infiltration within the tumor. Several studies have reported favorable responses to immunotherapy in EBV-associated GC. Kim et al. reported that all six patients with EBV-associated GC treated with pembrolizumab achieved an objective response. In another study, among nine patients with EBV-positive GC treated with various ICIs, including nivolumab, three showed a partial response and five had stable disease. Notably, all seven patients who exhibited an objective response also exhibited positive PD-L1 expression. However, Wang et al. reported that only one of four patients with EBV-associated GC treated with toripalimab (a PD-1 inhibitor) achieved partial remission, whereas the remaining patients had stable disease (two cases) or disease progression (one case). Another study examined the clinical response of patients with EBV-associated GC treated with camrelizumab (a PD-1 inhibitor); however, none of the patients showed an objective response. The predictive value of EBV positivity for response to immunotherapy therefore remains uncertain, and further investigation with larger sample sizes is required. Tumor mutation burden (TMB) is another biomarker that has been investigated. Tumors with high TMB (TMB-H) are believed to harbor a higher number of neoantigens, thereby increasing the likelihood of detection by the immune system. As part of the KEYNOTE-062 study, the clinical effectiveness of pembrolizumab combined with chemotherapy as first-line treatment of advanced GC was investigated. Improved clinical outcomes were observed in patients with GC and TMB >10 treated with pembrolizumab, either as monotherapy or in combination with chemotherapy. In a phase Ib/II clinical trial that evaluated the response to toripalimab (a PD-1 inhibitor) in GC, the TMB-H group (defined as TMB >12 mutations/Mb) showed significantly better OS than the low-TMB group (14.6 vs. 4.0 months, hazard ratio = 0.48).
In another clinical trial that tested lenvatinib in combination with pembrolizumab in patients with GC, in either the first- or second-line setting, patients with TMB-H (TMB >10) showed an 82% objective response rate, whereas those with low TMB showed a 60% objective response rate. The TME directly influences immunotherapy effectiveness. Several studies have focused on tumor-infiltrating lymphocytes (TILs) as predictive biomarkers of ICI response. Tong et al. hypothesized that intratumoral CD8+ TILs may be a positive predictive factor for clinical response to immunotherapy in PD-L1-negative advanced GC. Boku et al. examined the TILs of 91 patients with advanced GC and suggested that patients with a high proportion of CTLA-4+ and LAG-3+ myeloid cells before nivolumab treatment had a poor prognosis. Other studies have focused on tertiary lymphoid structures, another component of the TME. Tertiary lymphoid structures have been reported to correlate with positive immunotherapy responses in various cancer types. In patients with GC, a tertiary lymphoid structure score, calculated from tertiary lymphoid structure-related genes through principal component analysis, correlated with a superior response to PD-1 blockade therapy. CD73, a novel immune checkpoint protein, has also been proposed as a potential immunotherapy biomarker; its overexpression in GC indicates better chemotherapeutic responsiveness to fluorouracil but a poorer objective response rate to pembrolizumab. However, likely because of the complex nature of the TME, a single biomarker may not be adequate to identify the patients with GC who will benefit most from immunotherapy. Chen et al. investigated the density and spatial patterns of immune cells and determined that the density of CD4+FoxP3−PD-L1+ T cells and the effective score of CD8+PD-1+LAG-3− T cells were closely associated with a positive response to anti-PD-1/PD-L1 therapy, whereas CD8+PD-1−LAG-3− T cells and CD68+STING+ macrophages were closely associated with a negative response. This result highlights the need for a multidimensional approach to TME analysis. Several studies have shown that the gut microbiota may be associated with tumor progression and can potentially affect the efficacy of ICIs. In the DELIVER study, which investigated whether the gut microbiome can serve as a predictor of the efficacy of nivolumab in GC, upregulation of the bacterial invasion of epithelial cells pathway was associated with disease progression after nivolumab treatment. Moreover, certain bacterial species, such as those of the Odoribacter and Veillonella genera, have been associated with the nivolumab response. Helicobacter pylori is a well-known infectious microorganism that is closely associated with the development of GC. In a previous study, H. pylori seropositivity was associated with poorer survival outcomes in patients with non-small cell lung cancer undergoing anti-PD-1 therapy. A retrospective study of patients with GC showed that patients in the H. pylori-positive group had a higher risk of a poor response to anti-PD-1 antibodies than those in the H. pylori-negative group. Further research is required to explore the effect of H. pylori on the outcomes of immunotherapy in GC. Among genomic alterations, POLE/POLD1 mutations are a promising marker for ICI treatment.
Based on the findings of a study that examined mutational data from 47,221 malignant tumors of various origins, including 185 esophagogastric cancers, POLE/POLD1 mutations were associated with longer OS in patients treated with ICIs .
In recent years, the combination of anti-PD-1 agents and chemotherapy has been shown to be clinically effective as a first-line treatment for advanced and metastatic GC, thereby establishing it as standard therapy. Currently, the selection of candidates who are most likely to benefit from this treatment relies on the PD-L1 expression level, as determined through IHC testing. However, PD-L1 testing has limitations, and efforts are underway to minimize their impact. In addition, novel biomarkers are being investigated as alternative methods for predicting the efficacy of immunotherapy. Future studies are required to refine predictive biomarkers for immunotherapy in patients with GC. |
Enhancing clinical skills in pediatric trainees: a comparative study of ChatGPT-assisted and traditional teaching methods | d57bc72c-6091-4b9d-8457-1afc3369aa4b | 11112818 | Pediatrics[mh] | The introduction of ChatGPT by OpenAI in November 2022 marked a watershed moment in educational technology, heralded as the third major innovation following Web 2.0's emergence over a decade earlier and the rapid expansion of e-learning driven by the COVID-19 pandemic . In medical education, the integration of state-of-the-art Artificial Intelligence (AI) has been particularly transformative for pediatric clinical skills training—a field where AI is now at the forefront. Pediatric training, with its intricate blend of extensive medical knowledge and soft skills like empathetic patient interaction, is pivotal for effective child healthcare. The need for swift decision-making, especially in emergency care settings, underscores the specialty's complexity. Traditional teaching methods often fall short, hindered by logistical challenges and difficulties in providing a standardized training experience. AI tools such as ChatGPT offer a promising solution, with their ability to simulate complex patient interactions and thus improve pediatric trainees' communication, clinical reasoning, and decision-making skills across diverse scenarios . ChatGPT's consistent, repeatable, and scalable learning experiences represent a significant advancement over traditional constraints, such as resource limitations and standardization challenges , offering a new paradigm for medical training. Its proficiency in providing immediate, personalized feedback could revolutionize the educational journey of pediatric interns. Our study seeks to investigate the full extent of this potential revolution, employing a mixed-methods approach to quantitatively and qualitatively measure the impact of ChatGPT on pediatric trainees' clinical competencies. Despite AI's recognized potential within the academic community, empirical evidence detailing its influence on clinical skills development is limited . Addressing this gap, our research aims to contribute substantive insights into the efficacy of ChatGPT in enhancing the clinical capabilities of pediatric trainees, establishing a new benchmark for the intersection of AI and medical education. Participants Our study evaluated the impact of ChatGPT-assisted instruction on the clinical skills of 77 medical interns enrolled in Sun Yat-sen University's five-year program in 2023. The cohort, consisting of 42 males and 35 females, was randomly allocated into four groups based on practicum rotation, using a computer-generated randomization list. Each group, composed of 3–4 students, was assigned to either the ChatGPT-assisted or traditional teaching group for a two-week pediatric internship rotation. Randomization was stratified by baseline clinical examination scores to ensure group comparability.
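As a rough illustration of the kind of stratified, computer-generated allocation described above (a hypothetical Python sketch with invented baseline scores, not the study's actual randomization list), interns can be ranked by baseline score, split into strata, and randomized 1:1 within each stratum:

```python
import random

def stratified_assignment(interns, n_strata=4, seed=2023):
    """Rank interns by baseline score, split into strata of similar ability,
    then randomize each stratum roughly 1:1 between the two teaching arms."""
    rng = random.Random(seed)
    ranked = sorted(interns, key=lambda x: x["baseline_score"])
    stratum_size = -(-len(ranked) // n_strata)  # ceiling division
    assignment = {}
    for start in range(0, len(ranked), stratum_size):
        stratum = ranked[start:start + stratum_size]
        rng.shuffle(stratum)
        half = len(stratum) // 2
        for intern in stratum[:half]:
            assignment[intern["id"]] = "ChatGPT-assisted"
        for intern in stratum[half:]:
            assignment[intern["id"]] = "traditional"
    return assignment

# 77 hypothetical interns with made-up baseline scores
interns = [{"id": i, "baseline_score": 70 + (i * 37) % 26} for i in range(77)]
groups = stratified_assignment(interns)
print(sum(v == "ChatGPT-assisted" for v in groups.values()),
      sum(v == "traditional" for v in groups.values()))  # 38 39
```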
Study design A controlled experimental design was implemented with blind assessment. The interns were randomly assigned to the ChatGPT-assisted group (39 students) or the traditional group (38 students), with no significant differences in gender, age, or baseline clinical examination scores ( p > 0.05). The ChatGPT-assisted group received instruction supplemented with ChatGPT version 4.0, while the traditional group received standard bedside teaching (as depicted in Fig. ). Both groups encountered identical clinical case scenarios involving common pediatric conditions: Kawasaki disease, gastroenteritis, congenital heart disease, nephrotic syndrome, bronchopneumonia, and febrile convulsion. All interns had equal access to the same teaching materials, instructors, and intensity of courses. The core textbook was the 9th edition of "Pediatrics" published by the People's Medical Publishing House. Ethical approval was obtained from the institutional review board, and informed consent was secured, with special attention to privacy concerns due to the involvement of pediatric patient data. Traditional teaching group Pre-rotation preparation Instructors designed typical cases representing common pediatric diseases and updated knowledge on the latest diagnostic and therapeutic advancements. They developed multimedia presentations detailing the presentation, diagnostic criteria, and treatment plans for each condition. Teaching process The teaching method during the rotation was divided into three stages: Case introduction and demonstration Instructors began with a detailed introduction of clinical cases, explaining diagnostic reasoning and emphasizing key aspects of medical history-taking and physical examination techniques. Student participation Students then conducted patient interviews and physical assessments independently, with the instructor observing. For pediatric patients, particularly infants, history was provided by the guardians. Feedback and discussion At the end of each session, instructors provided personalized feedback on student performance and answered questions, fostering an interactive learning environment. ChatGPT-assisted teaching group Pre-rotation preparation Educators prepared structured teaching plans focusing on common pediatric diseases and representative cases. The preparation phase involved configuring ChatGPT (version 4.0) settings to align with the educational objectives of the rotation.
Teaching process The rotation was executed in four consecutive steps: ChatGPT orientation Students were familiarized with the functionalities and potential educational applications of ChatGPT version 4.0. ChatGPT-driven tasks In our study, ChatGPT version 4.0 was used as a supplementary educational tool within the curriculum. Students engaged with the AI to interactively explore dynamically generated clinical case vignettes based on pediatric medicine. These vignettes encompassed clinical presentations, history taking, physical examinations, diagnostic strategies, differential diagnoses, and treatment protocols, allowing students to query the AI to enhance their understanding of various clinical scenarios. Students accessed clinical vignettes in both text and video formats, with video particularly effective in demonstrating physical examination techniques and communication strategies with guardians, thereby facilitating a more interactive learning experience. ChatGPT initially guided students in forming assessments, while educators critically reviewed their work, providing immediate, personalized feedback to ensure proper development of clinical reasoning and decision-making skills. This blend of AI and direct educator involvement aimed to improve learning outcomes by leveraging AI’s scalability alongside expert educators’ insights. Bedside clinical practice Students practiced history-taking and physical examinations at the patient’s bedside, with information about infants provided by their guardians. Feedback and inquiry Instructors offered feedback on performance and addressed student queries to reinforce learning outcomes. Assessment methods The methods used to evaluate the interns’ post-rotation performance included three assessment tools: Theoretical knowledge exam Both groups completed the same closed-book exam to test their pediatric theoretical knowledge, ensuring consistency in cognitive understanding assessment. Mini-CEX assessment The Mini-CEX has been widely recognized as an effective and reliable method for assessing clinical skills . Practical skills were evaluated using the Mini-CEX, which involved students taking histories from parents of pediatric patients and conducting physical examinations on infants, supervised by an instructor. Mini-CEX scoring utilized a nine-point scale with seven criteria, assessing history-taking, physical examination, professionalism, clinical judgment, doctor-patient communication, organizational skills, and overall competence. History taking This assessment measures students’ ability to accurately collect patient histories, utilize effective questioning techniques, respond to non-verbal cues, and exhibit respect, empathy, and trust, while addressing patient comfort, dignity, and confidentiality. Physical examination This evaluates students on informing patients about examination procedures, conducting examinations in an orderly sequence, adjusting examinations based on patient condition, attending to patient discomfort, and ensuring privacy. Professionalism This assesses students’ demonstration of respect, compassion, and empathy, establishment of trust, attention to patient comfort, maintenance of confidentiality, adherence to ethical standards, understanding of legal aspects, and recognition of their professional limits. Clinical judgment This includes evaluating students’ selection and execution of appropriate diagnostic tests and their consideration of the risks and benefits of various treatment options. 
Doctor-patient communication This involves explaining test and treatment rationales, obtaining patient consent, educating on disease management, and discussing issues effectively and timely based on disease severity. Organizational efficiency This measures how students prioritize based on urgency, handle patient issues efficiently, demonstrate integrative skills, understand the healthcare system, and effectively use resources for optimal service. Overall competence This assesses students on judgment, integration, and effectiveness in patient care, evaluating their overall capabilities in caring and efficiency. The scale ranged from below expectations (1–3 points), meeting expectations (4–6 points), to exceeding expectations (7–9 points). To maintain assessment consistency, all Mini-CEX evaluations were conducted by a single assessor. ChatGPT method feedback survey Only for the ChatGPT-assisted group, the educational impact of the ChatGPT teaching method was evaluated post-rotation through a questionnaire. This survey used a self-assessment scale with a Cronbach's Alpha coefficient of 0.812, confirming its internal consistency and reliability. Assessment items involved active learning engagement, communication skills, empathy, retention of clinical knowledge, and improvement in diagnostic reasoning. Participant satisfaction was categorized as (1) very satisfied, (2) satisfied, (3) neutral, or (4) dissatisfied. Statistical analysis Data were analyzed using R software (version 4.2.2) and SPSS (version 26.0). Descriptive statistics were presented as mean ± standard deviation (x ± s), and independent t-tests were performed to compare groups. Categorical data were presented as frequency and percentage (n), with chi-square tests applied where appropriate. A P -value of < 0.05 was considered statistically significant. All assessors of the Mini-CEX were blinded to the group assignments to minimize bias.
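For reference, the internal-consistency coefficient quoted for the feedback survey (Cronbach's alpha) can be computed from a respondents-by-items matrix of ratings as sketched below. The ratings shown are invented for illustration and do not reproduce the study data; the original analysis was performed in R/SPSS.

```python
import numpy as np

def cronbachs_alpha(item_scores):
    """Cronbach's alpha for a respondents x items matrix of ratings."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Invented 1-4 ratings from 6 respondents on 5 survey items (illustration only)
ratings = np.array([
    [1, 1, 2, 1, 1],
    [2, 2, 2, 1, 2],
    [1, 2, 1, 1, 1],
    [3, 3, 2, 3, 3],
    [2, 1, 2, 2, 2],
    [1, 1, 1, 2, 1],
])
print(round(cronbachs_alpha(ratings), 3))
```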
Data were analyzed using R software (version 4.2.2) and SPSS (version 26.0). Descriptive statistics were presented as mean ± standard deviation (x̄ ± s), and independent t-tests were performed to compare groups. Categorical data were presented as frequency and percentage (n), with chi-square tests applied where appropriate. A P-value of < 0.05 was considered statistically significant. All assessors of the Mini-CEX were blinded to the group assignments to minimize bias. Theoretical knowledge exam scores for both groups of trainees: The theoretical knowledge exam revealed comparable results between the two groups, with the ChatGPT-assisted group achieving a mean score of 92.21 ± 2.37 and the traditional teaching group scoring slightly higher at 92.38 ± 2.68. Statistical analysis using an independent t-test showed no significant difference in the exam scores (t = 0.295, p = 0.768), suggesting that both teaching methods similarly supported the trainees' theoretical learning. Mini-CEX evaluation results for both groups of trainees: All trainees completed the Mini-CEX evaluation in 38 ± 0.5 min on average, with immediate post-evaluation feedback averaging 5.8 ± 0.6 min per student. The ChatGPT group demonstrated statistically significant improvement in professional conduct, clinical judgment, patient communication, and overall clinical skills compared to the traditional group. A detailed comparison of the Mini-CEX scoring for both student groups is presented in Table ; Fig. .
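The group comparisons described above (independent t-tests for continuous scores, chi-square tests for categorical data, significance at p < 0.05) were run in R and SPSS. The sketch below reproduces the same kind of checks in Python; the score vectors, group sizes, and contingency counts are invented for illustration and only loosely mimic the reported means and standard deviations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical theoretical-exam scores, roughly matching the reported
# 92.21 +/- 2.37 (ChatGPT-assisted) and 92.38 +/- 2.68 (traditional); n = 30 per group is assumed
chatgpt_group = rng.normal(loc=92.21, scale=2.37, size=30)
traditional_group = rng.normal(loc=92.38, scale=2.68, size=30)

# Independent two-sample t-test for the continuous exam scores
t_stat, p_val = stats.ttest_ind(chatgpt_group, traditional_group)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")   # p >= 0.05 -> no significant difference

# Chi-square test for a categorical outcome, e.g. trainees rated
# "exceeding expectations" (7-9 points) vs. not, by group (counts hypothetical)
table = np.array([[22, 8],     # ChatGPT-assisted: exceeding / not exceeding
                  [14, 16]])   # traditional:      exceeding / not exceeding
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_chi:.3f}")
```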
Satisfaction survey results of trainees in the ChatGPT-assisted teaching: Feedback from the trainees regarding the ChatGPT-assisted teaching method was overwhelmingly positive. High levels of satisfaction and interest were reported, with no instances of dissatisfaction noted. The summary of these findings, including specific aspects of the teaching method that were rated highly by the students, is detailed in Table . The integration of ChatGPT into pediatric medical education represents a significant stride in leveraging artificial intelligence (AI) to enhance the learning process. Our findings suggest that while AI does not substantially alter outcomes in theoretical knowledge assessments, it plays a pivotal role in the advancement of clinical competencies. The parity in theoretical examination scores between the ChatGPT-assisted and traditionally taught groups indicates that foundational medical knowledge can still be effectively acquired through existing educational frameworks. This underscores the potential of ChatGPT as a complementary, rather than a substitutive, educational instrument . Mini-CEX evaluations paint a different picture, revealing the ChatGPT group's superior performance in clinical domains such as professionalism, clinical judgment, and doctor-patient communication. These competencies are crucial for the comprehensive development of a pediatrician and highlight the value of an interactive learning environment in bridging the gap between theory and practice . The unanimous satisfaction with ChatGPT-assisted learning points to AI's capacity to enhance student engagement. This positive response could be attributed to the personalized and interactive nature of the AI experience, catering to diverse learning styles . However, it is critical to consider the potential for overreliance on technology and the need to maintain an appropriate balance between AI and human interaction in medical training. The ChatGPT group's stronger clinical performance may reflect the repeated, adaptive learning scenarios that the AI provides; its ability to tailor educational content to individual performance supports a more focused and efficient learning process.
Furthermore, the on-site, real-time feedback from evaluators is likely instrumental in consolidating clinical skillsets, echoing findings on the value of immediate feedback in clinical education . The study's strength lies in its pioneering exploration of ChatGPT in pediatric education and the structured use of the Mini-CEX for appraising clinical competencies, but it is not without limitations. The ceiling effect may have masked subtle differences in theoretical knowledge, and our small, single-center cohort limits the generalizability of our findings. The short duration of the study precludes assessment of long-term retention, a factor that future research should aim to elucidate . Moreover, the ongoing evolution of AI and medical curricula necessitates continuous reevaluation of ChatGPT's role in education. Future studies should explore multicenter trials, long-term outcomes, and integration strategies within existing curricula to provide deeper insights into AI's role in medical education. Ethical and practical considerations, including data privacy, resource allocation, and cost, must also be carefully navigated to ensure that AI tools like ChatGPT are implemented responsibly and sustainably. In conclusion, ChatGPT's incorporation into pediatric training did not significantly affect the acquisition of theoretical knowledge but did enhance clinical skill development. The high levels of trainee satisfaction suggest that ChatGPT is a valuable adjunct to traditional educational methods, warranting further investigation and thoughtful integration into medical curricula. |
Status and influencing factors of health literacy among college students of traditional Chinese medicine: a cross-sectional study | 3a3c8eee-fc4b-4e15-a453-79dfedb824b5 | 11847853 | Health Literacy[mh] | Introduction Health literacy refers to the ability of individuals to access and understand basic health information and services, and to use this information and services to make good decisions to maintain and promote their health . Domestic and foreign scholars intervene in people’s health literacy based on the theories of “knowledge, attitude and practice model” (a set of behavioral intervention models through the acquisition of knowledge, attitude change and behavior formation) and “health belief model” (this model believes that when individuals have sufficient health beliefs, they will take corresponding health behaviors) . In order to further improve people’s health literacy, research on the theoretical framework and assessment tools of health literacy has gradually increased in recent years, especially the research on health literacy of special populations has received increasing attention . The 14th Five-Year National Health Plan points out that health problems and influencing factors should be comprehensively intervened to further improve the health literacy level of residents . As the main force of national and social development, college students’ health literacy level has an important impact on the health literacy level of residents. After the COVID-19 pandemic, the government has paid more attention to improving the health literacy level of students. Therefore, the Ministry of Education proposed the implementation of the National Healthy Schools Construction Plan . Previous studies have reported the problems related to health literacy of college students , and there is a large gap in the health literacy level of college students. Low health literacy means low awareness and ability to seek health information. Such college students often have health problems such as smoking, low self-esteem, loneliness and so on . There are also rich related studies on health literacy of college students in western medicine colleges , but there are few studies on the health literacy of college students in traditional Chinese medicine (TCM), which limits the colleges of TCM to develop accurate plans or implement health promotion activities to improve the health literacy level of students. In 2021, a TCM college conducted an eHealth literacy survey on 1,007 undergraduates, and the results showed that the level of eHealth literacy of college students of TCM was low . Compared with western medicine colleges and universities, TCM colleges and universities have unique advantages, they pay more attention to health care services and advocate “preventive treatment of disease” , which is in line with the concept of “Healthy China” . Therefore, it is necessary to carry out the research on the status of health literacy of students in TCM colleges and universities and explore the influencing factors of health literacy. This study conducted a sampling survey of college students in a TCM university in Shandong Province, in order to explore the health literacy level and related influencing factors of college students in TCM, so as to provide reference for carrying out targeted health literacy promotion and healthy school construction in the future. Materials and methods 2.1 Study area and period This study was conducted from December 2022 to March 2023 in a TCM university in Shandong Province, China. 
2.2 Study design and participants The convenience sampling method was used to select undergraduates at a TCM university in Shandong Province as participants in a cross-sectional study. Inclusion criteria: ① undergraduate student; ② age ≥ 18 years. Exclusion criteria: ① those who did not complete the questionnaire or withdrew during the survey; ② response time too long or too short (based on x̄ ± s: < 219.62 s or > 1062.51 s); ③ obviously patterned responses or abnormal values in the questionnaire content. The purpose and significance of the study were explained to the subjects before the study, and the investigation was conducted after obtaining the consent of the subjects. 2.3 Measures 2.3.1 Questionnaire survey An electronic questionnaire was used to conduct the survey. The questions in the self-designed questionnaire came from the "Health literacy of Chinese citizens - Basic knowledge and Skills (2015 edition)" and the "National Residents' Health literacy Questionnaire" prepared by the China Health Education Center . The questionnaire included two parts: general information about the respondents and questions related to health literacy. The demographic items were adapted appropriately for the present study. The health literacy questions can be divided into three aspects: basic knowledge and concepts, healthy lifestyle and behavior, and health skills. According to public health problems, health literacy can also be divided into six types of health issues: scientific health concept, infectious disease prevention, chronic disease prevention, safety and first aid, basic medical care, and health information . There were 51 questions in this survey. Judgment and single-choice questions were scored 1 point each, multiple-choice questions were scored 2 points each, and wrong or incomplete answers were scored 0 points, for a total score of 65 points. According to the health literacy criterion, a respondent whose questionnaire score reached 80% or more of the total score (≥48 points) was considered to have health literacy; the same 80% criterion was applied to the scores for the three aspects and the six types of questions . 2.3.2 Setting The questionnaire was developed by the China Health Education Center, commissioned by the National Health Commission of the People's Republic of China, and is used to survey the health literacy level of residents in China. It is authoritative and has been widely used. Its Cronbach's α coefficient is 0.82–0.931 and its split-half reliability is 0.808–0.81 , indicating good reliability and validity. The items are drawn from the health literacy content for Chinese citizens and examine college students' health literacy from different aspects, making the instrument both targeted and comprehensive. The survey was administered through "Questionnaire Star" (an online questionnaire survey platform). The purpose and significance of the study were explained to the research subjects before the questionnaire was distributed and their consent was obtained; each person could complete the questionnaire only once and had to answer all questions before submission. In the data processing stage, unqualified questionnaires with short or long response times (based on x̄ ± s: < 219.62 s or > 1062.51 s), obviously patterned responses, or abnormal values were eliminated, and the data were checked and entered by two investigators.
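To make the cleaning and scoring rules above concrete, the sketch below applies the response-time exclusion window and the scoring criterion to a small, invented response table in Python. The column names and records are hypothetical; in the study itself the screening was applied to the Questionnaire Star export and the data were double-checked by two investigators.

```python
import pandas as pd

# Hypothetical export from the survey platform: one row per respondent
df = pd.DataFrame({
    "respondent_id":   [1, 2, 3, 4, 5],
    "response_time_s": [180.0, 450.0, 620.0, 1200.0, 390.0],
    "total_score":     [50, 47, 55, 60, 39],   # out of 65 points
})

# Exclude questionnaires answered too quickly or too slowly
# (thresholds as reported in the paper, derived from the mean +/- SD of response time)
LOW, HIGH = 219.62, 1062.51
valid = df[df["response_time_s"].between(LOW, HIGH)].copy()

# Classify health literacy using the paper's criterion of >= 48 points
# (reported as 80% or more of the total score)
THRESHOLD = 48
valid["has_health_literacy"] = valid["total_score"] >= THRESHOLD

possession_rate = valid["has_health_literacy"].mean() * 100
print(valid[["respondent_id", "total_score", "has_health_literacy"]])
print(f"Health literacy possession rate: {possession_rate:.1f}%")
```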
2.3.3 Data analyses An electronic questionnaire survey was conducted and a database was formed. SPSS 22.0 software was used for data processing and analysis. Descriptive statistical analysis was used to analyze the general data of the respondents, chi-square test was used to compare the count data between groups, and Logistic regression model was used to analyze the influencing factors of health literacy. The test level α = 0.05.
Results 3.1 Participants' characteristics A total of 1,092 electronic questionnaires were collected in this survey, of which 925 were valid, giving an effective recovery rate of 84.7%. The Cronbach's α coefficient of the questionnaire in this study was 0.873, and the Spearman-Brown coefficient was 0.799, indicating good internal consistency. Males accounted for 26.81% of respondents and females for 73.19%. In terms of major, most students were medical majors (70.70%); most were not only children (73.19%), and most had no close relatives engaged in the medical profession (83.24%). The parents' educational background was generally low: high school or below accounted for 74.59% and 78.92%, respectively. 66.38% of college students had a normal body mass index (BMI). Most of the college students had never smoked (94.05%), and 78.05% were aware of the concept of "health literacy" (see for details). 3.2 Overall level of health literacy The overall possession rate of health literacy was 57.30%. Among the three aspects of health literacy, the possession rates of basic knowledge and concept literacy, healthy lifestyle and behavior literacy, and basic skills literacy were 53.62%, 65.41%, and 58.70%, respectively; basic knowledge and concept literacy was the lowest. Among the six types of health literacy questions, college students had the highest literacy rate for the scientific health concept, at 79.68%. The rate for infectious disease prevention and treatment was the lowest (40.86%). The rate for chronic disease prevention and control literacy was 60.11%; the rate of safety and first aid literacy was 72.86%; the rate of basic medical literacy was 61.30%; and the rate of health information literacy was 61.62% (see for details). 3.3 Single factor analysis of influencing factors of health literacy level The analysis showed that the overall level of health literacy of college students was significantly related to gender (χ2 = 30.99, p < 0.001), grade (χ2 = 17.85, p < 0.001), major (χ2 = 5.65, p = 0.017), smoking (χ2 = 24.23, p < 0.001), and awareness of the concept of "health literacy" (χ2 = 7.73, p = 0.005).
The three aspects and six types of health literacy among college students at the TCM university show the following characteristics: female students scored higher than male students, Han Chinese students had higher literacy levels, sophomore students had the lowest literacy levels, medical students had higher literacy levels, students with very high or very low living expenses had lower literacy levels, students who had never smoked had higher literacy levels, and students who knew the concept of "health literacy" had higher literacy levels. The details are shown in , . 3.4 Logistic regression analysis of the influencing factors of health literacy Taking health literacy as the dependent variable and the factors that were statistically significant in the univariate analysis (gender, grade, major, smoking, and awareness of the concept of "health literacy") as independent variables, logistic regression analysis was conducted on the influencing factors of health literacy level. The results showed that females had 92% higher odds of health literacy than males (AOR: 1.92; 95% CI: 1.40–2.62). The odds of health literacy among sophomores were 0.62 times those of freshmen (AOR: 0.62; 95% CI: 0.41–0.95), and seniors had 102% higher odds than freshmen (AOR: 2.02; 95% CI: 1.01–4.05). College students who had never smoked had 199% higher odds of health literacy than former or current smokers (AOR: 2.99; 95% CI: 1.57–5.72). College students who were aware of the concept of "health literacy" had 54% higher odds than those who were not aware (AOR: 1.54; 95% CI: 1.11–2.13) (see ).
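The adjusted odds ratios (AORs) and 95% confidence intervals above are obtained by exponentiating the coefficients of a binary logistic regression model. The original model was fitted in SPSS 22.0; the sketch below shows an equivalent computation in Python on an entirely hypothetical dataset, with the multi-level grade variable collapsed to binary predictors purely for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # hypothetical sample size

# Hypothetical binary predictors: 1 = female, 1 = never smoked, 1 = aware of "health literacy"
df = pd.DataFrame({
    "female":       rng.integers(0, 2, n),
    "never_smoked": rng.integers(0, 2, n),
    "aware":        rng.integers(0, 2, n),
})

# Simulate the binary outcome (1 = possesses health literacy) from assumed log-odds
log_odds = -0.4 + 0.65 * df["female"] + 1.10 * df["never_smoked"] + 0.43 * df["aware"]
df["has_health_literacy"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

X = sm.add_constant(df[["female", "never_smoked", "aware"]])
model = sm.Logit(df["has_health_literacy"], X).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
aor = np.exp(model.params).rename("AOR")
ci = np.exp(model.conf_int())
ci.columns = ["95% CI lower", "95% CI upper"]
print(pd.concat([aor, ci], axis=1).round(2))
```

Exponentiated coefficients greater than 1 correspond to higher odds (for example, an AOR of 1.92 means 92% higher odds), which is how the percentages in the preceding paragraph are derived.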
Discussion 4.1 The overall level of health literacy of students in universities of TCM is relatively high This study shows that the overall level of health literacy of college students in this TCM university is 57.30%, which is significantly higher than the level of health literacy of Chinese residents in 2021 (25.40%) . It is also higher than the health literacy level of college students in a survey in the United States in 2020 (49%) , 20 universities in China in 2020 (41.7%) , and 5 universities in Shaanxi Province in China in 2022 (39.2%) . Compared with other medical colleges and universities, the overall level of health literacy of college students in this study is also relatively high , but it is lower than the health literacy level of college students in a university in Denmark in 2020 (59.9%) . Analysis of previous studies found that women's health literacy was generally higher than that of men , and the proportion of women in this study (73.19%) was higher, which may lead to the improvement of the overall level of health literacy. Generally speaking, the higher the economic level, the higher the attention to health . Social culture may also affect the level of health literacy, and studies have shown that women have lower health literacy in patriarchal societies . In addition, medical students generally have higher health literacy than non-medical students .
4.2 To improve the health literacy level of students in TCM colleges and universities according to the influencing factors From the analysis of the three aspects of health literacy, the healthy lifestyle and behavior literacy level of college students of TCM was the highest (65.41%), suggesting that although the lifestyle of TCM college students has changed due to the medical atmosphere in school or the impact of the COVID-19 pandemic , their basic knowledge and concepts and health skills have not improved correspondingly. This suggests that TCM college students need to consolidate their theoretical and skills training and integrate knowledge and practice. The results showed that being a male student, being a sophomore, having a non-medical major, having high or low monthly living expenses, ever or current smoking, and unawareness of the concept of "health literacy" were negative factors for the three aspects of health literacy. The healthy lifestyle and behavior literacy level of students with monthly living expenses ≤500 (26.32%) was even lower than that of national residents in 2021 (28.05%) . Personal economic level may affect access to and use of health care , suggesting that schools should focus on these students, form targeted intervention programs according to the "knowledge, attitude and practice" model, carry out health literacy related courses or practical activities, popularize medical knowledge and skills among non-medical students, increase efforts to publicize the harms of smoking, and strictly implement smoking control. Schools or society should provide subsidies to students with financial difficulties and improve their health literacy level as soon as possible. According to the analysis of the six types of health literacy questions, the literacy level for infectious disease prevention and treatment among TCM college students was the lowest (40.86%), which was lower than that of students in a western medicine university (45.5%) . Perhaps due to the differences in curriculum and philosophy between Chinese and western medicine universities, the awareness of disinfection and isolation among students in TCM universities was weaker than that of students in western medicine universities. It may also be related to a lack of education on the prevention and control of infectious diseases in schools, as students are less often confronted with infectious disease prevention and control and lack practical experience , suggesting that this aspect is a weak point for college students in TCM colleges and universities. Schools should strengthen education on infectious disease prevention and control among students and carry out exercises to improve their health literacy level. Logistic regression analysis showed that gender, grade, smoking, and awareness of the concept of "health literacy" were factors significantly associated with the level of health literacy of TCM college students. The odds of health literacy among females were almost twice those of males, which is consistent with the 2022 survey results on the health literacy of students in 5 colleges and universities in Shaanxi Province, China, and may be related to females' more active attention to health information.
However, the results of a health literacy survey of undergraduates at a university in Nepal in 2021 showed that females were 1.6 times more likely than males to have poor health literacy , which may be related to regional economic level, social culture, differences between schools, professional curriculum settings, and other factors. The health literacy level of non-smoking undergraduates was almost three times that of former or current smokers. A cross-sectional survey of health literacy among students at a public university in northern Jordan also showed that non-smoking students had higher health literacy , which may be related to non-smoking students' stronger self-restraint and greater emphasis on their own health . Therefore, it is necessary to implement more effective tobacco control measures in schools. A survey on tobacco control intentions and behavior among college students in 12 universities in China found that improving the performance expectation, effort expectation, and eHealth literacy of non-smoking college students and creating a positive social environment can improve college students' tobacco control behavior , and the measures taken by this school provide a reference. The health literacy level of senior students was higher than that of freshman students, which is consistent with the results of a 2021 study of undergraduate health literacy at 10 universities in Tianjin, China , mainly because senior students have greater knowledge reserves, social ability, and personal experience. However, the results of this survey show that the health literacy level of sophomore students was slightly lower than that of freshman students, which is not consistent with the research expectation. This may be related to the sample size, and may also be because freshman students have just entered the university and still maintain a strong sense of self-discipline and health awareness . Students who are aware of the concept of "health literacy" have a higher level of health literacy, which may be due to their strong willingness and ability to obtain health information; they can actively collect health-related information from friends, newspapers, and the Internet . This indicates that schools need to allocate health resources reasonably, carry out health education activities and courses in a diversified way , form a new interdisciplinary talent training paradigm, and improve the health literacy level of college students. To the best of our knowledge, this is the first study to explore the health literacy level and influencing factors of students in TCM colleges and universities in China, which provides a reference for researchers to explore the reasonable allocation of health resources, carry out health education related activities and courses, and improve the health literacy level of students in TCM colleges and universities. 4.3 Limitations The limitations of this study mainly include the following. First, this study was conducted at only one TCM college, so the sample was not very representative. Second, due to time and cost limitations, convenience sampling was used, which may introduce error. Nevertheless, to reduce the bias caused by convenience sampling, the researchers selected research subjects from different grades and classes as much as possible, collected demographic data, and analyzed the composition of the sample.
Therefore, future researchers should conduct multi-center research with stratified random sampling method, and carry out effective intervention strategies to improve the health literacy level of college students.
Conclusion Compared with previous studies, the health literacy level of the 925 students surveyed at this TCM college was above the middle level. The healthy lifestyle and behavior literacy of students in TCM colleges was better than their knowledge and skill literacy, indicating a separation between knowledge and practice. Gender, grade, smoking status, and awareness of the concept of "health literacy" were important factors affecting the level of health literacy. The results of this study and the analysis of influencing factors can provide a reference for TCM colleges and universities to improve health education activities and curricula, help to improve the health literacy level of their students, give full play to the professional characteristics of TCM colleges and universities, and promote the construction of healthy schools. |
Evolving Role of Social Media in Health Promotion: Updated Responsibilities for Health Education Specialists | 447be2a6-8209-44a2-b708-3a8a95c6ae8b | 7068576 | Health Communication[mh] | Our understanding of health and the impact of behavioral, sociocultural, and system-level factors on health outcomes has evolved significantly over the past several decades . Advances in technology are central to this evolution, as adoption of mobile devices connected to the Internet continues to grow across sociodemographic groups and geographic regions. One technological advancement accessed regularly is social media, which is used by 2.82 billion people worldwide . Social media is defined as activities, practices, and behaviors among communities of users who gather online to share information, knowledge, and opinions using conversational media . There are tens of thousands of health-promotion-related social media websites that are currently available to the public . In health promotion, social media is commonly accessed for networking and community building purposes, as well as for informing healthcare decision-making between patients and providers . The use of social media in public health education and promotion has been increasing in the United States (U.S.), due, in part, to its ability to remove physical barriers that traditionally impede access to healthcare support and resources. In 2017, Dr. Zsuzsanna Jakab, The World Health Organization (WHO)’s Regional Director of Europe, described the intersection of electronic health (eHealth) in public health as a “beautiful marriage“ that celebrates the global commitment and dedication towards reaping the benefits of eHealth for all . Patients, clinicians, mobile health, and social media all play unique roles in health promotion, highlighting the need to for secure data management that can facilitate more personalized medicine and more equitable public health policies . Today, it is difficult to imagine public health without social media. Although social media is viewed as acceptable and usable among multiple audiences and shows much promise in promoting health equity among disadvantaged populations (e.g., low income, rural, and older adults) , there remains inconsistent empirical evidence on the effectiveness of social media to improve public health outcomes and trends . In order to optimize the potential of social media to improve public health, there is a need to effectively leverage these technological tools to create scalable, culturally adapted health promotion programs and campaigns. Unfortunately, evidence remains limited on how to do this within the field of health promotion . Generating a better understanding regarding the benefits and drawbacks to using social media in health promotion is important, since health education specialists weigh its advantages against potential concerns over misinformation being shared to the public at large . Central to social media is interactivity. Social media facilitates greater information sharing and opportunities for community building through an Internet-mediated dialogue that allows users to create their own content (e.g., blogs, online discussion boards). This content, in turn, can become invaluable for health education specialists who are seeking formative research to design, adapt, and evaluate programs and campaigns with priority audiences. 
Consistently, social media hosts opportunities for consumers to exchange strategic health messages on popular social media channels, including Facebook, YouTube, and Pinterest, through various modalities (e.g., text, image, video, and gif) . Moreover, recent analytic advancements have strengthened the capacity of researchers and practitioners to compute and analyze metrics that evaluate the process of implementing social media, as well as any health-related impacts and outcomes associated with its implementation. As such, new collaborative evaluation methods are being deployed to improve the integration of social media within health-related interventions. While progress is being made, there remain significant challenges inhibiting the widespread acceptance, adoption, and use of social media in health promotion . Further examining the impact of communication and advocacy within social-media-based interventions and campaigns is central to this endeavor. Health education specialists play a critical role in creating, managing, and monitoring health promotion programs. As health promotion becomes more deeply rooted in Internet-based programming, health education specialists are tasked with becoming more competent in computer-mediated contexts that optimize both online and offline consumer health experiences. Accordingly, this Special Issue aims to explore social media as a translational health promotion tool by bridging principles of health education and health communication that examine: (1) the method with which social media users access, negotiate, and create health information that is both actionable and impactful for diverse audiences; (2) strategies for overcoming challenges to using social media in health promotion; and (3) best practices for designing, implementing, and evaluating social media campaigns and forums in public health. In this commentary, we discuss updated communication and advocacy roles and responsibilities of health education specialists in the context of using social media in research and practice.
The National Commission for Health Education Credentialing, Inc. (NCHEC) and the Society for Public Health Education (SOPHE) recently co-sponsored a new health education specialist practice analysis. A panel of 17 individuals with diverse backgrounds (i.e., work setting, experience level, education background, demographics, and geographic settings) that affect the practice of health education conducted a validation study, known as Health Education Specialist Practice Analysis II (HESPA II 2020) to re-verify the entry- and advanced-level responsibilities, competencies, and subcompetencies that provide the foundation for the professional preparation and development of all health education specialists . A broad cross-section of both certified and noncertified health education specialists from all 50 U.S. states volunteered to participate in the study. Study participants were contacted via existing lists of the sponsoring organizations with additional assistance provided by the Coalition of National Health Education Organization (CNHEO) and national and state affiliates of major health education associations. Two online surveys, one focusing on competencies and one focusing on knowledge areas, were available for a three-month window from November 2018 to January 2019, resulting in 3,851 usable surveys . Findings from this research provided significant implications for professional preparation, continuing education, and practice for the health education profession. Moreover, HESPA II 2020 produced a new hierarchical model with eight areas of responsibility, 35 competencies, and 193 subcompetencies . Within these new areas of responsibility, Advocacy (Area V) and Communication (Area VI) were designated as standalone areas of responsibility that contained a variety of new competencies and subcompetencies that reflected the increasing importance of using social media in the process and practice of health education. outlines these two areas of responsibility with five associated health education specialist competencies and six subcompetencies that directly mention social media use. 2.1. Engage Coalitions and Stakeholders in Addressing Public Health Issues Using Social Media Health education specialists are tasked with specifying strategies, timelines, and roles and responsibilities to address proposed policy, system, or environmental changes through social media. Social media allows for synchronous and asynchronous communication in a centralized, readily accessible digital location where a high degree of transparency exists. Social media can assist health education specialists in building a network of supporters, particularly for advocacy efforts . These interactive, digital tools can be used to effectively expand the reach and inclusivity of advocacy campaigns to engage stakeholders to support public health issues, regardless of geographic location and timing . Specifically, when used with traditional, relationship-building strategies, social media can bolster outreach approaches and reinforce relationships among stakeholders, including public health education coalition groups. This is done through promoting dialogue between leaders and supporters, as well as increasing collaborative communication among stakeholder groups. Additionally, social media tools are highly cost-effective for expanding communication among stakeholders and coalition groups interested in supporting public health education and promotion issue(s) . 
Therefore, social media technologies have the potential to improve communication among stakeholders in order to further engage supporters for successful social change. However, building relationships with stakeholders and coalitions through traditional communication channels, while supporting these relationships through the use of social media technologies, is ideal for fostering lasting and productive stakeholder relationships for addressing public health issues . This allows for the opportunity to develop and nurture collaborative relationships among decision makers, which can include diverse stakeholders such as community members, organizations, and policymakers. 2.2. Engage in Health Policy Advocacy Through Leveraging Social Media Social media has become a critical tool in advocating for health policy, including its development, planning, and reform. Engagement with advocates is a key element in advocating for health policy, and social media provides a platform for new supporters and the general public to become aware of the important issues . In addition, social media tools create widespread access to public officials, many of whom have their own social media websites, for the opportunity to share information regarding health policy issues impacting constituents. While these technologies create the digital platform to increase awareness and evoke support for health policy advocacy, health education specialists must strive to promote actions that result in social change through advocacy efforts. Social media can complement traditional advocacy approaches to shift policy priorities for supporting health policy. In a framework developed by Scott and Maryman , social media and advocacy are aligned through empowerment and organization theories for shifting policy priorities. Specifically, the model suggests that quality social media presence must involve 1) critical awareness—engaging supporters through awareness of an issue that drives the desire to actively support the cause, 2) relationship building—creating relationships in a digital space and with face-to-face interactions that move passive supporters to active supporters, and 3) mobilizing action—creating action through both social media-supported online and offline forms of political engagement . Successful social media campaigns for health policy advocacy require health education specialists to utilize planning and evaluation skills to effectively assess the use of social media in this capacity.
While challenges are to be expected, engagement can be maximized on social media through managing misinformation, reducing agency barriers to use, measuring audience reach and impact of posted messages/content, and keeping up with new trends in social media adoption and use. To effectively engage diverse audiences, there are several steps that can be followed to adopt a more strategic approach to social media use in health promotion: 1) understand how the priority population uses social media, 2) identify evidence-based social media strategies, 3) select appropriate communication times and channels, and 4) determine which types of social media apps will engage your audience most often in a meaningful way . 2.4. Deliver Health Message(s) Effectively Using Social Media As reflected in HESPA II 2020 competencies and subcompetencies, health education specialists are tasked with fine-tuning their message delivery to ensure that intended audiences are being reached. This involves using current and emerging communication tools and digital media (e.g., social media management tools and platforms) to engage audiences. There are various social media tools, guidelines, and best practices that health education specialists can use for this purpose . For example, health education specialists should stay abreast of new forms of social media that are accessed regularly (i.e., daily or almost daily) by intended users. Next, consider adopting a social media policy. A formal social media policy on relevant topics such as hashtag use, tagging, communicating, and updating content can limit destructive posts that adversely impact online communities . Moreover, policy implementation facilitates productive interactivity that respects the diversity of user demographics, cultural backgrounds, and opinions. Finally, try to keep social media activity both lively and relevant. Skilled social media moderators are essential for maintaining social media pages and maximizing engagement through scheduling messages and responding promptly to user posts about current public health issues that are of concern. Moderators can provide invaluable social support that clinicians are often unable to offer, such as sharing insight about how to effectively communicate with healthcare providers . 2.5. Evaluate Health Promotion Activity Occurring on Social Media Evaluation is a fundamental element of almost all social media activities within the field of health promotion . Process evaluation, or the measurement of factors that influence the success or failure of social media use (i.e., tracking social media analytics and performance indicators), is the most relevant type of evaluation to assess use of social media as part of an intervention or as a standalone tool . Data from process evaluation enables key decision makers and other stakeholders to monitor program inputs (e.g., messages, videos, and chat sessions) and outputs (e.g., number of followers, number of likes, and number of comments left) of social media activity . Tools such as social media analytics and data mining software can assist health education specialists in assessing the reach and dose of communication messages . Analytics also help to extract useful patterns of user activity to measure the engagement, experience, and moderator responsiveness within online communities .
This type of social media data enables decision makers to learn from mistakes, make health promotion program modifications, monitor progress towards program goals, and justify the success of achieving desired health-related outcomes .
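To make the process-evaluation arithmetic described above concrete, the short sketch below rolls hypothetical post-level outputs (impressions, likes, comments, shares) into campaign-level indicators. The metric names and example figures are illustrative assumptions of ours, not prescribed measures from HESPA II 2020 or any particular analytics platform.

```python
# Illustrative process-evaluation sketch: aggregate simple social media
# output metrics (followers, likes, comments) into engagement indicators.
# All field names and figures below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Post:
    impressions: int   # times the post was displayed
    likes: int
    comments: int
    shares: int


def engagement_rate(post: Post) -> float:
    """Interactions per impression for a single post (0 if never shown)."""
    interactions = post.likes + post.comments + post.shares
    return interactions / post.impressions if post.impressions else 0.0


def summarize(posts: list, followers: int) -> dict:
    """Roll post-level outputs into campaign-level process indicators."""
    total_interactions = sum(p.likes + p.comments + p.shares for p in posts)
    total_impressions = sum(p.impressions for p in posts)
    return {
        "posts": len(posts),                                  # program inputs delivered
        "reach_per_follower": total_impressions / followers,  # crude dose proxy
        "mean_engagement_rate": (
            sum(engagement_rate(p) for p in posts) / len(posts) if posts else 0.0
        ),
        "interactions_total": total_interactions,
    }


if __name__ == "__main__":
    campaign = [Post(1200, 85, 14, 9), Post(950, 40, 6, 3), Post(2100, 160, 31, 22)]
    print(summarize(campaign, followers=4800))
```

In practice the same roll-up would be fed by exports from whichever analytics tool the program already uses; the point of the sketch is only that process indicators of this kind are simple arithmetic over inputs and outputs.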
Social media provides an outlet to increase and promote translational health communication strategies and effective data dissemination, in ways that allow users to not only utilize but also create and share pertinent health information. Moreover, the use of social media for advocacy and communications in health promotion offers exciting new prospects for broader reach, greater efficiency, and lowered costs of communication and advocacy campaigns. As with other technological innovations in healthcare, these efficiencies may be viewed by those providing funding as an opportunity to decrease budgets and increase the scope of health promotion activity delivered by health education specialists and their organizations. This may very well result in a reduction in the use of more established communication channels (e.g., TV, radio, and print-based media) traditionally used for health promotion. Although the application of social media in public health and health promotion has yielded some success in terms of generating support structures and networks for effective health behavior change, there are challenges and complications associated with social media use that also need to be addressed (e.g., managing misinformation, ensuring compliance with user privacy protections). While it is relatively straightforward to view social media use as a universal communication channel, especially for those who already use social media, the risk of using social media lies in reducing health information access among those who are not technologically "connected". Social media is not likely to be an effective option for population subgroups including the elderly; the physically and cognitively disabled; and those with low text, technical, and eHealth literacy. As health education specialists, we need to be wary of designing social media interventions or campaigns that are most suited to population segments that are comfortably well off, and text-, tech- and eHealth-literate. In addition, the use of social media by health education specialists faces significant headwinds from individuals or entities using social media to promote alternative views on health-related issues (e.g., anti-vaccination messaging, fad diet promotion, and advocacy for exclusionary healthcare policies). Some social media platforms have belatedly taken action to limit some of these discussions (e.g., Facebook with anti-vaccination groups), but the response is unlikely to be timely. We acknowledge that these types of competing voices are usually far better resourced than health education specialists who have limited resources to support robust social-media-based advertising campaigns. Therefore, we must be vigilant in monitoring and evaluating public health advocacy and communication that occurs on various popular social media websites. Our Special Issue begins to tackle these important issues by bringing together international, multidisciplinary scholars who employed innovative methodologies to better understand how social media is used by multiple audiences for the purposes of health promotion and engagement. Specifically, these articles delve into the sociocognitive and affective factors that mediate the relationship between social media use, community engagement, and positive health outcomes. This was achieved by augmenting our understanding of traditional health education approaches with theories rooted in the complementary yet distinct disciplines of health communication.
We sincerely hope that the new empirical knowledge generated within this Special Issue will help academic health education specialists, as well as other public health professionals, use more pragmatic paradigms for planning, implementing, and evaluating social media interventions and campaigns in the field of health promotion.
Assessment of Metastatic Colorectal Cancer (CRC) Tissues for Interpreting Genetic Data in Forensic Science by Applying 16 STR Loci among Saudi Patients
Colorectal cancer (CRC) is viewed as a prevalent malignant formation developed in the gastrointestinal tract. According to world statistics, it is the third most prevalent type of cancer and the fourth most dominant cause of death globally (Chen et al., 2014). The lifetime risk in Western Europe and North America is almost 5% (Lichtenstein et al., 2000). From an etiological perspective, the illness can be triggered by genetic/inherited and environmental factors, although about one third of disease variation relates to inborn biological variables (Lichtenstein et al., 2000). Microsatellite instability (MSI) at short tandem repeats (STRs) has become a valuable source of information for examining genetic diseases and in genetic oncology. In forensics, STRs are utilized for paternity testing and identity establishment; moreover, they are applied to linkage disequilibrium evaluation as well as genome mapping. Other studies have indicated (Dang et al., 2020) that the valuable input secured by these polymorphisms is stipulated by their variability. Still, stability of the data is a main condition for obtaining valid outcomes, which is why it all depends on scientific insight into mutation rates and the origins of mutational patterns. MSI describes a situation in which an allele of a germline microsatellite has acquired or lost repeat units; in other words, it has undergone a somatic alteration in length. As a rule, extended MSI indicates mismatch repair deficiency (MMRD), which might be a source of many mutations in cancer-associated genes, provoking carcinogenesis and tumor development. In this sense, MSI is often identifiable in selected types of cancer, and CRC is not an exception (Lynch HT and De la Chapelle A, 2003). Loss of heterozygosity (LOH) represents a case in which a whole allele disappears. LOH presents either with copy number losses (a type titled CNL-LOH) or as copy number neutral LOH (a type titled CNN-LOH). In CNL-LOH, the entire chromosome or only part of it disappears. In CNN-LOH, the loss takes place either through a homologous recombination event ("gene conversion") or because the affected chromosome was duplicated (prior to or after the LOH event) (Ryland et al., 2015). The LOH effect is heavily attributed to the removal of the wild-type allele in people with a syndrome of high hereditary cancer potential. Additionally, this involves individuals who might carry a germline mutation in specific genes, namely BRCA1 and 2, resulting in several cancer types (specifically colorectal, breast, and ovarian) (Merajver et al., 1995; Bellido et al., 2018). The broad involvement of LOH in cancer development is potentially associated with exposing a physiologically mutated tumor suppressor gene via deletion of the wild-type allele (Lynch and De la Chapelle, 2003). To identify the impact of STR variation in provoking CRC, this study has comprehensively analyzed 16 loci (particularly 15 autosomal STR loci and Amelogenin) for the purpose of investigating possible CRC-stimulated MSI, LOH, loss of expression of proteins affected by MMRD and correlations with clinical data on CRC patients.
Design, Participants and Ethics Statement Current research represents a prognostic case-control analysis. The current research had the approval of the Deanship of Scientific Research of Princess Nourah Bint Abdulrahman University. Participants for the study and control groups were selected in cooperation with King Fahad Medical City (KFMC) between August 2020 and November 2020. All methods were carried out in accordance with relevant guidelines and regulations. This study was approved by the national ethics regulation committee, KACST, KSA (study number H-01-R059, IRB LOG number 20-0287). Additionally, written informed consent was obtained from the participating patients prior to obtaining their samples. Sample Size The current research compiled a group consisting of 73 patients from Saudi Arabia. Among the participants, 43 patients were diagnosed with CRC, while the remaining 30 were healthy and included in the control subgroup. Prior to obtaining specimens, all respondents signed a written consent agreement with no exclusions. Moreover, they completed a questionnaire providing data on their personal medical history. The approval from KACST, the ethics board located in Riyadh (KSA), was received after validating the research protocol. Inclusion criteria 1. Participants with histologically confirmed invasive CRC. 2. Participants not previously exposed to chemotherapy/radiotherapy. Exclusion criteria 1. Participants having other cancer complications. 2. Participants undergoing routine neo-adjuvant/adjuvant chemotherapy or radiotherapy. Diagnosing of pathologies in the clinical setting is presented in . Study Protocol Histopathological Tests CRC samples of fixed size (3 x 3 x 2 cm), along with adjoining cancer-free tissues, were obtained from both groups of participants. They were processed with a 10% solution of neutral buffered formalin (Sigma-Aldrich, Merck KGaA, Darmstadt, Germany). Processing was carried out for eight hours at room temperature. Dehydration of tissues was completed with increasing grades of alcohol; afterwards, dehydrated tissues were cleared in xylene and then embedded in separate paraffin blocks. Sections of approximately 3-5 µm were cut for routine histopathology, with one slide attributed to each paraffin block. Haematoxylin and Eosin staining was performed on a slide covering the full size of the tissue studied. Finally, the stained regions were screened to obtain microscopic photographs; for this purpose, a photomicroscope was utilized to evaluate the histopathological sections. Genetic Analysis Extracting DNA Samples Purification of DNA was completed for blood samples (control group) and for CRC samples together with biopsies of the adjoining cancer-free tissues (N-CRC) in the patient group. In general, 20 ml of blood were obtained for analysis; in turn, 30 mg CRC and N-CRC biopsies were surgically obtained as well. Extraction of DNA was performed with a Qiagen DNA isolation kit (Cat No. 69506, Qiagen, Hilden, Germany) in accordance with the manufacturer's manual. Extracted DNA was measured using a NanoDrop spectrophotometer (Model ND-2000-Spectrophotometer UV/Visible) at a wavelength of 260 nm. Instrumentally, an Optical Density (OD) of 1 at 260 nm was attributed to 50 ng/μl of DNA.
In this sense, the overall DNA concentration could be quantified directly from the OD measurement using the following formula: DNA (μg/ml) = A260 × 50 × dilution factor. The principal aim of quantifying DNA is to identify the proper amount of DNA sample to add to the PCR amplification of STR loci, preventing erroneous or excessive input and related artifacts. Quantifying the DNA amount in a specimen is crucial for studies involving PCR methodology, as only a limited concentration range is optimally suited for the multiplex STR genotyping procedure. DNA Amplification PCR is an enzymatic process that repeatedly replicates a defined region of DNA to generate many copies of a specific sequence (Garnett et al., 2012). In this study, selected genetic loci were amplified using the PowerPlex®16 System (Promega Co, United States). This system ensures concurrent amplification and appropriate separation of 16 STR loci: D3S1358, D13S317, D16S539, D18S51, CSF1PO, TH01, vWA, D21S11, D7S820, D5S818, TPOX, D8S1179, FGA, in addition to Penta E, Penta D and Amelogenin (protein isoform). The list of alleles included in the allelic ladder, in addition to the genotype of the PowerPlex ® 16 System Control DNA 2800M (Thermo Fisher Scientific), have been described in detail in . PCR reactions were set up in compliance with the manufacturer's instructions using the Gene-Amp ® PCR System 9700 thermal cycler (Applied Biosystems). Specimens were diluted with deionized water (DI) depending on their DNA concentration. The overall number of reactions (samples, positive control samples, negative control samples, as well as an allowance for pipetting errors) was calculated. Afterwards, PCR amplification was carried out in a total volume of 25 μL consisting of 5 μL of the PowerPlex® HS 5X Master Mix, 2.5 μL of the PowerPlex®16 HS 10X Primer Pair Mix and 17.5 μL of extracted DNA sample (1 ng). A thermal cycler (Applied Biosystems ® 2720) was used in accordance with the manufacturer's manual. STR Genotyping Capillary electrophoresis is the main procedure for separating and identifying STR alleles in forensic and medical sciences (Ryland et al., 2015). The PCR products were analyzed by capillary electrophoresis on an ABI 3130 Genetic Analyzer (Applied Biosystems) to identify STR markers in the specimens obtained. Every reaction mix contained 8.6 μl of Hi-Di formamide as well as 0.4 μl of GeneScan-500 LIZ Size Standard. For each specimen, 1.0 μL of PCR product, giving a final reaction volume of 10 μL, was loaded into a 96-well genotyping plate, which was then sealed with a membrane. The plates were centrifuged to ensure that the contents settled at the bottom of each well. Afterwards, specimens were denatured for 3 min at 95°C and then quickly chilled to 4°C before loading onto the instrument. Upon chilling, specimens were placed into the 3130 Genetic Analyzer. Genetic Evaluation and Statistical Estimations Specimens were evaluated using the specialized software GeneMapper® IDX (version 1.1).
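For concreteness, here is a minimal sketch of the two pieces of arithmetic implied in the quantification and amplification passages above: converting an A260 reading into a DNA concentration, and working out how far a stock must be diluted so that the 17.5 μL template input carries roughly 1 ng. The example numbers and helper names are hypothetical and are not part of any kit protocol.

```python
# Minimal sketch of the quantification arithmetic described above.
# DNA (ug/ml) = A260 x 50 x dilution factor; values below are hypothetical.

def dna_conc_ug_per_ml(a260: float, dilution_factor: float = 1.0) -> float:
    """Double-stranded DNA concentration from absorbance at 260 nm."""
    return a260 * 50.0 * dilution_factor


def dilution_for_template(stock_ng_per_ul: float,
                          target_ng: float = 1.0,
                          template_volume_ul: float = 17.5) -> float:
    """Fold-dilution of the stock so that `template_volume_ul` carries `target_ng`.

    Assumes the PowerPlex 16 set-up quoted in the text (17.5 uL template
    carrying ~1 ng DNA in a 25 uL reaction); returns values < 1 when the
    stock is already too dilute.
    """
    needed_conc = target_ng / template_volume_ul          # ng/uL required
    return stock_ng_per_ul / needed_conc


if __name__ == "__main__":
    a260 = 0.04                       # hypothetical NanoDrop reading
    stock = dna_conc_ug_per_ml(a260)  # ug/ml, numerically equal to ng/uL
    print(f"stock: {stock:.1f} ng/uL")
    print(f"dilute {dilution_for_template(stock):.0f}-fold before amplification")
```

Because 1 μg/ml equals 1 ng/μl, the same number can be used directly when deciding how much template to pipette into the 25 μL reaction.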
STR-related frequencies of the identified alleles were quantified using GenAlEx (V. 6.503). Statistical calculations were completed using the Statistical Package for the Social Sciences (SPSS, version 21.0). Qualitative data were presented as numbers/percentages and compared with a goodness-of-fit analysis (χ² testing). Continuous variables were presented as mean ± SD and compared with the two-sided Student's t-test. A P value ≤ 0.05 was considered statistically significant. Histological Findings Histological specimens were taken from CRC patients from two particular areas, namely cancer-affected tissues and adjoining non-affected CRC tissues. The specimens then underwent HandE-staining as described in the Methodology and were investigated with the help of a light microscope . Photomicroscopic analysis of stained specimens from the adjoining cancer-free tissues revealed histological structures within the norm, demonstrating proper cellular organization of the colorectal tissue. In contrast, CRC tissues taken from the other area revealed mucinous CRC adenocarcinoma (MCRA) structures associated with increased and disordered mitotic activity; in addition, marked pleomorphism, along with large and irregular gland lumina incorporating blood, and signs of tumor inflammation were detected . DNA Concentration provides a summary of the mean DNA concentrations in the samples. A considerable difference between CRC patients and the control participants in the quantities of DNA extracted, as measured with a NanoDrop spectrophotometer, was identified with respect to the source of DNA (P < 0.001). Additionally, a remarkable difference (P < 0.001) in DNA quantities was detected between the CRC tissues (cancer-affected and cancer-free tissues). Furthermore, illustrates the relationship between the metastasis development phases and DNA concentrations (ng/μl) focusing on CRC patients' cancer tissues. The data indicated that DNA concentrations for participants diagnosed with CRC (M1 and uncertain metastasis development phases) were dramatically (P < 0.001) higher compared to those in CRC (M0) patients without signs of metastasis and those from the control group participants. Autosomal STR Allocation Of all potential alleles within the loci studied, only 8 appropriate alleles were identified in the locus D13S317 (specifically 7, 9, 10, 11, 12, 13, 14, 15). In turn, 6 alleles were detected in the locus D3S1358 (specifically 12, 13, 14, 15, 16, 17). Additionally, another 7 alleles were detected in the locus D16S539 (specifically 8, 9, 10, 11, 12, 13, 14). Importantly, 15 alleles were identified in the locus Penta E (specifically 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20). Penta D revealed 6 appropriate alleles (namely 8, 9, 10, 11, 12, 13). Furthermore, 5 alleles were found in the locus TH01 (namely 7, 8, 9, 10, 11). In the locus vWA, 9 alleles were determined (namely 11, 12, 13, 14, 15, 16, 17, 18, 19). Similarly, 12 alleles were detected in the locus D21S11 (specifically 26, 27, 28, 29, 30, 31, 31.2, 32, 33, 34, 35, 37). The locus D7S820 contained 7 appropriate alleles (such as 6, 7, 8, 9, 10, 11, 12). Another 5 appropriate alleles were found in the locus D5S818 (specifically 7, 8, 10, 12, 13). Another 14 alleles were identified in the locus D18S51 (namely 10, 11, 12, 13, 13.2, 14, 14.2, 15, 16, 17, 18, 19, 20, 23).
Moreover, another 6 alleles were found in the locus D8S1179 (specifically 9, 10, 11, 12, 13, 14). We managed to identify 5 appropriate alleles in the locus TPOX (namely 6, 7, 8, 10, 11) and 7 alleles in the locus CSF1PO (specifically 8, 9, 10, 11, 12, 13,15). Finally, 16 alleles have been detected in the locus FGA (namely 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 26.2, 27,28, 29, 30, 31.2). Findings were associated with all study’s participants. describes the frequencies of two different mutations in 16 STR loci identified in CRC samples with no essential (P≤0.05) distinction in mutation frequencies to the adjoining N- CRC samples. A sum of 1168 loci has been studied overall. It was found that the common frequency of two specific mutations (LOH and MSI) were not greatly different by comparing CRC specimens and N-CRC specimens. Moreover, the genetic deviations have been observed in 5 STRs among other 16, such as Amelogenin, D21S11, D18S51, D8S1179 and CSF1PO. Within these 5 loci examined, 3 specific loci (namely D21S11, D18S51, and CSF1PO) showed considerable (P≤0.05) distinctions between CRC samples and the adjoining N-CRC specimens and in contrast to the control group participants. Moreover, the locus D18S51 has been identified as the most affected one; meanwhile, the locus D8S1179 was categorized as the least affected one with LOH-affected mutation type in DNA of CRC specimens. Additionally, the MSI-affected mutation was identified only in DNA of CRC specimens with no presenting mutations in DNA within the adjoining N-CRC samples. In general, no typical mutations in other loci were identified in relation to the participants from this study. The allocations of mutations identified in 16 STR loci have been broadly presented in . provides information on several samples of CRC patients demonstrating the loci affected, such as D21S11, D18S51, D8S1179 and CSF1PO. It also indicates about genetic deviations presented in the alleles as contrasted to specimens from the adjoining N-CRC tissues. Nonetheless, any genetic deviations and mutations were absent in the alleles within the locus D5S818 in specimens of the control group participants. provides the information on correlations between mutation type within STRs and factors of gender (a) and age (b) among CRC participants. Findings showed no distinctions based on the gender factor for all participants of the research. However, regarding the age factor, there were certain distinctions, demonstrating that majority of participants with CRC incorporating mutations of LOH-type fit the exact age interval (56-65 years). The minority of participants with CRC and mutations of LOH-type was associated with the age interval below 35 years. Moreover, mutation of MSI-type was categorized as an extra allele inside the same loci affected (namely D21S11, D18S51, D8S1179 and CSF1PO) in samples of 5 CRC patients (among all 43 participants from the group) at specific ages, such as 56, 59, 61, 62 as well as 63 years. Thus, the most participants with CRC affected by mutation of MSI-type were also aged between 56 and 65. Findings in a recent research literature emphasized on a growing tendency of CRC progression among population of Saudi Arabia (Al-Qahtani et al., 2020; Althubiti and Eldein, 2018; Alyabsi et al., 2020; Chaudhri et al., 2020). Studies have also revealed dramatic disparities of CRC development in relation to factors of gender, age, and geography (Bazarbashi et al., 2017). 
CRC was found to be the third most prevalent global form of oncology by 2012 (International Agency for Research on Cancer, 2012); moreover, it was categorized as the third most widespread cancer-related diagnosis in many Middle East states, being the second oncology development by popularity in the KSA (Bazarbashi et al., 2017). In the past years, STR profiling methods have brought limited innovations, ensuring the traditional yet complicated strategy of capillary electrophoresis (CE) applied to screen multiple loci simultaneously (Gymrek et al., 2012). Due to increase of international DNA datasets, STR loci used in forensic science improved the discriminatory potential of investigations and enhanced database interoperability. Current STR assays involving CE or MPS (known as massively parallel sequencing) strategies accept the necessity of extending research focus over multiple STR loci. Because of benefits of bigger multiplexing and improved identification of sequence variations, the use of MPS methods for implementing STR evaluation has deeply reinforced discrimination capacities. Therefore, it can further assist in detailed DNA mixture translation by multiplying the amounts of alleles analyzed. Unfortunately, high-performance assays have been scarcely used for managing and expanding forensic DNA databases (Kim et al., 2017). Research works focusing on STR variation have been completed in recent years to discover and project the genetic configuration that is subjected to cancer characteristics. The current study involved a cohort of 73 subjects (30 healthy individuals and 43 patients with CRC). They were systematized by gender, age, geography, and histological characteristics – for example, tumor density (mm), metastasis extent, nodal status, and phase of UICC. Characteristics of study’s population indicated that morbidity of CRC among patients was not correlated with patients’ gender, yet correlations with age, in the diapason between 35 and 65, were valid and visible. Additionally, histological tests have been completed by using HandE staining technique and a special light microscope. Samples were taken from two locations: CRC tissues and the adjoining N-CRC tissues. Specimens were obtained from CRC-affected patients by keeping a least length of 10 cm between the sections. The outcomes revealed specific histopathological structures in CRC tissues contrasted to the cancer-free histological structures related to the adjoining N-CRC tissues. Findings from the current study indicated that DNA concentrations were associated with a considerable growth of parameters of DNA samples from CRC tissues in contrast to N-CRC tissues (P˂0.001) as well as values from the control group of participants (P˂0.001). A vivid relationship between the quantities of DNA examined in CRC participants and the tumor’s metastasis development phases was detected (Lo et al., 2017; Sozzi et al., 2003). Hence, it was found that considerably increased levels of DNA concentration have been present in the tissues of CRC patients with developing metastasis in contrast to metastasis-free tissues and the control group. Findings achieved have been corresponding to reports from previous studies on a similar topic (Li et al., 2020; Perdyan et al., 2020). In a study completed by Leon et al. and based on radioimmunoassay to identify DNA, a comparison of serum DNA amounts among 173 patients having diverse types of cancer with DNA amounts of 55 healthy participants was completed. 
Notably, DNA concentrations considerably increased in the cancer-affected cohort in contrast to the control cohort. Additionally, greatly elevated amounts of DNA were discovered in the patients’ serum having metastasis in contrast to metastasis-free participants (Leon et al., 1977). Findings are formally compliant with previous studies informing about increased DNA in the cancer-affected tissues, blood serum or blood plasma in patients with related diagnosis (Leon et al., 1977; Usadel et al., 2002; Wang et al., 2003). Aside from increased DNA concentrations, identified alleles in the 16 STR loci have been examined from the specimens obtained from two locations of cancer-affected patients to compare them with DNA profiles of 30 healthy participants from the control group. As noted before, the capillary electrophoresis was used. In terms of quality, frequencies of examined alleles for every STR loci have been comparatively analyzed between the specimens and statistically estimated. The current research also examined the impact of LOH and MSI factors with a purpose of identifying markers associated with cancer. In addition, 1168 loci have been examined in sum. The general level of mutation types (namely LOH and MSI) was not found dramatically distinct between CRC and N-CRC samples. More to say, genetic deviations have been recorded in 5 loci (among 16 STR loci), specifically in D21S11, D18S51, D8S1179, CSF1PO and Amelogenin. Among 5 loci detected, 3 loci were found to have increased values (P≤0.05) of dramatic differences in the frequency of selected alleles of CRC samples and N- CRC samples, also compared to specimens from the control group. This included D21S11, D18S51, and CSF1PO loci. Findings from past studies revealed about 3 particular loci – namely D18S51 (Diep et al., 2003; Szych et al., 1999; Vauhkonen et al., 2004), D21S11 (Cho et al., 2014; Vauhkonen et al., 2004), and CSF1PO (Filoglu et al., 2014) – that were attributed to CRC occurrence. This overall corresponds to our individual findings. Hence, it might be proposed that three identified loci can potentially serve as genetic markers concerning CRC diagnosing. MSI was identified only in DNA samples associated with CRC; still, there were no mutations present in DNA of the adjoining N- CRC tissues. Importantly, Alonso and colleagues received the same outcomes upon their examination of CRC, when it was identified that somatic MS repeats were heavily inconsistent (Filoglu et al., 2014). In conclusion, concluding all remarks, this study has demonstrated that DNA concentrations in CRC samples dramatically elevated in contrast to N-CRC. Moreover, it was found that mutation rates for LOH and MSI types were not considerably different between DNA specimens from CRC tissues and the adjoining N-CRC tissues. Still, 3 particular loci (namely D21S11, D18S51, and CSF1PO) were identified as having relevant and considerable (P≤0.05) distinctions in frequencies of the allele between CRC specimens and N-CRC samples, especially in comparison with data on the control cohort. It is supposed that 3 identified loci might serve as markers for diagnosing CRC. Furthermore, CRC tissue samples might be potentially utilized as a DNA provider for forensic DNA assay. Formally, cancer-affected tissues might manifest MSI and LOH effects and thus compromise interpretation of output, if STRs are used in a forensic practice. This study has a limitation in providing restricted sample sizes. 
Additionally, it is worth noting that forensic investigation requires a thorough interpretation of MSI and LOH outcomes along with data from microscopic evaluation of tissue samples. For this reason, future studies should involve a larger sample of Saudi patients diagnosed with CRC. Abbreviations: CRC, colorectal cancer; N-CRC, adjoining normal (non-cancerous) tissues; MSI, microsatellite instability; STRs, short tandem repeats; MMRD, mismatch repair deficiency; LOH, loss of heterozygosity; CNN, copy number neutral; OD, optical density; DI, deionized water; PCR, polymerase chain reaction; CE, capillary electrophoresis; MPS, massively parallel sequencing. Al-Qahtani WS participated in experimental work, investigation and data analysis, wrote the original draft and submitted the paper as corresponding author. Al-Hazani TM participated in experimental work, investigation and data analysis, and helped review and edit the draft. Alsafhi FA performed experimental work, participated in investigation and data analysis, and helped review and edit the draft. Alotaibi MA participated in experimental work, investigation and data analysis, and helped review and edit the draft. Domiaty DM participated in experimental work, investigation and data analysis, and helped review and edit the draft. Al-Shamrani SM participated in experimental work, investigation and data analysis, and helped review and edit the draft. Alshehri E participated in experimental work, investigation and data analysis, and helped review and edit the draft. Alotaibi AM participated in experimental work, investigation and data analysis, and helped review and edit the draft. Alkahtani S participated in investigation and data analysis, and helped review and edit the draft. All authors read and approved the final manuscript.
Molecular studies of meningococcal and pneumococcal meningitis
patients in Ethiopia
Bacterial meningitis is an important cause of morbidity and mortality in sub-Saharan
Africa. Neisseria meningitidis , mainly capsular group A, has, until
recently, caused recurrent epidemics of meningitis in this area, including Ethiopia. Streptococcus pneumoniae is the second most prevalent pathogen and
causes even higher mortality and morbidity than N. meningitidis in
the same region. – A recent study
of bacterial meningitis patients in three university hospitals in Ethiopia from
February 2012 to June 2013 found that, among 46 patients with real-time PCR (RT-PCR)
confirmed etiology, N. meningitidis was the most prevalent cause
(59%) followed by S. pneumoniae (39%). Merely 2% had meningitis
caused by Haemophilus influenzae . The clinical presentations of N. meningitidis infections in
sub-Saharan Africa appear to differ from the disease spectrum observed in
industrialized countries on hospital admission. Meningitis, characterized by fever,
headache, neck and back rigidity and impaired cerebral function, is the dominant
clinical presentation in sub-Saharan Africa. The development of fulminant
meningococcal septicemia with persistent septic shock, multiple organ failure and
large hemorrhagic skin lesions, which are easily recognizable by health care
personnel, appears to be very uncommon in this region of Africa. In Europe, as many
as 30% of patients contracting invasive meningococcal infection may develop septic
shock usually combined with renal and pulmonary failure and extensive skin
hemorrhages. , Half of patients with septic shock also exhibit clinical signs of
meningitis; , however, coagulopathy and multiple organ failure are less pronounced, with lower
mortality than fulminant meningococcal septicemia with minimal pleocytosis
(< 100 × 10⁶ leukocytes per liter cerebrospinal fluid (CSF)) and
lack of distinct signs and symptoms of meningitis. , The causes explaining the
different clinical presentations in sub-Saharan Africa versus Europe are not
obvious. European studies in the last 30 yr have documented that the different
clinical pictures can be explained by compartmentalized growth of the
meningococci. , – In patients developing distinct
meningitis without compromised circulation, the growth of meningococci occurs
primarily in the subarachnoid space, with 10 to 1000-fold higher concentrations of
LPS, N. meningitidis DNA (NmDNA) and inflammatory mediators in the
cerebrospinal fluid than in the blood. , – In patients developing fulminant
meningococcal septicemia with persistent septic shock, the bacterial proliferation
occurs primarily in blood and microcirculation in the extra-cerebral organs, leading
to much higher concentrations of NmDNA, LPS and inflammatory mediators than detected
in the subarachnoid space. , – The difference in clinical
presentation between meningococcal meningitis and fulminant septicemia is caused
primarily by a much higher growth velocity of the meningococci in the blood and
extra-cerebral vessels in patients with fulminant septicemia. – Meningococcal LPS is the most
potent trigger of the innate immune system via CD14, TLR4 and myeloid differentiation
factor 2 (MD2). , – One aim of this study was to
quantify the growth of meningococci in the circulation and CSF by using RT-PCR of
NmDNA and determine the copy number per milliliter. We hypothesized that the copy
number of NmDNA would be below 10⁶/ml serum since the patients did not
develop the septic shock, extensive skin hemorrhages, renal and pulmonary impairment
associated with NmDNA copy numbers in plasma or serum levels ranging from
10⁶ to 10⁸/ml. Meningitis caused by S. pneumoniae has a higher case fatality rate
and causes more sequelae in patients in sub-Saharan Africa and in industrialized
countries than meningococcal meningitis. , , , The biological basis
explaining this difference has not been fully elucidated. One hypothesis is that the
inflammatory responses to the Gram-negative N. meningitidis and the
Gram-positive S. pneumoniae in the subarachnoid space are
quantitatively different, and that this is reflected in the outcome. Few studies
have performed head-to-head comparison of the cytokine profiles in CSF of the two
pathogens. Examination of CSF from patients with meningococcal or pneumococcal
meningitis living in Brazil and Burkina Faso, respectively, suggests that the
profile of cytokine and chemokine differs, possibly explaining the difference in
outcome. , A second aim of the present study was to compare the profile of inflammatory
mediators in CSF in 26 patients with N. meningitidis meningitis and
16 patients with S. pneumoniae meningitis all diagnosed by RT-PCR.
We quantified the levels of 18 different cytokines, chemokines and matrix
metallopeptidase 9 (MMP-9) in CSF and compared the results of the two groups. In
addition, we evaluated the quantitative relationship between LPS and NmDNA and the
inflammatory mediators in patients with meningococcal meningitis.
Patients and samples Patients admitted to three referral teaching university hospitals in Gondar,
Hawassa and Addis Ababa in Ethiopia during February 2012–June 2013 were
recruited to this study, as previously described. Among the 139 patients with clinical signs of meningitis and turbid CSF
having their CSF tested by RT-PCR for presence of DNA from N.
meningitidis , S. pneumoniae , or H.
influenzae , sufficient CSF volumes for testing of inflammatory
mediators were available from 43 patients (27 with meningococcal meningitis and
16 with pneumococcal meningitis). None of the patients presented at hospital
admission with typical signs and symptoms of meningitis combined with massive
skin hemorrhages, persistent shock or dwindling urine production. All patients
or parents of patients received clinical study information sheets and gave
written informed consent. A total of 27 meningococcal patients had LPS and
MMP-9, 26 had 18 different cytokines and chemokines, and 23 had the copy number
of N. meningitidis analyzed in CSF. NmDNA in serum was
quantified in 23 of the 27 patients. CSFs from the 16 patients with pneumococcal
meningitis were assayed for the same 18 different cytokines, chemokines and
MMP-9 as the meningococcal meningitis patients. All patients had a completed case report form comprising personal data, clinical
information on admission and the immediate outcome, i.e. death or survival and
immediate severe sequelae, such as deafness or neurological impairment observed
during the hospital stay. Long-term sequelae were not studied. The study
protocol was reviewed and approved by the National Research Ethics Review
Committee of Ethiopia (3-10/6/5-04) and Regional Ethics Committee in Norway (2011/825b). The CSFs and sera were stored at –20°C immediately after collection at the local
clinical laboratory, during transportation to Armauer Hansen Research Institute
(AHRI) in Addis Ababa. The samples were transported to the Norwegian Institute of
Public Health by air on dry ice and, upon arrival, were stored at –80°C until
thawed for PCR analysis. The samples were transported frozen to Oslo University
Hospital Ullevål, stored at –80°C and thawed for PCR, LPS and cytokine
analyses. Quantification of N. meningitidis DNA The copy number of NmDNA in serum and CSF was determined using quantitative PCR
(qPCR), as previously described, except that the sequence-specific hybridization probes (0.3 µM per
reaction) 5′- AGGATACG AATGTGCAGCTGAC -FL and 5′-LC
Red640- GTGGCAATGTAGTACGAA CTGTTGC -PH (Metabion, Munich,
Germany), and the Light Cycler Fast Start DNA master hybridization probe mix
(Roche, Germany) were used in the detection system. Heparin-plasma is considered
optimal for assaying bacterial copy number in blood. However, in a porcine
sepsis model and in meningococcal shock patients, we have previously shown that
serum can be used. The detection limit was 2.5 × 10² copies NmDNA/ml CSF and
3.5 × 10³ copies/ml serum. Quantification of inflammatory mediators in CSF The levels were quantified by Bio-Plex Pro Cytokine, Chemokine, and Growth Factor
assay (Bio-Rad, Hercules, CA) for the following 18 cytokines and chemokines:
IL-1β, IL-1 receptor antagonist (IL-1ra), IL-2, IL-4, IL-6, IL-8, IL-10,
IL-12p70, IL-18, granulocyte colony stimulating factor (G-CSF), IFN-γ,
IFN-γ-induced protein 10 (IP-10), monocyte chemoattractant protein-1 (MCP-1),
macrophage inflammatory proteins 1α (MIP-1α) , macrophage inflammatory proteins
1β (MIP-1β), regulated on activation normal T cells expressed and secreted
(RANTES), TNF-α and TNF-related apoptosis inducing ligand (TRAIL). MMP-9 was
quantified by Bio-Plex Pro Human MMP Assay (Bio-Rad) according to the
manufacturer’s instructions ( and ). Quantification of LPS activity in CSF The levels of LPS were determined with Chromo-LAL® (Associates of Cape Cod, East
Falmouth, MA) according to the manufacturer’s instructions and as described earlier. Clinical definitions Meningitis was diagnosed when the patients revealed neck and back rigidity and
turbid CSF (≥1000 × 10⁶ leukocytes/l) usually associated with fever,
headache and impaired cerebral function. Fulminant meningococcal septicemia was
diagnosed when patients presented with extensive skin hemorrhages (ecchymoses),
septic shock (systolic blood pressure <100 mm Hg in patients 12 yr or older
and <70 mm Hg in children below 12 yr old that would require fluid and vasoactive
drug therapy in addition to antibiotics). , Immediate recognizable
sequelae comprised acute deafness or major impaired hearing, stroke, epileptic
insults or other acute neurological or other impairment. Statistical analysis We assessed differences between the patient groups by using the Mann-Whitney
test. Correlations were carried out using Spearman's procedure. All calculations
were performed using GraphPad Prism version 3.0 (La Jolla, CA). The level of
significance was set at 5% in all cases.
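As a hedged illustration of the statistical procedures just described, the sketch below reproduces the same two analyses (a two-sided Mann-Whitney comparison between groups and a Spearman rank correlation) in Python/SciPy rather than GraphPad Prism. The cytokine, LPS and NmDNA values are invented for the example and do not come from the study data.

```python
# Equivalent of the comparisons described above, sketched with SciPy rather
# than GraphPad Prism; the CSF values below are made-up examples.
from scipy.stats import mannwhitneyu, spearmanr

# Hypothetical IL-6 levels (pg/ml) in CSF for the two etiologies
il6_meningococcal = [5400, 8100, 2300, 9700, 4100, 6600]
il6_pneumococcal = [12000, 15500, 9800, 20100, 11300]

# Two-sided Mann-Whitney U test for a group difference
u_stat, p_group = mannwhitneyu(il6_meningococcal, il6_pneumococcal,
                               alternative="two-sided")

# Spearman rank correlation, e.g. LPS activity vs. NmDNA copy number
lps = [12, 45, 7, 88, 23, 61]           # arbitrary units
nmdna = [2e4, 9e4, 8e3, 3e5, 5e4, 1e5]  # copies/ml
rho, p_corr = spearmanr(lps, nmdna)

print(f"Mann-Whitney U={u_stat:.1f}, p={p_group:.3f}")
print(f"Spearman rho={rho:.2f}, p={p_corr:.3f}")
# Differences/correlations are called significant at the 5% level (p < 0.05).
```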
Patients admitted to three referral teaching university hospitals in Gondar,
Hawassa and Addis Ababa in Ethiopia during February 2012–June 2013 were
recruited to this study, as previously described. Among the 139 patients with clinical signs of meningitis and turbid CSF
having their CSF tested by RT-PCR for presence of DNA from N.
meningitidis , S. pneumoniae , or H.
influenzae , sufficient CSF volumes for testing of inflammatory
mediators were available from 43 patients (27 with meningococcal meningitis and
16 with pneumococcal meningitis). None of the patients presented at hospital
admission with typical signs and symptoms of meningitis combined with massive
skin hemorrhages, persistent shock or dwindling urine production. All patients
or parents of patients received clinical study information sheets and gave
written informed consent. A total of 27 meningococcal patients had LPS and
MMP-9, 26 had 18 different cytokines and chemokines, and 23 had the copy number
of N. meningitidis analyzed in CSF. NmDNA in serum was
quantified in 23 of the 27 patients. CSFs from the 16 patients with pneumococcal
meningitis were assayed for the same 18 different cytokines, chemokines and
MMP-9 as the meningococcal meningitis patients. All patients had a completed case report form comprising personal data, clinical
information on admission and the immediate outcome, i.e. death or survival and
immediate severe sequelae, such as deafness or neurological impairment observed
during the hospital stay. Long-term sequelae were not studied. The study
protocol was reviewed and approved by the National Research Ethics Review
Committee of Ethiopia (3-10/6/5-04) and the Regional Ethics Committee in Norway (2011/825b). The CSFs and sera were stored at –20°C immediately after collection at the local
clinical laboratory and during transportation to the Armauer Hansen Research Institute
(AHRI) in Addis Ababa. The samples were transported to the Norwegian Institute of
Public Health by air on dry ice and, upon arrival, were stored at –80°C until
thawed for PCR analysis. The samples were transported frozen to Oslo University
Hospital Ullevål, stored at –80°C and thawed for PCR, LPS and cytokine
analyses.
Descriptive demographics of meningococcal and pneumococcal patients

All patients presented as meningitis. The median age (range in parentheses)
for the 27 patients with meningococcal meningitis was 7 yr (5 d–35 yr) as compared
with 10 yr (2 mo–78 yr) for the 16 patients with pneumococcal meningitis. The age
distribution between the groups was not significantly different. Among the N. meningitidis patients, 17 (63%) were males, whereas 9
(56%) of the S. pneumoniae patients were males. The gender
distribution was not significantly different between the two groups. In the N. meningitidis group, two (7%) died: a 4-yr-old boy and a
20-yr-old woman. In the S. pneumoniae group, three (19%) died: a
4-mo-old girl, a 50-yr-old man and a 54-yr-old woman. Two patients with N. meningitidis, a 4-mo-old
girl and a 5-yr-old boy (7%), developed
immediately recognizable sequelae. Among those with S. pneumoniae meningitis, three (19%) had severe immediate sequelae
(two boys of 4 and 5 mo, respectively, and a girl of 4 yr of age).

N. meningitidis DNA and LPS concentrations

A set of 23 paired CSF and serum samples from meningococcal meningitis patients
was assayed to study the compartmentalization of the meningococci, i.e., the
growth of the bacteria in CSF versus blood. The copy number of NmDNA in CSF was
detectable in 22 (96%) of 23 tested CSF samples, with a median of
8.2 × 10⁵ (range <2.5 × 10²–1.4 × 10⁷)
copies/ml. Among the 23 serum samples from the corresponding patients, the
median NmDNA level was <3.5 × 10³/ml (range
<3.5 × 10³–1.8 × 10⁵/ml). Only 6 (26%) of the 23
patients had detectable NmDNA (≥3.5 × 10³/ml) in serum, with a
median concentration of 2.1 × 10⁴ (range
6.4 × 10³–1.8 × 10⁵) copies/ml. The difference in the
levels of NmDNA in the two compartments (CSF and serum) was statistically
significant (P = 0.008). In patients with paired samples from
serum and CSF, the copy numbers of NmDNA were always higher in CSF than in
serum, with the differences ranging from 10- to 1000-fold. The highest level in
CSF was 1.4 × 10⁷ copies/ml with a corresponding serum level of
1.0 × 10⁵/ml. The highest serum level was 1.8 × 10⁵/ml
with a corresponding CSF level of 3.0 × 10⁶/ml. The median level of LPS in CSF of six patients with detectable NmDNA in serum was
679 (range 184–16,000) endotoxin units (EU)/ml. In 21 patients without
detectable NmDNA copies (<3.5 × 10³/ml) in serum, the median CSF
level of LPS was 100 (2–8055) EU/ml. The difference was significant
(P = 0.01). In CSF samples, the LPS concentrations were
significantly correlated only to NmDNA (r = 0.45, P = 0.03, n = 23), CSF levels of IL-1ra
(r = 0.46, P = 0.02, n = 26) and MMP-9 (r = 0.50, P = 0.009, n = 27). Age did not correlate
with LPS activity. The numbers of NmDNA copies in CSF were positively correlated with IL-1β
(r = 0.62, P = 0.002, n = 23) and MMP-9 (r = 0.46, P = 0.03, n = 23) and negatively
correlated with IL-6 (r = –0.45, P = 0.03, n = 23). The NmDNA numbers in CSF were not significantly
correlated to age. IL-18 was negatively correlated to age
(r = –0.45, P = 0.02, n = 26). Two meningococcal meningitis patients died and two had immediate observable
sequelae. No significant difference was observed in the levels of LPS activity,
NmDNA load and levels of cytokines, chemokines, MMP-9 in CSF, gender or age
between these 4 patients and the 22 other surviving patients without immediate
observable sequelae.

Comparison between meningococcal and pneumococcal meningitis patients

Significantly higher levels of IL-4, IL-12p70, IFN-γ, IP-10, MCP-1, MIP-1α,
MIP-1β, RANTES, and MMP-9 were found in CSF of the 16 patients with S.
pneumoniae meningitis as compared with the 26 patients with N.
meningitidis meningitis. TRAIL was detectable in three
out of 16 CSF samples from patients with pneumococcal meningitis as compared
with none in the 26 patients with meningococcal meningitis. No statistically
significant difference was found between patients with pneumococcal versus
meningococcal meningitis for the following cytokines and chemokines: TNF-α,
IL-1β, IL-1ra, IL-2, IL-6, IL-8, IL-10, IL-18, and G-CSF.

Inflammation markers and severity of disease among pneumococcal meningitis patients

We compared inflammatory markers in S. pneumoniae patients who died
(n = 3) or had immediate severe sequelae
(n = 3) versus the 10 other patients who survived without sequelae. The three
deceased pneumococcal meningitis patients had significantly higher median levels
of IL-1β (P = 0.04) and MMP-9 (P = 0.03) than
the 13 survivors. There was no significant difference in concentration for any
of the cytokines, chemokines or MMP-9 between the six patients who either died
or had immediately recognizable sequelae and the other 10 surviving patients who
had no immediate severe sequelae. Since sequelae were only registered during
admission to hospital, we do not know whether some of these 10 patients
developed sequelae at a later stage.
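A side note on the NmDNA results above: serum values below the assay's detection limit (3.5 × 10³ copies/ml) are reported as "<" medians, and the CSF/serum differences as 10- to 1000-fold. The sketch below, using invented numbers, shows one common bookkeeping convention for such censored values, namely setting them to the detection limit so that the resulting fold difference is a lower bound; this convention is an assumption for illustration and is not stated in the paper.

```python
# Illustrative sketch with invented numbers (not study data): summarizing paired
# CSF/serum NmDNA copy numbers when some serum values fall below the detection limit.
import numpy as np

SERUM_DETECTION_LIMIT = 3.5e3  # copies/ml, as stated in the methods

# Paired (CSF, serum) copy numbers per patient; None marks "below detection limit".
pairs = [
    (1.4e7, 1.0e5),
    (3.0e6, 1.8e5),
    (8.2e5, None),
    (5.0e4, None),
]

csf = np.array([c for c, _ in pairs])
# Censored serum values are set to the detection limit, so the CSF/serum ratio
# computed for those patients is a conservative lower bound.
serum = np.array([s if s is not None else SERUM_DETECTION_LIMIT for _, s in pairs])

fold_difference = csf / serum
print("median CSF copies/ml:", np.median(csf))
print("CSF/serum fold difference per patient (lower bound where censored):",
      np.round(fold_difference, 1))
```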
This study provides more exact information about the compartmentalized nature of N. meningitidis causing meningitis in Ethiopia. Furthermore, we
confirm that the profile of inflammatory mediators in CSF differs between
meningococcal and pneumococcal patients with meningitis as suggested by
others. By extending the number of mediators studied, this difference
becomes even clearer. In 23 Ethiopian patients with meningococcal meningitis, the copy numbers of NmDNA
were 10- to 1000-fold higher in CSF than in serum. After an initial bacteremic
phase, meningococci penetrate the blood–brain barrier, utilizing type IV pili and
various other surface-exposed bacterial adhesion molecules interacting with CD147
and other molecules on the endothelial cells. After gaining access to the
subarachnoid space, the bacteria proliferate and reach higher levels in CSF than in
the blood, inducing a more intense localized inflammation in the former. These patients have marked
clinical symptoms of meningitis but no persistent septic shock. Among the Ethiopian
meningococcal patients, only 6 (26%) of 23 had quantifiable meningococcal DNA
(≥3.5 × 10³/ml) in serum. These six patients also had significantly
higher numbers of NmDNA copies and higher levels of LPS/ml in CSF than the 17
patients without detectable NmDNA in the circulation. One interpretation of this
observation is that while meningococci proliferate to high levels in CSF, a certain
fraction of bacteria translocate back into the circulation, resulting in a detectable
bacterial load, i.e., levels above 3.5 × 10³ meningococci/ml in serum.
Another interpretation is that the genetic constitution of certain patients allows
for more intense proliferation in both blood and CSF, although the proliferation in
blood did not reach levels associated with shock, i.e., above 10⁶ copies
of N. meningitidis DNA per milliliter plasma or serum. In Norway, the same pattern as
seen in this Ethiopian study has previously been observed. Only 9 of 28 (32%)
Norwegian patients with distinct meningococcal meningitis without persistent shock
had detectable NmDNA in heparin plasma with a detection limit of 2.5 × 10²/ml. An interesting aspect concerning bacterial load in the circulation is that none of
the patients with quantifiable copy number of meningococcal DNA reached
10⁶ copies/ml. Previous European studies suggested that patients
passing this level of meningococcemia are predisposed to development of persistent
septic shock, renal and pulmonary failure combined with massive activation of the
coagulation system. Unlike patients with clinically distinct meningitis, many
patients with fulminant meningococcal septicemia with persistent shock and large
hemorrhagic skin lesions, previously known as Waterhouse-Friderichsen syndrome,
reveal levels of NmDNA copy numbers as high as 10⁷–10⁸/ml
plasma or serum. This high number of meningococci in the circulation is
accompanied by LPS (endotoxin) levels in plasma or serum as high as 3800 EU/ml,
which induce a cytokine storm in the circulation and tissues of the large
organs. Fulminant meningococcal septicemia with persistent shock and massive skin hemorrhages
appears to be very uncommon among patients reaching hospitals in sub-Saharan Africa. One likely explanation is that fulminant meningococcal septicemia develops so
rapidly that patients living in rural areas die before they are transported to hospital. However, patients living in cities—closer to the hospitals—should have
increased chances of being discovered and receiving treatment. In a study from
Kenyatta National Hospital in Nairobi, of 57 patients with meningococcal meningitis,
only 1 developed shock, and multiple organ failure was not mentioned. Use of prescribed or non-prescribed oral preadmission antibiotics with an
inhibitory effect on N. meningitidis in the circulation might
influence the development of shock but have little impact on the development of
meningitis. The experience among Ethiopian physicians is that the majority of
patients presently have received antibiotics before they are admitted. Of the 27
patients with meningococcal meningitis in this study, 17 (63%) had not been treated
with antibiotics before hospital admission, 7 were treated, and information was
lacking for 3 patients. Is meningococcal shock with organ failure really less common in Africa than in
industrialized countries? A search in PubMed using the key words ‘Waterhouse-Friderichsen syndrome’ or
‘fulminant meningococcal septicemia’ in the sub-Saharan region did not reveal any case
description. None of our relatively small number of patients with laboratory-confirmed
N. meningitidis infection had NmDNA copy
numbers above 10⁶/ml. None developed septic shock, multiorgan failure or
extensive skin hemorrhages. An alternative hypothesis to explain the rarity of fulminant meningococcal septicemia
is that pre-existing Abs elicited by the microbial flora in the upper respiratory
tract and gut may influence the clinical presentation. Such Abs, reacting with
epitopes located in the meningococcal capsular polysaccharide or proteins and LPS in
the outer membrane, may not be bactericidal but could still contain the very rapid
proliferation of N. meningitidis in the blood and capillaries of
various organs. Africans growing up in Europe do, however, develop
fulminant meningococcemia, although the incidence has not been established. Could the difference in clinical presentation be explained by differences in
serogroups infecting patients in sub-Saharan Africa, mainly serogroup A, versus
patients in Europe where mainly serogroup B or C strains cause the infection?
Observations from the serogroup A epidemics in Moscow in the 1980s and 1990s,
however, indicate that serogroup A may cause fulminant meningococcal septicemia with
lethal shock and high LPS levels in the blood in Europeans. Our results suggest that the levels of NmDNA copy numbers are significantly
correlated with the biological activity of LPS in CSF. This is in accordance with
European results but, to our knowledge, has not been previously studied in African patients. The correlation coefficient (r = 0.45) in the present study
was not as high as previously found (r = 0.96), but the number of
patients in the present study was higher, i.e. 23 versus 9 in the earlier study. However, we cannot exclude that LPS contamination may have occurred during
collection and handling of some of the CSF samples, as judged by a marked discrepancy
between low levels of NmDNA and high LPS activity in some samples. A series of previous studies
suggests that N. meningitidis LPS is the most potent, but not the
only, activator of the innate immune system. The relation between levels of biologically active LPS and various cytokines,
chemokines and other inflammatory mediators is complex and time dependent. The
cytokines TNF-α, IL-1β and IL-6 are locally produced in resident cells of the
meninges and CNS as well as by leukocytes penetrating the blood–brain barrier. In
rabbit experimental meningitis models, TNF-α, IL-1β and IL-6 are elicited prior to
the influx of leukocytes. In our study, only IL-1 receptor antagonist (IL-1ra) and MMP-9 were significantly and
positively correlated to the levels of LPS, while Waage et al. have previously
documented that LPS activity was significantly correlated to TNF-α, but not to IL-1β
or IL-6, in CSF. The differences in assay methods, i.e. biological activity in cell assays
versus Ab-based immunological tests, and in the time from initiation of symptoms to
lumbar puncture, which presumably is longer in Ethiopia than in Norway, may clearly
influence the reported results. The pattern of cytokines we observed in this study was highly variable between
individual patients in the two patient groups. IL-6, IL-8 and IP-10 were the
cytokines with the largest variability. The variability of IL-6 has previously been
observed among patients with bacterial meningitis. Why IL-6 was negatively correlated to the copy numbers of NmDNA but not to
LPS is not known and may be a chance finding. We chose the panel of inflammatory mediators in this study to verify the results from
two previous studies comparing the inflammatory CSF response in meningococcal versus
pneumococcal meningitis. In addition, we added several mediators that were relevant for
subarachnoid inflammation elicited by LPS and other cell wall components, including
those of Gram-positive bacteria, and that could be determined with the same assay system.
MMP-9 was relevant given its previously documented prognostic value. Patients in this study with S. pneumoniae meningitis had, as
a group, significantly (7.7-fold) higher levels of MMP-9 in CSF than patients with
meningococcal meningitis. MMP-9 disrupts the blood–brain barrier, possibly
increasing brain edema and contributing in some cases to brain abscess formation, as
observed in some patients with pneumococcal meningitis but very rarely in
meningococcal meningitis. Furthermore, the median MMP-9 and IL-1β levels were
significantly higher in the three patients with lethal pneumococcal meningitis as
compared with the 13 surviving patients. However, given the low number of deceased
patients as compared with survivors, these results are associated with uncertainty. Age
did not influence the levels of LPS or the copy number of meningococcal DNA in CSF. Age
was significantly negatively correlated to IL-18 but not to any other cytokines or
chemokines. An interesting observation was documented in the CSF of a 7-d-old boy
who survived the initial illness (although the long-term outcome is unknown); his CSF contained
the highest levels of TNF-α, IL-6 and IL-8 and the second highest level of IL-1β among
the 26 meningococcal patients tested. Some neonates clearly have the capacity to
mount a massive cytokine release in the subarachnoid space and still survive the
developing brain edema. When comparing the levels of N.
meningitidis LPS and DNA or the 19 inflammatory mediators in the 4
patients who died or had immediate severe sequelae with the 22 other meningococcal
patients, we did not observe a reproducible pattern that could explain the worse
outcome. Our results suggest that the levels of cytokines in CSF do not have the
same predictive value for the acute outcome as observed in serum or plasma for
meningococcal patients. Importantly, we found significantly higher levels of 9 of 19 inflammatory mediators
in CSF in patients with pneumococcal as compared with meningococcal meningitis.
Significantly higher levels of IFN-γ, MIP-1α and MMP-9 in CSF have previously been
identified in pneumococcal versus meningococcal meningitis patients. This study
adds MCP-1, MIP-1β, IL-4, IL-12p70, IP-10, RANTES and TRAIL as possible additional
markers of an increased inflammatory response in pneumococcal versus meningococcal
meningitis. Pro-inflammatory mediators including TNF-α, IL-6, IL-8, IL-18 and
G-CSF and anti-inflammatory mediators including IL-10 and IL-1ra did not differ
significantly between the two groups of meningitis, in line with previous
studies. If confirmed in a larger number of patients comprising both
types of meningitis, the results may indicate that a different cytokine, chemokine
and MMP-9 profile exists between these two major types of bacterial meningitis. The
results may indicate where we should focus future research to unravel the underlying
pathophysiology of pneumococcal meningitis to better understand the cause of the
high case fatality rate. Future studies should also center on the bacterial load and
outcome.
In this study of 27 patients with meningococcal meningitis and 16 patients with
pneumococcal meningitis admitted to three referral hospitals in Ethiopia during a
15-mo period, we found that N. meningitidis was compartmentalized
primarily to the subarachnoid space. The numbers of meningococci were 10- to
1000-fold higher in CSF than in serum. The blood levels did not reach 10⁶ bacteria/ml, the bacterial concentration that is associated with the development of
persistent septic shock, multiple organ failure, severe coagulopathy and large skin
hemorrhages. Patients with pneumococcal meningitis had, as a group, significantly
higher levels of 9 out of 19 inflammatory mediators, suggesting a different and more
harmful inflammatory response as compared with patients with meningococcal
meningitis.
Identifying threshold concepts in postgraduate general practice training: a focus group, qualitative study
The primary aim of this study was to identify TCs experienced by general practice trainees via the General Practice Education Programme (GPEP), endorsed and run by the Royal New Zealand College of General Practitioners. GPEP is a 3-year postgraduate specialist general practice training programme involving a formal examination in the first year with 3 years of practical supervision. On graduation the trainee becomes a Fellow of the College. Conditions for entry into GPEP are a minimum of 2 years hospital internship after completion of 6 years undergraduate medical training. Our secondary aim was to consider future incorporation of TCs into the GPEP.
Research design An explorative, qualitative study using focus group discussions and experiential thematic analysis was performed, underpinned with critical realist ontological and contextualistic epistemological perspectives. Inductive approaches, employing ‘detailed readings of raw data to derive concepts, themes or a model… from the frequent, dominant or significant themes inherent in raw data’ are ideal for establishing research where little prior information exists. Participant recruitment and inclusion criteria Historically, TCs were constructed by experts within a discipline, but recent research suggests that students may be better at identifying TCs than teachers due to recall efficiency. Initially, we only recruited trainees, but due to slow recruitment we expanded the inclusion criteria to also include Fellows within 5 years of gaining their Fellowship and GPEP Medical Educators (ME). After ethics approval has been received, advertisements for recruitment were placed via relevant email lists, newsletters and relevant Facebook pages throughout NZ. Following written, informed consent, participants were organised into Zoom focus groups in the order of their enrolment and their availability to attend, on a first-in-first-served basis. The minimum group number was four participants per focus group. Focus groups Apart from the pilot study (see below) which was held in-person, Zoom focus groups were chosen for pragmatic reasons as this research took place during multiple lockdowns due to the COVID-19 pandemic, restricting travel and making face-to-face interviews impossible with participants scattered all across the country. They were held after-hours in the evenings according to participants’ preferences. Focus group sessions began with an original power-point presentation on TCs (see ), finishing with a summary slide of questions to help participants recognise their experience of TCs and facilitate discussion. This presentation was trialled with our department’s Qualitative Research Group and two established researchers in TCs. After minor modifications, the presentation was piloted in an initial focus group with six general practitioners (GPs) and one GPEP trainee. As no modifications were required following feedback from this group, their results are included in this study. The focus groups lasted 65 min, with an initial introduction and power-point presentation (30 min), brainstorming time (5 min) and group discussion time (30 min). Participants were asked to present any transformative or ‘Aha!’ moments in their learning experience which might underlie or identify a TC. 10.1136/bmjopen-2021-060442.supp1 Supplementary data Data collection and analysis Data were collected over a 5-month period (March–August 2021) via audiotaping and transcribing participants’ responses during the focus groups, with concurrent data analysis until saturation occurred. Transcription was performed by a professional service who had a presigned confidentiality contract with our University. Results were analysed separately by each author using standardised thematic analysis and NVivo V.12 software (QSR International, Melbourne, Australia). Data were protected using password-protected computer hardware and software. Only one researcher (AC) was aware of the identity of the participants. Triangulation occurred with both researchers initially coding separately, then comparing results and resolving differences by discussion and analysis. 
This was a reiterative, integrative process resulting in multiple, repeated cycles. Coding rules, with inclusion and exclusion criteria, were defined to ensure consistent classification. The first author brought lenses of thirty years of clinical experience, including PGME in general practice, academic research in bioethics, uncertainty, decision-making and ancient medical history, and NZ/European ethnicity. The second author’s lenses were those of a Korean/NZ in their second GPEP year of training. Triangulation (member checking) of results was performed by asking participants to check the analysis and provide clarification where necessary. Pseudonyms were chosen by the participants. TCs were constructed by the authors after close, repeated readings of the final thematic analysis of coded data. Patient and public involvement Nil.
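An aside on the double coding described in the methods above: the two authors coded independently and resolved disagreements by discussion, and no agreement statistic is reported. Purely as an illustration of what a quantitative check on such double coding could look like (not something the authors describe doing), the sketch below computes percent agreement and Cohen's kappa for two coders' labels on the same excerpts; the labels are invented.

```python
# Hypothetical illustration only: agreement between two independent coders.
# The excerpt labels below are invented and are not the study's coded data.
from collections import Counter

coder_1 = ["uncertainty", "relationship", "uncertainty", "patient context", "relationship", "uncertainty"]
coder_2 = ["uncertainty", "relationship", "patient needs", "patient context", "relationship", "uncertainty"]
n = len(coder_1)

# Observed agreement: proportion of excerpts given the same code by both coders.
p_observed = sum(a == b for a, b in zip(coder_1, coder_2)) / n

# Chance agreement from each coder's marginal code frequencies.
freq_1, freq_2 = Counter(coder_1), Counter(coder_2)
p_expected = sum((freq_1[c] / n) * (freq_2[c] / n) for c in set(freq_1) | set(freq_2))

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```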
Demographics Demographics are presented in . Fifty participants were recruited with eight focus group sessions of four to eight participants each (median 5.5). Two smaller interviews, using the same presentation as for the focus groups, were held with three very late enrolments, primarily to help confirm data saturation. Thematic analysis Three meta-themes identified were the physician’s role, the patient’s role and the physician–patient interaction. These were further classified into themes and subthemes, presented with their coding rules (see ). Physician’s role Within the administrative, aspects theme were three subthemes. The financial subtheme identified that GPs required an acceptance that in A/NZ, charging patients was necessary and ethically permissible. This differed from hospital-based services where no point-of-care charging occurred, while in general practice successful financial management meant maximising every dollar for the patients’ benefit. The legal subtheme related to the importance of clear communication, proper management and documentation of medicolegal aspects. The time management subtheme proved at times to be a complex task, but one which could be improved by being highly organised, reviewing old notes, adhering to a structure, actively focusing on patients and sometimes deferring issues to the future. The consultation tools theme was essential in dealing with the breadth of information required in general practice. The management guidelines subtheme included online resources which provided clarity, helped avoid errors, assisted integrating new therapies and aided the appropriate ordering of tests. Other sources of guidance included conferring with hospital or GP colleagues, while keeping in mind their limitations also. Communication techniques was the other subtheme which identified useful consultation strategies including formal Māori-oriented introductions such as pepeha and/or whakawhanaungatanga, actively inviting family presence and using open questions. Participants reported the advantage of asking patients directly about their ideas, concerns and expectations and were pleased to find these techniques learnt for their examinations also worked in real practice. These helped to improve patient engagement, time management and communication. Intraprofessional and interprofessional aspects was another important theme portraying GPs’ awareness of the differences in dynamics between general practice and hospital-based medicine. Team structure appeared more stable and family-like in general practice, as opposed to switching between hospital wards. Hence general practice required a different approach to team relationships with increased inter-dependence and the sharing of workload between each other, rather than the GP having sole responsibility. The personal experiences theme was not specifically sought for by the researchers but arose spontaneously from participants to form professionally useful TCs. These TCs related to how being a parent, patient and/or ME impacted on the GPs way and view of their practice. Becoming a patient reminded GPs of what it was like to be one, and teaching also provided challenges and unexpected TCs. The professional biases theme was identified via self-reflection which encouraged insights into personal and professional biases, with several relating to a complaint process due to communication issues. Cases of misdiagnoses also impacted on ways of practice such as a tendency for over-investigation. 
Cultural assumptions, the emotional toll of empathy, childhood trauma affecting adult patients, strategies to avoid burnout, acceptance of not being able to please everyone and the generalist role of a GP were all causes of self-reflection. Uncertainty was a major theme with several subthemes. Sources of uncertainty as a subtheme arose due to the often daunting breadth and complexity of general practice with its many cases of unknown aetiology. Another subtheme was participants’ reactions to uncertainty which were generally positive as they gradually learnt to accept and mentally deal with uncertainties. Sometimes, however, even with time and experience some participants expressed the level of uncertainty could remain too much. Management of uncertainty was the third subtheme identified. Confrontation with uncertainty could lead to seeking for occult illnesses, such as always checking the ears and urine in the febrile child, and expanding one’s diagnostic capability, for example, by acquiring an ultrasound. Sometimes resolving uncertainty was not required by a patient and this could be revelatory for some participants. Another way of managing uncertainty was realising that there may be several different answers or ways of managing the same issue. Using time to help manage uncertainty was a specific strategy utilising the longitudinal physician–patient relationship unique to general practice. This highlighted the benefits of being able to journey with patients while watching and waiting, with appropriate follow-up and safety netting, and checking patients were reassured and satisfied with the plan throughout their journey. Patient’s role The patient needs theme revealed the importance of understanding the distinction between the physicians’ and the patients’ agenda; for example, realising that ticket-to-entry consultations were not a waste of the physician’s time, but a way for the physician to earn trust and enabling patients to then reveal their deeper concerns. It was also important to understand that what the patient wanted may be different to what the GP thought the patient would or should want. Aspects that seemed minor to the GP could be highly significant to the patient. The patient adherence theme highlighted the importance of attending to patients’ adherence to treatment. Checking medication boxes, eliciting reasons for non-adherence (such as lack of transport, treatment preferences or language barriers) and being non-judgemental were crucial. Patient context was the third key theme of the patient’s role which involved recognising that deeply understanding one’s patient was impossible without knowing their context. The social layers surrounding the patient are many and at least some knowledge and engagement with these was essential for good patient care. Understanding this bigger picture reduced blaming the patient for their condition or ordering expensive and unnecessary tests. It could also prevent misdiagnosis and aided cultural awareness. Physician–patient interaction Within the consult-as-therapy theme, there were three subthemes. Explanation was the first subtheme and an important component of the consultation as GPs are required to be able to inform and reassure patients. Listening was an essential subtheme as patients found being well listened to therapeutic. Simply being silent, listening and, by this stillness, normalising situations could help to reduce anxiety, diffuse strong emotions, such as anger and fear, and help patients and physicians become aligned. 
Furthermore, the presence subtheme highlighted the therapeutic value that patients gained in being in the presence of a trusted professional: the persona of the doctor held therapeutic value in and of itself. Finally, the relationship theme was considered by participants to be pivotal to general practice. Maintaining this relationship took precedence over almost everything else, even if at times it meant that the GP might have to bracket their own beliefs in order to gain the trust which would then gradually allow them to help the patient. Creating this partnership began with careful consideration of the opening question, with focused attention, human-to-human connection and the monitoring of one’s own emotions. Identifying TCs Twenty TCs reflecting these significant and transformative moments were identified. We attempted to distill the essence of each theme (if no subthemes) or subtheme into a single TC, with one exception: the management of uncertainty required three TCs to adequately capture the range of responses nested within this subtheme. Each TC acts as a heuristic device, concentrating the essence of the experiential wisdom into a simple, memorable medical proverb for ease of recollection (see ). TCs in the GPEP curriculum shows how the identified TCs relate to the six domains of core competencies in GPEP. All TCs could be mapped onto at least one core competency, with two TCs (‘The relationship is worth a thousand consults’ and ‘Guidelines, GPs little helper’) mapping onto two core competencies, and two TCs (‘Money makes the practice go round’ and ‘The whole of the practice is greater than the sum of its parts’) mapping onto three.
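The competency mapping summarised above lends itself to a simple tabulation. The sketch below shows one way such a TC-to-competency mapping could be represented and counted; the four TC names are quoted from the paragraph above, but the competency-domain labels are invented placeholders (the six GPEP domains are not listed here), and the remaining sixteen TCs are omitted.

```python
# Hypothetical sketch: tabulating how threshold concepts (TCs) map onto GPEP
# core-competency domains. TC names are quoted from the text; domain labels are
# invented placeholders, and only 4 of the 20 TCs are included.
from collections import Counter

tc_to_domains = {
    "The relationship is worth a thousand consults": ["domain_A", "domain_B"],
    "Guidelines, GPs little helper": ["domain_A", "domain_C"],
    "Money makes the practice go round": ["domain_B", "domain_C", "domain_D"],
    "The whole of the practice is greater than the sum of its parts": ["domain_A", "domain_C", "domain_E"],
}

for tc, domains in tc_to_domains.items():
    print(f"{tc!r} maps onto {len(domains)} core-competency domain(s)")

# Reverse view: how many of the listed TCs touch each domain.
per_domain = Counter(d for domains in tc_to_domains.values() for d in domains)
print(dict(per_domain))
```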
Demographics are presented in . Fifty participants were recruited with eight focus group sessions of four to eight participants each (median 5.5). Two smaller interviews, using the same presentation as for the focus groups, were held with three very late enrolments, primarily to help confirm data saturation.
Three meta-themes identified were the physician’s role, the patient’s role and the physician–patient interaction. These were further classified into themes and subthemes, presented with their coding rules (see ). Physician’s role Within the administrative, aspects theme were three subthemes. The financial subtheme identified that GPs required an acceptance that in A/NZ, charging patients was necessary and ethically permissible. This differed from hospital-based services where no point-of-care charging occurred, while in general practice successful financial management meant maximising every dollar for the patients’ benefit. The legal subtheme related to the importance of clear communication, proper management and documentation of medicolegal aspects. The time management subtheme proved at times to be a complex task, but one which could be improved by being highly organised, reviewing old notes, adhering to a structure, actively focusing on patients and sometimes deferring issues to the future. The consultation tools theme was essential in dealing with the breadth of information required in general practice. The management guidelines subtheme included online resources which provided clarity, helped avoid errors, assisted integrating new therapies and aided the appropriate ordering of tests. Other sources of guidance included conferring with hospital or GP colleagues, while keeping in mind their limitations also. Communication techniques was the other subtheme which identified useful consultation strategies including formal Māori-oriented introductions such as pepeha and/or whakawhanaungatanga, actively inviting family presence and using open questions. Participants reported the advantage of asking patients directly about their ideas, concerns and expectations and were pleased to find these techniques learnt for their examinations also worked in real practice. These helped to improve patient engagement, time management and communication. Intraprofessional and interprofessional aspects was another important theme portraying GPs’ awareness of the differences in dynamics between general practice and hospital-based medicine. Team structure appeared more stable and family-like in general practice, as opposed to switching between hospital wards. Hence general practice required a different approach to team relationships with increased inter-dependence and the sharing of workload between each other, rather than the GP having sole responsibility. The personal experiences theme was not specifically sought for by the researchers but arose spontaneously from participants to form professionally useful TCs. These TCs related to how being a parent, patient and/or ME impacted on the GPs way and view of their practice. Becoming a patient reminded GPs of what it was like to be one, and teaching also provided challenges and unexpected TCs. The professional biases theme was identified via self-reflection which encouraged insights into personal and professional biases, with several relating to a complaint process due to communication issues. Cases of misdiagnoses also impacted on ways of practice such as a tendency for over-investigation. Cultural assumptions, the emotional toll of empathy, childhood trauma affecting adult patients, strategies to avoid burnout, acceptance of not being able to please everyone and the generalist role of a GP were all causes of self-reflection. Uncertainty was a major theme with several subthemes. 
Sources of uncertainty as a subtheme arose due to the often daunting breadth and complexity of general practice with its many cases of unknown aetiology. Another subtheme was participants’ reactions to uncertainty which were generally positive as they gradually learnt to accept and mentally deal with uncertainties. Sometimes, however, even with time and experience some participants expressed the level of uncertainty could remain too much. Management of uncertainty was the third subtheme identified. Confrontation with uncertainty could lead to seeking for occult illnesses, such as always checking the ears and urine in the febrile child, and expanding one’s diagnostic capability, for example, by acquiring an ultrasound. Sometimes resolving uncertainty was not required by a patient and this could be revelatory for some participants. Another way of managing uncertainty was realising that there may be several different answers or ways of managing the same issue. Using time to help manage uncertainty was a specific strategy utilising the longitudinal physician–patient relationship unique to general practice. This highlighted the benefits of being able to journey with patients while watching and waiting, with appropriate follow-up and safety netting, and checking patients were reassured and satisfied with the plan throughout their journey. Patient’s role The patient needs theme revealed the importance of understanding the distinction between the physicians’ and the patients’ agenda; for example, realising that ticket-to-entry consultations were not a waste of the physician’s time, but a way for the physician to earn trust and enabling patients to then reveal their deeper concerns. It was also important to understand that what the patient wanted may be different to what the GP thought the patient would or should want. Aspects that seemed minor to the GP could be highly significant to the patient. The patient adherence theme highlighted the importance of attending to patients’ adherence to treatment. Checking medication boxes, eliciting reasons for non-adherence (such as lack of transport, treatment preferences or language barriers) and being non-judgemental were crucial. Patient context was the third key theme of the patient’s role which involved recognising that deeply understanding one’s patient was impossible without knowing their context. The social layers surrounding the patient are many and at least some knowledge and engagement with these was essential for good patient care. Understanding this bigger picture reduced blaming the patient for their condition or ordering expensive and unnecessary tests. It could also prevent misdiagnosis and aided cultural awareness. Physician–patient interaction Within the consult-as-therapy theme, there were three subthemes. Explanation was the first subtheme and an important component of the consultation as GPs are required to be able to inform and reassure patients. Listening was an essential subtheme as patients found being well listened to therapeutic. Simply being silent, listening and, by this stillness, normalising situations could help to reduce anxiety, diffuse strong emotions, such as anger and fear, and help patients and physicians become aligned. Furthermore, the presence subtheme highlighted the therapeutic value that patients gained in being in the presence of a trusted professional, that the persona of the doctor held therapeutic value in and of itself. Finally, the relationship theme was considered pivotal by participants to general practice. 
Maintaining this relationship took precedence over almost everything else, even if at times it meant that the GP might have to bracket their own beliefs in order to gain the trust which would then gradually allow them to help the patient. Creating this partnership began with careful consideration of the opening question, with focused attention, human-to-human connection and the monitoring of ones’ own emotions.
Within the administrative, aspects theme were three subthemes. The financial subtheme identified that GPs required an acceptance that in A/NZ, charging patients was necessary and ethically permissible. This differed from hospital-based services where no point-of-care charging occurred, while in general practice successful financial management meant maximising every dollar for the patients’ benefit. The legal subtheme related to the importance of clear communication, proper management and documentation of medicolegal aspects. The time management subtheme proved at times to be a complex task, but one which could be improved by being highly organised, reviewing old notes, adhering to a structure, actively focusing on patients and sometimes deferring issues to the future. The consultation tools theme was essential in dealing with the breadth of information required in general practice. The management guidelines subtheme included online resources which provided clarity, helped avoid errors, assisted integrating new therapies and aided the appropriate ordering of tests. Other sources of guidance included conferring with hospital or GP colleagues, while keeping in mind their limitations also. Communication techniques was the other subtheme which identified useful consultation strategies including formal Māori-oriented introductions such as pepeha and/or whakawhanaungatanga, actively inviting family presence and using open questions. Participants reported the advantage of asking patients directly about their ideas, concerns and expectations and were pleased to find these techniques learnt for their examinations also worked in real practice. These helped to improve patient engagement, time management and communication. Intraprofessional and interprofessional aspects was another important theme portraying GPs’ awareness of the differences in dynamics between general practice and hospital-based medicine. Team structure appeared more stable and family-like in general practice, as opposed to switching between hospital wards. Hence general practice required a different approach to team relationships with increased inter-dependence and the sharing of workload between each other, rather than the GP having sole responsibility. The personal experiences theme was not specifically sought for by the researchers but arose spontaneously from participants to form professionally useful TCs. These TCs related to how being a parent, patient and/or ME impacted on the GPs way and view of their practice. Becoming a patient reminded GPs of what it was like to be one, and teaching also provided challenges and unexpected TCs. The professional biases theme was identified via self-reflection which encouraged insights into personal and professional biases, with several relating to a complaint process due to communication issues. Cases of misdiagnoses also impacted on ways of practice such as a tendency for over-investigation. Cultural assumptions, the emotional toll of empathy, childhood trauma affecting adult patients, strategies to avoid burnout, acceptance of not being able to please everyone and the generalist role of a GP were all causes of self-reflection. Uncertainty was a major theme with several subthemes. Sources of uncertainty as a subtheme arose due to the often daunting breadth and complexity of general practice with its many cases of unknown aetiology. 
Another subtheme was participants' reactions to uncertainty, which were generally positive as they gradually learnt to accept and mentally deal with uncertainties. Sometimes, however, even with time and experience, some participants felt that the level of uncertainty remained too great. Management of uncertainty was the third subtheme identified. Confrontation with uncertainty could lead to actively seeking occult illnesses, such as always checking the ears and urine in the febrile child, and to expanding one's diagnostic capability, for example, by acquiring an ultrasound. Sometimes resolving uncertainty was not required by a patient, and this could be revelatory for some participants. Another way of managing uncertainty was realising that there may be several different answers or ways of managing the same issue. Using time to help manage uncertainty was a specific strategy utilising the longitudinal physician–patient relationship unique to general practice. This highlighted the benefits of being able to journey with patients while watching and waiting, with appropriate follow-up and safety netting, and checking that patients were reassured and satisfied with the plan throughout their journey.
The patient needs theme revealed the importance of understanding the distinction between the physicians’ and the patients’ agenda; for example, realising that ticket-to-entry consultations were not a waste of the physician’s time, but a way for the physician to earn trust and enabling patients to then reveal their deeper concerns. It was also important to understand that what the patient wanted may be different to what the GP thought the patient would or should want. Aspects that seemed minor to the GP could be highly significant to the patient. The patient adherence theme highlighted the importance of attending to patients’ adherence to treatment. Checking medication boxes, eliciting reasons for non-adherence (such as lack of transport, treatment preferences or language barriers) and being non-judgemental were crucial. Patient context was the third key theme of the patient’s role which involved recognising that deeply understanding one’s patient was impossible without knowing their context. The social layers surrounding the patient are many and at least some knowledge and engagement with these was essential for good patient care. Understanding this bigger picture reduced blaming the patient for their condition or ordering expensive and unnecessary tests. It could also prevent misdiagnosis and aided cultural awareness.
Within the consult-as-therapy theme, there were three subthemes. Explanation was the first subtheme and an important component of the consultation as GPs are required to be able to inform and reassure patients. Listening was an essential subtheme as patients found being well listened to therapeutic. Simply being silent, listening and, by this stillness, normalising situations could help to reduce anxiety, diffuse strong emotions, such as anger and fear, and help patients and physicians become aligned. Furthermore, the presence subtheme highlighted the therapeutic value that patients gained in being in the presence of a trusted professional, that the persona of the doctor held therapeutic value in and of itself. Finally, the relationship theme was considered pivotal by participants to general practice. Maintaining this relationship took precedence over almost everything else, even if at times it meant that the GP might have to bracket their own beliefs in order to gain the trust which would then gradually allow them to help the patient. Creating this partnership began with careful consideration of the opening question, with focused attention, human-to-human connection and the monitoring of ones’ own emotions.
Twenty TCs reflecting these significant and transformative moments were identified. We attempted to distill the essence of each theme (if no subthemes) or subtheme into a single TC with one exception: the management of uncertainty required three TCs to adequately capture the range of responses nested within this subtheme. Each TC acts as a heuristical device, concentrating the essence of the experiential wisdom into a simple, memorable medical proverb for ease of recollection (see ).
The identified TCs map onto the six domains of core competencies in GPEP. All TCs could be mapped onto at least one core competency, with two TCs ('The relationship is worth a thousand consults' and 'Guidelines, GPs' little helpers') mapping onto two core competencies, and two TCs ('Money makes the practice go round' and 'The whole of the practice is greater than the sum of its parts') mapping onto three.
The key findings of this study were that TCs arose from many aspects of learning about general practice, encompassing the context and culture of the specialty as well as the immediate clinical work. Notwithstanding this, however, all TCs fundamentally involved the doctor-patient relationship, being grounded in the meta-themes of the physician’s role, the patient’s role (as perceived by the doctor), and the interaction between these two people. This reflects an irreducible aspect of relationality in general practice. These results support approaches to PGME which include teaching and learning of the culture and context of general practice as equally legitimate parts of specialist training, in addition to the more specifically clinical skills such as history-taking and examination. As such, these TCs can act as pedagogical tools in curricula development. Identifying TCs in this study Few participants articulated immediately and in toto a TC, but more often described a situation where they felt transformed and/or had experienced an ‘Aha!’ moment. Previous research has shown context is essential, and that TCs are found in cognitive, affective, psychomotor, social and ethical domains. The incorporation of A/NZ’s mainstream Te Whare Tapa Whā model of health as a tool to understand context, and aid the identification of TCs by our participants, proved useful as evidenced in our experience of the focus groups, and by the relative ease with which many TCs were identified. Notably, our TCs are not specifically related to any specific medical diagnosis or condition as only very few quotations included these. Their generalisability is such that there are many potential clinical circumstances in which these TCs can be experienced. Our construction of TCs was an interpretative act looking beyond the initial thematic analysis, identifying overarching elements and characteristics which were contained within the quotations of that individual theme/sub-theme. We constructed our TCs as heuristical decision-making rules. Heuristics are ‘…a rule or guideline that is easily applied to make complex tasks more simple’. An everyday example of heuristical thinking are proverbs/whakataukī: for example, ‘a stitch in time saves nine’. Proverbs encapsulate wisdom based on analogous experiences in many, various situations, which can be applied in future situations when similar, but not identical, circumstances arise. Heuristical decision making is quick and efficient, particularly when making decisions under great uncertainty. The need for speed and efficiency while tolerating some uncertainty characterises general practice decision-making. Medical proverbs are not new, appearing in the Hippocratic Corpus as aphorisms, ‘pithily expressed precept[s] or observation[s]; a maxim.’ More modern examples also exist. Not all medical proverbs are TCs as they can lack the essential transformative element. A TC in proverbial form is a heuristical rule which is easy to learn and remember, and which captures the essential transformative element that leads to a ‘paradigmatic shift’. Alliteration was used to aid recollection and memorisation. What this study adds to the understanding of TCs Neve discussed several TCs (primarily uncertainty) identified in other medical disciplines as having the theoretical potential to be relevant to PGME in general practice. Gupta and Howden identified three major TCs in undergraduate primary care learning: ‘professional identity formation’, being an ‘agentic learner’ and ‘comfort with uncertainty’. 
Vaughan et al identified 'being the good doctor', 'healthcare without a prescription', 'an ethic of care through relationship', 'negotiating the boundaries of care' and 'uncertainty and anxiety' as TCs in PGME in general practice. A further paper by Vaughan identified 'dispositional attributes' relating to self-identity, breaking bad news, collecting clinical evidence, learning from colleagues, self-reflection, being present and listening, as well as relational aspects, as possible TCs. Our study provides confirmatory evidence of all these areas as sources for TCs, and in much more detail and depth. Additionally, our study identifies other possible TCs in new domains such as practice administration, taught guidelines, communication tools, personal experiences and professional biases. Furthermore, our construction of TCs as heuristic rules, rather than as simply descriptive labels, is new and (we argue) of more practical use. Our depth of data collection and analysis also allows for a more nuanced understanding of previously identified TCs. For example, uncertainty is a concept which has been repeatedly identified as an important TC across multiple medical specialties for both undergraduate and postgraduate medicine. We found the sources of uncertainty for GPs comprise three factors: the complexity of patient presentations, the unpredictability of any working day, and the sheer diversity of what could occur (hence 'Chew the Complexity, Unpredictability, Diversity, CUD'). Reflection on these sources is required, even if they cannot be resolved. Building resilience and a degree of acceptance of uncertainty is beneficial; hence 'Embrace the uncertainty'. Uncertainty can be managed by seeking out occult cases ('Seek and you shall find'), allowing time to reveal what is going on ('Waiting and seeing, waiting and being'), or simply supporting patients who do not seek resolution but empathy ('Not knowing is knowing'). What is helpful for patients, from the GPs' perspectives, is understanding patients' needs well and applying careful attention to what is said and not said, that is, silence. Attention to these aspects can allow the consultation to flourish. Patient adherence to treatment plans also calls for nuanced approaches prioritising the patient before the illness. Learning how essential the understanding of the patient's context is in providing quality care, and realising the therapeutic potential of the consultation in and of itself, are also important TCs. Paying careful attention to how one explains situations to patients, knowing when to listen, and always being present in the consultation are crucial in providing good care. These TCs reflect defining, seminal moments of learning for participants, profoundly altering the nature and type of care they provide for patients. The preservation and maintenance of the physician–patient relationship is a key TC in understanding how general practice operates, as little can be achieved without it. Due prioritisation of this means more can be achieved in the long run, even if in the short term a GP may wait longer than they themselves would have preferred. For participants, these features clearly delineate general practice from hospital-based care. Connecting with the patient as a person is a crucial part of medicine, giving rise to TCs in various disciplines. General practice is no exception.
Relational aspects are seen in the physician–patient interaction meta-theme and in other themes/subthemes; for example, communication techniques (eg, using formal Māori-based introductions such as pepeha to enhance the relationship with Māori patients, described as the process of whakawhanaungatanga), and intraprofessional and interprofessional relationships. It emphasises that when physicians and patients meet in a general practice, it is two people being together in a specifically bounded yet intimate way. Implications and recommendations for teaching and learning Although our TCs map onto the six domains of core competencies in GPEP, TC teaching is currently unstructured and learning serendipitous: participants learnt these TCs ad hoc during clinical experiences. We recommend formally teaching TCs in the large-group sessions in GPEP year 1, followed in years 2 and 3 by solidifying trainees' own experiences of them via the current format of learning in small group discussions. We believe this will help trainees to understand these significant learning points earlier in their PGME. Currently taught techniques identified in this study as a basis for a TC (eg, specific communication tools) are known to require experience to be fully appreciated and finally transform practice, and hence are revisited at multiple points in a spiral curriculum. This would suggest that a similar spiral approach should also be taken when considering teaching based on TCs. Strategies for applying TCs to PGME in general practice have yet to become widely accepted, and thus there is a need for research in this area. Nevertheless, TC-based clinical teaching has been shown to be feasible, and by using contextualised scenarios with discussions of lived experiences, TCs have been shown to benefit student learning and understanding. Incorporating TCs into the curriculum requires a design which encourages students to self-reflect and to pay attention to the 'how' and 'why' rather than only the 'what'. Understanding the TCs of a specialty identifies its unique aspects and enables the transition to, and acquisition of, the new identity. The trainee does not learn general practice; they become a GP. Strengths and limitations Strengths of this study are its sample size and the inclusion of a mixed group of trainees, Fellows and MEs, which allowed a greater range of TCs to be identified: there was a tendency for trainees' TCs to come from recently taught techniques and for more experiential TCs to be reported by Fellows and MEs. Another strength is that both authors brought different lenses to the analysis through our personal and professional experiences; these and our work in PGME in general practice may have assisted in discovering a greater range and breadth of TCs than Vaughan et al ., who were non-medical. Our study's weaknesses include that our results largely reflected large-urban PGME in general practice and NZ European/Pākehā experiences. Specific research into rural and ethnic-group contexts for PGME in general practice, especially Māori and Pasifika contexts, is needed. The focus group format also had limitations, as later speakers could be influenced by previous speakers, leading to potential bias. Our hope was that the 5 min of individual brainstorming might mitigate this effect. Our experience suggests this probably was helpful, although the framing bias from previous speakers was still evident on occasions.
Another limitation is that this research only looked at the perspectives of physicians, and patients themselves almost certainly have important TCs to contribute to this area. Additionally, although not specifically sought, GPs’ own personal experiences did inform some professional TCs. This may be an area for future research as personal experiences are known to valuably contribute towards professional identity formation. Another limitation is that our method of wording TCs is highly culturally-dependent. It reflects our backgrounds and familiarity with Western European and NZ European/Pākehā cultural motifs, advertising slogans, Christian bible references etc; for example, ‘No patient is an island’ referenced John Donne’s 17th Meditation. ‘Guidelines, GPs’ little helpers’ came from the phrase ‘Santa’s little helpers’. Even the TC ‘CUD’ required background knowledge: not knowing the gastric processes of bovines would greatly diminish its utility. Therefore, our heuristical construction of TCs exposes itself to the limitations of all metaphorical constructs, requiring certain background knowledge and experience to make sense. In A/NZ, to extend these TCs by providing suitable whakataukī (proverbs) from Te Reo Māori would be desirable, but requires careful Māori-led scholarship. Cultural overlap may exist, for example, ‘No patient is an island’ might be appropriately replaced with ‘Nā koutou i tangi, nā tātau katoa’—‘When you cry, your tears are shed by us all’. p.68 Or ‘E hara taku toa, I te toa takitahi, he toa takitini’—‘My strength is not as an individual, but as a collective’—might replace ‘The whole of the practice is greater than the sum of the parts’. p.117 Although discussed here in terms of A/NZ, these considerations also pertain to studies in other countries. Although culturally specific, and requiring a certain knowledge base, the heuristical format of TCs proposed by this study, and the associated thematic material may well be useful to inform the PGME in general practice curriculum internationally. Further research is needed in several areas, not least of all understanding the TCs patients may experience when consulting GPs, an area yet to be explored by any specialty but of potential benefit to both patients and doctors. Another important area for future research is the role of TCs in vocational choice and whether the successful negotiation (or not) is determinative of the likelihood in completing PGME in general practice and/or remaining in that specialty.
This study has identified twenty TCs for PGME in general practice relating to many different aspects of the clinical work, context and cultural milieu of this specialty. Underpinning all of these TCs was the irreducible relationality existing within general practice, that at its fundamental core there is a meeting of two people; the doctor and the patient. We would advise consideration of these TCs when constructing or reviewing PGME programmes in general practice. While future pedagogical research is needed into the effectiveness of teaching centred on these TCs, our results suggest explicit teaching of these TCs may well strengthen and inform the curricula of PGME in general practice, and be of benefit to trainees in terms of not only clinical skills, but also role and identity formation.
|
Standard ophthalmology residency training in China: an evaluation of resident satisfaction on training program in Guangdong Province | af428bb8-40ce-4388-b477-9f93b8e3c08b | 10401789 | Ophthalmology[mh] | Under the guidance of the National Health and Family Planning Commission in mainland China, the establishment of the national standardized training for resident doctors (STRD) has served as a kind of postgraduate education to improve the quality of physician training since 2014 . Even though the diverse subspecialties within the program ensure all trainees can become safe and independent junior practitioners with sophisticated job performance, standardized residency training is relatively new in China, especially in ophthalmologists’ training . The introduction of STRD in Ophthalmology is crucial because it prepares graduates for the rapidly evolving field dominated by technological advances, procedural diversification, and practical skills. Theoretically, ophthalmology STRD in China could start immediately after the 5-year medical school training, which consisted of basic and clinical science courses in the early years and followed by clerkship and internship training in the last 2 years. Students may start their residency training with the possibility of pursuing a Master’s (usually 3 years) or Doctor of Philosophy degree (usually 3 + 3 years or more) in clinical medicine. Specifically, some graduates could apply for master’s degrees and STRD concurrently, which is named the professional master. Usually, the STRD for ophthalmologists in China has a duration of three years for all trainees regardless of their degree, while it varies from 2 to 7 years internationally. Ophthalmology residents under STRD in China were given annual assessments on clinical theory and practice. However, their satisfaction with the program was left unmeasured. To fill the void of such measurement, this research aimed to provide a comprehensive evaluation of the ophthalmology residency programs in Guangdong Province through anonymous and independent responses from ophthalmology residents. A cross-sectional survey was conducted from September 2019 to December 2019 in all the training bases of 38 hospitals in Guangdong Province. The ophthalmology residents, including postgraduate years (PGY) 1, 2, and 3, were invited to participate in this survey. Ethical approval was taken from all the Ethics Committee of 38 ophthalmology bases (shown in the ethics approval and consent to participate section of the Declarations part in detail) of Guangdong province before September 2019. This study was conducted according to the principles of the Declaration of Helsinki. Questionnaires were sent via mobile application (Wechat®) to all resident ophthalmologists, with a cover letter explaining the purpose of the questionnaire. The responsible person in the administrative office at each hospital was contacted to ensure that all ophthalmology residents had received the questionnaires. The questionnaire was self-administered, requiring approximately 20 min to complete. Respondents were instructed to complete the questionnaire independently. Data were returned to the coordinating center at Zhongshan Ophthalmic Center and were extracted for further analysis. Potential participants were told that returning the form was indicative of informed consent, and anonymity could be maintained without the need for a signature. Questionnaire The questionnaire was designed based on published literature regarding Ophthalmology resident training . 
This questionnaire in Chinese was designed to obtain information from resident trainees on the following 5 categories: (1) Demographic information (gender, age, educational level, and years of training); (2) Working environment; (3) Clinical exposure, supervision, and hands-on training opportunities; (4) Involvement in academic and research activities; (5) Satisfaction. Drafted questionnaires were administered to 20 ophthalmology residents in a pilot survey before initiation. After modification, the final version with 48 items in 5 parts was confirmed (English and Chinese version of the questionnaire was available as supplementary material online). Statistical analyses All responses were verified and invalid questionnaires were excluded. Invalid questionnaires were defined as (1) unclear and incomplete responses, (2) more than one response to each one-answer question, and (3) duplicate questionnaires. Statistical analyses were performed using SPSS software version 22.0 (SPSS, Inc., Chicago, IL). Multiple logistic regression analysis was used to evaluate the factors (demographic information, working environment, clinical exposure, supervision and hands-on training opportunities, and involvement in academic and research activities) impacting the overall satisfaction. P < 0.05 was considered statistically significant. During the period of this evaluation, there were a total of 635 post-graduate ophthalmology residents in Guangdong Province, China. A total of 471 valid questionnaires were returned from 38 training bases of Guangdong Province. The response rate was 74.17%. The demographic characteristics of the involved residents are shown in Table . In this survey, the STRD program included both theoretical and operation teaching. General satisfaction The overall satisfaction of residents with their residency training is shown in Table . 60.3% of the respondents reported overall satisfaction with their training. 60.4% of the PGY-1 residents, 55.3% of the PGY-2 residents and 66.2% of the PGY-3 residents felt satisfied with the training (Fig. ).
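The exclusion rules for invalid questionnaires described in the Methods above are mechanical enough to be automated. The sketch below illustrates such a cleaning step under stated assumptions: it assumes a tabular export with one row per returned questionnaire, a respondent_id column for duplicate detection, and multiple answers to a one-answer item encoded as a comma-separated string. These are illustrative assumptions only; the original analysis was performed in SPSS, not Python.

```python
import pandas as pd

def drop_invalid_questionnaires(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    """Apply the three stated exclusion rules to a table of returned questionnaires.

    Assumed (not from the paper): one row per questionnaire, a 'respondent_id'
    column, and multi-selections encoded as comma-separated strings.
    """
    # Rule 1: unclear or incomplete responses -> exclude rows with missing items.
    complete = df[item_cols].notna().all(axis=1)

    # Rule 2: more than one response to a one-answer question.
    multi_answer = df[item_cols].astype(str).apply(
        lambda col: col.str.contains(",", na=False)
    ).any(axis=1)

    # Rule 3: duplicate questionnaires from the same respondent.
    duplicate = df.duplicated(subset="respondent_id", keep="first")

    return df[complete & ~multi_answer & ~duplicate].copy()

# Example for a hypothetical 48-item questionnaire:
# cleaned = drop_invalid_questionnaires(raw, item_cols=[f"q{i}" for i in range(1, 49)])
```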
In addition, 82.8% of the residents acknowledged this training was helpful for future clinical work. The potential factors affecting general satisfaction are shown in Table . It was shown that PGY-3 residents had the highest satisfaction (odds ratio [OR] of PGY-1 = 1.99 (p = 0.023), OR PGY-2 = 2.66 (p = 0.001)). Married residents were more satisfied with this training than single residents (OR = 0.43, p = 0.031). In addition, compared to the professional master students who received a monthly aid of about 1000 Chinese yuan, the practitioners who received more than 4000 Chinese yuan monthly reported higher satisfaction scores. Furthermore, a satisfaction score above 90 on the doctor-patient relationship had a positive effect on overall STRD satisfaction. Residents with no family members in the medical field tended to have lower overall satisfaction (OR = 2.60, p = 0.019). The proportion of working time spent on writing medical documents rather than on clinical work also had a statistically significant effect on overall satisfaction. Specifically, residents spending more than 80% of their working time on writing medical documents had the lowest overall satisfaction with the STRD program. For the frequency of attending academic meetings per year, only the group 'zero' was found to be different from the reference group '>5' in affecting overall STRD satisfaction. The P value of the parallel line test was 0.878, and McFadden R2 = 0.223. The overall satisfaction in the ordinal regression analysis was divided into three levels: satisfied, neutral, and dissatisfied. In addition, factors such as age, gender, highest educational degree, time used for work per day, the match between income and initial investment, time for clinical work per week, the largest component of one's clinical work, the number of times attending continuing medical education lectures in the recent 6 months, time for research work per week, and the kind of research work did not affect the overall satisfaction. Quality of teaching Most of the respondents were satisfied with the quality of teaching in different settings (Table ). The satisfaction rates for operative training in the cornea and orbit departments were 55.42% and 57.53%, respectively (Table ). However, only 52.2% of PGY-2 residents reported satisfaction with operative teaching (Fig. A). Figure B displays the distribution of residents who were satisfied with the various components of operative teaching, such as variety, complexity, and exposure to different subspecialties. For residents, various channels could be employed to gain additional clinical knowledge and training, such as reading journals/books/compact disks (36.90%), contacting residency mentors for advice (17.98%), attending seminars and meetings (12.46%), communicating with other residents (12.46%), and participating in continuing education courses (11.76%). Contrary to the wide availability of different learning methods, outstanding academic performance was achieved by only a few residents. In detail, only 17.62% and 22.29% of the respondents had published academic articles in Chinese and international medical journals, respectively. Regarding the reasons for publishing articles or attending conferences, 40.76% of residents stated that it helped them obtain medical degrees. 23.14% of them used it as a platform to solve clinical problems, while 17.62% found it to be an interesting pursuit. 10.62% cited career promotion pressure as their motivation, and 7.86% recommended it as a means to enhance their academic status.
Furthermore, with regards to the influence of participating in academic activities during clinical work, 63.48% of respondents believed it to be beneficial in enhancing clinical thinking, while 29.72% felt that it only leads to fatigue and is a waste of time. The remaining 6.79% reported no impact on their work. Operative experience The satisfaction with the operative experience was slightly lower than that of the teaching experience (Table ). Most of the residents were satisfied with the cases' variety (67.72%), complexity (68.58%), and volume (69.64%). Only 44% of the PGY-2 residents were satisfied with their corneal operative experience. The percentages of the residents who had confidence in acquiring examination skills and analyzing examination reports in the first year and after the training are shown in Fig. . In the last year, significant improvements were observed in both the examination skills and the ability to analyze examination reports compared with those in the first year. Additionally, all residents reported that they could handle subconjunctival injection, retrobulbar injection, anterior chamber paracentesis, and intravitreal injection confidently when the residency training was completed. Meanwhile, 81.29%, 79.35%, 66.45%, and 9.68% of the residents could complete chalazion excision, sutures of the eyelid injury, corneoscleral suture, and phacoemulsification, respectively. Only 6.45% of them could not complete the operations above. Career preferences The career preference of the involved residents is shown in Table . 67.1% of residents chose cataract as their first choice, while only 1.1% chose genetic disease as the first choice. The main factor influencing the choice of specialized subject was interest (selected by 84.50%), while occupational accomplishment (selected by 46.92%), salary (selected by 34.82%) and flexibility (selected by 25.27%) were also important. Some residents also chose low technical difficulty (selected by 6.16%), and others (selected by 4.67%) as the reasons of their choices.
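The regression quantities reported in the results above (per-factor odds ratios for a three-level satisfaction outcome, a parallel line test P value, and McFadden R2) are characteristic of an ordinal, proportional-odds logistic regression. The sketch below shows how such a model and the McFadden pseudo-R2 could be reproduced outside SPSS; the input file name, the variable names, and their coding are assumptions for illustration, and the parallel line (proportional odds) test itself is not computed in this sketch.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical analysis data set; file name, column names and coding are
# assumptions for this sketch (the original analysis was run in SPSS).
df = pd.read_csv("residents.csv")
df["satisfaction"] = pd.Categorical(
    df["satisfaction"],
    categories=["dissatisfied", "neutral", "satisfied"],
    ordered=True,
)
predictors = ["pgy_year", "married", "monthly_income", "doctor_patient_score"]

# Proportional-odds (ordinal logistic) model of the three-level outcome.
model = OrderedModel(df["satisfaction"], df[predictors], distr="logit")
result = model.fit(method="bfgs", disp=False)

# Odds ratios for the predictors; the remaining parameters are the two
# category thresholds of the ordinal model.
print(np.exp(result.params[: len(predictors)]))

# McFadden pseudo-R^2 = 1 - LL(model) / LL(null); the null log-likelihood is
# computed from the observed category proportions.
counts = df["satisfaction"].value_counts()
ll_null = float((counts * np.log(counts / counts.sum())).sum())
print("McFadden R2:", 1 - result.llf / ll_null)
```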
In China, continuous improvement of the professional quality of practice training is being actively pursued. Therefore, paying attention to trainees' feedback could aid in the sustainable innovation of STRD. To bridge this gap, we conducted a survey among all ophthalmology training bases in Guangdong Province, consisting of 38 tertiary-class hospitals. To our knowledge, this is the largest investigation of its kind (471 responses), and its high response rate of 74.17% ensures high validity for our study . The overall satisfaction and all the related influencing factors were analyzed in this survey. The involved residents included undergraduate (49.3%), graduate (41%) and doctoral (10%) students, and they all completed at least 5 years of a medical school curriculum. This is different from other countries such as the US, in which students must complete 4 years of a college curriculum, followed by medical school, and finally pass the United States Medical Licensing Examination (USMLE) examinations before being qualified for postgraduate ophthalmology training . Interestingly, only 5.7% of the involved residents were older than 30 years of age; the residents were thus younger than those in other countries. There were fewer married residents in our survey (14.4%) compared to surveys in other countries. More surprisingly, the female/male ratio in this survey was 2.26 (326/145), while it was less than 1.0 internationally . In fact, the physician workforce in China has been female predominant since 2005 . The potential explanation for this phenomenon was that there was no significant ongoing female gender bias in China . Also, in China, ophthalmologists are paid less for clinical work, receive smaller research grants, have fewer professional ties with the medical industry, have fewer publications in peer-reviewed ophthalmology journals, and hold fewer journal editors' chairs. Consequently, male medical students had less tendency to become ophthalmologists . In summary, 60.3% of the Ophthalmology residents in Guangdong Province were satisfied with the STRD program, while 31.63% had a neutral attitude. 82.8% of the respondents thought this program was helpful for future clinical work. Notably, PGY-3 trainees were more satisfied with STRD than PGY-1 trainees (Table ), which implies that the STRD program benefits the trainees as they continue their medical careers. In our survey, the percentage of overall satisfaction was higher in residents with a doctoral degree (73.91%) than in those with a bachelor's (57.33%) or master's (60.62%) degree. It is possible that trainees with doctoral degrees had more opportunities to perform independent operations and thus reported higher satisfaction scores.
Certainly, new innovations are still needed not only to improve residents' satisfaction but also to improve their clinical competency. We found several factors that impacted the overall satisfaction (Table ). Improving income, proper working time arrangements, a harmonious doctor-patient relationship, and chances to attend academic meetings would boost satisfaction . However, compared with ophthalmology residency training programs in the U.S. (93.6% responded that they were highly satisfied with their programs), the overall satisfaction is relatively low in our survey . Insufficient opportunities, the limited availability of training positions, the lack of adequate teaching by attending physicians, and difficulty in securing a job after training are some possible explanations for such low satisfaction, and deserve to be further emphasized in China . In addition, less than 70% of the residents were satisfied with their operation teaching and experience. By the time they finished the STRD, only 9.68% of the residents thought they could perform phacoemulsification cataract surgery independently, which is the basic surgery of ophthalmology; this is far less than the percentages reported in international surveys . A possible reason could be that, in comparison to STRD in high-income countries, the ongoing Ophthalmology STRD in Guangdong Province focuses more on the principles of diagnosing and treating diseases and less on independent surgical procedures. The strained doctor-patient relationship is one of the reasons for this situation in China . The lack of surgical procedures in the STRD programs does not mean that they are ignored completely, especially in Guangdong Province. Under the guidance of the superior clinical tutors, every resident has the opportunity to see patients in outpatient clinics. As for teaching surgical procedures, most residents can learn through assisting operations, while few of them could perform the operations independently. In addition, wet labs and/or surgical simulators have been used for surgical training in China, especially at Zhongshan Ophthalmic Center, and have been proven to be effective in training ophthalmology operations . Surgical skills training in ophthalmology is still challenging globally . Teaching in an operating room is complicated for both the teachers and the residents, and it is also likely to expose patients to extra risks. The integration of virtual reality surgical simulation and wet labs for clinical judgment and technical skill assessment could be encouraged, especially in China . The most puzzling result from the survey was that satisfaction with the cornea and orbit department training experience was the lowest. The percentages of the involved residents who viewed cornea and orbit as their aspired areas were 18.9% and 4.5%, respectively, which were much lower than the results from other countries . In fact, it has been shown that, if residents are well trained, the clinical outcomes of keratoplasty performed by residents are similar to those performed by experienced surgeons . A potential reason for the low satisfaction is that not all training bases allow residents to perform keratoplasty procedures or orbital surgeries, owing to real-world constraints such as patient demands, the training plans of the hospital or subspecialty, and social and economic considerations.
Consequently, it is time for the training director to emphasize the improvement of supervision during the cornea and orbit subspeciality training, which could lead to possible clinical growth among residents. When asked about career preferences in this survey, the most common were cataract (67.1%), refractive surgery (42.3%), vitreoretinal (36.5%), and Optometry (28.7%), which was different from the results of oculoplastic (31.4%), vitreoretinal (25.1%), glaucoma (24.6%) and cornea (24.0%) in the UK . The potential reasons for this difference are not clear but are worth exploring. For Ophthalmology healthcare in China, appropriate policy guidance should be in place to ensure all the ophthalmology subspecialties have enough talent to promote professional development, and to avoid a shortage of ophthalmologists, especially in the areas of cornea (18.9%), ocular trauma (10.4%), pediatric strabismus (9.8%), uveitis (5.9%), orbit (4.5%) and genetic diseases (1.1%), where the percentage of career preference/aspiration was below 20% . In conclusion, the STRD program in Guangdong Province has achieved comparable satisfaction and has been well-received by trainees. Married residents tend to be more satisfied than single residents. The current curricula have significantly enhanced the clinical experiences and confidence of residents, thereby improving their competency. However, there is still room for improvement, particularly in the area of operation training. Our experience with ophthalmology STRD in Guangdong Province serves as a valuable reference for the assessment and enhancement of STRD programs throughout China. Below is the link to the electronic supplementary material. Supplementary Material 1 |
Reduction of the long-term use of proton pump inhibitors by a patient-oriented electronic decision support tool (arriba-PPI): study protocol for a randomized controlled trial | d7e4af39-ff9f-4010-88a0-18c907fecee8 | 6868794 | Health Communication[mh] | Prescriptions of proton pump inhibitors (PPIs) have been increasing considerably in recent years in many countries. According to the German drug prescription report, a total of 3.7 billion defined daily doses (DDD) of PPIs were prescribed in Germany in 2015. Thus, PPIs are one of the most commonly prescribed drugs . Even though the number of PPI prescriptions slightly decreased from 2016 to 2017, the number of prescribed PPIs still remains high. The halt of the rising trend might be due to the recent discussion around possible side effects caused by PPIs when used for long periods . Positive evidence exists regarding the effectiveness of PPIs in the treatment of gastrointestinal ulcers , eradication therapy , reflux disease , and gastric pre-malignant lesions . However, PPIs are also increasingly used as a means of protecting the stomach in polypharmacy patients (i.e. the current intake of several drugs ) and in combination with non-steroidal antirheumatic drugs or platelet aggregation inhibitors. Furthermore, they are used in patients suffering from nun-ulcer dyspepsia and for stress ulcer prophylaxis during hospital stay . PPIs should be used for short periods and only few indications justify their long-term use. Even though long-term use without indication is considered inappropriate , PPIs are frequently overused as lifestyle drugs . Long-term use of PPIs poses potential risks , such as interactions with other drugs or side-effects . Once prescribed, withdrawing PPIs seems to be difficult due to a potential rebound effect reactivating dyspeptic complaints . Apart from the potential risks, the frequent use of PPIs contributes to substantial costs for the healthcare system . Given the frequent use and overuse of PPIs, withdrawing PPIs is important and supporting strategies for GPs are needed . Deprescribing is “the process of withdrawal of an inappropriate medication, supervised by a health care professional [ …]” . A recent Cochrane review identified the benefits and harms of deprescribing for chronic PPI use. Six studies were included; five of them deprescribed PPIs on-demand, whereas one abruptly discontinued PPIs. Overall, a significant reduction in the number of PPIs taken could be achieved. The deprescribing of PPIs led to side effects such as significantly more gastrointestinal complaints . However, a recently developed guideline to support deprescribing of PPIs concluded that PPIs can be withdrawn without causing any major clinical harm . Still, there are not enough data on the long-term benefits or harms of PPI withdrawal and the cost/resource use of the interventions is not known. Furthermore, the patient was not involved in the deprescribing process . Involving the patient into the deprescribing process is important and it has been shown that deprescribing interventions are most effective when they involve the patient . Furthermore, decision aids can support patients in their treatment decisions, enhance informed, value-based choices, and improve patient–practitioner communication compared to usual care . The electronic decision support tool arriba is widely used in Germany and has been originally developed to support shared decision-making (SDM) processes in GP practices in the prevention of cardiovascular diseases. 
Arriba was developed by the Institutes of General Practice of several German universities and is now managed by the independent, non-profit arriba cooperative society. It is based on the principles of evidence-based and individual, patient-oriented medicine . Today, arriba consists of several modules that have been scientifically evaluated . Given the long-term overuse of PPIs and their potential risks, there is a strong need for effective interventions optimizing the long-term use of PPIs. Therefore, an additional module for the arriba tool has been developed. The arriba-PPI tool is targeted at the primary care setting to support GPs in identifying and reducing inappropriate long-term prescribing of PPIs in an SDM process with their patients. It presents options and their evidence base in an easy-to-understand way and offers practical behavioral advice and individualized messages for patients. In line with the MRC framework for complex interventions , we evaluated the arriba-PPI tool in a feasibility study before the start of this trial .
We followed the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist in designing the study protocol (see Additional file ). As recommended by the MRC framework , our project incorporates three of the four elements: development, piloting, and evaluation. Upon positive evaluation, the arriba-PPI tool will be implemented (the fourth element of the MRC framework). This study protocol mainly focuses on the evaluation.

Objectives of the study

Our principal hypothesis is that a patient-oriented strategy of medication reduction using the arriba-PPI tool in primary care practices reduces PPI prescriptions by at least 15% in comparison to conventional consultations over a period of six months (see Table ). Our secondary objective is to evaluate the effectiveness of the implementation of the arriba-PPI tool in our extension study (6–12 months). Furthermore, we aim to describe the GPs’ and the patients’ experiences of using the arriba-PPI tool within a primary care consultation in two sub-studies.

Trial design and setting

The arriba-PPI trial is a national multicenter cluster-randomized controlled trial with an observation period of one year, aiming to reduce the long-term prescription of PPIs. It will be conducted in Germany in the mid- and north Hessia and Westphalia-Lippe regions. Three study centers will be involved: the Institute of General Practice of Marburg University; the Institute of General Practice of Heinrich-Heine-University Düsseldorf; and the Institute of General Practice and Family Medicine of Witten/Herdecke University. The arriba-PPI trial will be located in the primary care setting. Details on the study procedure are outlined in Fig. .

Ethics approval

The study has been approved by the three local ethics committees (see “Declarations”).

Recruitment of GP practices and patients

Before the recruitment of GP practices and patients, recruitment regions will be selected. In these prespecified regions, all GPs will be informed about the study and invited to participate. All GP practices will receive a written invitation (letter and/or fax) and will be followed up by phone calls. The study design will also be presented at several network meetings for GPs, where GPs will be asked to participate. Additionally, the health insurance BARMER will identify GP practices with a higher prescription rate of PPIs compared to the median prescription rate of all practices in the BARMER database. BARMER will contact these practices twice via mail and invite them to participate. Participating GP practices will be visited by a research assistant from the study center, who will provide detailed information about the study before patient recruitment. GP practices will then invite all consecutive patients with a PPI prescription consulting the practice to participate in the study and will inform them about the study orally and in writing. After completing recruitment, the practice will be randomized.

Inclusion and exclusion criteria

GP practices will need to meet the following inclusion criteria to participate: German as the predominant language in patient communication; the ability to collect prescription data in an electronic health record (EHR) as a technical prerequisite; and the willingness to provide PPI prescription data collected in the EHR. Practices will be excluded if they only treat narrowly defined patient groups or provide selected services only (e.g. alternative or complementary treatments), do not regularly prescribe PPIs, or do not use an EHR.
Patients with a regular prescription of PPIs for ≥ 6 months will be included. We defined a regular prescription as taking at least one PPI pill daily or regularly taking several PPI pills per week (such as four pills per week/every other day). Furthermore, patients will have to be aged ≥ 18 years and give informed consent according to the Declaration of Helsinki. Patients will be excluded from the study if they do not want to participate, cannot provide informed consent, or are not able to communicate in German. Only patients who have access to the practice will be included, since the arriba-PPI tool is only available on computers. Lastly, patients are excluded if their PPIs are prescribed for limited time periods or only as required.

Randomization

The GP practice is regarded as the unit of randomization. Each participating GP practice with its recruited patients will be randomized to either having access to the arriba-PPI tool (intervention) or not, performing care as usual (control). We decided to apply cluster randomization in order to avoid contamination that could arise if a GP used the tool for some patients but not for others, as we expect that a learning effect will take place. After completion of patient recruitment, the GP practice will notify the corresponding study center, providing the number of patients recruited. The practice will then be randomized using computerized sequence generation with a simple randomization scheme generated by the random package of the program R . Randomization will be stratified by study center. Randomization lists will be kept closed. To assure concealment of allocation, no patients can be included once recruitment is completed and randomization has been performed.
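To make the allocation step concrete, the following minimal R sketch illustrates 1:1 simple randomization of GP practices stratified by study center. It is an illustration only: the practice identifiers and seed are hypothetical, and base R's sample() stands in for the random package named above; in the trial, the generated lists remain concealed from recruiting staff.

```r
# Minimal sketch of stratified simple randomization of GP practices
# (illustration only; practice IDs and the seed are hypothetical, and
# base R's sample() stands in for the 'random' package named in the protocol).
set.seed(2018)

practices <- data.frame(
  id     = sprintf("GP%03d", 1:12),
  centre = rep(c("Marburg", "Duesseldorf", "Witten/Herdecke"), each = 4)
)

practices$arm <- NA_character_
for (ctr in unique(practices$centre)) {
  idx <- which(practices$centre == ctr)                       # practices in this stratum
  practices$arm[idx] <- sample(rep(c("arriba-PPI", "usual care"),
                                   length.out = length(idx))) # 1:1 allocation within centre
}

table(practices$centre, practices$arm)                        # balanced within each centre
```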
Blinding

Due to the nature of the intervention, neither GPs nor patients can be blinded. For practical reasons, study personnel cannot be blinded either. However, all analyses will be conducted by a blinded statistician.

Intervention

The intervention consists of the arriba-PPI tool applied during a regular or extra patient contact in the GP practice. Before the intervention, study personnel will visit the intervention group practices to provide training for GPs and practice nurses that covers the use of PPIs in general, SDM processes, the withdrawal of drugs, and how to use the arriba-PPI tool. GPs are expected to use the arriba-PPI tool with their participating study patients for the following six months and, subsequently, the arriba-PPI tool shall be used for all patients with a PPI prescription for another six months. The application of the arriba-PPI tool requires the installation of the arriba software on one or several computers of the GP practice to enter relevant patient data including name, gender, PPI substance, dose, and indication. Once patient data are entered, a choice of sections is available, represented by the following buttons: traffic light; weighing scale; procedure; information; and print. The decision aid is displayed as a traffic light system to clarify whether stopping PPIs is recommended or not. Green indicates a clear recommendation for withdrawal, yellow indicates that withdrawal is usually recommended, and red indicates that withdrawal is usually not recommended. The weighing scale provides arguments for and against withdrawal (such as long-term harm, short-term complaints, social constraints). GPs are expected to discuss with their patient the pros and cons of taking PPIs, taking the patient’s preferences into consideration. Depending on the decision made by the patient and GP, the software provides suggestions for next steps, in particular the measures to be taken when complaints arise during withdrawal. Finally, the patient will receive an individualized printout covering information on long-term effects of the drug, a withdrawal plan with dosing steps, follow-up appointments, and so on.

Control

GP practices participating in the control group of the study will provide care as usual for 12 months. GP practices will not make any extra appointments with their patients for this study.

Measurements

The primary endpoint is the cumulated DDDs of PPIs per study patient at six months (T1). Secondary endpoints are the proportion of PPI patients in the practice during the six months after allocation (T1), the cumulated DDDs of PPIs per study patient during the 12 months after allocation (T2), the proportion of PPI patients in a practice during the time span of 6–12 months after allocation (T2), and the average accumulated total DDDs of PPIs in a practice for all patients during the time span of 6–12 months after allocation (T2). For outcome measurement, information on PPI prescriptions (substance, dose, package size) per patient will be recorded from the practice software for a time span of six months before T0 to the 12-month follow-up. Furthermore, the number of all patients taking PPIs per practice and the number of patients per practice will be assessed for the time span of T0 to 12 months after allocation (Fig. ). To evaluate the real utilization of the arriba-PPI tool and to control for confounding, GPs from the intervention group will fill out a case report form for each patient after a consultation with the arriba-PPI tool, which provides information on the original indication, the result of the consultation, and medication changes. At T1, GPs will be asked whether there has been a change in the PPI medication for each individual study patient and, if so, why. Additionally, after T1 the study centers will call all patients for a short structured phone interview based on mainly closed questions to gain information about the current PPI medications and other medications for gastric problems they are taking, in order to monitor medication shifts, e.g. into self-medication (over-the-counter medication). In the intervention group, patients will also be asked whether the arriba-PPI tool was used during consultations. Finally, demographic data from all participating GPs will be collected before randomization using a written questionnaire.
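To make the primary endpoint concrete, the sketch below cumulates DDDs from recorded prescriptions (substance, dose, package size) for a single hypothetical patient. The prescription rows are invented for illustration, and the DDD reference values shown (20 mg for omeprazole, 40 mg for pantoprazole) should be checked against the current WHO ATC/DDD index.

```r
# Illustrative computation of the primary endpoint: cumulated DDDs of PPIs
# per study patient over the six months after allocation (T1).
# The example prescriptions and the DDD reference values are assumptions;
# the actual analysis would use the WHO ATC/DDD index and the EHR exports.

who_ddd_mg <- c(omeprazole = 20, pantoprazole = 40)   # illustrative DDD reference values (mg)

rx <- data.frame(                                      # hypothetical prescriptions for one patient
  substance   = c("pantoprazole", "pantoprazole", "omeprazole"),
  strength_mg = c(40, 40, 20),                         # dose per pill
  n_pills     = c(100, 100, 50)                        # package size dispensed
)

rx$ddd <- rx$strength_mg * rx$n_pills / who_ddd_mg[rx$substance]
sum(rx$ddd)                                            # cumulated DDDs at T1 for this patient: 250
```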
Monitoring

Personnel not involved in this study will perform monitoring according to a pre-specified manual. In short, the monitors will check whether the informed consent forms from all participating patients and practices are correctly filled out and signed. For a randomly chosen 15% sample, the SDM process with the application of the arriba-PPI tool will be monitored. At six months (T1), we will check for 15% of all patients whether they fulfilled the inclusion criteria before randomization. Furthermore, 15% of all data entered into SPSS will be checked by a second person not involved in the study.

Sample size calculation

We based our sample size calculation on our primary endpoint. According to data from the statutory health insurance AOK Hessen, the average DDD of all PPI prescriptions per GP practice (averaged across all GP practices) amounts to 8244.47 per quarter with a standard deviation of 7850.89 (coefficient of variation 0.95). This value is based on an average number of patients with PPI prescriptions of 56.20 per GP practice. Therefore, an average of 146.7 DDD of PPIs per patient is assumed (8244.47/56.2). Taking into account a coefficient of variation of 0.95, the standard deviation is 139.7 DDD of PPIs at the patient level. We consider a reduction of 15% as relevant, which corresponds to a difference of 22 DDD (15% of 146.7) between the control and intervention groups at T1. According to the sample size calculator for cluster-randomized trials of the University of Aberdeen’s Health Services Research Unit , we will need 204 GP practices with 15 patients each to detect such a difference with an intraclass correlation coefficient of 0.1 , a significance level of 0.05, and a power of 80%. The primary outcome relates to patient-level data; thus, we adapted our calculation to the cluster structure (patients nested in GP practices). The number of practices refers to the number that needs to be randomized. To cover for practices dropping out after randomization, we increased the number of practices to be recruited to 210. In addition, patients might leave the study at their own request after inclusion but before receiving the actual intervention. We decided to allow a buffer for non-predictable dropouts. Therefore, each study practice is required to recruit at least 15 patients but not more than 25 patients, as differences in cluster size can have a negative influence on intracluster variability. Practices will include patients in the study in the chronological order in which the patients signaled their willingness to participate. Setting this range increases the probability that the number of patients per practice varies only minimally. If 15 potential study patients cannot be enrolled within the timeframe for recruitment, the study practice and its recruited study patients will still be included in the analysis.
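The figure of 204 practices can be approximately reproduced with the standard design-effect formula for cluster-randomized trials, as in the following R sketch. This is only a back-of-the-envelope check using the rounded values above (SD 139.7, difference 22 DDD); the protocol itself relied on the Aberdeen Health Services Research Unit calculator, so minor rounding differences are possible.

```r
# Back-of-the-envelope check of the cluster-RCT sample size using the
# design-effect approach (the protocol used the Aberdeen HSRU calculator;
# minor rounding differences are possible).

sd_ddd <- 139.7   # SD of DDD per patient (protocol's rounded figure)
delta  <- 22      # relevant difference: 15% of 146.7 DDD per patient
alpha  <- 0.05
power  <- 0.80
icc    <- 0.10    # assumed intraclass correlation coefficient
m      <- 15      # patients recruited per practice (cluster size)

z <- qnorm(1 - alpha / 2) + qnorm(power)
n_unadjusted <- 2 * z^2 * (sd_ddd / delta)^2   # patients per arm, ignoring clustering
deff         <- 1 + (m - 1) * icc              # design effect = 2.4
n_adjusted   <- n_unadjusted * deff            # patients per arm, allowing for clustering
practices    <- 2 * ceiling(n_adjusted / m)    # total practices to randomize

practices                                      # 204, matching the protocol
```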
Data management and statistical analysis

Data will be entered into SPSS at each study center and transferred to the blinded trial statistician. We will use intention-to-treat and per-protocol analyses. The primary outcome will be analyzed by multilevel analyses with the statistical program R, following intention-to-treat principles . These analyses take into account the clustering of patients within practices and allow for different modelling of predictors, e.g. group affiliation as a fixed and/or random effect. Furthermore, these analyses allow adjustment for variables which show differences between the intervention and control groups despite randomization and which are considered to be of prognostic importance. In case of missing variables, suitable imputation methods will be used . We will perform sensitivity analyses (worst case, best case, complete case) in order to check the influence of missing data on the results. The evaluation of the secondary outcomes will also be done by multilevel analyses, adopting covariates at the patient and/or cluster level. Elaborated evaluations in multivariate procedures allow a more detailed analysis of prescribing behavior. All statistical tests will be two-tailed and an alpha of 5% will be used throughout. Besides statistical significance, effect sizes will also be evaluated .
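As a minimal sketch of the primary intention-to-treat analysis described above, the following R code fits a multilevel model of the cumulated DDDs at T1 with a random intercept for GP practice. The data frame and variable names are hypothetical, and the protocol does not prescribe a particular R package; lme4 is used here purely for illustration.

```r
# Minimal sketch of the primary intention-to-treat analysis: a multilevel
# (mixed) model of cumulated DDD per patient at T1, with patients nested
# in GP practices. 'trial_data' and all variable names are hypothetical.
library(lme4)

# ddd_t1      : cumulated DDDs of PPIs per patient in the 6 months after allocation
# arm         : "arriba-PPI" vs "usual care" (group affiliation, fixed effect)
# centre      : study center (stratification variable)
# practice_id : GP practice (cluster), modelled as a random intercept
fit <- lmer(ddd_t1 ~ arm + centre + (1 | practice_id), data = trial_data)

summary(fit)                   # fixed-effect estimate for 'arm' = adjusted group difference
confint(fit, method = "Wald")  # Wald confidence intervals for the fixed effects
```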
Qualitative sub-study

In order to explore experiences with the use of the arriba-PPI tool, the adoption of its advice, and the SDM process, we will conduct in-depth interviews with participating GPs and patients. We will invite randomly selected GPs and patients to separate focus group meetings or individual interviews. Interviews will be conducted using a semi-structured interview guide, recorded, anonymized, and transcribed verbatim. The qualitative material will be analyzed by thematic qualitative text analysis in multidisciplinary groups.
In this study, we conduct a large multicenter randomized controlled trial to analyze the effectiveness and the implementation of the arriba-PPI intervention, with the aim of reducing the long-term use of PPIs through a patient-oriented SDM process. Because of the pragmatic trial design, we cannot assure blinding of staff and participants. However, the trial statistician remains blinded during data analysis. Even though withdrawal of inappropriate drugs is recognized as important, it remains challenging in practice. Studies show that there are a variety of patient-related and prescriber-related factors hindering deprescribing in general. Especially in polypharmacy patients, changes in medication can affect other drugs that the patient is currently taking . These barriers might also be encountered in our trial. Therefore, in addition to the effectiveness of the arriba-PPI tool, we will explore both the patients’ and the GPs’ perspectives on deprescribing PPIs with the arriba-PPI tool. These qualitative evaluations will provide a better understanding of the effects of the implementation and shed light on optimizations needed for future implementation. In this trial we will include GPs who represent the target population working in usual German GP practices. However, GPs who consent to participate in this trial possibly have a higher affinity for using an electronic decision support tool than GPs not participating in our study. Furthermore, they might have a different attitude towards PPI use and be more motivated to deprescribe PPIs. Patients in this trial generally represent the patients that GPs encounter in everyday practice. However, patients who consent to participate in the study might be more motivated to make changes than their peers. The trial takes place in Germany, limiting the generalizability of the results to other healthcare settings. Furthermore, we did not plan a full health economic evaluation. Despite these limitations, this study will provide valuable insights into the effectiveness of deprescribing PPIs supported by the arriba-PPI tool and its impact on clinical practice. This study addresses the inappropriate use of PPIs, a drug class that is widely overused in many countries . We expect the recently developed electronic decision support tool arriba-PPI to encourage GPs to broach the issue with their patients. The fact that the arriba-PPI tool is part of the arriba software package will make implementation easier.
This manuscript presents the 26 June 2019 version of the arriba-PPI protocol. Recruitment started in December 2018 and is expected to be completed by 15 July 2019.
Additional file 1. SPIRIT Checklist. Additional file 2. Patient consent form in German/not translated.
|
Primary care and mental health providers’ perceptions of implementation of pharmacogenetics testing for depression prescribing

Pharmacogenetic testing (PGx) has the potential to improve the quality of psychiatric prescribing by considering each patient’s unique genetic profile . While routinely employed in some areas of medicine (e.g. cancer treatment), evidence that PGx can improve outcomes for patients with major depressive disorder (MDD) is limited . MDD is challenging to treat, and the likelihood of benefit decreases while the risk of treatment drop-out increases with each ineffective pharmacotherapy . Using pharmacogenetics to guide medication selection has the potential to enhance patient engagement, reduce side effects, and improve disease outcomes and patients’ quality of life . Although psychiatric PGx tests are commercially available and covered by some insurance plans, there is limited evidence demonstrating their effectiveness for improving MDD outcomes . Previous studies have enrolled small samples of patients and produced mixed results, with some demonstrating improvement in depression while others found reduced side effects but no change in efficacy . A recent meta-analysis concluded that pharmacogenetic-guided treatment for depression improved response and remission rates, but noted many limitations . Given insufficient evidence to support widespread use, but a potentially large impact on patient care, the Veterans Health Administration (VHA) Precision Medicine in Mental Health Care (PRIME Care) study was designed to clarify the clinical utility of a specific commercially available pharmacogenetic (PGx) test to assist treatment of MDD compared to usual care, and to identify how healthcare systems may best implement this novel practice. Work examining provider perceptions of PGx testing for psychiatric conditions has been limited . Providers felt testing lessened patient resistance to medications by potentially reducing side effects and reassuring patients, but were concerned about managing patients’ expectations . Baseline surveys conducted with providers engaged in PRIME Care indicated that > 85% of providers had never ordered a genetic test to assist with psychiatric prescribing, and providers reported limited training in genetics or access to genetic expertise . Implementation science examines the broad range of contextual factors that affect the use of evidence-based practices in everyday settings . Incorporating qualitative inquiry within clinical trials helps to “capture greater diversity and depth of data on program outcomes,” and to better understand the experiences of trial participants . Given the pragmatic nature of PRIME Care, implementation science methods provide a valuable opportunity to understand contextual factors which may be important for future implementation. Implementation science activities were based on the Consolidated Framework for Implementation Research (CFIR) . This framework organizes factors important to implementation into five domains (Intervention Characteristics, Individual Characteristics, Inner Setting, Outer Setting, and Implementation Process) . Implementation science activities are conducted in two phases. The first phase, reported here, captured providers’ perceptions of PGx for antidepressant prescribing prior to their experience with the test and focuses on their perceptions within the domain of Intervention (PGx) Characteristics.
We describe, using qualitative methods, providers’ perceptions of the use of PGx testing for MDD prescribing at the beginning of PRIME Care, to understand how these perceptions may relate to potential barriers and facilitators to future implementation. If the trial data support broader use of psychiatric PGx testing, these lessons will be critical for wider implementation in VHA. The second phase, to be conducted late in the trial, will focus on the other CFIR domains (Individual Characteristics, Inner Setting, Outer Setting, and Implementation Process) and will examine providers’ experiences using PGx testing during the study, the barriers and facilitators they encountered, and their recommendations for future implementation.
Qualitative focus groups were conducted early in the trial. All study procedures were approved by the VHA Central IRB, as well as the appropriate oversight bodies at each of the participating sites. All participants provided written consent to participate in the PRIME Care study, including the focus groups.

Study context and setting

PRIME Care is a 22-site, pragmatic RCT conducted within VHA primary care (PC) and mental health (MH) outpatient clinics. PC and MH providers were recruited to participate in the trial. Providers then referred eligible patients to the study team; the first patient was randomized in August 2017. Randomization occurred at the patient/provider dyad level; each dyad received PGx testing results either immediately (~ 2–5 days after enrollment) or 6 months later (delayed group). Study outcomes include whether the immediate PGx test result dyads have significantly better treatment outcomes (i.e., improved PHQ-9 scores, remission of depression) and to what extent prescribers used PGx information when selecting an antidepressant.

Recruitment

Study sites began enrolling provider participants in July 2017. Eligible prescribing providers (MD, DO, PA, or CRNP) were outpatient providers in mental health clinics and primary care clinics. Providers were classified as PC or MH based on their specialty training, their practice location, and their scope of practice within the VA. During provider enrollment, each site was responsible for introducing the study and the specific PGx test being used. The speed of provider enrollment varied across sites; therefore, focus groups were conducted at sites where a sufficient number of both MH and PC providers were enrolled at the time of data collection. Within these sites, local site investigators recruited a convenience sample from enrolled and consented providers to participate in site-based groups.

Data collection

At enrollment, providers completed a demographic questionnaire which assessed their specialty, practice location, years in practice, age, race/ethnicity, and gender. Focus groups were conducted virtually through the VHA tele-conferencing system, allowing the same team of interviewers to facilitate all groups. Focus groups were conducted between December 2017 and May 2018, before most providers had any experience with referring patients and using the test results. Participants could call in separately using their individual workstations. At four sites, two separate focus groups were held, one for MH and one for PC providers. The fifth site hosted one focus group for MH providers, and then, due to insufficient numbers of available PC providers at that site, a sixth site was added and held one group for PC providers. Participants were verbally reminded of consent information at the beginning of the focus group session. The sessions were facilitated by the implementation science leads (LW, SC), with a research assistant to manage logistics. Each session began with a brief presentation on pharmacogenetic testing, including a sample report, discussion of how results might be interpreted, and a brief question and answer period. This presentation was provided by PRIME Care leadership (DO, MT), experts in pharmacogenetics evidence and practice. These experts then left the call and audio recording began. Focus groups were conducted by a PhD-level medical anthropologist (BV) trained in qualitative health services research. The interviewer had no prior relationship with participants.
Focus group questions (Table ) were developed based on CFIR Intervention Characteristics constructs, including: Intervention Source, Evidence, Relative Advantage, Adaptability, Trialability, Complexity, Intervention Design, and Cost. Feedback from key study stakeholders (study leadership, advisory board, and local site investigators) identified priority constructs within the Intervention Characteristics domain. Stakeholders ranked each construct’s importance to PGx implementation on a Likert scale and participated in an open-ended discussion during an in-person study kick-off meeting. Stakeholder feedback emphasized the constructs of Evidence, Relative Advantage, and Complexity as the most important areas to cover during the discussions, and focus group questions were targeted accordingly, but contained flexibility to allow other constructs and topics to arise.

Data analysis

Focus group audio recordings were transcribed verbatim by the VA Centralized Transcription Services Program. Transcripts were analyzed using rapid analytic procedures . This approach was chosen so that the resulting findings could inform study implementation by identifying early factors (e.g., provider attitudes, perceptions, knowledge about the intervention) that may affect provider recruitment, retention, and engagement with the trial . Rapid analysis has been shown to generate results and interpretations comparable to traditional in-depth qualitative analysis methods . Two primary analysts (BV, LB) reviewed each transcript and independently completed a template in Microsoft Excel, summarizing the findings in each area. The analysts then compared summaries and developed a final comprehensive summary for each group. Each summary contained an overall synopsis, an identification of key themes, and exemplary text and quotations from the transcripts. Each theme was then coded by CFIR construct. Each summary was reviewed with its transcript by an additional analyst (SC, GB, LH) to ensure completeness and accuracy. Any questions were resolved by the two primary analysts through discussion and refinement of definitions. Finally, the group summaries were combined into an overall spreadsheet, organized by the CFIR Intervention Characteristics constructs, with each group’s findings in a separate column. This matrix facilitated comparison of findings across groups and between MH and PC providers. All authors met to review and discuss the analysis and final interpretation of the results.
Participants

Ten focus groups (5 primary care, 5 mental health) were conducted with providers ( n = 31) from 6 sites. The number of participants per group ranged from 1 to 5, with an average of 3.1. One scheduled group had only one attendee; therefore, an individual interview was conducted. Focus groups lasted approximately 45 min. As intended, providers were approximately half MH (48.4%) and half PC (51.6%). 61.3% of participants completed their medical training after 2000. Internists and psychiatrists represented 38.7% of participants each, 3.2% were family medicine, and 16.1% were Nurse Practitioners or Physician Assistants. Additional participant characteristics are presented in Table . During the focus groups, most providers stated they had not yet referred any patients to the study.

Qualitative findings

Focus group analysis revealed themes within the CFIR Intervention Characteristics domain constructs of Evidence, Relative Advantage, Adaptability, Trialability, Complexity, and Intervention Design that are important for understanding providers’ attitudes and perceptions around implementation of PGx testing for depression prescribing (Table ).

Evidence

Evidence is used to describe “stakeholders’ perceptions of the quality and validity of evidence supporting the belief that the intervention will have desired outcomes.” Providers discussed limited knowledge of the evidence around PGx testing for depression. As one MH provider stated, “ I think it definitely would be a great guide. And I’m still not as convinced with the strength of the evidence, but I’d like to see it a lot more. ” [MH Group] While generally not familiar with the literature, providers did have a general understanding that the evidence for psychiatric prescribing was not yet definitive and were therefore uncertain of the clinical utility of the PGx testing. One MH provider expressed concern, but was hopeful, “I am concerned because a lot of genetic studies have been really, really promising in the past, but haven’t really panned out in terms of helping clinically to a great extent. So, kind of cautiously optimistic, I’ll take that data that we get at the end of this study and see really how it matches up.” [MH group] PC providers were comfortable with and knowledgeable about the level of evidence for genetic testing in other areas of medicine (e.g., for anti-platelet medications and genetic susceptibility to disease). They expressed concern that the evidence for PGx in depression prescribing was limited and were therefore hesitant about implementing the intervention. “I mean warfarin is the one that’s the most well defined, but really it hasn’t changed anybody’s practice particularly. And there’s also ones for [drug name], but as far as I know, nobody actually uses that in their decision making at least here at the VA, so, I guess I’d say we need more evidence as to whether this is clinically useful or not.” [PC Group] As a result of this general uncertainty around the evidence, providers expressed caution about PGx testing. They discussed not giving the test results much weight until there is more evidence and they have clinical experience with the test. “I don’t know that I know enough about the strength.
I think there’s still a lot of unanswered questions and for me, understanding what is the incremental value of having this is still unclear … there’s not enough evidence to support its widespread use … I’ll wait until the evidence tips more in favor and is a little more concrete.” [PC Group]

Relative Advantage

Relative Advantage refers to “stakeholders’ perception of the advantage of implementing the intervention versus an alternative solution” or usual practice . Given limited evidence, participants were also unsure whether PGx testing would improve upon their current practice. However, providers expressed interest in the potential, and were hopeful that PGx testing would help them identify more effective medications for their patients. “I think the advantage is hopefully avoiding a little bit of that trial and error that we always have.” [PC Group] Providers saw PGx testing as providing an advantage over usual care for patients who have been difficult to treat or have had poor responses to medication. “Especially I think patients with multiple comorbid medical conditions … just getting a sense some patients who are fast metabolizers and they do need higher doses of the medications and stuff like that. I think that will kind of open doors to having a better understanding of their medication regimen.” [MH Group] Many commented that this approach could increase patient buy-in to the use of medication to treat their depression and help overcome patient resistance or concerns around medication and possible side effects. “If a patient believes that this [has] been this process to help them choose the right medication and initially creates some positive placebo effect until the medication’s real effect kicks in … if you can create a positive experience with the medication they’re more likely to be med compliant.” [MH Group] Generally, participants saw PGx testing as one additional piece of information to add to other factors that impact their decisions about depression medication, but were unsure how much relative weight it would have. As one provider expressed, ‘The fact that psychiatry is touched by so many different components of a human person … So we’re looking at the genes as if they are the sixth unmovable set of dictates that kind of tells you what is going to happen to a person. And yet then, we have the nun study that they’ve got 97-year-old nuns with the APOE gene and have never demonstrated any symptoms of Alzheimer’s … So hanging all of our hats on these hooks may not be where it’s at in terms of fixing the problem of psychiatry not making people better. But I’m hoping that it can give us one more tool to use wisely and judiciously.” [MH Group] Participants also expressed concerns over the inherent delay in prescribing due to the wait for PGx test results compared to usual practice. Rather than writing a prescription at the appointment, ordering the test requires waiting up to 3 days and re-contacting the patient, which providers felt might lead to a missed treatment opportunity.
“The time waste, sort of the point of care act timeliness is probably the biggest area I would say … assuming that I fully buy-in and I fully believe the evidence, and there’s clear support for this being of incremental value, I think the process barrier that I see to actually using it is the timeliness because if you’ve got a patient who is there with you and going to pick up the meds, you don’t want to lose that opportunity.” [PC Group]

Adaptability, Trialability, and Complexity

Focus group conversations related to the CFIR constructs of Adaptability, Trialability, and Complexity contained significant overlap, and therefore are discussed together. Within CFIR, Adaptability refers to “the degree to which an intervention can be adapted, tailored, refined, or reinvented to meet local needs,” while trialability is defined as “the ability to test the intervention on a small scale in the organization, and to be able to reverse course (undo implementation) if warranted.” Complexity is “the perceived difficulty of the intervention, reflected by duration, scope, radicalness, disruptiveness, centrality, and intricacy and number of steps required to implement.” As mentioned, participants expressed concern over how PGx testing will fit into the workflow for depression treatment, seeing it as potentially difficult to adapt to existing practices. In addition to the potential prescribing delay, providers were concerned about complex discussions of PGx, especially with patients with new depression or patients who were resistant to treatment. This concern was prominent among the PC providers, who noted that depression is not usually the primary or only complaint they address within a patient visit. As one provider described: “It’s the uncommon patient who comes in and says my chief complaint is that ‘I’m depressed and I want you to help me with it.’ It’s usually something that trickles out two-thirds of the way through a visit for routine care. And it’s a long conversation to begin with in terms of assessing how they’re doing, how severe their problem is, are they suicidal, where is their interest in seeking treatment or other kinds of modalities for care … I’m wondering where I’m going to squeeze the rest of this in.” [PC Group] Participants in other groups echoed similar concerns around fitting time in to discuss PGx testing and provide education to the patient, indicating that they would not use the testing depending on those circumstances. “Where this makes the most sense to me is in patients that have familiarity with medications for depression, prior trials that either worked well for a time and then became less effective or ineffective or had problematic side effects in the outset. I think it is more challenging in the context of a discussion about 'What is depression? What are medications for depression?' with somebody who is new to both questions.” [PC Group] PC providers also expressed concern over their familiarity with many of the medications in the report, and their ability to adapt the test results. One provider mentioned, “There’s a comfort level we have, we have a usual repertoire and beyond that there’s also, some of it maybe that we don’t use as often, but we do have a comfort level with. And there were some that we would never use. If I remember correctly, I think that the ones that were maybe most recommended were things that I would never consider using. And I use one, something that was sort of in the middle row that seemed like it would be helpful.
So I think it does help, at least gives us food for thought … there still are some things that we would probably never feel that comfortable prescribing.” [PC Group] Due to these concerns, providers discussed a preference for implementing PGx slowly, testing out the process in a few patients (i.e., trialability). They also discussed how the decision to introduce PGx testing would depend on other factors they are addressing with a patient (i.e., adaptability). “How we decide to prescribe the medication always depends on what are the exact factors we’re targeting. Either we want the patient to sleep more, or sleep less. You know their safety profile for that patient, interaction with other medication, so I think this will be an important tool to kind of add to those factors when prescribing the medication.” [MH Group] Intervention Design The CFIR construct of Intervention Design refers to “perceived excellence in how the intervention is bundled, presented, and assembled.” During the focus groups, we asked providers to reflect on the layout, format, and use of the PGx test result reports, which provide a list of medications categorized into columns by green (no drug-gene interactions), yellow (possible drug-gene interaction) and red (serious drug-gene interaction), with footnotes that provide guidance on potential prescribing modifications. Many of the comments around design also reflected the complexity of the PGx testing. Providers generally felt the report was simple to use and that the color-coding of medications was straightforward. As one provider said, “I think the simple categorization is useful just because I myself do not want to spend a lot of time thinking about this, right? And so having a relatively simple … the sort of green light, yellow light, red light is fine. I mean in one way what it's analogous to is the CDC immunization schedule … I think having sort of a two-level thing where I can get a quick answer and if I need more I can look and get more, to me is a good way of doing it .” [PC Group] However, both MH and PC providers expressed concern over the potential for misinterpretation related to the color coding, especially by patients. “I guess if there was some reason that there was a particular medication that we really wanted to use and it was on the yellow or the red list and then the patient is saying well, ‘why would you want to put me on something that’s on the yellow or the red?’ just kind of looking at it. And even if in my clinical judgment I thought for other reasons that they were the most appropriate medication, I could see that that could kind of complicate the conversation.” [MH Group] “So I think that liability is a concern … you know if there is an adverse reaction … I’m concerned that if there’s some formal piece of information that says, hey this guy had this adverse outcome and he was on a medicine that was … “use with extreme caution”, I’m a little nervous about that but maybe that’s an unrealistic concern to have. But unlike clinical judgment where it’s easy to sort of you know, defend that, this seems kind of black and white, or red … I’d be less concerned about us misusing it than the misperception from other people.” [PC Group]
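To make the report format described above concrete, the sketch below shows one hypothetical way a color-binned PGx result could be represented and summarized in software. The class names, medication placeholders, and footnote text are illustrative assumptions and do not reproduce the schema of the commercial report used in PRIME Care.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical representation of a color-binned PGx report. The bins mirror the
# green/yellow/red columns described above; the schema itself is an assumption,
# not the vendor's actual report format.
@dataclass
class MedicationEntry:
    name: str
    bin: str                                   # "green", "yellow", or "red"
    footnotes: List[str] = field(default_factory=list)

@dataclass
class PGxReport:
    patient_id: str
    entries: List[MedicationEntry]

    def by_bin(self) -> Dict[str, List[str]]:
        """Group medication names by color bin, as a prescriber would scan them."""
        grouped: Dict[str, List[str]] = {"green": [], "yellow": [], "red": []}
        for entry in self.entries:
            grouped[entry.bin].append(entry.name)
        return grouped

# Placeholder medication names; real reports list actual antidepressants.
report = PGxReport(
    patient_id="example-001",
    entries=[
        MedicationEntry("medication_a", "green"),
        MedicationEntry("medication_b", "yellow", ["Consider a lower starting dose."]),
        MedicationEntry("medication_c", "red", ["Use with extreme caution."]),
    ],
)
print(report.by_bin())
# -> {'green': ['medication_a'], 'yellow': ['medication_b'], 'red': ['medication_c']}
```

A structure like this makes it straightforward to render the three columns providers described while keeping the footnoted prescribing caveats attached to the individual entries, which is where the misinterpretation concerns quoted above arise.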
Ten focus groups (5 primary care, 5 mental health) were conducted with providers (n = 31) from 6 sites. The number of participants per group ranged from 1 to 5, with an average of 3.1. One scheduled group had only one attendee; therefore, an individual interview was conducted instead. Focus groups lasted approximately 45 min. As intended, providers were approximately half MH (48.4%) and half PC (51.6%). Of the participants, 61.3% completed their medical training after 2000. Internists and psychiatrists each represented 38.7% of participants, 3.2% were family medicine physicians, and 16.1% were Nurse Practitioners or Physician Assistants. Additional participant characteristics are presented in Table . During the focus groups, most providers stated they had not yet referred any patients to the study.
Focus group results demonstrated the applicability of the CFIR constructs for understanding factors that may affect use of PGx testing and the context for broader dissemination. Providers expressed concerns about the intervention itself in terms of the evidence supporting it, the relative advantage compared to usual practice, and the feasibility of using PGx testing in daily practice. These concerns about the strength of the evidence are expected, as PRIME Care is a hybrid type 1 trial , developing the evidence for PGx testing while simultaneously studying its implementation. Providers’ hesitation around PGx testing points to the need during the trial to remind providers that their participation is an important step towards contributing to the evidence and determining whether widespread use is justified. Further, understanding providers’ baseline knowledge and attitudes about PGx testing will provide insight into barriers to be addressed in future clinical implementation should the trial demonstrate the clinical utility of PGx.

While our questions were targeted to elicit responses pertinent to constructs within a specific CFIR domain (Intervention Characteristics), participants’ view of their context resulted in broader discussions. For example, the needs and opinions of those served by the organization, i.e., the patients (CFIR domain of Outer Setting), were frequently raised as a factor in providers’ thoughts about how they may use PGx testing. Providers felt some patients would be more appropriate than others based on their characteristics and needs.

Our findings also revealed aspects that cut across CFIR constructs and are important to understand in preparing for future implementation. First, providers had limited experience and knowledge of PGx testing and its evidence base, particularly for psychiatric medications. While providers held favorable attitudes towards PGx testing, they wanted additional information on the evidence and the clinical utility. Second, providers were hopeful that PGx could increase their precision in prescribing antidepressants and improve patient engagement, but were uncertain how much results would influence treatment. Participants were mixed as to how helpful they felt this extra information would be. Third, providers indicated some serious concerns about potential misinterpretation of PGx results, especially on the part of patients who might not understand the report’s nuances. They also had significant concerns about incorporating testing into their workflow, given the delay in receiving the results and potential difficulties around re-engaging patients.

These findings are consistent with quantitative data on the full sample of PRIME Care participating providers as well as those concerning the use of PGx more broadly . Most studies have used quantitative surveys to examine these attitudes; the use of focus groups and qualitative data allows us to understand and explore the context underlying providers’ thoughts on PGx, which helps identify modifiable factors to be addressed in implementation.

The PRIME Care trial offers a unique opportunity to examine differences between MH and PC providers’ use of PGx testing in the context of depression care. Previous studies have documented familiarity, comfort, and use of PGx broadly among primary care providers, but have not focused specifically on psychiatric prescribing . Our results demonstrate that PC providers were less familiar and comfortable with the application of PGx testing to antidepressant prescribing than MH providers, and may be less comfortable with the range of medications available in the report. This finding suggests that additional support may be needed in PC settings. Within the VHA, such assistance is available via integrated mental health primary care models, but may be more difficult to access in systems where integration is not available. Given variable comfort and confidence in dealing with depression in primary care settings and variations in the use and perceived value of integrated mental health care , it is important to understand how PC providers can be supported to use novel resources and tools such as PGx testing to improve PC management of depression. In the event that the trial produces a negative result and does not show PGx to be effective, the insights gained into how PC and MH providers approached the use of PGx testing for antidepressant prescribing will likely be applicable to the implementation of other new psychiatric practices.

Limitations

Limitations of this study include generalizability. As a qualitative study, these results may not be generalizable to all providers. The sample was drawn entirely from the VHA, which may have resulted in different contextual factors being emphasized. For example, cost of the PGx testing was not a focus in these discussions, but may be important in the private sector. Further, the sample of providers may be subject to volunteer bias, as they already consented to participate in a PGx trial and may have different views from those who chose not to participate. However, the enrolled providers may also represent PGx enthusiasts who would be most willing to use PGx testing in their clinical practice; therefore, the volunteer bias may mirror real-life individual predispositions towards early adoption. The PRIME Care study examines the use of one specific commercially-available PGx test report, and results may not be generalizable to all currently available PGx testing. Finally, focus group questions targeted a particular CFIR domain (Intervention Characteristics). Other CFIR domains may be equally salient, and will be studied in ongoing implementation science activities during other phases of the study.
Overall, our findings demonstrate possible barriers and facilitators to be considered in future implementation of PGx testing for depression. Phase 2 of the PRIME Care implementation science activities will ask providers to reflect on other CFIR domains, including Individual Characteristics, Inner Setting and Outer Setting, and Implementation Process, in order to understand their experience in the trial, explore their use of PGx testing, identify barriers and facilitators encountered in the process, and solicit thoughts on clinical implementation. These future data will provide insight into how perceptions and attitudes around PGx testing at the beginning of the trial relate to actual use of the test, as well as implications for future clinical implementation of this novel psychiatric practice.
Potential of biochar for the restoration of microbial biomass and enzymatic activity in a highly degraded semiarid soil
To date, this study represents one of the first attempts to explore the potential of cashew bagasse for restoring microbial properties in tropical soils affected by desertification. This raises the question: could the application of carbon-rich materials improve the microbial community (biomass and enzyme activity) in a highly degraded soil and reduce nutrient limitation? In this study, we hypothesized that biochar obtained through pyrolysis of the pseudo-fruit bagasse of cashew, a plant species native to the Brazilian semiarid region, could be effective in restoring soil microbial biomass and enzymatic activity and in influencing the enzymatic stoichiometry of degraded soil. More importantly, we hypothesized that cashew biochar would stimulate microbial properties faster than the well-known biochar obtained from sewage sludge (which contains more recalcitrant compounds). Thus, this study assessed the efficiency of biochar sources (cashew bagasse and sewage sludge) and doses (0, 5, 10, 20, and 40 Mg ha⁻¹) in restoring soil microbial indicators (biomass, enzymes, and their stoichiometry) in a highly degraded dryland soil (Fig. ).
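As a rough orientation for readers more used to laboratory or pot-scale units, the sketch below converts the field application rates listed above into an approximate biochar-to-soil mass ratio. The incorporation depth and bulk density are illustrative assumptions chosen only for this example; they are not values reported by the study.

```python
# Rough conversion of field rates (Mg ha^-1) to grams of biochar per kg of soil.
# Incorporation depth and bulk density are illustrative assumptions, not values
# reported by this study.
ASSUMED_DEPTH_M = 0.20          # incorporation into the upper 20 cm
ASSUMED_BULK_DENSITY = 1300.0   # kg of soil per m^3

def dose_to_g_per_kg(dose_mg_per_ha: float) -> float:
    """Convert a biochar dose in Mg ha^-1 to g of biochar per kg of dry soil."""
    soil_mass_kg = 10_000.0 * ASSUMED_DEPTH_M * ASSUMED_BULK_DENSITY  # kg soil in 1 ha of the layer
    biochar_mass_g = dose_mg_per_ha * 1_000_000.0                     # Mg -> g
    return biochar_mass_g / soil_mass_kg

for dose in (0, 5, 10, 20, 40):
    print(f"{dose:>2} Mg ha^-1  ->  {dose_to_g_per_kg(dose):.2f} g kg^-1")
```

Under these assumptions, the highest dose of 40 Mg ha⁻¹ corresponds to roughly 15 g of biochar per kg of soil, i.e., about 1.5% by soil mass.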
Soil microbial biomass, organic C and N, and available P contents

The application of both cashew and sewage sludge biochars increased soil respiration compared to the unamended soil (Fig. a). However, the most significant effect on soil respiration was found with the application of cashew biochar, where the increase ranged from 50 to 150% at the lowest and highest doses, respectively. The application of 5 and 10 Mg ha⁻¹ of cashew biochar, and of 10 and 20 Mg ha⁻¹ of sewage sludge biochar, increased MBC compared to the unamended soil (Fig. b). The effect on MBC was more significant with the application of sewage sludge biochar, where the increase was 100% compared to the unamended soil. The values of TOC increased by ~ 50% with the application of both cashew and sewage sludge biochars at the lowest and highest doses, respectively (Fig. c). Both cashew and sewage sludge biochars strongly increased MBN compared to the unamended soil (Fig. d), showing an increase of ~ 500% with the application of both biochars. The values of total N increased with the application of both biochars, but the highest value was observed for the highest doses of cashew biochar (~ 100% compared to the unamended soil) (Fig. e). Available soil phosphorus increased with the application of both biochars, mainly at 20 and 40 Mg ha⁻¹ and in the sewage sludge treatments (Fig. f).

The microbial C quotient increased with the application of 10 and 20 Mg ha⁻¹ of both biochars, but the effect was more prominent with the application of sewage sludge biochar (Fig. a). The application of sewage sludge biochar increased the values of the microbial C quotient by ~ 100% compared to the unamended soil. Regarding the microbial N quotient, we observed the highest values with the application of both biochars, with values increased by about 1,000% compared to the unamended soil (Fig. b). Since the application of both biochars significantly increased MBN, the values of the MBC: MBN ratio decreased significantly (Fig. c). On the other hand, the application of 5 and 10 Mg ha⁻¹ of cashew biochar, and of 10 and 20 Mg ha⁻¹ of sewage sludge biochar, increased the values of the organic C: N ratio (Fig. d).

Soil enzymatic activity

The soil enzymatic activity showed distinct responses to the application of cashew and sewage sludge biochars (Fig. ). The application of both biochars did not change the activity of arylsulfatase in the soil (Fig. a). The responses of β-glucosidase were contrasting when cashew and sewage sludge biochars were compared (Fig. b). The application of cashew biochar decreased β-glucosidase activity in the soil, whereas the application of 5 Mg ha⁻¹ of sewage sludge biochar increased β-glucosidase activity, which then decreased as the dose increased. The acid and alkaline phosphatases showed distinct patterns of response to biochar. Acid phosphatase decreased after the application of cashew biochar at 5, 10, and 20 Mg ha⁻¹, but increased at the highest dose (Fig. c). When sewage sludge biochar was applied, acid phosphatase increased with the application of 10 Mg ha⁻¹ and 20 Mg ha⁻¹. Alkaline phosphatase increased by 150% at the highest doses of both cashew and sewage sludge biochars compared to the unamended soil (Fig. d). Urease activity increased with the application of both biochars (Fig. e). However, the highest values of urease activity were observed with the application of 40 Mg ha⁻¹ of cashew biochar and 20 Mg ha⁻¹ of sewage sludge biochar. The application of cashew biochar increased urease activity by ~ 400%, while sewage sludge biochar increased it by ~ 600% compared to the unamended soil.

Enzymatic stoichiometry

The application of both cashew and sewage sludge biochars decreased the enzymatic C: N ratio (Fig. a). In contrast, the enzymatic N: P ratio increased with the application of cashew biochar, with the highest enzymatic N: P ratio observed at 10, 20, and 40 Mg ha⁻¹ (Fig. c). The application of sewage sludge biochar also increased the enzymatic N: P ratio, but the highest value was found with the application of 20 Mg ha⁻¹. Interestingly, the enzymatic C: P ratio was higher in the unamended soil and with the application of 10 Mg ha⁻¹ of cashew biochar (Fig. b). When sewage sludge biochar was applied, we observed a higher enzymatic C: P ratio with the application of 5 Mg ha⁻¹, while 20 and 40 Mg ha⁻¹ promoted a decrease in the enzymatic C: P ratio compared to the unamended soil. Vector A (angle) and vector L (unitless length) changed when comparing cashew and sewage sludge biochars and their application doses (Table ). The unamended soil and the soil with the application of 5 Mg ha⁻¹ of cashew biochar showed the highest vector A, while the application of sewage sludge biochar decreased vector A compared to the unamended soil. The highest values of vector L were found in the unamended soil (0.99) and decreased with increasing doses of both biochars, reaching 0.64 and 0.69 at the highest doses of cashew and sewage sludge biochar, respectively.
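As a worked illustration of how the derived indicators reported above can be obtained from the measured pools and enzyme activities, the sketch below computes the microbial quotients, the enzymatic ratios, and the stoichiometric vectors. The formulas are the conventional ones (qMic-C as MBC/TOC, qMic-N as MBN/total N, and vector length and angle from the relative proportions of C-, N-, and P-acquiring enzyme activities, as in Moorhead et al.); they are stated here as assumptions because this section does not list the equations, and all input numbers are invented for the example.

```python
import math

# Illustrative enzyme activities (invented numbers, arbitrary units):
# C-acquiring = beta-glucosidase, N-acquiring = urease,
# P-acquiring = acid + alkaline phosphatase. These groupings are assumptions
# based on the enzymes reported in this study.
BG, UREASE, PHOS = 42.0, 18.0, 75.0

# Illustrative C and N pools (mg kg^-1 soil), also invented for the example.
MBC, MBN, TOC, TN = 180.0, 35.0, 9000.0, 700.0

# Microbial quotients (conventional definitions, stated as assumptions).
qmic_c = 100.0 * MBC / TOC        # % of total organic C held in microbial biomass
qmic_n = 100.0 * MBN / TN         # % of total N held in microbial biomass
mbc_mbn = MBC / MBN               # microbial biomass C:N ratio

# Enzymatic stoichiometry as simple activity ratios
# (some studies use ln-transformed activities instead).
enz_cn = BG / UREASE
enz_cp = BG / PHOS
enz_np = UREASE / PHOS

# Stoichiometric vectors following the widely used relative-proportion approach:
# x = C- vs P-acquisition, y = C- vs N-acquisition.
x = BG / (BG + PHOS)
y = BG / (BG + UREASE)
vector_length = math.sqrt(x**2 + y**2)         # longer vector -> stronger relative C limitation
vector_angle = math.degrees(math.atan2(y, x))  # > 45 degrees -> P limitation relative to N

print(f"qMic-C = {qmic_c:.1f} %, qMic-N = {qmic_n:.1f} %, MBC:MBN = {mbc_mbn:.1f}")
print(f"Enzymatic C:N = {enz_cn:.2f}, C:P = {enz_cp:.2f}, N:P = {enz_np:.2f}")
print(f"Vector L = {vector_length:.2f}, Vector A = {vector_angle:.1f} degrees")
```

With this convention, a longer vector indicates stronger relative C limitation and an angle above 45° indicates P limitation relative to N, which is how the vector L and A values reported above are interpreted in the Discussion.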
Soil microbial biomass and organic C and N

This study used a soil highly degraded by overgrazing to verify the potential of biochar application to restore soil biological properties. The degraded soil evaluated presents low soil microbial biomass due to severe degradation and losses of vegetation cover and soil organic C . After the application of both cashew and sewage sludge biochars, a restoration of soil microbial biomass was verified. These results are in line with the main hypothesis that the application of biochar, mainly cashew biochar obtained from a native plant species, was effective in restoring soil microbial biomass and enzymatic activity. The application of both biochars significantly increased soil basal respiration, which indicates greater biological activity derived from the decomposition of organic materials . In particular, soil respiration was higher with the application of cashew biochar, suggesting better adaptation of the microbial community to this organic material obtained from a native plant species. In addition, the cashew biochar presents a higher C content than the sewage sludge biochar (Table ). We observed that increased biochar doses provide more available C to soil microbes, increasing soil respiration. Indeed, previous studies have shown increased soil respiration after the application of biochar due to the higher labile C incorporated into the soil , . However, microbial biomass C was higher at low to medium biochar doses, which indicates that the highest doses of biochar stimulated soil respiration through an input of available C but did not result in higher microbial biomass C levels. Both biochars presented a low C/N ratio (< 20), which promotes lower C immobilization into microbial biomass , mainly at higher doses. Microbial biomass N was more influenced by the application of both biochars, reflecting the high N content of both materials compared with the unamended soil. Indeed, this contributed to the higher total N found in the soil and to the lower soil C: N ratio (TOC/total N), mainly when cashew biochar was applied at higher doses. Since the application of sewage sludge biochar increased microbial biomass C (mainly at 10 and 20 Mg ha⁻¹), this was reflected in the increased microbial C quotient (qMic-C), which indicates a higher fraction of organic matter incorporated as microbial biomass . Regarding the microbial N quotient (qMic-N), we observed a larger increase with the application of both biochars, reflecting the strong increase in microbial biomass N. As N was incorporated into microbial biomass to a greater extent than C, the microbial C: N ratio (MBC/MBN) decreased, indicating a stronger positive effect of both biochars on bacteria than on fungi . This means that the application of both cashew and sewage sludge biochars stimulates bacterial more than fungal communities in the soil. Indeed, previous studies reported that bacterial communities are more active in biochar-treated soil than fungal communities , . Future studies should aim to comprehensively understand the entire ecology of both bacterial and fungal communities under different biochar doses.

Soil enzymatic activity

The extra-cellular enzymes provide information about the potential mineralization of C (β-glucosidase), N (urease), P (acid phosphatase), and S (arylsulfatase), which can indicate potential changes in soil biochemical status . The application of both biochars stimulated soil enzymatic activity, but with distinct responses according to the source.
The application of both biochars did not affect the activity of arylsulfatase, which suggests little effect on S cycling in the soil. Although no studies have yet been conducted with cashew biochar, a previous study using sewage sludge biochar showed no alterations in arylsulfatase activity and no effect on S cycling . In addition, this lack of response of arylsulfatase can be related to a lower abundance of fungi than bacteria in the soil, as previously reported. It is known that fungi contain ester sulfate associated with sulfatase production; thus, a lower abundance of fungi decreases arylsulfatase activity .

In contrast, β-glucosidase activity was more variable, being affected differently by the two biochars, which have different impacts on C cycling. The application of cashew biochar was not enough to increase β-glucosidase activity in this degraded soil. The addition of labile C sources (e.g., cashew-based biochar) could influence β-glucosidase activity, since Wei et al. demonstrated that labile organic C input reduced the related C-acquisition enzyme activities. Interestingly, Foster et al. found that the β-glucosidase reduction may be due to the adsorption of the β-glucosidase substrate on the biochar surface. Similarly, Lehmann et al. suggested that the reduced enzyme synthesis was due to the interaction between microbes and carbon on the biochar surface, where adsorption of β-glucosidase onto biochar led to decreased enzyme production . On the other hand, sewage sludge biochar applied at the lowest doses (5 and 10 Mg ha⁻¹) promoted a stimulatory effect on β-glucosidase, possibly due to the addition of recalcitrant C . However, higher doses of sewage sludge biochar can increase the content of metals and, thus, affect β-glucosidase . A previous study reported β-glucosidase as the most sensitive indicator of the adverse impact of metals . Several factors could contribute to the inhibition of C-acquiring enzymes, including biomass, pyrolysis temperature, and soil texture. For example, biochars from herbs and wood diminish C-acquiring enzyme activity . This suppression could be related either to the inherent properties of the biochar or to alterations in the microbial community following biochar application (e.g., inhibitory compounds in biochar such as metals). Also, the presence of phenolic and lignin compounds can change the chemical composition of soil organic matter, thereby decreasing, at higher doses of sewage sludge application, the bioavailability of carbon compounds that β-glucosidase can decompose.

The activity of soil phosphatases showed distinct effects of biochar on biochemical processes related to P cycling. The decrease in acid phosphatase with the application of cashew biochar at 5, 10, and 20 Mg ha⁻¹ could be due to the high P content of the biochar and, more importantly, to the higher C limitation indicated by the enzymatic stoichiometry. Jin et al. demonstrated that manure biochar decreased acid phosphatase activity, which could be attributed to a higher P availability in the biochar. However, at the highest doses of cashew and sewage sludge biochar, we observed increased acid phosphatase activity (and higher soil P contents), probably due to the higher values of total N found in the soil after applying the biochars. Previous studies have reported that both acid and alkaline phosphatases are produced at the cost of N , , which indicates that the content of total N is a determinant of phosphatase activities.
In addition, the increased alkaline phosphatase with the highest doses of each biochar suggests a direct effect of the alkaline pH value found in both tested biochar . It suggests that pyrolytic biochar could enhance soil P contents via both acid and alkaline phosphatases. The observed increase in urease activity following the application of cashew and sewage sludge biochar demonstrates the significant impact that these amendments can have on N cycling in soil. Additionally, urease activity is significantly modulated by soil pH; applying alkaline biochar to acidic soil can enhance urease activity by improving the soil to a higher pH . Also, in a nutrient-limited system (see below), microbial proliferation could induce enzymatic activity to use C-sources derived from biochar application . Importantly, Zhang et al. demonstrated a positive correlation between urease activity and microbial biomass carbon after the application of wheat straw-derived biochar (8 t ha − 1 and 16 t ha − 1 ), indicating that soil N cycling is driving via the potential of the microbial community to increase its biomass. Enzymatic stoichiometry The application of biochar changed the soil enzymatic stoichiometry which influenced the availability and limitations of C, N, and P to microbes. Further, the decrease in enzymatic C: N ratios due to biochar application suggests that microbial biomass uses more N than C in its biological processes. This reflected the higher values of N-acquiring enzymes (urease) found with the application of biochar. On the other hand, the degraded soil which was not amended with biochar showed a higher enzymatic C: N ratio, as also observed by Silva et al. for degraded soil. The application of both biochar increased the enzymatic N: P ratio, which confirms the positive effect of applying biochar, i.e., the input of N, promoting the activity of N-acquiring enzymes. This is important since these N-acquiring enzymes improve the cycling of N in the soil . In general, the application of both biochar at high rates decreased the enzymatic C: P ratio, reflecting the lower activity of C-acquiring enzymes (ß-glucosidase). The values of vectors L (length) and A (angle) are useful to show the degree of C limitation (vector L) and P limitation relative to N (vector A) . The highest values of vector L found in unamended soil suggest more C limitation to soil microbes , while when biochar is applied this C limitation decreases. Regarding vector A, the unamended (and 5 Mg ha − 1 of cashew biochar) soil showed the highest values which indicate more P limitation than N to soil microbes. These results confirm that the application of biochar increases the availability of C and N while promoting a limitation of P to soil microbes. Interestingly, P limitation decreased with higher biochar doses, as indicated by the lower A angle, which could be related to an improvement in soil P content (Fig. f). However, despite this increase, the A angle remains higher than 45º (the limit for P limitation ) and the rate of P accumulation remains very low for Brazilian semiarid soils .
Soil sampling

The degraded area is located at Irauçuba, Ceará state, Brazil (3°46'16.38"S, 39°49'54.00"W, Fig. ). This region exhibits highly degraded soil due to overgrazing . The soil is classified as a Planosol , and the climate is categorized as Bshw – tropical hot semiarid , with an annual rainfall of 539 mm concentrated from January to April and an average temperature of 26 to 28 °C . The region is subject to intensive human activities, which accelerate the soil degradation process. Soil samples were collected from a depth of 0–10 cm, sieved through a 2 mm mesh to remove large debris, and immediately used in the experiment to ensure the survival of microorganisms. Soil chemical characterization (Table ) was performed following the methodology described by EMBRAPA .

Biochar production and characterization

The biochars were produced by the pyrolysis of cashew ( Anacardium occidentale ) pseudofruit bagasse and by the co-pyrolysis of sewage sludge. Cashew bagasse was collected from a nut farmer in Aracati municipality, while the sludge was obtained from a domestic sewage treatment plant (Upflow Anaerobic Sludge Blanket) in Fortaleza municipality, both located in Ceará state, Brazil. The pyrolysis temperature was 500 °C, with a residence time of 190 min for cashew bagasse and 97 min for the co-pyrolysis of sewage sludge, and a heating rate of 10 °C min⁻¹ under moderate nitrogen flow (SPPT Technological Research Company). The nutrient content (Ca, Mg, Al, Fe, Mn, and Zn) of the biochars was determined by inductively coupled plasma optical emission spectrometry (ICP-OES), while K and Na were measured by flame photometry; the samples were previously submitted to acid digestion following the dry ash method suggested by Enders and Lehmann . After acid digestion, the P content was determined by the molybdovanadophosphoric acid (MAPA) colorimetric method, measuring the absorbance at 400 nm (AJX-1600 spectrophotometer, Micronal ® ). Total nitrogen was obtained according to Mendonça and Matos , applying acid digestion with sulfuric acid followed by the Kjeldahl method, while carbon was determined by the Walkley-Black method. The chemical characterization of the biochars is presented in Table .

Experiment setup

This experiment was carried out under greenhouse conditions at the Federal University of Ceará, Fortaleza municipality, Brazil (3°44'35.51"S, 38°34'33.37"W, Fig. ). We used a completely randomized design in a 2 × 5 factorial scheme: two sources of biochar (cashew and sewage sludge) and five doses (0, 5, 10, 20, and 40 Mg ha⁻¹), with four replicates, resulting in 40 experimental units. The tested doses were kept below 2% (w/w), the rate considered limiting for biochar application in soils . Polyvinyl chloride (PVC) columns (20 cm in diameter and 50 cm in height) were used, and each column was filled with the degraded soil. Each biochar was incorporated into the soil, followed by a 30-day incubation period. Maize ( Zea mays L., BRS 2022 cultivar) was the plant species cultivated in this experiment. Each column was fertilized with urea (837 mg), simple superphosphate (4433.6 mg), and KCl (418.6 mg) before plant emergence. Additional applications of urea (837 mg) and KCl (209.3 mg) were made at 25 and 45 days after plant emergence, respectively . Each column received tensiometers with mercury manometers at a depth of 0.2 m to measure the matric potential. The matric potential readings were taken twice a day (early morning and early afternoon). The readings were converted into soil moisture using the soil-water retention curve (SWRC) specific to each treatment. Irrigation was based on the available water capacity (AWC) of each treatment, defined as the difference between soil moisture at field capacity (FC) and at the permanent wilting point (PWP) (AWC = FC – PWP). Irrigation with distilled water was initiated whenever the soil moisture measurements indicated that 30% of the AWC had been depleted. When required, the amount of water needed to raise the soil moisture to FC was calculated as the difference between FC and the moisture at the time of measurement. The experiment was concluded when the plants reached the flowering stage, 60 days after sowing, for a total duration of 90 days.

Soil chemical and microbial activity analysis

Total C, N and P contents

All chemical and microbiological analyses were conducted after the plant harvest. Briefly, total organic carbon (TOC) was measured using the potassium dichromate digestion method in an acidic medium, followed by titration with ferrous ammonium sulfate . Total nitrogen (total N) was measured according to the method described by Mendonça and Matos , which involves extracting nitrogen from the soil with sulfuric acid, performing Kjeldahl distillation with sodium hydroxide, and titrating with boric acid. Soil available phosphorus was determined with the Mehlich-1 extractant as proposed by EMBRAPA .

Soil basal respiration and microbial biomass

Soil basal respiration (SBR) was assessed using the method of Anderson and Domsch . Soil respiration was measured as the volume of CO2 released over ten days, with readings recorded every 24 h. Soil microbial biomass C (MBC) and N (MBN) were measured by the fumigation-extraction method . MBC and MBN were calculated from the difference between fumigated and non-fumigated samples, using conversion factors of 0.33 for MBC and 0.54 for MBN . Microbial C and N quotients (qMic-C and qMic-N, respectively) were calculated as the ratios of MBC to TOC (MBC/TOC) and of MBN to total N (MBN/total N).

Soil enzyme activity

The potential activities of β-glucosidase (EC 3.2.1.21), acid (EC 3.1.3.2) and alkaline (EC 3.1.3.1) phosphatases, arylsulphatase (EC 3.1.6.1), and urease (EC 3.5.1.5) were determined using standard methods. Briefly, β-glucosidase activity was measured with p-nitrophenyl β-glucopyranoside as the substrate, incubated for 1 h at 37 °C, and the resulting p-nitrophenol was quantified spectrophotometrically at 400 nm . Acid and alkaline phosphatase activities were assessed using disodium p-nitrophenyl phosphate as the substrate, incubated for 1 h at 37 °C, and the p-nitrophenol produced was measured at 420 nm . Arylsulphatase activity was measured from the release of p-nitrophenol after the soil was incubated with a p-nitrophenyl potassium sulfate solution . Urease activity was determined using urea as the substrate, incubated for 2 h at 37 °C, and the ammonium produced was measured spectrophotometrically at 660 nm .

Enzymatic stoichiometry

The soil enzymatic stoichiometry was determined following two distinct methodologies: (1) the ratios of enzymatic activities, including C:N (β-glucosidase / urease), C:P (β-glucosidase / acid phosphatase) and N:P (urease / acid phosphatase) ; and (2) a vector analysis of enzymatic stoichiometry , . Vector length and vector angle were calculated following Moorhead et al. :

$$\mathrm{Vector\ length}=\sqrt{X^{2}+Y^{2}}\qquad(1)$$

$$\mathrm{Vector\ angle}\:(^{\circ})=\mathrm{DEGREES}\left(\mathrm{ATAN2}\left(X,\:Y\right)\right)\qquad(2)$$

where X is:

$$X=\frac{\beta\text{-glucosidase}}{\beta\text{-glucosidase}+\text{acid phosphatase}}\qquad(3)$$

and Y is:

$$Y=\frac{\beta\text{-glucosidase}}{\beta\text{-glucosidase}+\text{urease}}\qquad(4)$$

A longer vector length represents greater C limitation (C/energy deficiency relative to other nutrients), and a vector angle < 45° or > 45° indicates N or P limitation, respectively.

Statistical analysis

The data were subjected to Levene's test for homogeneity of variances and the Shapiro-Wilk test for normality. Subsequently, a two-way analysis of variance (ANOVA) was performed using the F-test ( p ≤ 0.05). Means were compared using the Scott-Knott test ( p ≤ 0.05). We used R Studio software (version 1.3.1093).
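For illustration, the sketch below shows how the derived quantities described in this section (MBC and MBN from the fumigation-extraction flushes, the microbial quotients, the enzymatic activity ratios, and the vector length and angle of Eqs. (1)-(4)) could be computed from a per-sample table. This is a minimal sketch and not the authors' code: the column names are hypothetical, and the conversion factors are assumed to be applied as divisors, which is the usual convention for these coefficients.

```python
# Minimal sketch (not the study's code): derived soil indicators from per-sample data.
# Column names are hypothetical placeholders for the measured variables.
import numpy as np
import pandas as pd

def derived_soil_indicators(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Microbial biomass C and N from fumigation-extraction differences,
    # assuming the conversion factors are used as divisors (0.33 for C, 0.54 for N).
    out["MBC"] = (out["C_fumigated"] - out["C_nonfumigated"]) / 0.33
    out["MBN"] = (out["N_fumigated"] - out["N_nonfumigated"]) / 0.54

    # Microbial quotients: fraction of TOC / total N held in the microbial biomass.
    out["qMic_C"] = out["MBC"] / out["TOC"]
    out["qMic_N"] = out["MBN"] / out["total_N"]
    out["MBC_MBN"] = out["MBC"] / out["MBN"]

    # Enzymatic stoichiometry ratios (methodology 1).
    out["enz_CN"] = out["beta_glucosidase"] / out["urease"]
    out["enz_CP"] = out["beta_glucosidase"] / out["acid_phosphatase"]
    out["enz_NP"] = out["urease"] / out["acid_phosphatase"]

    # Vector analysis (methodology 2), Eqs. (1)-(4).
    x = out["beta_glucosidase"] / (out["beta_glucosidase"] + out["acid_phosphatase"])
    y = out["beta_glucosidase"] / (out["beta_glucosidase"] + out["urease"])
    out["vector_length"] = np.sqrt(x**2 + y**2)
    # Excel-style ATAN2(X, Y) corresponds to np.arctan2(Y, X); in the positive quadrant
    # this equals arctan(Y/X), so angles above 45 degrees indicate relatively greater
    # P limitation and angles below 45 degrees relatively greater N limitation.
    out["vector_angle_deg"] = np.degrees(np.arctan2(y, x))
    return out
```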
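For the statistical analysis described above (Levene's and Shapiro-Wilk tests followed by a two-way ANOVA for the 2 × 5 factorial), the authors worked in R; the sketch below outlines an equivalent workflow in Python under the assumption of hypothetical column names. The Scott-Knott means grouping has no standard SciPy/statsmodels implementation and is therefore not reproduced here.

```python
# Minimal sketch (assumed workflow, not the authors' R code): assumption checks and
# a two-way ANOVA for the 2 (biochar source) x 5 (dose) factorial design.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def factorial_anova(df: pd.DataFrame, response: str) -> pd.DataFrame:
    """df is expected to hold columns 'source', 'dose', and the response variable."""
    groups = [g[response].values for _, g in df.groupby(["source", "dose"])]
    print("Levene:", stats.levene(*groups))              # homogeneity of variances
    print("Shapiro-Wilk:", stats.shapiro(df[response]))  # normality (often run on residuals)
    model = smf.ols(f"{response} ~ C(source) * C(dose)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)               # two-way ANOVA table
```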
Applying cashew and sewage sludge biochar to a highly degraded soil significantly changed soil microbial biomass and activity. Thus, applying biochar to degraded soil could be a viable strategy to restore soil microbial biomass and enzymatic activity. However, the responses of extracellular enzymes vary according to the biochar feedstock and application rate, indicating complex interactions. The enzymatic stoichiometry and vector analysis showed an increase in P limitation for soil microbes with the application of both biochars, even though sewage sludge biochar increased soil P contents. In contrast, both biochars reduced the C limitation for soil microbes. These findings reinforce the potential of biochar to restore soil biological properties and increase nutrient availability, with implications for restoration practices in degraded lands of semiarid regions. Although key soil extracellular enzymes were analyzed, additional enzymes are also crucial for assessing the soil's potential for nutrient cycling, such as N-acetyl-β-glucosaminidase (NAG) for nitrogen. The metabolic processes in soil involve a wide range of enzymes , and future studies on biochar-based products should adopt a holistic approach to microbial nutrient cycling.
Membranous Overexpression of Fibronectin Predicts Microvascular Invasion and Poor Survival Outcomes in Patients with Hepatocellular Carcinoma | 52dddb33-ea18-46ef-a719-eeb31dcc7e49 | 11907257 | Biochemistry[mh] | Fibronectin (FN) is a multifunctional high-molecular-weight glycoprotein that exists in a soluble plasma form and an insoluble cellular form on the cell surface. - Plasma FN, produced by hepatocytes, is a major protein component of blood plasma, and cellular FN is a major component of the extracellular matrix. Cellular FN is secreted by various cells, including fibroblasts, hepatic stellate cells, and endothelial cells, as a soluble dimer and is then assembled into an insoluble matrix. , , FN binds to integrins, which are transmembrane receptors, and to other extracellular matrix proteins such as collagen, fibrin, actin, and heparan sulfate proteoglycans, and these interactions play major roles in cell adhesion, migration, proliferation and differentiation. , , , FN is also involved in embryonic development, wound healing, and the pathogenesis of cancer and fibrosis. - , , - Although the role of FN in tumorigenesis and malignant progression is controversial, the expression of FN in several types of cancer has been studied, and overexpression of FN has been associated with poor prognosis in esophageal, gastric, colorectal, pancreatic, and renal cancer. , - Abnormal and increased FN expression has also been reported in hepatocellular carcinoma (HCC) , and has recently been associated with vascular invasion in HCC. , Vascular invasion is a major prognostic factor for HCC. However, it is difficult to identify microvascular invasion (MVI) on preoperative imaging or biopsy specimens, because MVI can only be detected in the peritumoral liver parenchyma under microscopic examination. - Thus, the identification of biomarkers for MVI would be beneficial for preoperative risk stratification of patients with HCC. In this study, we evaluated the expression patterns of FN in HCC and their clinicopathological implications, including vascular invasion status.
1. Patient selection and clinicopathological analysis

We retrospectively reviewed a cohort of 258 consecutive adult patients with surgically resected HCCs at Seoul National University Hospital between January 2009 and December 2011. This study was approved by the Institutional Review Board of Seoul National University Hospital (IRB number: 2305-013-1427). The requirement for patient consent was waived by the IRB because of the retrospective nature of the study. Clinical data, including patient age, sex, underlying etiology, preoperative locoregional treatment (including radiofrequency ablation and transarterial chemo/radioembolization), and preoperative laboratory findings (including serum alpha-fetoprotein [AFP] and protein induced by vitamin K absence-II [PIVKA-II] levels), were retrieved from the electronic medical records. Pathology reports for all 258 cases and microscopy slides for the 175 cases for which glass or digital slides were available for assessment were reviewed by two pathologists (Y.J.H. and H.K.). The following information was recorded: underlying cirrhosis, infiltrative gross type, tumor size, multiplicity, histological differentiation grade (according to the Edmondson-Steiner [E-S] system), presence of major vascular invasion or MVI, histologic subtype (especially the macrotrabecular massive [MTM] subtype), pathological T stage according to the American Joint Committee on Cancer 8th edition, vessel-encapsulating tumor cluster (VETC) pattern, and cytokeratin 19 positivity. The infiltrative gross type included the multinodular confluent, nodular with perinodular extension, and infiltrative types. Multiplicity was defined as the presence of two or more tumors, including intrahepatic metastases and multicentric occurrences. Major vascular invasion was defined as invasion of the main portal vein and its first-order branches; the right, middle, or left hepatic vein; or the right or left hepatic artery. MVI was defined as invasion of microvessels, identifiable only under microscopic examination, located in the fibrous capsule surrounding the tumor or in the peritumoral hepatic parenchyma, and not in the portal vein, hepatic veins, or hepatic arteries. The MTM subtype was defined as a tumor showing thick trabeculae of more than six cells in more than 50% of the tumor area.

2. Tissue microarray and immunohistochemistry

Tissue microarray cores of 2 mm diameter, consisting of one to three cores from the HCCs and matched nonneoplastic tissues, were obtained from the 258 HCCs (SuperBioChips Laboratory, Seoul, Korea). A total of 818 cores, comprising 543 HCC cores and 275 adjacent liver parenchymal cores, were evaluated in this study. Immunohistochemical staining for FN (MAB1918; R&D Systems, Minneapolis, MN, USA) and CD34 (M716501-2; Agilent, Santa Clara, CA, USA) was performed on 4 μm-thick tissue microarray sections, either manually or using the Ventana BenchMark GX automated platform (Ventana Medical Systems, Oro Valley, AZ, USA). FN expression was evaluated according to the staining intensity as faint (discernible at ×100), weak (discernible at ×40), moderate (discernible at ×12.5), or strong (as strong as in trophoblasts), and according to the location of expression as cytoplasmic (stained in the cytoplasm of hepatocytes or tumor cells), membranous (stained at the membrane of hepatocytes or tumor cells), or sinusoidal (stained at the endothelial cells of hepatic sinusoids) .
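As a toy illustration only (not code from the study), the intensity scale above can be mapped to the binary FN-positive call used in the analyses, in which moderate or strong staining is treated as positive, as stated later in the Results; the category strings and function name below are assumptions of this sketch.

```python
# Toy sketch: dichotomizing the FN staining intensity scale described above.
# Moderate or strong staining is treated as FN-positive, per the rule given in the Results.
FN_INTENSITY_LEVELS = ("negative", "faint", "weak", "moderate", "strong")

def fn_positive(intensity: str) -> bool:
    """Return True when a core would be scored FN-positive (moderate or strong)."""
    if intensity not in FN_INTENSITY_LEVELS:
        raise ValueError(f"unknown intensity: {intensity}")
    return intensity in ("moderate", "strong")
```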
CD34 staining was used to confirm the VETC pattern, and cases with one or more foci of tumor cell clusters completely surrounded by CD34-positive endothelial cells were considered VETC-positive.

3. Statistical analysis

Statistical analyses were performed using commercially available software (SPSS Statistics for Windows version 26.0, IBM Corp., Armonk, NY, USA; R Studio software for Windows version 2022.12.0+353, R Foundation for Statistical Computing, Vienna, Austria; GraphPad Prism Software version 7, GraphPad Software, San Diego, CA, USA). Categorical variables were analyzed using the chi-square test, linear-by-linear association, and the Fisher exact test. Survival analyses for overall survival (OS) and disease-free survival (DFS) were performed using the Kaplan-Meier method, the log-rank test, and Cox proportional hazards regression analysis. The multivariable analysis was performed using the stepwise backward selection method. OS was defined as the interval between the date of surgery and the date of the last follow-up or death. DFS was defined as the interval between the date of operation and the date of local recurrence or intrahepatic or distant metastasis. Statistical significance was set at p<0.05.
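The survival analyses described above were run in the commercial packages listed; the following is a minimal Python sketch of the same type of workflow (Kaplan-Meier fitting, a log-rank comparison by membranous FN status, and a multivariable Cox model) using the lifelines package. Column names such as 'os_months', 'death', and 'membranous_fn' are hypothetical, and the covariates shown are illustrative rather than the exact model retained by backward selection.

```python
# Minimal sketch (assumed implementation, not the study's code) of the survival workflow.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def survival_by_membranous_fn(df: pd.DataFrame):
    pos = df["membranous_fn"] == 1
    kmf = KaplanMeierFitter()
    kmf.fit(df.loc[pos, "os_months"], event_observed=df.loc[pos, "death"], label="membranous FN+")
    # Log-rank test between membranous FN-positive and FN-negative patients.
    lr = logrank_test(
        df.loc[pos, "os_months"], df.loc[~pos, "os_months"],
        event_observed_A=df.loc[pos, "death"], event_observed_B=df.loc[~pos, "death"],
    )
    print("log-rank p =", lr.p_value)
    # Multivariable Cox proportional hazards model (covariate set is illustrative).
    cph = CoxPHFitter()
    cph.fit(
        df[["os_months", "death", "membranous_fn", "tumor_size_gt_5cm", "es_grade_3_4", "ck19_positive"]],
        duration_col="os_months", event_col="death",
    )
    return cph.summary  # hazard ratios with confidence intervals
```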
1. Baseline characteristics

The clinicopathological features of the study population are summarized in . Two hundred and fourteen patients (83%) were male and 44 (17%) were female, and the median age at operation was 59 years (interquartile range, 51 to 64 years). The most common etiology was hepatitis B viral infection (n=213, 83%), with a mean viral load of 219,026 IU/mL (range, 0 to 29,800,000 IU/mL), followed by hepatitis C viral infection (n=18, 7%) and alcohol intake (n=8, 3%). The median tumor size was 4.0 cm (interquartile range, 2.5 to 8.0 cm), and 195 tumors (75%) were E-S grade III or IV. MVI and major vessel invasion were observed in 108 (42%) and 44 patients (17%), respectively. The MTM subtype was identified in 26 out of 175 cases (15%), and the VETC pattern was identified in 61 cases (24%).

2. FN expression in HCCs and nonneoplastic livers

The staining intensity of FN was stronger in the HCCs than in the adjacent parenchyma . In nonneoplastic livers, moderate cytoplasmic or sinusoidal staining was rarely observed (n=3, 0.6% and n=4, 0.7%, respectively), while strong cytoplasmic/sinusoidal staining and moderate-to-strong membranous staining were not identified. In contrast, a much greater proportion of HCCs exhibited moderate or strong FN staining (cytoplasmic, n=59, 10.8%; membranous, n=77, 14.2%; sinusoidal, n=198, 36.5%). FN expression in the nonneoplastic tissue was mostly negative in the cytoplasm (n=225, 81.8%) and membrane (n=248, 90.2%) of hepatocytes, and predominantly negative or faint in sinusoidal endothelial cells (n=212, 77.1%). On the other hand, negative cytoplasmic or membranous staining was observed in a much smaller proportion of tumor tissue (n=296, 54.5% and n=255, 47.0%, respectively), with only 8.5% of tumors showing negative sinusoidal staining (n=46). Based on these staining patterns, moderate or strong FN staining was considered indicative of FN expression (FN-positive). Cytoplasmic FN expression was more frequently observed in HCC (n=59, 10.9%) than in the adjacent parenchyma (n=3, 1.1%, p<0.001). Membranous FN expression was observed in tumor cells (n=77, 14.2%), but not in nonneoplastic hepatocytes (p<0.001). Sinusoidal FN expression was observed significantly more frequently in the tumors (n=198, 36.5%) than in the adjacent liver tissue (n=4, 1.5%, p<0.001). Among the 258 patients, cytoplasmic FN expression was observed in 36 patients (14%), membranous expression in 50 (19%), and sinusoidal expression in 119 (46%).

3. Clinicopathological characteristics and survival according to the expression pattern of FN

A comparison of the clinicopathological characteristics according to the FN expression patterns is presented in and and . Cytoplasmic FN expression was significantly associated with high serum PIVKA-II levels (p=0.021) and MVI (p=0.004) . Membranous FN expression was significantly associated with high serum AFP (p<0.001) and PIVKA-II levels (p=0.045), infiltrative gross type (p=0.001), poor E-S grade (p=0.027), MVI (p=0.001), major vessel invasion (p=0.007), MTM subtype (p<0.001), high T stage (p<0.001), and the VETC pattern (p=0.001). Sinusoidal FN expression was significantly associated with high serum AFP (p=0.029) and PIVKA-II levels (p=0.002), infiltrative gross type (p=0.034), large tumor size (p=0.013), MVI (p=0.038), MTM subtype (p=0.011), and the VETC pattern (p<0.001).
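The associations reported above are comparisons of categorical variables; a minimal sketch of how such a test can be computed (here, membranous FN status versus MVI, with hypothetical column names) is given below. It is an illustration of the general approach, not the study's analysis script.

```python
# Minimal sketch (assumed column names): chi-square test of association
# between a binary FN expression call and microvascular invasion.
import pandas as pd
from scipy.stats import chi2_contingency

def association_test(df: pd.DataFrame, marker: str = "membranous_fn", feature: str = "mvi"):
    table = pd.crosstab(df[marker], df[feature])   # 2 x 2 contingency table
    chi2, p, dof, expected = chi2_contingency(table)
    return table, p
```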
OS and DFS in patients with HCCs showing membranous FN expression were significantly shorter than those in patients without membranous expression (p=0.003 in both cases) . There was no significant difference in OS or DFS between patients with cytoplasmic or sinusoidal FN-positive HCC and those without cytoplasmic or sinusoidal expression. DFS in patients with tumors showing sinusoidal FN expression tended to be shorter than that in those without sinusoidal expression (p=0.067). Univariable analysis showed that serum AFP ≥1,000 ng/mL, PIVKA-II ≥200 mAU/mL, infiltrative gross type, tumor size >5 cm, E-S grade III or IV, MVI, major vessel invasion, MTM subtype, cytokeratin 19 positivity, and membranous FN expression were significant prognostic factors for OS and DFS. In the multivariable analysis, tumor size >5 cm, E-S grade III or IV, and cytokeratin 19 positivity were significant independent factors for predicting OS and DFS .

4. FN1 expression analysis in the TCGA database

We analyzed FN1 mRNA expression in The Cancer Genome Atlas-Liver Hepatocellular Carcinoma (TCGA-LIHC) cohort . Comparing tumor and matched normal tissues from the 49 HCC patients for whom normal tissue samples were available, we found slightly higher FN1 mRNA expression levels in HCCs, although the difference was not statistically significant (p=0.249). For survival analysis, patients were divided into high and low FN1 groups based on the median value of FN1 mRNA expression, and we found that OS was significantly decreased in the high FN1 expression group (p=0.027). In addition, we compared FN1 mRNA expression levels according to vascular invasion status; however, there was no significant difference in FN1 expression levels between cases with and without vascular invasion.
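A minimal sketch of the median-split survival comparison described above is shown below; the expression matrix layout and column names ('FN1', 'os_days', 'death') are assumptions, and the log-rank machinery is the same as in the earlier survival sketch.

```python
# Minimal sketch (assumed layout): median split of FN1 mRNA expression in a
# TCGA-LIHC table followed by a log-rank comparison of overall survival.
import pandas as pd
from lifelines.statistics import logrank_test

def fn1_median_split_survival(tcga: pd.DataFrame) -> float:
    high = tcga["FN1"] >= tcga["FN1"].median()
    lr = logrank_test(
        tcga.loc[high, "os_days"], tcga.loc[~high, "os_days"],
        event_observed_A=tcga.loc[high, "death"], event_observed_B=tcga.loc[~high, "death"],
    )
    return lr.p_value
```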
This study revealed that FN was overexpressed in HCC compared to nontumorous liver tissue in three patterns: (1) cytoplasmic, (2) membranous, and (3) sinusoidal. FN expression was significantly associated with MVI and other aggressive clinicopathological features including high serum AFP and PIVKA-II levels, infiltrative gross type, large tumor size, poor histological differentiation, and major vessel invasion. Membranous and sinusoidal FN expression was significantly associated with the MTM subtype and VETC pattern of vascularization. Membranous FN expression was a significant predictive factor for poor OS and DFS. HCC is an aggressive tumor, and vascular invasion is an important prognostic predictor in several staging systems. However, aggressive HCC with MVI may be missed because most HCCs are diagnosed based on clinical and imaging findings without biopsy, and MVI cannot be accurately evaluated based on imaging findings. The VETC pattern, a distinct vascularization pattern of HCC, has recently been reported to be associated with poor prognostic factors, including MVI, and is enriched in the MTM subtype, which is an aggressive subtype of HCC. - Thus, the importance of evaluating VETC and macrotrabecular patterns in a biopsy, which are unique and relatively easily identifiable but still challenging to detect in a small specimen, has been emerging. A biomarker beneficial for identifying these patterns would help triage the aggressive HCC group with poor prognosis. FN is an extracellular matrix glycoprotein involved in various functions. FN has been shown to significantly impact disease pathogenesis, particularly by promoting proliferation, invasion, and metastasis through interactions with integrins and other cell surface receptors. , - Several studies have revealed the poor prognostic effect of FN expression in various types of cancer. , - In HCC, a few studies demonstrated that serum FN levels increased in patients with early HCC and decreased after treatment, , and cellular FN was upregulated in HCC tumor cells. In this study, we demonstrated FN overexpression in HCC, and highlighted its expression patterns and significance. Among the three FN expression patterns, the membranous pattern showed a broader range of associations with the aggressive parameters compared to the cytoplasmic and sinusoidal patterns. FN overexpression on the cell membrane can more easily facilitate the interactions with cell surface receptors, including integrins, leading to the activation of downstream signaling pathways involved in tumor growth, angiogenesis, and metastasis. In consequence, it might contribute to a stronger association with aggressive clinicopathological parameters. This suggests that the localization of FN is crucial for its functional role, and further research is needed to elucidate the exact mechanisms and pathways through which FN exerts its effects. FN is recently proposed as a potential biomarker for vascular invasion in HCC. Cellular FN is a well-known structural element of angiogenesis in embryogenesis and wound healing and is involved in the formation of tumor vessels as well. - Several studies have indicated that cellular FN upregulation is associated with MVI. This study revealed the significant association between vascular invasion and FN expression. The mechanism of FN in tumor angiogenesis provides a ridged structure for neovascular lumen formation and the binding of vascular endothelial growth factor to maintain a directional concentration gradient for blood vessel formation. 
This may explain why FN is related to the vascularization pattern in HCC. There have been studies on FN as a molecular target for therapy, although its usefulness is still unclear. Conjugating drugs to antibodies against extra-domain A or extra-domain B, which are contained in cellular FN, to deliver them to malignant cells is expected to be promising, given that the expression of extra-domain A or extra-domain B is largely limited to malignancy. To validate our results, we analyzed FN expression using the external bulk RNA sequencing data of the TCGA-LIHC cohort. Although we found slightly higher FN levels in HCCs compared to paired non-neoplastic livers and decreased survival for the high FN group, there was no significant difference in vascular invasion status according to FN expression status. The discrepancy between our data and the TCGA analysis results could be explained by the fact that RNA sequencing measures average RNA levels across the entire tissue, while immunohistochemistry reveals the distribution and localization of proteins within the tissue. On immunohistochemistry, expression in the membranes of hepatocytes or in sinusoidal endothelial cells was most significant, suggesting that for FN the localization and distribution of protein expression could be more relevant than the level of expression. In addition, post-transcriptional regulation of protein levels may exert a more significant influence than RNA expression levels. To confirm this, further research analyzing the correlation between immunohistochemistry and RNA expression in the same tissue samples is necessary. A limitation of this study is that it is a retrospective cohort study performed on archival formalin-fixed, paraffin-embedded tissues from resected HCC specimens, and serum samples were not available for serum FN analysis. A prospective study would be necessary to correlate tissue FN expression with serum FN levels. In conclusion, FN expression was associated with MVI and aggressive clinicopathological parameters in HCC; thus, FN may be a potential biomarker for an aggressive group of HCC with MVI, especially in the biopsy setting, and a potential molecular target for therapy.
|
A novel methodology for epidemic risk assessment of COVID-19 outbreak | fa23dec9-7f46-45a3-9659-e195d0c7bb8b | 7935987 | Preventive Medicine[mh] | Predicting the future development of a natural phenomenon is one of the main goals of science, but it always remains a great challenge when dealing with an epidemic. This proved to be particularly true in the case of the COVID-19 global pandemic that the world has been suffering since January 2020. SARS-CoV-2 is a novel coronavirus, initially announced as the causative agent of pneumonia of unknown etiology in Wuhan city, China. The genome sequence is related to a viral species named severe acute respiratory syndrome (SARS)-related CoV. This viral species also comprises viruses detected in rhinolophid bats in Europe and Asia , . The mechanisms of the immunological response to the infection are only partially known; however, dysregulation of the immune system is very likely responsible for worse outcomes, especially in patients with pre-existing respiratory or systemic diseases . Most coronavirus infections are mild and self-limiting. Therefore, especially in the early stages of an epidemic, it can be misleading to estimate the real spread of the virus based only on hospital and general practitioner reports. Moreover, such reports vary according to how measurements are performed, since the number of tests is very often related only to the number of symptomatic patients. Despite all this, the large amount of official data published in recent months, and updated daily , , has fostered the development of several mathematical models, which are fundamental to understanding the possible evolution of an epidemic and to planning effective control strategies – . However, due to the incompleteness of the data and to the intrinsic complexity of our globalized world, predicting the evolution, the peak or the end of the pandemic is a very difficult challenge , . In this paper we propose a different approach aiming, instead, at evaluating the a-priori risk of an epidemic, in particular the one caused by COVID-19. It can also prove helpful in setting sound strategies to prevent or decrease the impact of future epidemic waves. The COVID-19 outbreak officially started in China in January 2020, although the virus had probably already been circulating in the country since late October 2019, according to a recent report . In Italy, the first infected Italian patient was officially detected on the night of February 20 in Codogno (Lombardia), even though a recent study by the Lombardia Region reveals that more than 1000 positive cases were already present (but not tested) in that region in the second half of January or even earlier . Moreover, at the end of January, a couple of Chinese tourists coming from Wuhan were hospitalized in Rome (Lazio) after testing positive for the infection. This proves that, in Italy, we have had at least two official starting points of the COVID-19 outbreak, one in the north and one in the central part of the country , thus raising questions about the reasons for the faster diffusion of the virus in the northern regions with respect to the central ones. Then, on March 9, a period of strict lockdown was imposed by law in order to contain the rise of the contagion. After the end of the lockdown, at the beginning of May, Italian people were able to travel again with no restrictions: most of them went south, either to return home or for vacation.
However, the reopening of the country did not change the markedly uneven impact that the pandemic has continued to have across Italy. In fact, at the beginning of autumn 2020, when the second epidemic wave arrived in Italy, the same northern regions (Lombardia, Emilia Romagna, Piemonte, Veneto) that had suffered most during the first wave still appeared to be the most affected, compared with the central and southern regions, in terms of severe cases and deaths . To examine this asymmetric impact of Covid-19 on the various Italian regions in more detail, let us look at the official data released by the Italian Ministry of Health on April 2, just before the epidemic peak, and on July 14, at the end of the first wave in Italy. We report in Fig. a,b the apparent case fatality rate. It is quite high in the northern Italian regions, about 10–14%, compared to the central and southern ones, where it is about 1–4%. In general, these estimates are higher than those observed in other parts of the world, e.g., South Korea and Japan (1.7% and 2.6%, respectively) . This can be explained by the fact that official data underestimate the true number of infected people, as shown in several recent studies. For example, in March, at least 60% of infected individuals were asymptomatic and therefore difficult to detect . The testing strategies adopted in Italy, especially in March and April, generally consisted of testing only people showing severe symptoms, in particular those aged over 65. Therefore, daily official records depend heavily on the number of tests performed, resulting in a sample biased towards older patients. On the other hand, the number of officially reported deaths due to Covid-19 also seems to be considerably underestimated, since many elderly people most probably died in their homes or in nursing homes without having been tested for Covid-19. Thus, to obtain a more reliable indicator of the damage caused by SARS-CoV-2, it is convenient to look at the excess total mortality observed in Italy with respect to the average of the past five years, as reported in official data provided by ISTAT, the Italian Institute of Statistics , and shown here in Fig. c. The figure shows that the impact of the pandemic has been much more dramatic than official Covid-19 data suggest. Further, it is also clear that regions in the North of Italy have been most affected by the pandemic, even after the March–April lockdown. The approach proposed in this paper offers a possible explanation for the observed differences in the diffusion and severity of the disease, based on a series of cofactors that differentiate the Italian regions in various respects. In particular, the methodology we introduce, based on Crichton's triangle , , evaluates the epidemic risk index of the various Italian regions in terms of several factors, such as air pollution, population mobility, winter temperature, housing concentration, healthcare density, and population size and age, which can be quantified using available historical data. The rationale behind the selection of these factors, as explained in the Methods section, relies on the literature, on the easy accessibility of statistical spatial data and on their uneven distribution among the Italian regions.
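As a minimal illustration of the two impact measures discussed above (apparent case fatality rate and excess total mortality), the following Python sketch shows how they could be computed from regional tables; the file names and column labels (e.g. `covid_region.csv`, `deaths_2015_2019`) are hypothetical placeholders, not the actual ISTAT or Ministry of Health formats.

```python
import pandas as pd

# Hypothetical regional table with cumulative Covid-19 counts at a given date:
# columns: region, cases, deaths
covid = pd.read_csv("covid_region.csv")

# Apparent case fatality rate = reported deaths / reported cases (biased by testing strategy).
covid["apparent_cfr"] = covid["deaths"] / covid["cases"]

# Hypothetical all-cause mortality table:
# columns: region, deaths_2020, deaths_2015_2019 (total over the five reference years).
mortality = pd.read_csv("mortality_region.csv")

# Excess mortality = observed 2020 deaths minus the 2015-2019 average for the same period.
mortality["excess_deaths"] = mortality["deaths_2020"] - mortality["deaths_2015_2019"] / 5.0

summary = covid.merge(mortality[["region", "excess_deaths"]], on="region")
print(summary.sort_values("excess_deaths", ascending=False).head())
```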
These factors have then been combined to construct a reliable a-priori epidemic risk index for Italy, which has been compared with the impact of the current COVID-19 outbreak recorded at two moments: one close to the epidemic peak and the other at the end of the first epidemic wave. As we will show hereafter, the Italian regions most affected by the pandemic (in terms of total cases, patients in intensive care units (ICU) and deaths) are also those at highest risk, combining a higher propensity to spread the virus with a greater vulnerability of the population to the damage of the disease. Furthermore, we will show that our epidemic risk index also fits quite well the officially available data on seasonal flu in Italy for 2019–2020 . Finally, we will propose a theoretical policy model, with concrete examples, to design strategies aimed at reducing both the risk and the impact of new epidemic waves before they occur. Identification of the risk variables and their correlations with the COVID-19 damages We have investigated a series of factors contributing to the risk of an epidemic diffusion and its impact on the population. Among many possible variables, we selected the following: mobility index, housing concentration, healthcare density, air pollution, average winter temperature and age of population. In paragraph 1 of the Methods section we motivate the choice of these variables (mainly based on the epidemics literature and on features of the COVID-19 outbreak), show the related data (see Table ) and explain the adopted normalization. The first step is, of course, to estimate to what extent the chosen normalized variables individually correlate with the main impact indicators of the COVID-19 epidemic, i.e., total cases and total deaths detected in each Italian region, cumulated up to July 14, 2020 , when the first epidemic wave seemed to have finished, and the intensive care occupancy recorded on April 2, 2020 , when the epidemic peak was reached. In the first two rows of Fig. , from panel (a) to panel (f), the spatial distributions of the six risk indicators, multiplied by the population of each region, are reported as chromatic maps and thus can be visually compared with the analogous maps of the three impact indicators, panels (g), (h) and (i) in the third row. As detailed in Table in paragraph 2 of the Methods section, pairwise correlations between risk indicators are, with a few exceptions, quite weak; furthermore, in Table , the results of the linear least-squares fit of each individual risk indicator to the damages are reported. We found correlation coefficients ranging from 0.71 to 0.96, always higher than those observed as a function of the population, which can be considered the null model; however, the relative quadratic errors remain quite high (from 0.26 to 0.62). This suggests that a suitable combination of risk indicators could better capture the risk associated with each region. In the next paragraph, we propose a risk assessment framework aimed at this goal. Definition of a risk assessment framework and calibration with COVID-19 data Conventional risk assessment theory relies on "Crichton's Risk Triangle" , , shown in panel (l) of Fig. . In this framework, risk is evaluated as a function of three components: Hazard, Exposure and Vulnerability.
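As a concrete sketch of the single-indicator fits reported above (before detailing the components of the risk triangle), the snippet below regresses each population-weighted indicator against an impact indicator and reports the Pearson correlation and a relative quadratic error; the data frame and column names (`indicators.csv`, `total_cases`, etc.) are hypothetical placeholders, and the error definition shown is only one possible choice, not necessarily the one used in the paper.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical regional table: one row per region, with normalized indicators already
# multiplied by population, plus an impact column (e.g. cumulative total cases).
df = pd.read_csv("indicators.csv")
indicator_cols = ["mobility", "housing", "healthcare", "pollution", "temperature", "age"]

for col in indicator_cols:
    x, y = df[col].to_numpy(), df["total_cases"].to_numpy()
    fit = stats.linregress(x, y)                          # least-squares line y = a*x + b
    y_hat = fit.slope * x + fit.intercept
    rel_err = np.sum((y - y_hat) ** 2) / np.sum(y ** 2)   # one possible relative quadratic error
    print(f"{col:12s}  r = {fit.rvalue:.2f}  relative quadratic error = {rel_err:.2f}")
```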
Hazard is the potential for an event to cause harm (e.g., earthquake, flooding, epidemics); Exposure measures the amount of assets exposed to harm (e.g., buildings, infrastructures, population); Vulnerability is the harm proneness of assets if exposed to hazard events (e.g., building characteristics, drainage systems, age of population). The risk is present only when all of the three components co-exist in the same place. Used for the first time in the insurance industry , this approach has been extended to assess spatially distributed risks in many fields of disaster management, such as those related to climate change impact – and earthquakes . In the present paper, we consider Hazard as the degree of diffusion of the virus over the population of an Italian region (influenced by a set of factors, related to spatial and socio-economic characteristics of the region itself); Exposure is the amount of people who might potentially be infected by the virus as a consequence of the Hazard (it should coincide with the size of the population of the region); Vulnerability is the propensity of an infected person to become sick or die (in general, it is strongly related to the age and pre-existing health conditions prior to infection). The combination of Vulnerability and Exposure provides a measure of the absolute damage (i.e., the number of ill people due to pathologies related to the virus in the region), which we called Consequences. In paragraph 3 of Methods section we propose two models that differ in the way the risk indicators are aggregated into the three components of the Crichton’s risk triangle. In particular, we consider the E_HV model, where the effect of Hazard and Vulnerability are combined in a single affine function of the six indicators, and the E_H_V model, where Hazard and Vulnerability are considered as affine functions of, respectively, mobility index, housing concentration and healthcare density, on one hand, and air pollution, average winter temperature and age of population on the other hand (see Fig. (m) for a summary). In both models the Exposure is represented by the population of each region. Furthermore, two versions of each model have been considered: an optimized one, where the weights of the risk indicators are obtained through a least-square fitting versus real COVID-19 data, and an a-priori one, where all the weights are assumed to be equal. As shown in Tables and of Methods section, models based on data fitting perform better, both in terms of relative mean quadratic error and correlation coefficient, as expected. In particular, the E_H_V model fits the best. Furthermore, in agreement with the strong correlation of the variables with the targets, most coefficients are positive. Indeed, all coefficients obtained by fitting the number of cases and the intensive care occupancy are positive, and only one negative coefficient appears in each model, when fitting the number of deceased. However, the numerical value of the coefficients strongly depends on both models and targets, making these models not very robust. On the other hand, the a-priori models are independent of the targets, depending only on the choice of the variables we decided to include in the risk evaluation. 
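To make the two aggregation schemes concrete, here is a minimal sketch of their a-priori (equal-weight) versions, assuming the six indicators have already been normalized and that Exposure multiplies an equal-weight combination of the indicators; the exact affine coefficients of the optimized versions are not reproduced, and all names and numerical values are illustrative only.

```python
import numpy as np

# Normalized indicators for one region (illustrative values).
hazard_vars = {"mobility": 0.8, "housing": 0.6, "healthcare": 0.7}        # diffusion-related
vulnerability_vars = {"pollution": 0.9, "temperature": 0.5, "age": 0.7}    # harm-related
population = 10_000_000                                                    # Exposure E

def risk_e_hv(h_vars, v_vars, population):
    """E_HV model: Hazard and Vulnerability merged into a single equal-weight combination."""
    combined = np.mean(list(h_vars.values()) + list(v_vars.values()))
    return population * combined

def risk_e_h_v(h_vars, v_vars, population):
    """E_H_V model: separate equal-weight Hazard and Vulnerability, combined multiplicatively."""
    hazard = np.mean(list(h_vars.values()))
    vulnerability = np.mean(list(v_vars.values()))
    consequences = population * vulnerability          # C = E * V
    return hazard * consequences                       # R = H * C = H * E * V

print(risk_e_hv(hazard_vars, vulnerability_vars, population))
print(risk_e_h_v(hazard_vars, vulnerability_vars, population))
```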
Among the two considered a-priori models, where all coefficients assume the same value, we observe that the E_H_V model produces a smaller error with respect to real COVID-19 data and better correlation coefficients than the E_HV model, thus justifying the multiplicative approach which defines the risk intensity in terms of the product between Hazard and Vulnerability (we used data at April 2, 2020 for this preliminary analysis but similar results would be obtained using data at July 14, 2020). Moreover, the aggregation of risk indicators in the three components of the E_H_V model follows better our motivations to choose those indicators (as explained in Methods, paragraph 1). Validation of the a-priori E_H_V model on COVID-19 data Once we established the robustness of the a-priori E_H_V model, let us now build the corresponding regional risk ranking and validate the model with the regional COVID-19 data as a case study. In particular, following the scheme of Fig. (m), by multiplying Exposure and Vulnerability for the k-th region, we first calculate the Consequences (C_k = E_k · V_k, k = 1,…,20). Then, by multiplying Hazard and Consequences, we obtain the global risk index R_k for each region (R_k = H_k · C_k, k = 1,…,20). In this respect, the risk index can be interpreted as the product of what is related to the occurrence of causes of the virus diffusion in a given region (H_k) and what is related to the severity of effects on people (C_k). In Fig. a we can appreciate the predictive capability of our model by looking at the a-priori risk ranking of the Italian regions, compared with the COVID-19 data , in terms of total cases (cumulated), deaths (cumulated) and intensive care occupancy (daily, not cumulated), updated both at April 2, 2020 and July 14, 2020. The values of R_k have been normalized to their maximum value, so that Lombardia results to have R_k = 1. The average of R_k over all the regions is R_av = 0.15 and can be considered approximately a reference level for the Italian country (even if, of course, it has only a relative value). As already explained, due to the intrinsic limitations of the official COVID-19 data, it is convenient to make the comparison at the aggregate level of groups of regions, without expecting to predict the exact rank within each group. Let us therefore arrange the 20 regions in four risk groups, each one characterized by a different color and ordered according to decreasing values of the risk index: very high risk (0.4 < R_k ≤ 1, in red), high risk (0.2 < R_k ≤ 0.4, in brown), medium risk (0.03 < R_k ≤ 0.2, in beige) and low risk (R_k ≤ 0.03, in pink). With this choice, our model is clearly able to correctly identify the four northern regions where the epidemic effects have been far more evident, in terms of cases, deaths and intensive care occupancy: the first in the ranking, i.e. Lombardia (whose risk score is about three times the second classified) and the group of the three regions immediately after it, Veneto, Piemonte and Emilia Romagna (even if not in the exact order of damage).
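The construction of the regional risk index and the four-group classification just described can be sketched in a few lines of Python; the product structure R_k = H_k · E_k · V_k, the normalization to the maximum, and the thresholds (0.4, 0.2, 0.03) follow the text above, while the input table and its column names are hypothetical.

```python
import pandas as pd

# Hypothetical per-region table with the aggregated components:
# columns: region, hazard (H_k), population (E_k), vulnerability (V_k)
df = pd.read_csv("regions.csv")

df["consequences"] = df["population"] * df["vulnerability"]      # C_k = E_k * V_k
df["risk"] = df["hazard"] * df["consequences"]                   # R_k = H_k * C_k
df["risk"] /= df["risk"].max()                                   # normalize so that max(R_k) = 1

def risk_group(r):
    # Thresholds taken from the text: very high / high / medium / low risk.
    if r > 0.4:
        return "very high"
    if r > 0.2:
        return "high"
    if r > 0.03:
        return "medium"
    return "low"

df["group"] = df["risk"].apply(risk_group)
print(df.sort_values("risk", ascending=False)[["region", "risk", "group"]])
print("reference level R_av =", round(df["risk"].mean(), 2))
```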
Quite good agreement can be observed also for the other two groups: only for Sardegna do the effects on both total cases and deaths seem to have been slightly overestimated (its insularity might play a role), while for two other regions, Umbria and Valle d'Aosta, some impact indicators have been slightly underestimated. Notice that the proposed risk classification seems quite robust, since it holds both near the April peak and at the end of the first wave, in July, when the intensive care occupancy of the majority of the regions was zero. In Table , reported in Methods, a further analysis of the robustness of this classification has been performed by eliminating single indicators, one by one, from the risk index definition: the results show that the position of some regions changes slightly inside each group, but the composition of the four risk groups remains mostly unchanged, with just a few exceptions that worsen the agreement with the impact indicators shown in Fig. a. This confirms the advantage of including all indicators in the risk index. The clear separation of northern regions from central and southern ones is also confirmed in the bottom part of Fig. , where the a-priori risk color map, in panel (c), is compared with the map of COVID-19 total cases in July, panel (b), and the map of serious cases and deaths of the 2019/20 seasonal flu in Italy, panel (d) (ISS data ). The agreement is clearly visible. In Fig. we show the correlations between the a-priori risk index and the three main impact indicators related to the outbreak, i.e. the total number of cases (a) and the total number of deaths (b), cumulated up to July 14, 2020, and the intensive care occupancy (c), registered at April 2, 2020. For each plot, a linear regression has been performed, with Pearson correlation coefficients always greater than or equal to 0.97, indicating a strong positive correlation. On the right of each plot we report the corresponding percentages of damage observed in the three Italian macro-regions (North, Center and South); see the geographic map (d). Also in this case the correlation is evident, if compared with the percentage of cumulated a-priori risk associated with the same macro-regions (e). Another interesting way to visualize these correlations is to represent the a-priori risk index through its two main aggregated components, Hazard and Consequences, plotting each region as a point of coordinates (H_i, C_i) in the H × C plane. This Risk Diagram is reported in Fig. a, where the points are also colored according to the corresponding risk group of Fig. . It is evident that the iso-risk line described by the equation C = R_av / H (where R_av = 0.15 is the average regional risk value) correctly separates the four most damaged and highly risky northern regions (plus Lazio) from all the others. The value of the risk index is reported in parentheses next to each region name. As shown in Fig. b, where the ranking of the Italian regions has been disaggregated for both Hazard and Consequences, it is interesting to notice that some regions (such as Friuli, Trentino or Valle d'Aosta) exhibit high values of Hazard and quite low values of Consequences, while for other regions (such as Campania or Piemonte) the opposite is true. See also the colored geographic maps in Fig. c,d for a visual comparison.
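A minimal matplotlib sketch of this Risk Diagram, assuming the per-region Hazard and Consequences values are already available in a table with hypothetical column names, could look as follows; the iso-risk curve C = R_av / H is drawn exactly as defined above.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical table: region, hazard (H_i), consequences (C_i = E_i * V_i), risk (R_i)
df = pd.read_csv("risk_components.csv")
r_av = df["risk"].mean()                       # average regional risk, about 0.15 in the paper

fig, ax = plt.subplots()
ax.scatter(df["hazard"], df["consequences"])
for _, row in df.iterrows():
    ax.annotate(row["region"], (row["hazard"], row["consequences"]), fontsize=7)

# Iso-risk curve C = R_av / H separating high-risk regions from the others.
h = np.linspace(df["hazard"].min(), df["hazard"].max(), 200)
ax.plot(h, r_av / h, linestyle="--", label=f"C = R_av / H (R_av = {r_av:.2f})")

ax.set_xlabel("Hazard H")
ax.set_ylabel("Consequences C")
ax.legend()
plt.show()
```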
This confirms that it is necessary to aggregate such two main components in a single global index to have a more reliable indication of the regional a-priori risk. Let us close this paragraph by showing, in Fig. , three sequences of the geographic distribution of the total cases (a), total number of deaths (b) and current intensive care occupancy (c) as a function of time, from March 9 to July 14, 2020. These sequences are compared with the geographic map of the a-priori risk level (the bordered image on the right in each sequence), the latter being independent of time. In all the plots, damages seem to spread over the regions with a variable intensity (expressed by the color scale) quite correctly predicted by our a-priori risk analysis. The intensive care occupancy map compared with the risk map is dated April 2, since the occupancy on July 14 is zero almost everywhere (with the exception of Lombardia and a few other regions). In the next paragraph, the methodology proposed in this paper, and in particular this representation in terms of risk diagram, will be used to build a policy model aimed at mitigating damages in case of an epidemic outbreak similar to the COVID-19 one. A proposal for a policy protocol to reduce the epidemic risk We have seen how the risk can be thought as composed in two components, one related to the causes of the infection diffusion and the other to the consequences. In this paragraph we will interpret the consequences in terms of protection and required support to people with the goal of improving the social result and/or reducing the economic cost. It is evident that enhancing the capability of the healthcare system appears to be the most important action: basically, the insufficient carrying capacity creates the emergency. Beyond specific factors explained above, the epidemic crisis in Lombardia essentially showed a breakdown of its healthcare system, caused by high demand rate for hospital admissions, long permanence times in intensive care, insufficient health assistance (diagnosis equipment, staff, spaces, etc.). Previously illustrated data provide a positive analysis of an epidemic disease (i.e., how things are, in a given state of the world). The normative approach here described presents a viable framework to assess possible policy protocols. Several variables affecting the diffusion of an infection can be looked at as suitable policy instruments to manage both the spreading process and the stress level to the healthcare system of a given district (such as a country, a region, an urban area, etc.). Following the evidence suggested by data, we propose a theoretical model (whose details are presented in the Methods section, paragraph 4) based on two independent variables influencing the level of risk, namely the infection ratio, i.e., the proportion of infected individuals over the total population, and the number of per capita hospital beds, as a measure of the impact of consequences caused by the spreading of the disease. We adopt an approach based on a standard model of economic policy, in which a series of instruments explicitly affecting the infection ratio and the per capita hospital beds endowment can be used to approach the target, i.e., the minimization of the risk level. A similar rationale, covering other topics, can be found in Samuelson and Solow (1960) and builds upon a widely consolidated literature which dates back in time – (among many others). 
Although the analysis concerns a collective problem, the model proposed here describes elements of a possible decision process followed by an individual policy-maker, thus remaining microeconomic in nature. Panel (a) in Fig. shows the risk function, while the right panel illustrates the family of its convex contours for a finite set of risk levels (limited for graphic convenience). Panel (b) in Fig. replicates the meaning of Fig. a by translating the consequences indicated by the data into the required per capita hospital beds, while showing that the position of each iso-risk curve corresponds to the different actual composition of the scenario at hand. We assume a unique care strategy based on the structural carrying capacity of the healthcare system, defined as the available number of per capita hospital beds. Such a carrying capacity derives from the health expenditure G_H, which is set to a level considered sufficient. Such a choice is based on political decisions and is reasonably inferred from past experience, structural elements of the population such as age and territorial density, etc. A part of the deliberated budget is dedicated to setting up intensive care beds, as an advanced assistance service provision. During an emergency, possibly deriving from an epidemic spreading, the number of beds can suddenly prove insufficient. In other words, it is possible that the number of hospital beds required at a certain point is greater than the current availability. In the model, we assume the number of hospital beds, H, and the proportion of intensive care beds, α, to be exogenously determined by the policy-maker who fixes G_H. The actual carrying capacity is shown as a function of the infection ratio, x, computed as the infected population over the total, as shown in panel (c) of Fig. and detailed in paragraph 4 of Methods. Changes in the proportion of intensive care beds over the total cause, instead, a variation in the slope of the line (which becomes steeper for reductions in the proportion of intensive care beds). Finally, changes in the overall expenditure shift the line while keeping the same slope (upwards for increases in expenditure). In particular, it is worth noticing that the political choice of the ratio α = H_H/H may imply that the overall capacity to assist the entire population is not guaranteed (i.e. the intercept on the x axis might be less than 1). A direct comparison of elements contained in panels (a-b) and (c-d) of Fig. provides a quick inspection of the policy problem, focused on controlling the epidemic spreading. The constraint should be considered as a dynamic law, but since the speed of adjustment is reasonably low, we will proceed from a comparative statics perspective, in which different strategies can be compared starting from different, static, scenarios. Further, by definition, an emergency challenges the usual policy settings, since the speed of damages is greater than that of policy tools. In panel (e) of Fig. , a hypothetical country has a given carrying capacity to sustain the risk level represented by the iso-risk curve.
Without immediately available funds to increase the carrying capacity, the main policy target can be described as the transposition of the iso-risk curve towards the bottom-left: the closer the curve to the origin, the higher the satisfaction for the community. Secondly, the relationship between the curve and the line means that, until the curve touches the line, the policy maker has a measure of how far the problem is out of control, given by the distance between the curve and the constraint. Third, policies may try to transpose the curve to lower levels or, equivalently, the constraint upwards (with or without modification of the slope). A minimal result is reached if both are at least tangent, as depicted in panel (f) of Fig. . Whenever such a tangency condition has been reached, the highest infection rate that the given health care system can sustain has been found. Further policy actions are possible to approach a lower iso-risk curve or to save resources and/or re-allocate them differently. A policy can be considered satisfactory when any of the points belonging to the arc TT' is reached, e.g. point L. Alternative policies are neither equivalent nor do they require the same actions, and the policy-maker has to choose actions with reference to the actual data collected by its own country. Points F and G, although carrying the same risk level as E, still represent out-of-control positions. Different regions of the plot have different signaling power: at point F, the infection rate is low and thus very difficult to reduce further. In such a case, for example, it would be advisable to suggest health protocols that improve people's safety. At point G, on the contrary, the infection rate is so high that limits on social interaction appear much more urgent than medical protocols. The right mix of demand-side and supply-side policies to adopt is a decision of a political nature. A distinction can be made by saying that demand-side policies are devoted to reducing the number of newly infected people (by means of restrictions on movement, quarantine regulations, rules of conduct, etc.) and their effects lower the iso-risk curves; supply-side policies are, instead, aimed at increasing the carrying capacity of the system (by means of expenditure on the healthcare system, increases in dedicated personnel and intensive care beds, in-house medical protocols) and their effects shift the constraint representing the carrying capacity of the system. Politics has, then, to decide when the risk is low enough or the constraint is sufficiently high. Specific calibration of the model will allow, in forthcoming research, a detailed analysis of policy implications, by considering the actual conditions and risk factors of specific districts, thus providing the policy-maker with a toolbox for normative directions. For instance, the model can be read to analyze differences in proposed actions in Lombardia and Veneto, and in other regions or countries.
We have shown how a data-driven epidemic risk analysis, accounting for a proper combination of a set of cofactors, can contribute to understanding the highly inhomogeneous spread of COVID-19 in Italy during the first epidemic wave (from March 2020 to June 2020), in terms of the different a-priori risk exposure of different geographical areas. Regions such as Lombardia, Veneto, Piemonte and Emilia Romagna indeed occupy the first positions of our proposed a-priori risk ranking, which consists of three main components, Hazard, Exposure and Vulnerability, related, directly or indirectly, to the probability of a virus spreading and to its harming ability. We have evaluated these three components using historically available data on various factors that can contribute to the territorial risk. Then, assuming the existing data are reliable, we compared our risk map with real impact indicators both close to the epidemic peak and at the end of the first epidemic wave. We are aware that the information on the total number of cases can be heavily underestimated and is strictly dependent on the testing strategies. For this reason, we also used the total number of deaths and the intensive care occupancy for the comparison. In all cases we were able to correctly identify four groups of regions where the observed epidemic effects match the a-priori risk level. In the second part of the paper, we then advanced a theoretical policy model that provides a decision-making toolbox for facing a phenomenon as complex as an epidemic emergency. In what follows, we provide an example illustrating the application of the model, following the steps of its practical implementation. A policy maker facing an emergency outbreak should: (1) detect the current Risk Profile, i.e., compute the infection ratio over the population, x, and measure the demand for hospital beds, b; (2) measure the current Carrying Capacity of the healthcare system (supply of hospital beds), z; (3) check the sustainability of the epidemic burden and assess the costs of possible interventions; (4) apply the chosen policy; (5) evaluate the results and, if necessary, repeat.
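A minimal sketch of this five-step loop is given below; the linear carrying-capacity constraint z(x) = z_H − (1 − α)·x is reconstructed from the numerical values reported in the example that follows, and the function names and the coarse recommendations returned are illustrative assumptions rather than part of the original model.

```python
def carrying_capacity(x, z_h, alpha):
    """Per capita hospital beds the system can allocate at infection ratio x.
    Reconstructed linear constraint: z(x) = z_H - (1 - alpha) * x, floored at zero."""
    return max(z_h - (1.0 - alpha) * x, 0.0)

def assess(n_infected, n_hospitalized, population, beds, icu_beds):
    # Step 1: current risk profile.
    x = n_infected / population          # infection ratio
    b = n_hospitalized / population      # demanded per capita hospital beds
    # Step 2: current carrying capacity.
    z_h = beds / population              # per capita bed endowment
    alpha = icu_beds / beds              # share of intensive care beds
    z = carrying_capacity(x, z_h, alpha)
    # Step 3: sustainability check and a coarse recommendation (illustrative only).
    if b <= z:
        action = "sustainable: monitor the epidemic pace, no immediate action needed"
    else:
        gap_beds = (b - z) * population
        action = (f"out of control: fill a gap of about {gap_beds:.0f} beds "
                  f"(raise expenditure and/or alpha) or restrict contacts to lower x")
    return x, b, z, action

# Steps 4-5 (apply the chosen policy, then re-evaluate) would simply call assess()
# again after the intervention has changed the inputs.
```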
To see the procedure in action, let us now go through a numerical example with hypothetical data, which can easily be replaced with actual data from any country or region, if available. Imagine a district with a population of 10,000 people, hit by a pandemic for which no therapy is known. Assume, further, that the healthcare system has 2500 hospital beds (a very generous endowment), of which 1000 are intensive care beds. Consider a first case in which 1500 persons are infected and 1200 of them present symptoms and need hospital treatment. Thus, in terms of the theoretical model presented in Methods, the values are set as: n = 10,000, H = 2500, H_H = 1000; then α = 0.4, h = 0.6 and z_H = 0.25. STEP 1) The infection ratio is currently x = 1500/10,000 = 0.15, and the evidence suggests that 80% of the infected population requires hospital treatment. Therefore, the actual estimate of the absorbed allowance (the demand for per capita hospital beds) is b = 1200/10,000 = 0.12. The point representing the country's risk profile in the proposed hazard-consequences plane would then have coordinates (0.15, 0.12), shown as point A in panel g) of Fig. . STEP 2) The line representing the carrying capacity of the healthcare system intercepts the vertical axis at z_H = 0.25 and its slope is h = (1 − α) = 1 − 1000/2500 = 0.6, as shown by the black line in panel g) of Fig. . It is worth noticing that, in this example, the healthcare system would not be able to handle infection ratios greater than 41.6%, as shown by the intercept on the horizontal axis. STEP 3) The policy maker can see that point A can be managed by means of the current carrying capacity: it lies below the line representing the constraint. Despite this apparently encouraging result, what comes next might depend on the speed of the epidemic spreading. This model is, however, not dynamic; it instead uses a comparative statics approach. Let us therefore consider two hypotheses. If the contagion is not spreading to a greater share of the population, the policy maker can reasonably decide to do nothing. Conversely, if the contagion is still in its ascending phase, the policy maker can (i) assess the pace of epidemic progression, (ii) measure the time remaining before the free allowance is used up and (iii) decide, correspondingly, whether or not to intervene. If the choice is "do nothing", STEP 4) and STEP 5) are not necessary. Consider now a different example, in the same district as before, in which the contagion has reached 3000 persons and 2500 of them need hospital treatment. In this case, the algorithm leads to the following. The infection ratio is x = 3000/10,000 = 0.3 and the actual estimate of the absorbed allowance (the demand for per capita hospital beds) is b = 2500/10,000 = 0.25. The point representing the country's risk profile in the hazard-consequences plane now has coordinates (0.3, 0.25), shown as point B in panel g) of Fig. and reported also in panel h). The line representing the carrying capacity of the healthcare system is the same as before, shown as the black line in panels g) and h) of Fig. . The policy maker can immediately see that the situation is out of control, since the constraint says that the healthcare system can allocate no more than z = 0.07 per capita beds if the infection rate reaches x = 0.3, while the situation in progress requires b = 0.25 per capita beds.
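Reusing the assess() sketch introduced above, the two scenarios of this example can be reproduced numerically; the printed figures follow directly from the numbers in the text, while the helper itself remains an illustrative reconstruction of the constraint.

```python
# Scenario 1: 1500 infected, 1200 requiring a bed.
x, b, z, action = assess(n_infected=1500, n_hospitalized=1200,
                         population=10_000, beds=2500, icu_beds=1000)
print(x, b, z)      # 0.15, 0.12, 0.25 - 0.6*0.15 = 0.16  -> b <= z, point A is manageable
print(action)

# Scenario 2: 3000 infected, 2500 requiring a bed.
x, b, z, action = assess(n_infected=3000, n_hospitalized=2500,
                         population=10_000, beds=2500, icu_beds=1000)
print(x, b, z)      # 0.30, 0.25, 0.25 - 0.6*0.30 = 0.07  -> b > z, point B is out of control
print(action)       # gap of about (0.25 - 0.07) * 10,000 = 1800 beds to fill, or reduce x
```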
This can easily be seen by comparing point B with the point on the constraint that has the same abscissa as B. Since b > z, the policy maker has to decide what to do in order to satisfy the excess demand. Moreover, the situation could in principle act as a further driver of the epidemic's diffusion, making policy intervention more urgent.

Different strategies are possible and, also in this case, the speed of contagion spreading matters in choosing appropriate actions. If, for example, the progression of the disease is decreasing or even constant, a viable decision (option 3.a) is to allocate resources to fill the gap in terms of hospital beds. Consequently, the expenditure is incremented by ΔH, represented by a parallel upward shift of the constraint, as the dashed line passing through point B. It is worth noting that, in principle, the deliberated expenditure could also exceed the required gap and set the constraint to a higher allowance. If, instead, the progression of the disease is increasing, the policy maker could decide (option 3.b) to also modify the proportion of intensive care beds (α), in order to face the probable growth in infected persons. This would make the constraint line flatter, while new expenditure would also be required to shift it upward to reach, at least, point B, as the dotted line passing through it.

Under both 3.a and 3.b, the policy maker can comparatively consider restrictive interventions aimed at reducing the infection ratio, while deciding on the expenditure needed to expand the endowment of hospital beds. Examples of such restrictive policies could be a forced closure of restaurants, gyms and cinemas. Such interventions would have the effect of shifting point B to the bottom-left, thus associating infection ratios with lower per capita bed requirements. The cost of additional hospital beds then has to be compared with the social cost of restrictions, in terms of tax revenues, required subsidies, unemployment and social uncertainty. Political preference and the availability of budget funds will guide the choice. Note that the effectiveness of such restrictive initiatives is estimated according to the presumed knowledge of social customs in the district at hand. In the example, a society where dining out is very frequent is very likely to respond well to a restriction of this type. It is worth noting that such choices, i.e., the amount of governmental expenditure on the healthcare system (influencing the vertical distance between the new and the former linear constraint), the details of the regulation imposing the restrictions, etc., are of a political nature and cannot be decided by the model. They will be tailored according to the political preferences of the policy maker. The chosen policy is applied and the consequences are measured, while the spreading continues at its pace (STEP 4). Evaluation and new measurements occur and the process starts again (STEP 5).

While preserving simplicity, the model is able to depict various scenarios according to actual data and can help design policy strategies fitting the situation at hand. In particular, the elements of the model can be populated by importing the data of a district (i.e., a region, a country, etc.) and following the presented algorithm to tailor the most adequate policy.
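To illustrate options 3.a and 3.b numerically, the following sketch (again using the hypothetical figures, not real data) computes the extra endowment ΔH needed to pass the constraint through point B, and how that requirement shrinks if the intensive-care share α is also raised.

```python
# Policy options at point B (x = 0.30, b = 0.25) for the hypothetical district.
n, H, HH = 10_000, 2500, 1000
x_B, b_B = 0.30, 0.25

def z_of(x, H, alpha, n):
    """Per capita carrying capacity: z = H/n - (1 - alpha) * x."""
    return H / n - (1 - alpha) * x

alpha = HH / H                                  # current intensive-care share = 0.4
gap = b_B - z_of(x_B, H, alpha, n)              # per capita shortfall at point B
delta_H = gap * n                               # option 3.a: extra beds, alpha unchanged
print(f"shortfall = {gap:.2f} per capita -> Delta H = {delta_H:.0f} beds")

# Option 3.b: raise alpha (flatter constraint), then add the (smaller) Delta H still required.
alpha_new = 0.6
gap_new = b_B - z_of(x_B, H, alpha_new, n)
print(f"with alpha = {alpha_new}: Delta H = {max(gap_new, 0) * n:.0f} beds")
```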
As explained above, political preference will guide the decision, both in terms of the chosen expenditure profile (i.e., whether to change H only or also α) and in terms of possible restrictions on society, such as different lockdown strategies (e.g., more drastic but targeted vs. more gradual but generalized). In all cases, the forced closure of socio-economic activities will serve as an ancillary tool aimed at supporting a potentially insufficient endowment of hospital beds, but the actual implementation relies on the ability and preferences of the Government.

In conclusion, our work is a first attempt to jointly consider the different factors that contribute to the evaluation of the a-priori epidemic risk in a geographical area. Better medical knowledge and data availability will be important to further refine and improve the proposed methodology, which could also easily be applied to other countries, provided that they make the necessary information accessible. Further studies will also deal with dynamic implications, thus providing more specific intuitions according to different evolutionary paths of contagion spreading.

Synthetic description of the risk indicators, population and data sources

This paragraph presents all the indicators we used, their rationale and the set of references supporting their selection. Table reports the data used for each region, their definition, unit of measure and relevant source. We included in the a-priori risk index only territorial or environmental factors that are unevenly distributed among the regions and easily available in national databases.

Mobility index

Commuting data are often used to correlate population mobility and the spreading of an infection. On the other hand, many recently published papers have monitored to what extent people are complying with issued travel restrictions and whether these are proving to be effective in reducing the spread of the Covid-19 epidemic. According to available data, the average trip rate of the mobile population in Italy is 2.50 per day and the average distance covered is 28.5 km per day. We characterize each region with a “mobility index”, defined as the regional average of the ratio between the sum of commuting flows (incoming and outgoing) for each municipality and its employed population. The data source is the Italian Ministry of Economic Policy Planning and Coordination.

Housing concentration

Urbanization increasingly affects the epidemiological characteristics of infectious disease. The close proximity of people in their short-range mobility and the tendency to use crowded public transport are amplified in compact and dense cities. We capture these circumstances with the “Housing Concentration”, measured as the ratio between the number of houses classified as "non-detached houses" and the total number of houses. The data source is the same database cited for the mobility index.

Healthcare density

Delayed hospital admission, misdiagnosis, unsuitable air conditioning systems, lack of segregation of infected patients and inter-hospital transfers may all contribute to what is commonly called a super-spreading event. It is worth noting that super-spreading events can occur in many situations where overcrowding in closed spaces favours the transmission of the infection. Nevertheless, our intention was to include in the a-priori risk index only territorial or environmental factors that are unevenly distributed among the regions and easily available in national databases.
We measure the potential occurrence of these events in the spreading of the infection by including the “Healthcare Density”, defined as the number of hospital beds per 10,000 inhabitants at the regional level. The data source is the website of the Ministry of Health ( http://www.dati.salute.gov.it/dati/dettaglioDataset.jsp?menu=dati&idPag=17 ).

Pollution

Long-term exposure to air pollution may be one of the most important contributors to fatalities caused by COVID-19 in Europe and in northern Italy. Particulate matter (PM) is able to penetrate deeply into the respiratory tract and increase the risk of respiratory diseases. According to the European Environment Agency (EEA Report No 10/2019), PM concentrations in 2016 were responsible for about 374,000 premature deaths in the EU-28. It has recently been shown that PM10 induces hyperactivation of the JAK/STAT protein family, which is associated with cell proliferation and survival. JAK/STAT is also hyperactivated by several cytokines generated during Covid-19 infection. For these reasons, a reinforcing mechanism between PM10 and Covid-19 infection can be hypothesized. Besides, very recent studies directly correlate the population exposed to particulate pollution with the contagion from COVID-19 and the consequent health damage. Based on these premises, we decided to include the annual average of the mean daily PM10 concentration as a factor influencing the vulnerability of people exposed to the infection. The data source is the WHO ( https://www.who.int/airpollution/data/cities/en/ ), which provides measures for urban background, residential, commercial and mixed areas for the period 2013–2016.

Temperature

Weather plays a role in the spread of 2019-nCoV, although this role is not fully established. Chan et al. (2011) report that a low-temperature and low-humidity environment may facilitate virus transmission in subtropical areas during the spring and in air-conditioned environments. It is also commonly accepted that cold weakens the defense barriers of the respiratory tract. We decided to include the average winter (from December 2016 to April 2017) daily mean temperature in each region as a factor potentially enhancing individual vulnerability. The source of the data is the Italian Ministry of Agriculture ( https://www.politicheagricole.it/ ).

Age of population

Most official data sources report more severe impacts of 2019-nCoV on elderly people, probably both because of an intrinsic weakness of their immune system and because of the co-existence of other chronic pathologies. Therefore, we use the ratio between the population over 60 and the total population to take this vulnerability factor into account, even if it shows only relatively small differences from one region to another. The data source is the same as for population, i.e. the ISTAT database ( www.istat.it/it/archivio/104317 ).

Population

Of course, as anyone who is infected might get ill, it is straightforward to use the total population of each region that could be affected by the infection as a measure of the risk exposure. We also use the population as a multiplying factor for each risk indicator when measuring its degree of correlation with the damage in each region (see the first paragraph in the Results section). About 43% of the population is concentrated in the five regions of Northern Italy, and one out of six Italians lives in Lombardia. Data on regional population are available in the ISTAT database ( www.istat.it/it/archivio/104317 ).
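To make the indicator definitions concrete, the short sketch below computes two of them for a fictitious region; every number in it is an invented placeholder, not the actual Italian data described above.

```python
# Illustrative computation of two regional indicators (all numbers invented).
# Mobility index: regional average over municipalities of
#   (incoming + outgoing commuters) / employed population.
municipalities = [
    {"in_flow": 1200, "out_flow": 900, "employed": 5000},
    {"in_flow":  300, "out_flow": 450, "employed": 1500},
    {"in_flow":   80, "out_flow": 120, "employed":  400},
]
ratios = [(m["in_flow"] + m["out_flow"]) / m["employed"] for m in municipalities]
mobility_index = sum(ratios) / len(ratios)

# Housing concentration: share of non-detached houses over all houses (regional totals).
non_detached, total_houses = 82_000, 100_000
housing_concentration = non_detached / total_houses

print(f"mobility index: {mobility_index:.2f}, housing concentration: {housing_concentration:.2f}")
```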
Comparison between single risk indicators and impact indicators

The seven risk indicators under consideration are listed below, together with their reference intervals:

Population $X_0 \in [126806, 9704151]$.
Mobility index $X_1 \in [0.74, 0.84]$.
Housing concentration $X_2 \in [0.80, 0.96]$.
Healthcare density $X_3 \in [29.6, 40.80]$.
Air Pollution $X_4 \in [18.09, 31.07]$.
Average Winter Temperature $X_5 \in [-2.29, 11.92]$.
Age of Population (fraction of over-60 individuals) $X_6 \in [0.22, 0.34]$.

These variables are suitably normalized between 0 and 1 as:

$$x_0 = \frac{X_0}{\max(X_0)}; \qquad x_i = \frac{X_i - \min(X_i)}{\max(X_i) - \min(X_i)}, \; i = 1,2,3,4,6; \qquad x_5 = \frac{\max(X_5) - X_5}{\max(X_5) - \min(X_5)}$$

where $\min(X_i)$ and $\max(X_i)$ are, respectively, the minimum and the maximum value assumed by each variable $X_i$ in its own reference interval. The new normalized variables are also dimensionless. Notice that the normalization is different for the population, since we want to avoid values equal to zero, and for the temperature, since, unlike all the other quantities, we expect the risk to increase as the temperature decreases.

The first test is to check possible pairwise correlations among the normalized indicators, with the exception of the population (whose correlation with many other indicators is quite obvious). The Pearson correlation coefficient for each pair is reported in the correlation matrix, see Table . As one can see, the majority of the indicators are weakly correlated. Noticeable exceptions concern the moderate positive correlations of some indicators, such as mobility and healthcare density, with the inverted temperature. These can be explained by observing that northern Italian regions are, on average, colder and have greater mobility and healthcare density than central and southern regions.

The second test is to check whether any of the six risk indicators $x_1, \ldots, x_6$, each considered separately, can fit any of the targets $T_l$, $l = 1, 2, 3$, i.e. our three impact indicators: cumulative number of cases, cumulative number of deceased and number of hospitalized in intensive care at April 2, 2020. For each variable $x_i$, $i = 1, \ldots, 6$, we consider a risk $R_{i,l} = x_0 \cdot \alpha_{i,l} \cdot x_i$, and each $\alpha_{i,l}$ is determined by matching the target $T_l$ in the least-squares sense. In particular, we perform a linear least-squares fit, minimizing the following quadratic error:

$$\epsilon_{il}^{2} = \sum_{k = 1}^{n} \left( T_{lk} - R_{i,l} \right)^{2}$$

for the $i$-th risk indicator with respect to the $l$-th impact indicator. In this expression, $n = 20$ is the number of regions and $T_{lk}$ denotes the impact indicator ($l = 1$, total cases; $l = 2$, number of deceased; $l = 3$, intensive care occupancy) for region $k$. The relative mean quadratic error is defined as

$$\varepsilon_{il}^{2} = \frac{\epsilon_{il}^{2}}{\sum_{k = 1}^{n} T_{kl}^{2}}.$$

The results are summarized in Table . It appears that each single parameter correlates with all three targets (respectively, number of cases, deceased and hospitalized in intensive care) well above the obvious correlation coefficient of the population, shown in the first column; however, the correlations are not strikingly high, and the mean quadratic error is not so small.
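As an illustration of this normalization and of the single-indicator fit, the following sketch runs the same steps on randomly generated stand-in values (the real regional indicators and targets are in the Tables), so only the procedure, not the numbers, is meaningful.

```python
# Min-max normalization and single-indicator least-squares fit (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_regions = 20
X = rng.uniform(size=(n_regions, 7))          # columns: X0 (population) ... X6
T = rng.uniform(size=n_regions) * 1000        # one impact indicator T_l (e.g. total cases)

x = np.empty_like(X)
x[:, 0] = X[:, 0] / X[:, 0].max()             # population: divide by the maximum
for i in (1, 2, 3, 4, 6):                     # standard min-max scaling
    x[:, i] = (X[:, i] - X[:, i].min()) / (X[:, i].max() - X[:, i].min())
x[:, 5] = (X[:, 5].max() - X[:, 5]) / (X[:, 5].max() - X[:, 5].min())  # temperature: inverted

# For each indicator i, fit R_i = alpha * x0 * xi to the target T in the least-squares sense.
for i in range(1, 7):
    design = x[:, 0] * x[:, i]                # x0 * xi, one regressor
    alpha = (design @ T) / (design @ design)  # closed-form one-parameter least squares
    resid = T - alpha * design
    rel_err = (resid @ resid) / (T @ T)       # relative mean quadratic error
    print(f"x{i}: alpha = {alpha:8.1f}, relative error = {rel_err:.3f}")
```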
Definition of the Risk Index and comparison among some models

The goal of this paragraph is to choose the best model for aggregating the risk indicators presented above within the framework of Crichton’s Risk Triangle (see the Results section). We observe that, in any model of this kind, the risk necessarily has to be proportional to the exposure, represented by the population. Therefore, we will assume that the risk $R$ is given by the product of the Exposure $E$ times a given function $F_{HV}$ of the other parameters, related to Hazard and Vulnerability:

$$R = E \cdot F_{HV}$$

We propose and compare the following models, in order to understand which one is best suited for a robust risk evaluation.

E_HV Linear Model

Here the effects of Hazard and Vulnerability are combined in a single affine function of the parameters. We assume a dependence of the risk of the form:

$$R_{E\_HV} = E \cdot F_{HV}, \qquad E = x_0, \qquad F_{HV} = c_{HV} + \alpha_1 x_1 + \cdots + \alpha_6 x_6$$

The coefficients $c_{HV}, \alpha_1, \ldots, \alpha_6$, in turn, can be: obtained by a least-squares fit; or assigned a priori, with $c_{HV} = 0$ and all the $\alpha_i$, $i = 1, \ldots, 6$, assumed to be equal.

E_H_V Multiplicative Model

Here, Hazard and Vulnerability are considered as affine functions of, respectively, $x_1, x_2, x_3$ and $x_4, x_5, x_6$. We assume that $F_{HV}$ is the product of Hazard and Vulnerability, i.e.:

$$R_{E\_H\_V} = E \cdot F_{HV} = E \cdot H \cdot V, \tag{1}$$
$$E = \alpha_0 x_0, \tag{2}$$
$$H = c_H + \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3, \tag{3}$$
$$V = c_V + \alpha_4 x_4 + \alpha_5 x_5 + \alpha_6 x_6. \tag{4}$$

Again, $c_H, c_V, \alpha_0, \ldots, \alpha_6$ can be: obtained by a least-squares fit; or assigned a priori, by setting $c_H = 0$, $c_V = 0$, and with all the $\alpha_i$, $i = 1, \ldots, 6$, assumed to be equal.
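A compact way to see the difference between the two aggregation rules is to write them as functions. The sketch below implements both with the a-priori equal-weight choice on a synthetic normalized indicator matrix; the values are stand-ins, not the Italian data, and the sketch is illustrative rather than the analysis code used in the paper.

```python
# The two aggregation models with a-priori (equal) weights; x is a normalized
# indicator matrix with columns x0 (population) ... x6, as in the previous sketch.
import numpy as np

def risk_E_HV(x, alphas=None, c_HV=0.0):
    """Linear model: R = x0 * (c_HV + sum_i alpha_i * x_i)."""
    if alphas is None:
        alphas = np.full(6, 1.0 / 6.0)        # equal a-priori weights
    return x[:, 0] * (c_HV + x[:, 1:7] @ alphas)

def risk_E_H_V(x, alphas=None, c_H=0.0, c_V=0.0, alpha0=1.0):
    """Multiplicative model: R = (alpha0*x0) * (c_H + a1 x1 + a2 x2 + a3 x3)
                                             * (c_V + a4 x4 + a5 x5 + a6 x6)."""
    if alphas is None:
        alphas = np.full(6, 1.0 / 3.0)        # equal weights, summing to 1 in each block
    hazard = c_H + x[:, 1:4] @ alphas[:3]
    vulnerability = c_V + x[:, 4:7] @ alphas[3:]
    return alpha0 * x[:, 0] * hazard * vulnerability

rng = np.random.default_rng(1)
x = rng.uniform(size=(20, 7))                 # stand-in for the normalized indicators
ranking = np.argsort(-risk_E_H_V(x))          # regions ordered by decreasing a-priori risk
print(ranking[:5])
```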
As before, we shall compare these four models (1.a, 1.b, 2.a and 2.b) against the three types of targets available, $T_l$ ($l = 1, 2, 3$), represented by our impact indicators: cumulative number of cases, cumulative number of deceased or number of hospitalized in intensive care, all registered at April 2, 2020. In particular, for models 1.b and 2.b we adopt a linear least-squares fit, while for determining the optimal parameters of models 1.a and 2.a we perform a nonlinear least-squares best fit, by trying to fit the total number of cases in each region up to April 2, 2020. Since in the E_H_V model the dependence of the risk on $E$, $H$ and $V$ is multiplicative, we may add two normalization conditions in order to avoid infinitely many solutions, for example:

$$\alpha_1 + \alpha_2 + \alpha_3 = 1, \tag{5}$$
$$\alpha_4 + \alpha_5 + \alpha_6 = 1. \tag{6}$$

For all the models, we minimize the error:

$$\epsilon_{l}^{2} = \sum_{k = 1}^{n} \left( T_{lk} - E_{k} \cdot F_{HV}\!\left( x_{1}^{(k)}, \ldots, x_{6}^{(k)}; \text{params} \right) \right)^{2}$$

with respect to the parameters. In this expression, $n = 20$ is the number of regions, $T_{lk}$ denotes the impact indicator ($l = 1$, total cases; $l = 2$, number of deceased; $l = 3$, intensive care occupancy) for region $k$, $E_k$ indicates the population of region $k$, and the function $F_{HV}$ depends on the model considered. The relative mean quadratic error $\varepsilon_l$ is defined as

$$\varepsilon_{l}^{2} = \frac{\epsilon_{l}^{2}}{\sum_{k = 1}^{n} T_{kl}^{2}}.$$

The nonlinear fit is obtained by minimizing the error function using the Levenberg–Marquardt algorithm of Matlab® (Optimization Toolbox™). Results of the best fit with the four models are summarized in Table , while the $\alpha$ coefficients of the fitting parameters, normalized so that the sum of their absolute values is 2, are reported in Table for both the E_HV (1.a) and E_H_V (2.a) models. Finally, Table reports a stability analysis of the risk ranking of the Italian regions obtained with the E_H_V a-priori model, performed by eliminating each one of the six risk indicators, $x_1, \ldots, x_6$, in turn.
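The original fits were carried out with Matlab's Levenberg–Marquardt routine. Purely as an illustration, the sketch below shows one way the constrained E_H_V fit could be set up in Python, imposing the normalization conditions (5) and (6) by eliminating α3 and α6; the data are synthetic stand-ins, and this is not the code used for the results reported in the Tables.

```python
# Nonlinear least-squares fit of the multiplicative E_H_V model (synthetic data).
# The constraints a1+a2+a3 = 1 and a4+a5+a6 = 1 are imposed by elimination.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = rng.uniform(size=(20, 7))                 # normalized indicators x0..x6 (stand-ins)
T = rng.uniform(size=20) * 1000               # one impact indicator (stand-in target)

def residuals(p, x, T):
    a0, cH, cV, a1, a2, a4, a5 = p
    a3, a6 = 1.0 - a1 - a2, 1.0 - a4 - a5     # normalization conditions (5) and (6)
    H = cH + x[:, 1] * a1 + x[:, 2] * a2 + x[:, 3] * a3
    V = cV + x[:, 4] * a4 + x[:, 5] * a5 + x[:, 6] * a6
    return T - a0 * x[:, 0] * H * V

p0 = np.array([T.max(), 0.0, 0.0, 1/3, 1/3, 1/3, 1/3])   # crude starting point
fit = least_squares(residuals, p0, args=(x, T), method="lm")
rel_err = np.sum(fit.fun**2) / np.sum(T**2)   # relative mean quadratic error
print(f"converged: {fit.success}, relative error: {rel_err:.3f}")
```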
The theoretical model for policy assessment

We propose a theoretical framework for discussing the policy problem. The risk of a community has been described above as depending on several components. We now adopt a simplification and consider the whole set of possible elements reduced to two aspects, namely the proportion of infected individuals over the total population, which we call here the infection ratio, $x$, and the impact of the consequences caused by the spreading of the disease, measured as the number of per capita hospital beds, $b$, required by the emergency situation. Without loss of generality, we assume, for the sake of simplicity, that the above-explained negative role played by hospitals as contagion-spreading factors can be neglected here.

Thus, we define such a simplified notion of risk as a $C^2$ function $R:(0,1) \times \mathbb{R} \to (0,1)$, determined by $x \in (0,1)$ and $b \in \mathbb{R}$, i.e., $R = f(x,b)$. Consistently with the previous analysis, we assume that $\partial R / \partial x > 0$ and $\partial R / \partial b > 0$ and, since the level of risk is subject to saturation, $\partial^2 R / \partial x^2 < 0$ and $\partial^2 R / \partial b^2 < 0$. The level of risk is the target variable that the policy maker tries to minimize, given the constraint constituted by the current carrying capacity, i.e., the endowment of hospital beds ($H$) financed by the expenditure in the healthcare system, $G_H$. In principle, it may be considered a dynamic variable, but we will proceed with a comparative-static analysis, presenting policy intuitions from different initial configurations. Thus, the constraint is the result of the political orientation of the Government. In particular, let the part of the global allowance dedicated to intensive care beds, $HH$, be the sole remedy to the epidemic and define $HH = \alpha H$, $\alpha \in (0,1)$.

The current carrying capacity of the healthcare system, i.e. the available number of hospital beds, is the function $Z:(0,1)^2 \times \mathbb{N} \times \mathbb{R} \to (0,1)$, defined as $Z = H - h\,x\,n$, where $h = (1 - \alpha)$ and $n$ is the population of the district under consideration. In per capita terms, it can be rewritten as $z = z_H - h x$, with $z = Z/n$ and $z_H = H/n$. Within a comparative-statics perspective, for any given couple $(H, \alpha)$ we can consider a reduced form of the carrying capacity constraint, which depends on the infection diffusion rate as a negatively sloped line in the (infection ratio, hospital beds) plane, where the convex contours of the risk function can also be drawn. It is worth noting that the model refers to the variable hospital beds in both demand, $b$, and supply, $z$, terms. All in all, the proposed framework matches required and available beds for each infection ratio. Policies may try to affect $x$ and can effectively tune $z$, by adjusting $\alpha$ and $G_H$, in such a way that the exploitation of available resources allows the minimization of $b$, attaining at least a tangency point between the capacity constraint and the risk level.

The model assumes that intensive care is the sole remedy to the epidemic. The fact that other effective protocols exist may have an effect on the slope of the linear constraint that represents the current carrying capacity of the healthcare system. In case other protocols exist, the model operates as described, but the line is translated downwards, other things being equal. In effect, other therapies or remedies would operate as an alternative to hospital beds, thus making the constraint less binding, i.e., reducing the allowance needed to face a given infection ratio.
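Two quantities implied by the constraint are worth writing out explicitly; this is a routine restatement of the definitions above, not an addition to the model: the per capita feasibility condition and the largest infection ratio the system can absorb.

```latex
% Feasibility and saturation of the carrying capacity z = z_H - h x.
\begin{align}
  b \;\le\; z(x) \;=\; z_H - h\,x
    &= \frac{H}{n} - (1-\alpha)\,x
    && \text{(demand within capacity)}\\[4pt]
  z(x_{\max}) = 0 \;\Longrightarrow\;
  x_{\max} &= \frac{z_H}{h} = \frac{H}{(1-\alpha)\,n}
    && \text{(horizontal intercept)}
\end{align}
```

With the hypothetical figures used earlier (H = 2500, HH = 1000, n = 10,000), the intercept is x_max = 0.25/0.6 ≈ 0.417, i.e. the 41.6% quoted in the worked example.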
Integration of Catholic Values and Professional Obligations in the Provision of Family Planning Services
This qualitative study received an exemption from the Colorado Multiple Institutional Review Board because it posed minimal risk to the participants. Informed consent was obtained as described later in this section. This study follows the Standards for Reporting Qualitative Research (SRQR) reporting guideline. Using an online REDCap survey, we recruited US-based obstetrician-gynecologists who self-identified as Catholic and queried them about demographic characteristics, practice settings, self-identified religiosity, and family planning services provided; participants received a $5 gift card. On the basis of their responses, we sought Catholic obstetrician-gynecologists with a range of practice patterns and were interested in interviewing 3 clinically relevant groups, defined as low if they only provided natural family planning, moderate if they provided additional contraceptives, and high if they provided abortion routinely. We focused recruitment of low practitioners using contact information available on the American Association of Pro-Life Obstetrician-Gynecologists website. To recruit high practitioners, we emailed the Fellowship in Family Planning listserv. We also posted a survey link to the Catholic Physicians and OB-GYN Moms groups on Facebook. We recruited survey participants and purposively sampled the 3 practice groups for qualitative telephone interviews, focusing on respondents with higher self-reported religiosity measures (very religious or moderately religious). To improve sample diversity, we also purposively recruited those who identified as male, non-White, and/or Hispanic-Latino. We excluded trainees and physicians for whom reproductive health care is not within their usual practice. Before starting telephone interviews, we obtained verbal consent for participation and audio recording. Participants were informed that they could discontinue at any time or decline to respond and that all responses, including any quotations, would be deidentified. We then used a semistructured interview guide focused on the following: (1) importance of religion in professional and personal settings, (2) religion-related professional and personal conflicts, and (3) strategies used when experiencing conflict. Participants received a $100 gift card. We conducted interviews from June through October 2018, until theoretical saturation occurred within each group. All interview recordings were securely uploaded, professionally transcribed, and deidentified before analysis. We applied grounded theory to understand the phenomenon of how Catholic obstetrician-gynecologists integrate their religious beliefs and professional obligations, as this previously lacked an applicable theory. The 3 coders (A.M., R.S., and M.G.) all had prior qualitative research experience and/or formal training. We initiated analysis by individually performing open-line coding of the first 6 interviews (2 from each group) and then met to reach consensus and begin a code dictionary. Using an iterative process, A.M. and R.S. individually first-line coded the remaining interviews, updated the code dictionary, and applied new codes to previously analyzed interviews. The 3 coders met at regular intervals to discuss memoranda and to perform second-line coding using Atlas TI software version 8 (Scientific Software Development GmbH) to formulate concepts, build categories, and create conceptual frameworks; data analysis continued from November 2018 to February 2019.
To improve reliability, we used in-person group consensus to resolve differences and discussed each coder’s implicit biases throughout.
Participants We received survey responses from 174 obstetrician-gynecologists; 162 were eligible for interviews. We extended rolling interview invitations to 60 respondents. The final sample consisted of 34 physicians, including 10 low, 15 moderate, and 9 high practitioners from 19 states. demonstrates participant characteristics; most were White (29 participants [85.3%]), female (27 participants [79.4%]), and non-Hispanic (29 participants [85.3%]); identified as very or moderately religious (29 participants [85.3%]); and attended church at least once per week (18 participants [52.9%]). There were more men in the low group (4 participants [40.0%]) compared with the moderate (2 participants [13.3%]) and high (0 participants) groups. As practitioners reported greater family planning service provision, it became difficult to purposively sample for higher self-identified religiosity. Catholic Obstetrician-Gynecologist Morality Emerging themes demonstrated that Catholic religion was only one piece of a broader morality that influenced family planning practice decisions, consistent with Bandura’s Social-Cognitive Theory of Moral Thought and Action. We applied this theory, which states that morality is influenced by bidirectional interactions between external, personal, and behavioral factors, to organize the specific influences of Catholic obstetrician-gynecologists’ morality development and modified the framework, as demonstrated in . External Factors The most common external factor cited by low practitioners cited was religious doctrine as they believed they had a moral imperative to adhere to church teachings. One stated, “To serve God is the most important thing in my life coming before even family and medicine.” Other physicians discussed societal norms in relation to beliefs that their patients did not generally need abortion services or fears of being harmed for providing abortion. All participants discussed family and social influences. A moderate participant said, “I guess that’s what changed my thinking about contraception, is after I got married, actually, considering her [my wife’s] feelings.” Participants reflected on influences not necessarily specific to medical training. Many moderate and high practitioners felt compelled to practice contrary to Catholic guidance on the basis of patient encounters. One high practitioner, explained, “…it’s easy to be ‘judgey’ when you think in absolutes, when you aren’t on that journey with the patient, and experiencing those times of the grayness of situations.” Those who worked in Catholic institutions often discussed how institutional regulations restricted medically necessary care. A moderate practitioner working in a Catholic Hospital stated, “…I really can’t offer my patient a lot of different options at our site…it kind of put me at odds with my religion.” Such physicians expressed greater tensions with their religion. The idea that the church should modernize to reflect the changing medical culture emerged as a nondominant theme, expressed by this moderate practitioner, “…the patriarchal or paternalistic nature of the church, it needs to evolve to help women and their providers deal with these certain real issues . ” The intersection of medical ethical principles is described later. Personal Factors All participants used various coping strategies, including avoidance, talking to priests or therapists, and community support. 
Some described ongoing internal conflicts, such as this moderate practitioner: “I don’t want to turn my back on my patients, but I feel very guilty when I do perform that procedure [abortion].” This internal conflict led her to limit her abortion provision. There was ubiquitous behavior rationalization. One of the low practitioners justified her practice as follows: “I had a senior resident who was like, ‘You’re denying care’…and I was like, okay, I think that this patient has plenty of resources.” Other participants believed the Catholic church could not be an absolute moral guide because of their observations of hypocrisy within the church, specifically regarding scandals and perceived inconsistencies between the church’s commitment to social justice and its restrictions on family planning. There was frequent discussion of one’s internal sense of right and wrong, often referred to as conscience. Participants often stated they had innate senses that guided their provision of family planning services. Examples included, “I know I’m doing the right thing and in the end that’s all I can really be comfortable doing is trying to do what God wants me to do” (low practitioner), “I don’t think God thinks that you’re a bad person for giving contraception” (moderate practitioner), and “I’ve maintained this kind of balance, that I think that what I’m doing is consistent with Catholic teachings. I know others obviously would differ with that, but I’m at peace with where I am” (high practitioner).

Behaviors

Many described seeking job settings according to their family planning service approach, which often reinforced that their way of practicing was morally acceptable. A high practitioner stated, “Everyone in the practice practices the same as far as offering abortion services…we’re all very supportive of each other.” Many low practitioners discussed how their expertise in natural family planning provided a unique service that was well-received by patients and colleagues.

Integration of Medical Ethics and Catholic Values in the Provision of Family Planning

We created a second framework that demonstrates the ethical and religious influences that guide family planning service patterns, because these were the primary influences expressed by participants. As themes emerged among the 3 provision groups, we noted that many reflected medical ethical principles and subsequently categorized themes as nonmaleficence, beneficence, autonomy, or justice. Representative quotations are reported in the Table. Although we purposively sampled the participants into 3 groups on the basis of survey responses, we found variation in service provision within the low and moderate groups, as demonstrated by the spectrum of services delineated adjacent to the provision levels in the Figure. Some of the low practitioners provided contraception for medical indications and/or suggested other service practitioners without necessarily providing direct referrals. The moderates varied more, with some avoiding contraceptives that the Church states are abortifacients, and others providing abortion in specific circumstances.

Low Provision

The most prominent theme among low practitioners was their promotion of natural approaches to avoid iatrogenic risks, classified as nonmaleficence. One said, “So for me to say I’m not gonna do contraception, that’s really not fair to the patient, not to give them a different alternative.” These participants promoted natural family planning as an effective method without harmful adverse effects.
The low provision group expressed beneficence through their desire to address broader patient issues. Multiple participants called contraception a Band-Aid that other obstetrician-gynecologists used to avoid dealing with underlying physical or social issues. Often, they described their role as a physician as broader: “I think we are called to try to take care of some of that spiritual component also.” Participants also described dedication to previsit transparency about their practices to support patient autonomy. For example, one said, “We’re not keeping you from that [birth control], we’re just telling you right up front, because we would never want to deceive.” These participants avoided conflicts by encouraging patients to seek care elsewhere when patient requests did not align with their values.

Moderate Provision

The primary theme among the moderate practitioners was their dedication to abortion prevention by providing contraception. One stated, “…there’s a huge difference between preventing pregnancy and killing a baby once it’s conceived.” These practitioners acknowledged that providing contraception was against Catholic doctrine, but saw this as “the lesser of two evils.” We categorized this as nonmaleficence on the basis of their concern for fetal harm. The moderate group expressed respect for patient autonomy by acknowledging their obligation to refer for abortion when desired by patients, irrespective of their own beliefs. One stated, “I strive to not let them know what I’m personally thinking about that [abortion] and I provide them with resources so that they can get the care that they’re needing.” Many reiterated their attempts to conceal their opinions of abortion to avoid alienating patients. Most moderate practitioners believed that abortion is justifiable for specific medical indications, such as fetal anomalies, reflecting beneficence. In these circumstances, physicians expressed greater empathy for their patients and some were comfortable performing the abortion. Another stated, “…that’s kind of been my way of getting through those situations, that this is for the mother and also that it was an extremely poor outcome if the pregnancy were to continue.” In such circumstances, they often felt abortion was in the best interest of their patients.

High Provision

The most prominent theme among the high practitioners was their belief that meeting patient needs overrides their personal beliefs, classified as autonomy. Many expressed how they often compartmentalize their religion and their profession, but also commented on how Catholicism motivated their respect for persons. One participant said: “…when I pull upon how religion may have informed or impacted my relationship to reproductive health…it relates to a compassionate desire to respect women’s human integrity and autonomy.” The high practitioners were the only ones who emphasized justice, believing that family planning is integral for achieving social justice. This was motivated by Catholic principles as described here: “…my Catholic upbringing and the exposure that I’ve had to Catholicism has made me very aware of social justice issues…and as part of that, I really believe that family planning is actually part of social justice.” Many commented on the responsibility to provide a service that is difficult to access. High practitioners also expressed how providing abortion reflects compassionate and nonjudgmental care, classified as beneficence.
One described the harmonious relationship between her religion and abortion provision: “…what I do is completely in line with the important part of Catholicism…we’re just taking care of people who need help, people who are in distress, people who are underserved, people who need empathy and compassion.” The high practitioners linked their Catholic faith with the need to provide family planning services.
We received survey responses from 174 obstetrician-gynecologists; 162 were eligible for interviews. We extended rolling interview invitations to 60 respondents. The final sample consisted of 34 physicians, including 10 low, 15 moderate, and 9 high practitioners from 19 states. Participant characteristics are summarized in the table; most were White (29 participants [85.3%]), female (27 participants [79.4%]), and non-Hispanic (29 participants [85.3%]); identified as very or moderately religious (29 participants [85.3%]); and attended church at least once per week (18 participants [52.9%]). There were more men in the low group (4 participants [40.0%]) compared with the moderate (2 participants [13.3%]) and high (0 participants) groups. As practitioners reported greater family planning service provision, it became difficult to purposively sample for higher self-identified religiosity.
Emerging themes demonstrated that Catholic religion was only one piece of a broader morality that influenced family planning practice decisions, consistent with Bandura’s Social-Cognitive Theory of Moral Thought and Action. We applied this theory, which states that morality is influenced by bidirectional interactions between external, personal, and behavioral factors, to organize the specific influences of Catholic obstetrician-gynecologists’ morality development and modified the framework, as demonstrated in the figure.
External Factors
The most common external factor cited by low practitioners was religious doctrine, as they believed they had a moral imperative to adhere to church teachings. One stated, “To serve God is the most important thing in my life coming before even family and medicine.” Other physicians discussed societal norms in relation to beliefs that their patients did not generally need abortion services or fears of being harmed for providing abortion. All participants discussed family and social influences. A moderate participant said, “I guess that’s what changed my thinking about contraception, is after I got married, actually, considering her [my wife’s] feelings.” Participants reflected on influences not necessarily specific to medical training. Many moderate and high practitioners felt compelled to practice contrary to Catholic guidance on the basis of patient encounters. One high practitioner explained, “…it’s easy to be ‘judgey’ when you think in absolutes, when you aren’t on that journey with the patient, and experiencing those times of the grayness of situations.” Those who worked in Catholic institutions often discussed how institutional regulations restricted medically necessary care. A moderate practitioner working in a Catholic hospital stated, “…I really can’t offer my patient a lot of different options at our site…it kind of put me at odds with my religion.” Such physicians expressed greater tensions with their religion. The idea that the church should modernize to reflect the changing medical culture emerged as a nondominant theme, expressed by this moderate practitioner: “…the patriarchal or paternalistic nature of the church, it needs to evolve to help women and their providers deal with these certain real issues.” The intersection of medical ethical principles is described later.
Personal Factors
All participants used various coping strategies, including avoidance, talking to priests or therapists, and community support. Some described ongoing internal conflicts, such as this moderate practitioner: “I don’t want to turn my back on my patients, but I feel very guilty when I do perform that procedure [abortion].” This internal conflict led her to limit her abortion provision. There was ubiquitous behavior rationalization. One of the low practitioners justified her practice as follows: “I had a senior resident who was like, ‘You’re denying care’…and I was like, okay, I think that this patient has plenty of resources.” Other participants believed the Catholic church could not be an absolute moral guide because of their observations of hypocrisy within the church, specifically regarding scandals and perceived inconsistencies between the church’s commitment to social justice and its restrictions on family planning. There was frequent discussion of one’s internal sense of right and wrong, often referred to as conscience. Participants often stated they had innate senses that guided their provision of family planning services.
Examples included, “I know I’m doing the right thing and in the end that’s all I can really be comfortable doing is trying to do what God wants me to do” (low practitioner), “I don’t think God thinks that you’re a bad person for giving contraception” (moderate practitioner), and “I’ve maintained this kind of balance, that I think that what I’m doing is consistent with Catholic teachings. I know others obviously would differ with that, but I’m at peace with where I am” (high practitioner).
Behaviors
Many described seeking job settings according to their family planning service approach, which often reinforced that their way of practicing was morally acceptable. A high practitioner stated, “Everyone in the practice practices the same as far as offering abortion services…we’re all very supportive of each other.” Many low practitioners discussed how their expertise in natural family planning provided a unique service that was well-received by patients and colleagues.
Integration of Medical Ethics and Catholic Values in the Provision of Family Planning
We created a second framework that demonstrates the ethical and religious influences that guide family planning service patterns, because these were the primary influences expressed by participants. As themes emerged among the 3 provision groups, we noted that many reflected medical ethical principles and subsequently categorized themes as nonmaleficence, beneficence, autonomy, or justice. Representative quotations are reported in the table. Although we purposively sampled the participants into 3 groups on the basis of survey responses, we found variation in service provision within the low and moderate groups, as demonstrated by the spectrum of services delineated adjacent to the provision levels in the figure. Some of the low practitioners provided contraception for medical indications and/or suggested other service practitioners without necessarily providing direct referrals. The moderates varied more, with some avoiding contraceptives that the Church states are abortifacients, and others providing abortion in specific circumstances.
Low Provision
The most prominent theme among low practitioners was their promotion of natural approaches to avoid iatrogenic risks, classified as nonmaleficence. One said, “So for me to say I’m not gonna do contraception, that’s really not fair to the patient, not to give them a different alternative.” These participants promoted natural family planning as an effective method without harmful adverse effects. The low provision group addressed beneficence based on their desire to address broader patient issues. Multiple participants called contraception a Band-Aid that other obstetrician-gynecologists used to avoid dealing with underlying physical or social issues. Often, they described their role as a physician as broader: “I think we are called to try to take care of some of that spiritual component also.” Participants also described dedication to previsit transparency about their practices to support patient autonomy. For example, one said, “We’re not keeping you from that [birth control], we’re just telling you right up front, because we would never want to deceive.” These participants avoided conflicts by encouraging patients to seek care elsewhere when patient requests did not align with their values.
Moderate Provision
The primary theme among the moderate practitioners was their dedication to abortion prevention by providing contraception. One stated, “…there’s a huge difference between preventing pregnancy and killing a baby once it’s conceived.” These practitioners acknowledged that providing contraception was against Catholic doctrine, but saw this as “the lesser of two evils.” We categorized this as nonmaleficence on the basis of their concern for fetal harm. The moderate group expressed respect for patient autonomy by acknowledging their obligation to refer for abortion when desired by patients, irrespective of their own beliefs. One stated, “I strive to not let them know what I’m personally thinking about that [abortion] and I provide them with resources so that they can get the care that they’re needing.” Many reiterated their attempts to conceal their opinions of abortion to avoid alienating patients. Most moderate practitioners believed that abortion is justifiable for specific medical indications, such as fetal anomalies, reflecting beneficence. In these circumstances, physicians expressed greater empathy for their patients and some were comfortable performing the abortion.
Another stated, “…that’s kind of been my way of getting through those situations, that this is for the mother and also that it was an extremely poor outcome if the pregnancy were to continue.” In such circumstances, they often felt abortion was in the best interest of their patients.
High Provision
The most prominent theme among the high practitioners was their belief that meeting patient needs overrides their personal beliefs, classified as autonomy. Many expressed how they often compartmentalize their religion and their profession, but also commented how Catholicism motivated their respect for persons. One participant said: “…when I pull upon how religion may have informed or impacted my relationship to reproductive health…it relates to a compassionate desire to respect women’s human integrity and autonomy.” The high practitioners were the only ones who emphasized justice, believing that family planning is integral for achieving social justice. This was motivated by Catholic principles as described here: “…my Catholic upbringing and the exposure that I’ve had to Catholicism has made me very aware of social justice issues…and as part of that, I really believe that family planning is actually part of social justice.” Many commented on the responsibility to provide a service that is difficult to access. High practitioners also expressed how providing abortion reflects compassionate and nonjudgmental care, classified as beneficence. One described the harmonious relationship between her religion and abortion provision: “…what I do is completely in line with the important part of Catholicism…we’re just taking care of people who need help, people who are in distress, people who are underserved, people who need empathy and compassion.” The high practitioners linked their Catholic faith with the need to provide family planning services.
Our study describes how moral and ethical values are integrated in the context of Catholicism and medical practice. We found that Catholic obstetrician-gynecologist morality was developed and operationalized in a manner consistent with the Social-Cognitive Theory of Moral Thought and Action. Morality among Catholic obstetrician-gynecologists was not uniform and involved varying reconciliations of religious and professional expectations, with certain religious or ethical principles emphasized over others. Our finding that Catholic physician morality was shaped by multiple interacting factors is consistent with the experiences of Catholic patients. The church’s approach to family planning is based on Humanae Vitae, an encyclical from Pope Paul VI in 1968 that established that sexual intercourse must have both “love-giving” and “life-giving” intentions. Participants discussed this as an ideal, but also expressed that Catholic values such as compassion, caring for the underserved, and social responsibility may allow for broader acceptability of family planning services. Notably, no group emphasized all 4 medical ethical principles. This likely highlights that with respect to religion and reproductive care, moral and/or ethical compromises must be made. Most prominently, the low and moderate practitioners emphasized the principle of nonmaleficence, the high practitioners emphasized autonomy and were the only group to underscore justice, and all commented on beneficence. Nonmaleficence and beneficence are the oldest of the medical ethical principles, originating in the time of Hippocrates. Autonomy and justice are newer principles that indicate a shift in the practice of medicine to one where a patient’s personal agency surpasses the physician’s professional opinion. Some participants expressed that the Catholic church should modernize to reflect the changing medical culture. Concerns that health professionals may compromise their moral values if forced to provide certain services have led to conscientious objection protections. Recently, the US Department of Health and Human Services attempted to strengthen such protections, issuing a rule prohibiting discrimination against individuals and health care institutions who act according to their consciences. Low and some moderate practitioners identified how conscientious objection is an important mechanism to resolve conflicts between personal and professional values. On the other hand, some moderate and many high practitioners expressed how provision of services was a common way of achieving a clear conscience. Conscientious provision has been previously described among abortion practitioners; that is, their conscience is what informs abortion provision. Those who worked in Catholic settings discussed family planning restrictions, particularly restrictions on services deemed medically necessary, as a source of conflict with their religion; such concerns have been highlighted in other investigations of physicians in Catholic settings. These findings suggest that without conscientious provision protections, many physicians may face unresolved moral conflicts when confronted with organizational restrictions at the institutional or governmental levels.
Limitations and Strengths
This study has limitations that should be addressed. We limited our recruitment to obstetrician-gynecologists and cannot comment on other physicians.
Both our study population and research team were racially and ethnically homogeneous, so the experiences and interpretations may not reflect underrepresented populations. Furthermore, a higher proportion of the low practitioners were men compared with the other 2 groups; thus, it is possible we were unable to detect the association of gender differences with moral development. Focusing on only 1 religion with clear guidelines for family planning service provision was a strength, because it allowed for nuanced interpretations. Although we used convenience sampling, we had a robust response to our recruitment survey, allowing us to purposively sample and reach theoretical saturation within each group. To enhance reliability and validity, multiple coders analyzed the data while maintaining reflexivity.
These findings provide guidance for effective interventions. Given the benefits seen from values clarification exercises and a Providers Share workshop, Catholic physicians will likely benefit from similar workshops tailored to the inherent conflicts and the sensitive nature of this intersection of medicine and religion. Understanding the dilemmas faced by Catholic obstetrician-gynecologists and their moral and ethical decision-making can improve discourse on this topic and help destigmatize the varying choices Catholic patients and health professionals make. Physician concerns may provide insight to church leaders charged with providing guidance for modern medical issues. As an example, the low practitioners highlighted the need for transparency about services to support patient autonomy, yet prior studies have suggested that many Catholic institutions appear to limit transparency about such restrictions to care. In addition, although interpretations of certain Catholic teachings are seemingly inconsistent with family planning service provision, our participants highlighted how other Catholic values are consistent with provision. Broader consideration of these values and Catholic teachings such as the Principle of Toleration may provide a mechanism for more permissive reproductive care guidelines rooted in religious values.
Organisationale Gesundheitskompetenz in Krankenhäusern und Pflegeeinrichtungen – Stand und Perspektiven | 4d005bd8-2aac-42fb-a1e5-59651b411d64 | 11868172 | Health Literacy[mh] | Organisationale Gesundheitskompetenz (oGK) bezeichnet den Grad, zu dem Einrichtungen in der Lage sind, ihren Klient:innen, unabhängig von deren individueller Gesundheitskompetenz (GK), gute gesundheitsbezogene Entscheidungen zu ermöglichen . Organisationen, v. a. der gesundheitlichen Versorgung, zeigen oGK, indem sie Strukturen und Prozesse so gestalten, dass Individuen, die mit und innerhalb der Organisation interagieren, Zugang zu Gesundheitsinformationen und -dienstleistungen erhalten und befähigt werden, diese zu nutzen und informierte Entscheidungen für sich und andere zu treffen . Indem oGK die Bedingungen adressiert, unter denen Bürger:innen Informationen einholen und Entscheidungen treffen können, nimmt sie die Organisationen in die Verantwortung, für partizipative, individuelle Entscheidungen „zu sorgen“, und erweitert die häufig auf individuelle GK eingeengte Debatte um die Orientierung an den Verhältnissen . Schnittstellen und -mengen zeigt oGK zu anderen Konzepten wie der Ottawa Charta (Gesundheitsförderung in Lebenswelten; ), zur Responsivität gesundheitlicher Versorgung , der Patientenzentrierung , Konzepten der Organisationsentwicklung und des Qualitätsmanagements sowie der betrieblichen Gesundheitsförderung . Ähnlich wie diese Strategien zielt oGK darauf ab, Einzelne durch eine zielgruppenadäquate Informationsbereitstellung, niederschwellige Navigationsmöglichkeiten und Kompetenzstärkung zu befähigen, ihre Gesundheit und den Genesungsprozess aktiv mitzugestalten, und zwar nicht nur als Wert an sich, sondern auch mit dem Ziel, Patientensicherheit und Mitarbeiterorientierung zu gewährleisten, die Qualität der Versorgung zu erhalten bzw. zu verbessern und ggf. auch zur Effizienzsteigerung der Organisation und des Gesundheitssystems beizutragen . Maßgeblich handlungsleitend für die Ausgestaltung, Differenzierung, Operationalisierung und Erfassung der oGK im internationalen und nationalen Diskurs sind die von Brach et al. veröffentlichten „Ten Attributes of Health Literate Health Care Organizations“ . Zwischenzeitlich sind mehr als 15 Instrumente auffindbar, unter denen der Fragebogen zu „Health Literate Health Care Organisations“ (HLHO-10; ), der sich mit seinen 10-Items eng an die 10 Attribute von anlehnt, eines der kürzesten Instrumente ist. Entwickelt als (kurzes) Selbstbewertungsinstrument, formulieren 2 grundsätzliche Anwendungsbereiche für den HLHO-10: Der erste ist die Beantwortung wissenschaftlicher Fragestellungen, beispielsweise um den Stand der oGK in Einrichtungen der gesundheitlichen Versorgung und ihre Fähigkeit, auf die Belange von Patient:innen bzw. Klient:innen einzugehen, festzuhalten. Der zweite Anwendungsbereich ist praktischer Natur: Mit dem HLHO-10 können Einrichtungen der gesundheitlichen Versorgung selbst aktiv werden, gesundheitskompetenzbezogene Stärken und Schwächen in der Organisation feststellen und Ansatzpunkte für Maßnahmen identifizieren. Erfahrungen mit dem HLHO-10 liegen bislang aus dem Setting „Krankenhaus“ vor, sowohl aus Befragungen von Leitungspersonen als auch der Belegschaft . Berichtet werden Unterschiede im Ausmaß der oGK in Abhängigkeit von der Trägerschaft des Krankenhauses (wenn auch mit widersprüchlichen Ergebnissen; ) und von der Berufs- bzw. Statusgruppe . Auch wird ein positiver Zusammenhang zwischen oGK und Patientenzufriedenheit gefunden . 
Evidenz zum Ausmaß der oGK in Organisationen der Altenpflege ist aktuell nicht verfügbar. In Bezug auf den Umfang deutlich am anderen Ende des Spektrums bewegt sich das „International Self-Assessment Tool for Organizational Health Literacy of Hospitals, Version 1.1“ (SAT-OHL-Hos-v1.1) der „International Working Group Health Promoting Hospitals and Health Literate Healthcare Organizations“ . Ursprünglich in Österreich (und in deutscher Sprache) entwickelt umfasst es 141 Hauptindikatoren und gehört damit wie auch 2 weitere im deutschen Sprachraum entwickelte Selbstbewertungsinstrumente zu den ausführlichen Fragebögen. Inhaltlich haben diese ausführlichen Instrumente 4 Aspekte gemeinsam: (1) Sie weisen erkennbare Parallelen zum (Qualitäts‑)Management auf . (2) Ihnen liegt, auch wenn sie sich nominell auf Organisationen der gesundheitlichen Versorgung im Allgemeinen beziehen, implizit oft die Organisation „Krankenhaus“ zugrunde. (3) Für die Bewertung der Indikatoren ist im Vergleich zu einem kurzen Fragebogen mehr Zeit erforderlich. (4) Die Rückmeldungen zu den wahrgenommenen Stärken und Schwächen der GK in der Organisation sind aber dafür detaillierter als die, die mit einem kürzeren Instrument erhalten werden, sodass zielgerichtete Maßnahmen zur Verbesserung abgeleitet werden können. Noch ungeklärt ist, welche oGK-bezogenen Aspekte bei der Entwicklung eines Instruments zu beachten sind, das nicht nur die Beurteilung der oGK in einer Organisation ermöglicht (z. B. Klinik, Pflegeheim) und dessen Ergebnisse die klare Ableitung passender Fördermaßnahmen erlauben, sondern das auch international einsetzbar ist. Erst dann können auch detaillierte Erkenntnisse zur oGK standardisiert und (inter)national vergleichend erhoben analysiert werden und so Forschung und Praxis zu oGK befördern. Der vorliegende Beitrag kombiniert die Ergebnisse von insgesamt 3 unabhängig voneinander durchgeführten wissenschaftlichen Forschungsprojekten, um folgende Forschungsfragen zur beantworten: Welches Ausmaß an oGK zeigen Krankenhäuser und Einrichtungen der Altenpflege in Deutschland? Welche Aspekte von oGK sollen in einem detaillierten, trotzdem handhabbaren und international einsetzbaren Instrument zur Erfassung von oGK beibehalten werden? Um die erste Frage zu beantworten, stellen wir im ersten Teil des Beitrags Ergebnisse von 2 unabhängig voneinander, aber methodisch vergleichbar durchgeführten, bundesweiten standardisierten Online-Befragungen von Leitungspersonen in der stationären akut- bzw. pflegerischen Versorgung vor. Zur Beantwortung der zweiten Frage ziehen wir im zweiten Teil Ergebnisse aus dem deutschen Teilprojekt des M‑POHL-Netzwerkes zur organisationalen Gesundheitskompetenz heran . Methodik Studiendesign Bei den beiden Erhebungen handelt es sich um Daten aus dem Projekt „Entwicklung der Gesundheitskompetenz in Einrichtungen der Gesundheitsversorgung (EwiKo)“ und dem Projekt „Gute Gesundheitsentscheidungen im Krankenhaus ermöglichen“ (GK-KH; ). Im Projekt EwiKo wurde zwischen März und Juli 2021 eine Befragung von Leitungspersonen im Krankenhaus, in Einrichtungen der Pflege und in der Eingliederungshilfe durchgeführt. Kontaktiert wurden im Rahmen einer Online-Befragung insgesamt N = 3266 Einrichtungen, wobei sich die hier vorgestellten Ergebnisse ausschließlich auf Krankenhäuser ( N = 1792, nach dem im deutschen Krankenhausverzeichnis 2018, § 108 SGB V) und Einrichtungen der Pflege (in Sachsen und Thüringen: N = 1474) konzentrieren. 
Zur Teilnahme eingeladen wurden Personen, die mittlere oder hohe Leitungsfunktionen in den Organisationen einnehmen, z. B. Einrichtungsleitung, Oberärztin/Oberarzt. Sie wurden in Anlehnung an die „Total Design Method“ 2‑mal an die Teilnahme erinnert, ein Incentive gab es nicht. Bei dem GK-KH-Projekt handelt es sich um eine zwischen November und Dezember 2022 durchgeführte, standardisierte querschnittliche Befragung, die online oder postalisch beantwortet werden konnte. Angeschrieben und zur Befragung eingeladen wurden ärztliche, pflegerische und kaufmännische Leitungen von 1250 der 1476 im deutschen Krankenhausverzeichnis gemäß § 108 SGB V gelisteten Krankenhäusern mit mehr als 50 Betten und vollständigen Kontaktangaben (Grundgesamtheit Leitungspersonen: 3301). Die Befragten wurden ebenfalls 2‑mal an die Teilnahme erinnert. Für jeden vollständig ausgefüllten Fragebogen gab es eine Spende an eine Freiburger Wohltätigkeitsorganisation. Erhebungsinstrument In beiden Studien kam der HLHO-10-Fragebogen zum Einsatz. Die Antworten sind auf einer 7‑Punkte-Likert-Skala von 1 (überhaupt nicht) bis 7 (in sehr großem Maße) anzugeben. Der HLHO-10 wurde in Deutschland mit leitenden Mitarbeiter:innen zertifizierter Brustzentren validiert und weist gute psychometrische Eigenschaften auf . Neben dem HLHO-10 umfassten die Erhebungsinstrumente in beiden Untersuchungen Fragen zur ausfüllenden Person und zu strukturellen Merkmalen der jeweiligen Einrichtung. Die statistischen Analysen wurden mit der Software IBM® SPSS® 28.0 in einem gepoolten Datensatz durchgeführt, dabei wurden deskriptive Statistiken wie Mittelwert, Median und Prozentwerte und Konfidenzintervalle berechnet. Zur Prüfung statistisch signifikanter Unterschiede in Subgruppen (Position im Krankenhaus und unterschiedliche Untersuchungsstichproben) diente ANOVA. Ergebnisse Stichprobenbeschreibung Die EwiKo-Stichprobe umfasst 62 Krankenhäuser (KH) deutschlandweit (Rücklaufquote: 3,5 %) und 195 Pflegeeinrichtungen (Pflege, in Sachsen und Thüringen, Rücklaufquote: 13,2 %), die GK-KH-Stichprobe insgesamt 371 Krankenhausleitungen (Rücklaufquote: 11 %). In beiden Erhebungen sind die Einrichtungen mehrheitlich in freigemeinnütziger Trägerschaft. Je nach Datenbasis haben in den Krankenhäusern unterschiedliche Führungskräfte den Fragebogen beantwortet: In EwiKo sind unter den Teilnehmenden in Pflegeheimen rund 55 % der Personen in der Geschäftsführung bzw. kaufmännischen Leitung tätig, während in der GK-KH-Befragung in etwa die Hälfte der auswertbaren Fragebögen von Pflegedienstleitungen ausgefüllt wurden (Tab. ). Stand der oGK in Krankenhäusern und Pflegeeinrichtungen in Deutschland In Tab. sind für die 10 Standards des HLHO-10 die Mittelwerte, Standardabweichungen sowie 95 %-Konfidenzintervalle aus den beiden Studien getrennt für Krankenhäuser und Pflegeeinrichtungen aufgeführt. Die Selbstbewertung der oGK durch die Befragten schließt Mittelwerte zwischen 3,5 (Standard 7) und 5,7 (Standard 6 und Standard 9) ein, d. h., sie schätzen die Umsetzung der Bedingungen in ihren Einrichtungen im mittleren bis guten Bereich ein. Am wenigsten umgesetzt ist nach den beiden Stichproben der EwiKo-Studie der Standard 4 (Bereitstellung individualisierter Informationen) mit mittlerem Grad der Umsetzung zwischen 3,7 und 4,3. In der GK-KH-Studie ist der Standard 3 (Entwicklung von Gesundheitsinformationen unter Einbezug von Patient:innen) am niedrigsten ausgeprägt. 
Neben dem Standard 9 ist der Grad der Umsetzung bei Standard 6 am höchsten, bei dem es darum geht, wie gut sich Patient:innen in der Einrichtung zurechtfinden können. Es fällt auf, dass die beiden EwiKo-Stichproben sehr viel ähnlicher antworten als die GK-KH-Stichprobe und dass die in GK-KH Befragten die Umsetzung in den Krankenhäusern in 8 von 10 Standards, zum Teil deutlich und statistisch signifikant, weniger gut bewerten als die Krankenhäuser in der EwiKo-Studie (z. B. Standard 1 und Standard 10). Im Vergleich zu EwiKo schätzen die Befragten der GK-KH-Studie den Umsetzungsrad in 3 Standards deutlich und statistisch hoch signifikant niedriger ein als Befragte der EwiKo-Studie. Dabei handelt es sich um Standard 1 „Fokussierung der GK durch die Leitungspersonen“ (F (2,624) = 16,35; p < 0,001; η 2 = 0,05), Standard 2 „Berücksichtigung der GK im Qualitätsmanagement“ (F (2,624) = 19,16; p < 0,001; η 2 = 0,06) und Standard 3 „partizipative Entwicklung von Gesundheitsinformationen“ (F (2,624) = 15,32; p < 0,001; η2 = 0,05). Auch den Umsetzungsgrad der Standards 4, 6, 9 und 10 schätzen die Befragten der GK-KH-Studie, wenn auch nicht so deutlich, aber doch signifikant geringer ein als die Befragten in EwiKo: Standard 6 „navigationale GK“ (F (2,624) = 11,76; p < 0,001; η 2 = 0,04), Standard 4 „Nutzung verschiedener Medien zur Information bei der Patient:innenberatung“ (F (2,624) = 4,19; p = 0,016; η 2 = 0,01), Standard 9 „Kommunikation über Behandlungskosten“ (F (2,623) = 4,573; p = 0,011; η 2 = 0,02) und Standard 10 „Mitarbeiter:innenschulungen“ (F (2,622) = 19,20; p < 0,001; η 2 = 0,06). Keine substanziellen Unterschiede zwischen den Einrichtungen bzw. den Befragungen bestehen bei Standard 5 „Kommunikationsstandards“. Im Vergleich zur EwiKo etwas höher schätzen die GK-KH-Befragten den Grad der Umsetzung des Standards 7 „Bereitstellung individualisierter Informationen“ ein (F (2,624) = 6,131; p = 0,002; η 2 = 0,02). Einschätzung der oGK nach Position der Befragten in Krankenhäusern der EwiKo- und GK-KH-Studie Leitendes Pflegepersonal schätzt im Vergleich zu anderen Befragten das Maß der „Partizipation bei der Informationsentwicklung“ (F (3,349) = 3,79; p = 0,011; η 2 = 0,032) und die „Kommunikation über Kosten in ihren Organisationen“(F (3,347) = 4,31; p = 0,005; η 2 = 0,036) wie auch die „Gewährleistung des Einverständnisses der Patient:innen in Hochrisikosituationen“ (F (3,348) = 4,33; p = 0,005; η 2 = 0, 36) geringer ein als Beschäftigte in anderen Positionen. Die „Schulung der Mitarbeiter:innen zur GK“ nehmen ärztliches und pflegerisches Leitungspersonal in ihrer eigenen Organisation im Vergleich zu den kaufmännischen Leitungen als weniger umgesetzt wahr (F (3,349) = 3,15; p = 0,025, η 2 = 0,026). Studiendesign Bei den beiden Erhebungen handelt es sich um Daten aus dem Projekt „Entwicklung der Gesundheitskompetenz in Einrichtungen der Gesundheitsversorgung (EwiKo)“ und dem Projekt „Gute Gesundheitsentscheidungen im Krankenhaus ermöglichen“ (GK-KH; ). Im Projekt EwiKo wurde zwischen März und Juli 2021 eine Befragung von Leitungspersonen im Krankenhaus, in Einrichtungen der Pflege und in der Eingliederungshilfe durchgeführt. Kontaktiert wurden im Rahmen einer Online-Befragung insgesamt N = 3266 Einrichtungen, wobei sich die hier vorgestellten Ergebnisse ausschließlich auf Krankenhäuser ( N = 1792, nach dem im deutschen Krankenhausverzeichnis 2018, § 108 SGB V) und Einrichtungen der Pflege (in Sachsen und Thüringen: N = 1474) konzentrieren. 
Zur Teilnahme eingeladen wurden Personen, die mittlere oder hohe Leitungsfunktionen in den Organisationen einnehmen, z. B. Einrichtungsleitung, Oberärztin/Oberarzt. Sie wurden in Anlehnung an die „Total Design Method“ 2‑mal an die Teilnahme erinnert, ein Incentive gab es nicht. Bei dem GK-KH-Projekt handelt es sich um eine zwischen November und Dezember 2022 durchgeführte, standardisierte querschnittliche Befragung, die online oder postalisch beantwortet werden konnte. Angeschrieben und zur Befragung eingeladen wurden ärztliche, pflegerische und kaufmännische Leitungen von 1250 der 1476 im deutschen Krankenhausverzeichnis gemäß § 108 SGB V gelisteten Krankenhäusern mit mehr als 50 Betten und vollständigen Kontaktangaben (Grundgesamtheit Leitungspersonen: 3301). Die Befragten wurden ebenfalls 2‑mal an die Teilnahme erinnert. Für jeden vollständig ausgefüllten Fragebogen gab es eine Spende an eine Freiburger Wohltätigkeitsorganisation. Erhebungsinstrument In beiden Studien kam der HLHO-10-Fragebogen zum Einsatz. Die Antworten sind auf einer 7‑Punkte-Likert-Skala von 1 (überhaupt nicht) bis 7 (in sehr großem Maße) anzugeben. Der HLHO-10 wurde in Deutschland mit leitenden Mitarbeiter:innen zertifizierter Brustzentren validiert und weist gute psychometrische Eigenschaften auf . Neben dem HLHO-10 umfassten die Erhebungsinstrumente in beiden Untersuchungen Fragen zur ausfüllenden Person und zu strukturellen Merkmalen der jeweiligen Einrichtung. Die statistischen Analysen wurden mit der Software IBM® SPSS® 28.0 in einem gepoolten Datensatz durchgeführt, dabei wurden deskriptive Statistiken wie Mittelwert, Median und Prozentwerte und Konfidenzintervalle berechnet. Zur Prüfung statistisch signifikanter Unterschiede in Subgruppen (Position im Krankenhaus und unterschiedliche Untersuchungsstichproben) diente ANOVA. Bei den beiden Erhebungen handelt es sich um Daten aus dem Projekt „Entwicklung der Gesundheitskompetenz in Einrichtungen der Gesundheitsversorgung (EwiKo)“ und dem Projekt „Gute Gesundheitsentscheidungen im Krankenhaus ermöglichen“ (GK-KH; ). Im Projekt EwiKo wurde zwischen März und Juli 2021 eine Befragung von Leitungspersonen im Krankenhaus, in Einrichtungen der Pflege und in der Eingliederungshilfe durchgeführt. Kontaktiert wurden im Rahmen einer Online-Befragung insgesamt N = 3266 Einrichtungen, wobei sich die hier vorgestellten Ergebnisse ausschließlich auf Krankenhäuser ( N = 1792, nach dem im deutschen Krankenhausverzeichnis 2018, § 108 SGB V) und Einrichtungen der Pflege (in Sachsen und Thüringen: N = 1474) konzentrieren. Zur Teilnahme eingeladen wurden Personen, die mittlere oder hohe Leitungsfunktionen in den Organisationen einnehmen, z. B. Einrichtungsleitung, Oberärztin/Oberarzt. Sie wurden in Anlehnung an die „Total Design Method“ 2‑mal an die Teilnahme erinnert, ein Incentive gab es nicht. Bei dem GK-KH-Projekt handelt es sich um eine zwischen November und Dezember 2022 durchgeführte, standardisierte querschnittliche Befragung, die online oder postalisch beantwortet werden konnte. Angeschrieben und zur Befragung eingeladen wurden ärztliche, pflegerische und kaufmännische Leitungen von 1250 der 1476 im deutschen Krankenhausverzeichnis gemäß § 108 SGB V gelisteten Krankenhäusern mit mehr als 50 Betten und vollständigen Kontaktangaben (Grundgesamtheit Leitungspersonen: 3301). Die Befragten wurden ebenfalls 2‑mal an die Teilnahme erinnert. Für jeden vollständig ausgefüllten Fragebogen gab es eine Spende an eine Freiburger Wohltätigkeitsorganisation. 
In beiden Studien kam der HLHO-10-Fragebogen zum Einsatz. Die Antworten sind auf einer 7‑Punkte-Likert-Skala von 1 (überhaupt nicht) bis 7 (in sehr großem Maße) anzugeben. Der HLHO-10 wurde in Deutschland mit leitenden Mitarbeiter:innen zertifizierter Brustzentren validiert und weist gute psychometrische Eigenschaften auf . Neben dem HLHO-10 umfassten die Erhebungsinstrumente in beiden Untersuchungen Fragen zur ausfüllenden Person und zu strukturellen Merkmalen der jeweiligen Einrichtung. Die statistischen Analysen wurden mit der Software IBM® SPSS® 28.0 in einem gepoolten Datensatz durchgeführt, dabei wurden deskriptive Statistiken wie Mittelwert, Median und Prozentwerte und Konfidenzintervalle berechnet. Zur Prüfung statistisch signifikanter Unterschiede in Subgruppen (Position im Krankenhaus und unterschiedliche Untersuchungsstichproben) diente ANOVA. Stichprobenbeschreibung Die EwiKo-Stichprobe umfasst 62 Krankenhäuser (KH) deutschlandweit (Rücklaufquote: 3,5 %) und 195 Pflegeeinrichtungen (Pflege, in Sachsen und Thüringen, Rücklaufquote: 13,2 %), die GK-KH-Stichprobe insgesamt 371 Krankenhausleitungen (Rücklaufquote: 11 %). In beiden Erhebungen sind die Einrichtungen mehrheitlich in freigemeinnütziger Trägerschaft. Je nach Datenbasis haben in den Krankenhäusern unterschiedliche Führungskräfte den Fragebogen beantwortet: In EwiKo sind unter den Teilnehmenden in Pflegeheimen rund 55 % der Personen in der Geschäftsführung bzw. kaufmännischen Leitung tätig, während in der GK-KH-Befragung in etwa die Hälfte der auswertbaren Fragebögen von Pflegedienstleitungen ausgefüllt wurden (Tab. ). Stand der oGK in Krankenhäusern und Pflegeeinrichtungen in Deutschland In Tab. sind für die 10 Standards des HLHO-10 die Mittelwerte, Standardabweichungen sowie 95 %-Konfidenzintervalle aus den beiden Studien getrennt für Krankenhäuser und Pflegeeinrichtungen aufgeführt. Die Selbstbewertung der oGK durch die Befragten schließt Mittelwerte zwischen 3,5 (Standard 7) und 5,7 (Standard 6 und Standard 9) ein, d. h., sie schätzen die Umsetzung der Bedingungen in ihren Einrichtungen im mittleren bis guten Bereich ein. Am wenigsten umgesetzt ist nach den beiden Stichproben der EwiKo-Studie der Standard 4 (Bereitstellung individualisierter Informationen) mit mittlerem Grad der Umsetzung zwischen 3,7 und 4,3. In der GK-KH-Studie ist der Standard 3 (Entwicklung von Gesundheitsinformationen unter Einbezug von Patient:innen) am niedrigsten ausgeprägt. Neben dem Standard 9 ist der Grad der Umsetzung bei Standard 6 am höchsten, bei dem es darum geht, wie gut sich Patient:innen in der Einrichtung zurechtfinden können. Es fällt auf, dass die beiden EwiKo-Stichproben sehr viel ähnlicher antworten als die GK-KH-Stichprobe und dass die in GK-KH Befragten die Umsetzung in den Krankenhäusern in 8 von 10 Standards, zum Teil deutlich und statistisch signifikant, weniger gut bewerten als die Krankenhäuser in der EwiKo-Studie (z. B. Standard 1 und Standard 10). Im Vergleich zu EwiKo schätzen die Befragten der GK-KH-Studie den Umsetzungsrad in 3 Standards deutlich und statistisch hoch signifikant niedriger ein als Befragte der EwiKo-Studie. Dabei handelt es sich um Standard 1 „Fokussierung der GK durch die Leitungspersonen“ (F (2,624) = 16,35; p < 0,001; η 2 = 0,05), Standard 2 „Berücksichtigung der GK im Qualitätsmanagement“ (F (2,624) = 19,16; p < 0,001; η 2 = 0,06) und Standard 3 „partizipative Entwicklung von Gesundheitsinformationen“ (F (2,624) = 15,32; p < 0,001; η2 = 0,05). 
Auch den Umsetzungsgrad der Standards 4, 6, 9 und 10 schätzen die Befragten der GK-KH-Studie, wenn auch nicht so deutlich, aber doch signifikant geringer ein als die Befragten in EwiKo: Standard 6 „navigationale GK“ (F (2,624) = 11,76; p < 0,001; η 2 = 0,04), Standard 4 „Nutzung verschiedener Medien zur Information bei der Patient:innenberatung“ (F (2,624) = 4,19; p = 0,016; η 2 = 0,01), Standard 9 „Kommunikation über Behandlungskosten“ (F (2,623) = 4,573; p = 0,011; η 2 = 0,02) und Standard 10 „Mitarbeiter:innenschulungen“ (F (2,622) = 19,20; p < 0,001; η 2 = 0,06). Keine substanziellen Unterschiede zwischen den Einrichtungen bzw. den Befragungen bestehen bei Standard 5 „Kommunikationsstandards“. Im Vergleich zur EwiKo etwas höher schätzen die GK-KH-Befragten den Grad der Umsetzung des Standards 7 „Bereitstellung individualisierter Informationen“ ein (F (2,624) = 6,131; p = 0,002; η 2 = 0,02). Einschätzung der oGK nach Position der Befragten in Krankenhäusern der EwiKo- und GK-KH-Studie Leitendes Pflegepersonal schätzt im Vergleich zu anderen Befragten das Maß der „Partizipation bei der Informationsentwicklung“ (F (3,349) = 3,79; p = 0,011; η 2 = 0,032) und die „Kommunikation über Kosten in ihren Organisationen“(F (3,347) = 4,31; p = 0,005; η 2 = 0,036) wie auch die „Gewährleistung des Einverständnisses der Patient:innen in Hochrisikosituationen“ (F (3,348) = 4,33; p = 0,005; η 2 = 0, 36) geringer ein als Beschäftigte in anderen Positionen. Die „Schulung der Mitarbeiter:innen zur GK“ nehmen ärztliches und pflegerisches Leitungspersonal in ihrer eigenen Organisation im Vergleich zu den kaufmännischen Leitungen als weniger umgesetzt wahr (F (3,349) = 3,15; p = 0,025, η 2 = 0,026). Die EwiKo-Stichprobe umfasst 62 Krankenhäuser (KH) deutschlandweit (Rücklaufquote: 3,5 %) und 195 Pflegeeinrichtungen (Pflege, in Sachsen und Thüringen, Rücklaufquote: 13,2 %), die GK-KH-Stichprobe insgesamt 371 Krankenhausleitungen (Rücklaufquote: 11 %). In beiden Erhebungen sind die Einrichtungen mehrheitlich in freigemeinnütziger Trägerschaft. Je nach Datenbasis haben in den Krankenhäusern unterschiedliche Führungskräfte den Fragebogen beantwortet: In EwiKo sind unter den Teilnehmenden in Pflegeheimen rund 55 % der Personen in der Geschäftsführung bzw. kaufmännischen Leitung tätig, während in der GK-KH-Befragung in etwa die Hälfte der auswertbaren Fragebögen von Pflegedienstleitungen ausgefüllt wurden (Tab. ). In Tab. sind für die 10 Standards des HLHO-10 die Mittelwerte, Standardabweichungen sowie 95 %-Konfidenzintervalle aus den beiden Studien getrennt für Krankenhäuser und Pflegeeinrichtungen aufgeführt. Die Selbstbewertung der oGK durch die Befragten schließt Mittelwerte zwischen 3,5 (Standard 7) und 5,7 (Standard 6 und Standard 9) ein, d. h., sie schätzen die Umsetzung der Bedingungen in ihren Einrichtungen im mittleren bis guten Bereich ein. Am wenigsten umgesetzt ist nach den beiden Stichproben der EwiKo-Studie der Standard 4 (Bereitstellung individualisierter Informationen) mit mittlerem Grad der Umsetzung zwischen 3,7 und 4,3. In der GK-KH-Studie ist der Standard 3 (Entwicklung von Gesundheitsinformationen unter Einbezug von Patient:innen) am niedrigsten ausgeprägt. Neben dem Standard 9 ist der Grad der Umsetzung bei Standard 6 am höchsten, bei dem es darum geht, wie gut sich Patient:innen in der Einrichtung zurechtfinden können. 
Es fällt auf, dass die beiden EwiKo-Stichproben sehr viel ähnlicher antworten als die GK-KH-Stichprobe und dass die in GK-KH Befragten die Umsetzung in den Krankenhäusern in 8 von 10 Standards, zum Teil deutlich und statistisch signifikant, weniger gut bewerten als die Krankenhäuser in der EwiKo-Studie (z. B. Standard 1 und Standard 10). Im Vergleich zu EwiKo schätzen die Befragten der GK-KH-Studie den Umsetzungsrad in 3 Standards deutlich und statistisch hoch signifikant niedriger ein als Befragte der EwiKo-Studie. Dabei handelt es sich um Standard 1 „Fokussierung der GK durch die Leitungspersonen“ (F (2,624) = 16,35; p < 0,001; η 2 = 0,05), Standard 2 „Berücksichtigung der GK im Qualitätsmanagement“ (F (2,624) = 19,16; p < 0,001; η 2 = 0,06) und Standard 3 „partizipative Entwicklung von Gesundheitsinformationen“ (F (2,624) = 15,32; p < 0,001; η2 = 0,05). Auch den Umsetzungsgrad der Standards 4, 6, 9 und 10 schätzen die Befragten der GK-KH-Studie, wenn auch nicht so deutlich, aber doch signifikant geringer ein als die Befragten in EwiKo: Standard 6 „navigationale GK“ (F (2,624) = 11,76; p < 0,001; η 2 = 0,04), Standard 4 „Nutzung verschiedener Medien zur Information bei der Patient:innenberatung“ (F (2,624) = 4,19; p = 0,016; η 2 = 0,01), Standard 9 „Kommunikation über Behandlungskosten“ (F (2,623) = 4,573; p = 0,011; η 2 = 0,02) und Standard 10 „Mitarbeiter:innenschulungen“ (F (2,622) = 19,20; p < 0,001; η 2 = 0,06). Keine substanziellen Unterschiede zwischen den Einrichtungen bzw. den Befragungen bestehen bei Standard 5 „Kommunikationsstandards“. Im Vergleich zur EwiKo etwas höher schätzen die GK-KH-Befragten den Grad der Umsetzung des Standards 7 „Bereitstellung individualisierter Informationen“ ein (F (2,624) = 6,131; p = 0,002; η 2 = 0,02). Leitendes Pflegepersonal schätzt im Vergleich zu anderen Befragten das Maß der „Partizipation bei der Informationsentwicklung“ (F (3,349) = 3,79; p = 0,011; η 2 = 0,032) und die „Kommunikation über Kosten in ihren Organisationen“(F (3,347) = 4,31; p = 0,005; η 2 = 0,036) wie auch die „Gewährleistung des Einverständnisses der Patient:innen in Hochrisikosituationen“ (F (3,348) = 4,33; p = 0,005; η 2 = 0, 36) geringer ein als Beschäftigte in anderen Positionen. Die „Schulung der Mitarbeiter:innen zur GK“ nehmen ärztliches und pflegerisches Leitungspersonal in ihrer eigenen Organisation im Vergleich zu den kaufmännischen Leitungen als weniger umgesetzt wahr (F (3,349) = 3,15; p = 0,025, η 2 = 0,026). Das Aktionsnetzwerk der Weltgesundheitsorganisation (WHO) „Measuring Population and Organizational Health Literacy“ (M-POHL) zielt in dem Teilprojekt „Organizational Health Literacy Hospitals“ (OHL-HOS) darauf ab, ein europaweit einsetzbares, handhabbares Instrument zu erarbeiten, um oGK international und vergleichend sichtbar zu machen, Gesundheitssysteme bzw. einzelne Sektoren miteinander zu vergleichen und voneinander lernen zu können. Als Grundlage für dieses Vorhaben dient das SAT-OHL-Hos-v1.1 . Es stützt sich, wie der HLHO-10 auf . Dieses ursprünglich deutschsprachige , später ins Englische übersetzte Instrument zählt mit 8 Standards, 21 Substandards und 141 Hauptindikatoren zu den umfassendsten Instrumenten zur Erfassung von oGK. Es ist, wie andere Instrumente auch, in erster Linie als Selbstbewertungsinstrument gedacht, explizit für Krankenhäuser, wobei (noch) nicht festgelegt ist, wer die Selbstbewertung im Krankenhaus durchführen soll. 
Die in M‑POHL-OHL-HOS in Krankenhäusern verschiedener beteiligter Länder durchgeführte Pilotierung der Version V1.1 zeigte in erster Linie, dass das Instrument als zu umfangreich, zu lang und zu aufwendig auszufüllen erlebt wird (Ergebnisse noch nicht veröffentlicht). M‑POHL-OHL stieß daher einen Kürzungsprozess an, der unter Erhalt der 8 Standards und unter gleichmäßiger Kürzung zu einem etwa auf ein Drittel reduzierten Umfang führen sollte (Tab. ). Der Kürzungsvorgang beinhaltete 3 Arbeitsschritte und weist Ähnlichkeiten mit Vorgehensweisen anderer Arbeitsgruppen in der Entwicklung von oGK-Selbstwertungsinstrumenten (z. B. ) auf: nationaler Austausch: Arbeitsgruppen auf nationaler Ebene waren gehalten, das gesamte Instrument zu inspizieren, die Relevanz der einzelnen Indikatoren für die Erfassung der oGK im nationalen Gesundheitssystem zu bewerten und pro Substandard Vorschläge zu machen, welche (in der Regel 2 bis 3) Indikatoren pro Substandard mit welcher Priorität beibehalten werden sollen; Zusammenführung der nationalen Voten; internationale Konsensustreffen im Netzwerk M‑POHL-OHL-HOS: Diskussion und Konsentierung Standard für Standard in insgesamt 9 Online-Workshops zu je 1,5 h. Die Diskussion in den Workshops sowie die Begründung der Entscheidungen wurden in Stichworten bei jedem Item protokolliert und tabellarisch festgehalten. Teilgenommen haben an diesem Arbeitsvorhaben M‑POHL-Mitglieder aus Österreich, Schweiz, Tschechien, Deutschland, Israel, Italien, Niederlande und Belgien. Ergebnisse Mit dem gewählten Vorgehen gelang es, das Instrument von insgesamt 141 Indikatoren auf 54 zu reduzieren. Zur Illustration des Vorgehens haben wir in Tab. die Indikatoren dreier Substandards zusammengestellt und berichten, wie wir als nationales Projektteam (ZI, EMB) die Indikatoren priorisiert haben und zu welchem Ergebnis die internationale Arbeitsgruppe gekommen ist. Die nationalen Priorisierungsprozesse führten in den meisten Fällen nicht direkt zu einem eindeutigen Ergebnis. Ein Konsens konnte in der Regel erst nach einem intensiven Austausch von Erklärungen, Erläuterungen und Argumenten erzielt werden, von denen wir hier die 3 zentralen Diskussionslinien skizzieren: Die Breite bzw. Tiefe der Indikatoren: Wie genau oder auch wie abstrakt sollen/dürfen die Indikatoren sein? Was kann man von allen Krankenhäusern fordern (und nicht nur von Universitätskliniken oder Lehrkrankenhäusern)? Welche Funktion erfüllen Krankenhäuser im jeweiligen Gesundheitssystem? Und was kann/darf/muss man von ihnen länderübergreifend erwarten? Ad 1: Das Ringen um einen Kompromiss zwischen Präzision und Abstraktion lässt sich an dem Substandard 3.1 „Die persönliche und organisationale Gesundheitskompetenz wird als eine wesentliche Fachkompetenz für alle in der Organisation tätigen Mitarbeiter:innen verstanden“ zeigen. Im Ausgangsformat hat der Standard 5 Indikatoren, von denen wir als nationales Team schon nur noch 2 als beizubehaltend vorgeschlagen haben (Tab. ). In der internationalen Diskussion verständigte man sich auf den (einen) Indikator, der mit einer (möglichst präzisen) Formulierung die Qualitätsanforderung beschreibt. Darüber hinausgehende Anforderungen an die Art und Weise, wie geschult wird, ob beispielsweise in der Routineversorgung ein regelmäßiges GK-bezogenes Feedback installiert ist, GK Bestandteil von Stellenausschreibungen ist oder regelhaft bei der Auswahl von Personal berücksichtigt wird, erschienen unklar, zu kleinteilig und zu sehr in die Belange der Krankenhäuser eingreifend. 
Ad 2: The discussion along the question "What can be expected of all hospitals?" was striking for substandard 2.1, "The organization involves patients in the development and evaluation of patient-oriented documents, materials and services." In weighing this up, agreement was quickly reached that one would not want to require all hospitals to routinely involve simulated patients in the training of health professionals or to always develop guidelines and processes for staff on patient communication together with patients. In addition, the revised version will point out that indicator 3.1 is also fulfilled if methodologically sound documents and services that were developed participatorily by others have been implemented. By contrast, the indicator addressing the feedback loop necessary for institutional learning should not be dropped (see Table).

Ad 3: The different tasks and functions that hospitals fulfil in their respective national health systems, and also against the background of different regional and local conditions, became apparent for substandard 8.1. This substandard concerns the hospital's societal responsibility to increase the health literacy of the local population. The international discussion made clear that in some health systems this task does not lie primarily within the remit of hospitals. The 2 of 3 indicators that were (nevertheless) retained are to be given a corresponding comment and tested for their suitability in the upcoming pilot.

It should not be concealed, however, that in the discussion about selecting and formulating fewer, yet as precise as possible, indicators there were repeated impulses to add new items. These impulses were discussed critically and always against the background of not missing the goal of an instrument reduced to about one third of its original scope.
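The merging of the national votes (step 2 of the shortening process described above) was carried out discursively in the consensus workshops rather than by an algorithm. Purely as an illustration of how such priority votes per substandard could be tabulated ahead of a consensus meeting, the following sketch uses invented country names, indicator labels and priority ranks; none of it reflects the actual M-POHL voting data.

```python
# Hypothetical tabulation of national priority votes per indicator (illustrative only).
# Lower rank = higher priority; indicators not nominated by a country receive no vote.
from collections import defaultdict

votes = {  # country -> {indicator: priority rank}; all values are invented
    "Country A": {"3.1.a": 1, "3.1.c": 2},
    "Country B": {"3.1.a": 1, "3.1.d": 2},
    "Country C": {"3.1.c": 1, "3.1.a": 3},
}

ranks = defaultdict(list)
for prefs in votes.values():
    for indicator, rank in prefs.items():
        ranks[indicator].append(rank)

# Sort indicators by how often they were nominated, then by mean priority rank.
ordering = sorted(ranks, key=lambda i: (-len(ranks[i]), sum(ranks[i]) / len(ranks[i])))
for indicator in ordering:
    print(indicator, "nominations:", len(ranks[indicator]),
          "mean rank:", round(sum(ranks[indicator]) / len(ranks[indicator]), 2))
```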
The results of the nationwide surveys reported in the first part of this article provide initial indications of the state of implementation of organizational health literacy in German hospitals and care facilities. The degree of implementation of the 10 standards varies, lies somewhat above the scale midpoint and testifies to a decidedly critical view of the respondents on "their" institutions. It is encouraging that the standard "easy wayfinding in the hospital" is rated among the best in all three samples examined. The relatively reserved ratings around the standards on health information (participatory development, provision of target-group-specific formats) and on training the workforce on health literacy are not unexpected.
At the federal level, this points to room for improvement that indirectly also reflects the findings of the population-based studies on individual health literacy: in Germany it is not easy to obtain good health-related information. Compared with the certified breast centres surveyed in the validation study, the hospitals and care facilities surveyed here apply communication standards less often and ensure to a lesser extent that patients' consent is obtained in high-risk situations. The results presented here are also low by international comparison. For example, 405 executives from the middle and senior management of Italian hospitals rate 7 of the 10 standards on average between 5.16 (Standard 3) and 5.97 (Standard 4), and only for Standard 10 is the rating worse than 4 (3.7). In Beijing (China), hospital managers rate all 10 attributes on average at more than 6 out of 7 points, and thus considerably more positively again than the respondents in our surveys. To our knowledge, no comparative figures on organizational health literacy in residential care facilities have been available to date; with the results presented here we close that gap. The small differences between the ratings of care facilities and hospitals in the EwiKo study are surprising. Organizational health literacy should actually be easier to implement (and more fully implemented) in care facilities than in hospitals: compared with hospitals, care facilities usually have a simpler organizational structure, a narrower range of tasks, longer lengths of stay of their residents and thus more opportunities to address health literacy. Whether this actually facilitates the implementation of organizational health literacy instruments and measures, or whether the finding (merely) results from the high level of abstraction of the HLHO-10, needs to be examined in future studies.

A weakness of both surveys is the low response rate, which limits the transferability of the results. The EwiKo sample is a convenience sample and, owing among other things to the difficult conditions during the COVID-19 pandemic, made no claim to representativeness. When compared against the basic data on hospitals (Federal Statistical Office), the GK-KH sample, with its high proportion of non-profit hospitals, is not representative either. Assuming that primarily institutions already familiar with health literacy took part, a lower degree of implementation would have to be expected across all German hospitals and across the care facilities in Thuringia and Saxony. The higher participation in the GK-KH study compared with EwiKo may therefore partly account for the almost consistently poorer self-assessment observed in the GK-KH sample. The different survey periods could also contribute to the differences between the surveys: EwiKo took place in the middle of the second wave of the COVID-19 pandemic (2021), GK-KH in 2022, at a time when the pandemic situation had eased. Future surveys should aim for higher participation rates and collect information beyond the HLHO-10 in order to increase the informative value of the self-assessments.
With its 10 standards, the HLHO-10 is a short self-assessment instrument which, however, owing to its high level of abstraction, offers only limited concrete starting points for further development within the institutions themselves. This has been, and continues to be, the rationale for developing more comprehensive instruments. Experience from the development and piloting of such instruments consistently shows that discussing the standards and indicators with professionals in the institutions leads to a shared understanding of health literacy and, correspondingly, to an understanding of organizational health literacy that at the outset usually exists only in rudimentary form. However, the effort involved in a complete and comprehensive self-assessment is considerable for the institutions and results in the wish for shorter self-assessment tools that nonetheless identify targeted starting points. Within the M-POHL network as well, the wish for a shorter instrument, voiced unanimously from the national pilot projects on the cultural adaptation of the SAT-OHL-Hos-V1.1, prompted the shortening process reported here (results not yet published). On the one hand, this process showed phenomena that have also been reported in national developments (e.g., weighing precision against abstraction). On the other hand, it became clearer than in the national context that, when developing an internationally applicable self-assessment instrument, conditions outside the organization, such as differences between health systems, must also be taken into account. With a scope of about one third of the original indicators (n = 54), the instrument now belongs to the "short comprehensive" instruments (cf. 160 and 155 indicators, 88–92, 87 for hospitals, 85 for care facilities, 77–82 for integration assistance services, 20 in easy language, and 77 in other instruments). In the assessment of the M-POHL OHL-HOS working group, it has nevertheless been possible to preserve the breadth and depth of the concept of organizational health literacy despite the cuts and to continue to map aspects central to the concept (e.g., the sustainable implementation of effective communication practices and building the workforce's capacity to enable health-literate decisions by their patients). As a limitation, it must be noted that an empirical examination of the shortened SAT-OHL-Hos V2 with regard to acceptance and practicability is still pending and that no information on the psychometric properties of the standards and indicators is (yet) available, similar to the other "short comprehensive" self-assessment instruments. These questions must be answered in further studies. It should be mentioned that interventions are now also available for Germany that address measures to strengthen individual and organizational health literacy in healthcare institutions and other settings. The "Düsseldorfer Kompass", for instance, is aimed at the workforce of statutory health insurance funds, and "EwiKo" addresses not only hospitals and care facilities but also integration assistance services. The concept of organization-related health literacy in urban organizations is likewise geared towards organizations in various settings, and "QUALIPEP" was developed specifically for care of the elderly and long-term care.
Hospitals and care facilities rate their organizational health literacy as moderate to good; they are most reserved with regard to the provision of target-group-specific, high-quality health information and the training of their workforce on health literacy. Detailed self-assessments are needed to give institutions concrete starting points for organizational development. With the shortened SAT-OHL-Hos, a "short comprehensive" instrument that can be used for international comparison is now available, whose empirical testing is pending.
Transgender and gender diverse people with endometriosis: A perspective on affirming gynaecological care | 13938720-b6b4-43f0-bf55-ce1c7062924d | 11095187 | Gynaecology[mh] | The terms transgender and gender diverse refer to those whose gender does not align with that presumed at birth. This includes but is not limited to transgender men, transgender women, brotherboys, sistergirls, Takatapui, Tāhine, non-binary, agender people and those with fluid genders. – Trans and gender diverse people constitute between 0.5% and 4.8% of the Australian population (accurate statistics are absent due to a lack of inclusion in Australian census data), equating to over 1 million people. Trans and gender diverse people require access to gender-affirming healthcare to thrive. Although the demand for gender-affirming services is rising, many trans and gender diverse people are left waiting for years to access this often life-saving care or routine medical care. Gender-affirming care refers to not only gender-affirming hormone therapy (GAHT) or surgeries but also treating trans and gender diverse people with respect, creating a welcoming clinical environment, using their correct pronouns, name and other preferred language and running appropriate health screenings. Trans and gender diverse people have been identified as a priority population in multiple national strategies by the Australian Government due to a higher burden of physical and mental health concerns resulting from discrimination, including poor access to gender-affirming healthcare. Recent surveys of the Australian trans and gender diverse population show three key priorities: affordable and available gender-affirming care; accessible and easy pathways to access gender-affirming care; and, a healthcare sector knowledgeable in gender-affirming care. , These priorities are critical; trans and gender diverse people report lower levels of psychological distress, self-harm and suicidality when they experience respectful and competent gender-affirming care. Unfortunately, many trans and gender diverse people report struggling to access gender-affirming care with 60% of trans and gender diverse young people in Australia reporting feeling isolated from healthcare services. Those at the intersections of marginalized identities experience heightened discrimination and poorer access to culturally sensitive healthcare. For instance, poorer health outcomes are reported for transfeminine people, trans and gender diverse people of colour, disabled and neurodivergent people. Specific to Australia, Aboriginal and Torres Strait Islander trans and gender diverse people struggle to access both culturally sensitive and lesbian, gay, bisexual, trans, queer, intersex, asexual and other related identities (LGBTQIA+) friendly care. A lack of access to gender-affirming care in regional areas also disproportionately affects this community. Likewise, in Aotearoa, Maori and Pasifika people experience higher health risks related to discrimination. Research reflects the importance of a strong sense of belonging to culture, community and family/whanau within Aboriginal, Torres Strait Islander, Māori and Pasifika communities. The impacts of colonialism which discouraged the expression of diverse genders and sexualities previously accepted within these communities have huge ramifications for the wellbeing of trans and gender diverse First Nations people. 
Access to gynaecological care for trans and gender diverse people is hindered by cisnormative assumptions which frame gynaecology as 'women's health'. We see this in the lack of inclusive language and imagery and the lack of providers knowledgeable in gender-affirming gynaecological care. When accessing care, trans and gender diverse people report experiencing misgendering, a lack of trauma-informed care, invasive questioning irrelevant to their care, severe pain during gynaecological examination, provider incompetence, gatekeeping of medically necessary surgeries and even complete denial of care. These experiences of discrimination have significant ramifications for future attempts to access care, with one genderqueer individual reporting removing their own intrauterine device (IUD) due to fear of returning to their healthcare provider. Another reported a cancer scare due to their provider being unable to interpret the Pap smear results of an individual on testosterone, and another trans and gender diverse individual was diagnosed with a prostate infection despite informing their provider that they did not have a prostate. People with intersex variations similarly experience healthcare discrimination and poor access to non-pathologizing medical care. This is relevant as people with intersex variations have unique gynaecological needs, and up to 40% identify as trans and gender diverse. Intersex individuals make up between 1.7% and 4% of the population, yet there is limited research to date on the gynaecological needs of this group. Trans and gender diverse people commonly experience chronic pelvic pain (CPP), yet have limited access to gynaecological care. CPP affects between 51% and 72% of trans and gender diverse people presumed female at birth, and while endometriosis is a common cause of CPP, there is limited research investigating the prevalence, symptoms, treatment or experiences of endometriosis in trans and gender diverse people. As stated above, the experiences of intersex people accessing gynaecological care for endometriosis or CPP are also absent from the research literature. Endometriosis is a chronic multi-system disease affecting between 1 in 7 and 1 in 10 women and people presumed female at birth. It is characterized by debilitating pelvic pain, fatigue, menorrhagia, dysmenorrhoea, gastrointestinal distress, dyspareunia and dysuria. This condition can have huge repercussions on the psychological and social wellbeing of the individual and can impact their financial security. These ramifications are compounded in trans and gender diverse populations due to poorer access to gender-affirming healthcare and higher rates of mental health concerns and of social, housing and employment discrimination. Endometriosis has a lengthy diagnostic delay of 8 years, with limited management or effective treatment options available, largely attributable to the stigmatization and normalization of pelvic pain and a lack of healthcare providers educated about endometriosis. This diagnostic delay is expected to be greater for trans and gender diverse people, who experience additional barriers to accessing healthcare, and for those at the intersections of marginalized identities, particularly trans and gender diverse people of colour. Endometriosis has historically been defined as an upper-class white women's disease, which still influences care today and can be observed in diagnostic bias.
Cisgender women historically report feeling dismissed and marginalized by medical professionals when seeking diagnosis and treatment for endometriosis and pelvic pain. Trans and gender diverse people are therefore a marginalized group within an already marginalized group, which makes seeking and obtaining a diagnosis even more challenging. Medical management of endometriosis does not differ significantly between trans and cis individuals and largely consists of hormonal contraceptives, surgical excision, analgesics, tricyclic antidepressants, antifibrinolytic agents, non-steroidal anti-inflammatories and gonadotropin-releasing hormone analogues. Significant numbers of cisgender women with endometriosis access complementary medicine (CM); dissatisfaction with pharmaceutical or surgical management is reported by 45.4%. Dissatisfaction with medical support among trans and gender diverse people with endometriosis is unclear; however, numbers are likely to be greater due to additional barriers to care. Heteronormative and cisnormative assumptions which underpin gynaecology, including the prioritization of penetrative intercourse and reproductive capacity over pain management, are particularly harmful to LGBTQIA+ populations and likely increase dissatisfaction with care. This may similarly lead more trans and gender diverse people with endometriosis to access CM. CM in cisgender women with endometriosis has shown success in reducing dysmenorrhoea, shrinking endometriotic lesions and supporting pregnancy, alongside lower rates of adverse effects versus pharmaceutical interventions. Therefore, it is essential that both CM and medical providers are competent in gender-affirming care. Australia was the first country in the world to implement a national action plan to help improve the lives of those with endometriosis. The National Action Plan for Endometriosis (NAPE) was released in 2018 to help support new advances in diagnosis and treatment. While there have been improvements as a result of the plan, such as the introduction of specialist pelvic pain clinics, the Australian government has recognized that marginalized groups, such as trans and gender diverse people, are not currently well represented in the plan, and in mid-2023 released a targeted call to specifically improve this. The experiences of the author, L.A., accessing gynaecological care for endometriosis in Australia as a non-binary transmasculine person add valuable insights to this topic. They noted that care was limited when they presented as a woman, and that care specific to non-binary people was absent as they socially and medically transitioned. Non-binary individuals frequently avoid disclosing their gender due to fear of discrimination, which results in inadequate care, self-medication and skewed data, disguising the need to adapt healthcare to be inclusive of non-binary genders. L.A. experienced a lack of providers competent in gender-affirming care, and in some instances, providers refused to adopt their correct pronouns. As a result of these kinds of experiences, trans and gender diverse people are often required to become experts in their own healthcare due to a lack of providers knowledgeable in this field. In L.A.'s experience, the dearth of literature investigating the impact of GAHT on endometriosis symptoms prevented them from making informed decisions about their health.
In addition, common misconceptions around the curative role of hormone therapy or hormonal contraceptives resulted in their symptoms being dismissed while accessing GAHT. This is not in line with the limited research literature, with one study showing persistent symptoms in 40% of trans and gender diverse adolescents prescribed both testosterone and progestin. Trans and gender diverse people additionally experience disproportionate economic challenges, which are compounded by a reduced ability to work in those with CPP or endometriosis. L.A.'s experience accessing gender-affirming care and specialist services for endometriosis management, while facing workplace and social instability during transition, resulted in significant financial and psychological stress. Given the substantial financial burden that endometriosis already exerts on people, the stress this additional burden places on trans and gender diverse people should not be underestimated. Limited social support specific to trans and gender diverse people with endometriosis compounded this stress; support groups largely cater to cisgender individuals. An inclusive support space can assist individuals in coping with the often disabling experience of CPP and the anxiety and fear which can accompany this. In terms of online support spaces, it is well known that trans and gender diverse people commonly report discrimination, bullying and abuse in these spaces, with recent research showing an increase in online anti-trans harassment across Australia over the past year. This highlights the importance of considering the social determinants of health when considering the care of trans and gender diverse people with CPP or endometriosis. A significant barrier to trans and gender diverse people's access to competent gender-affirming care is a lack of inclusion in healthcare provider curricula. Research to date that has broadly assessed the inclusion of LGBTQIA+ health in medical curricula across Australia and Aotearoa has reported minimal inclusion of LGBTQIA+ health as a whole, with trans and gender diverse health among the least covered subtopics. The survey of academics showed that the majority (69%) believed the content to be important, yet did not believe it was relevant to their teaching niche. The vast majority (74%) were unaware of any faculty support from their university around LGBTQIA+ healthcare. This is a major barrier to the inclusion of trans and gender diverse health in curricula, with 15 medical schools across Australia and Aotearoa reporting no faculty support in this area. Sanchez et al. additionally reported that of 15 medical schools, only 4 included transitioning or gender affirmation in required curriculum. This is reflected in healthcare providers' and students' limited knowledge of gender-affirming care. For example, a survey of fifth-year medical students at the University of Otago, Aotearoa, showed that fewer than half were able to explain gender affirmation. This survey additionally identified poor cultural competency within LGBTQIA+ healthcare, with fewer than half of students aware of the term Takatāpui (an umbrella term encompassing LGBTQIA+ Māori people which acknowledges the importance of Māori culture in their queer identity). Similarly, needs assessment reports interviewing Aboriginal Community Controlled Organization staff report that there is limited inclusion of Aboriginal culture within LGBTQIA+ education.
Participants reflected that content was 'really Western' and 'it'd be great to have pre-colonial stories' and a 'cultural lens with the other elements'. A key factor in the negative healthcare encounters reported by trans and gender diverse individuals is provider bias. Inclusion of trans and gender diverse health in curricula can legitimize gender-affirming care as a routine part of conventional medicine and may have a 'pre-bunking' effect, reducing healthcare providers' susceptibility to disinformation. However, healthcare bias is not necessarily reduced simply by introducing gender-affirming care to medical curricula. Research in the United States suggests that competency in trans and gender diverse healthcare is more strongly associated with providers' levels of transphobia than with their hours of formal or informal education. Therefore, transphobia and medical bias must be addressed through curricula to improve healthcare experiences and outcomes for trans and gender diverse people. Care should be taken to avoid binary constructions of sex and gender, the conflation of sex and gender, and the use of pathologizing and stigmatizing language in trans and gender diverse healthcare. An overemphasis on biomedical framing dismisses important sociocultural factors in the construction of sex and gender, and contributes to the stigmatization and pathologizing of trans and gender diverse medical care. A structural competency approach which recognizes the societal influences on health outcomes may be a useful tool to reduce bias towards marginalized populations and draws attention to those at the intersections of marginalized identities. Instead of simply describing the existence of health inequities, a structural competency approach discusses the historical and current structural factors which contribute to these inequities. For example, a curriculum which acknowledges barriers to care, including barriers within the learning institution, the profession, the wider environment and the history of trans and gender diverse healthcare, can teach students how to recognize and respond to illness as a downstream effect of social, political and economic structures. A structural competency approach has the opportunity to address the intersections of trans and gender diverse First Nations identities by recognizing the ways in which modern policies, laws and social contexts that criminalize or regulate gender identity and sexual orientation stem from colonial legacies. It is imperative that trans and gender diverse healthcare is included across medical and CM curricula to improve healthcare outcomes for this population. Inclusion in curricula should address factors such as bias, cultural competency, faculty education, use of non-stigmatizing language, clinical experience with trans and gender diverse patients, and inclusion in assessments, case studies and exams. Trans and gender diverse health should be included throughout healthcare niches rather than segregated within the curriculum, and education on the basics of gender-affirming care should be mandatory rather than elective content. In the niche of gynaecology, curricula should use gender-inclusive imagery and language (i.e. language which is self-determined by trans and gender diverse individuals), gender-neutral language (i.e. people with endometriosis) or additive language (i.e. women and trans and gender diverse people with endometriosis).
In line with this, informal references to gynaecology/reproductive medicine as 'women's health' should be avoided and replaced with gender-neutral language (gynaecology) or additive language (women's and trans and gender diverse health). Inclusion of trans and gender diverse people in the development of curricula is essential to ensure that content meets the needs of this diverse population and does not perpetuate existing medical bias. Importantly, cultural competency for First Nations trans and gender diverse people should be designed in consultation with First Nations trans and gender diverse people (Brotherboys and Sistergirls, Māori and Pasifika trans and gender diverse people, and Takatāpui representatives). Trans and gender diverse people with endometriosis or CPP currently have limited access to affirming medical care, which results in poorer health outcomes. The experiences of the author, L.A., highlight the importance of gender-affirming care in gynaecology and the broader social and economic factors which influence healthcare outcomes for this population. A key barrier to care identified is the lack of inclusion of trans and gender diverse health in healthcare curricula. Further evaluation of current curricula is required to inform future curriculum change, with no known assessment of CM curricula to date. Curricula exploring the health needs of trans and gender diverse people in a non-pathologizing and non-stigmatizing manner could dramatically improve access to gynaecological care and healthcare outcomes for this population.
Etablierung und Umsetzung des Nationalen Aktionsplans Gesundheitskompetenz in Deutschland | ae861859-78ae-43ca-b936-8c722fdccf81 | 11868236 | Health Literacy[mh] | Wie die breit gefächerte Literatur und zahlreiche Dokumente der Weltgesundheitsorganisation (WHO) belegen, ist Gesundheitskompetenz – international als „Health Literacy“ bezeichnet – inzwischen zu einem bedeutenden Public-Health-Thema geworden. Definiert wird Health Literacy als „the knowledge, motivation and competences to access, understand, appraise, and apply health information in order to make judgements and take decisions in everyday life concerning healthcare, disease prevention and health promotion“ . Denn zahlreiche Studien zeigen mittlerweile, dass die Gesundheitskompetenz in vielen Ländern – auch in Deutschland – in weiten Teilen der Bevölkerung unzureichend ist . Gleichzeitig stellen Gesundheitskompetenz und ein souveräner Umgang mit Gesundheitsinformationen in modernen komplexen Gesellschaften eine immer wichtiger werdende Voraussetzung für eine selbstbestimmte Lebensgestaltung dar . Die voranschreitende digitale Transformation befördert dies in hohem Tempo. Durch sie sind Gesundheitsinformationen zwar heute einfacher zugänglich als noch vor wenigen Jahren, aber zugleich ist es immer schwerer geworden, relevante Informationen in der unüberschaubaren Informationsfülle zu finden und vor allem, sie einschätzen und nutzen zu können . In Reaktion auf diese Situation wurden in etlichen Ländern politische Strategien, darunter auch „Nationale Aktionspläne“, zur Förderung der Gesundheitskompetenz aufgelegt . Das gilt seit 2018 auch für Deutschland. Über die Entwicklung dieser Strategien und Aktionspläne und auch über die Umsetzung und die ihr zugrundeliegenden Implementationsstrategien und -erfahrungen ist noch immer wenig bekannt. Ziel des vorliegenden Beitrags ist es, die Entstehung und Umsetzung des Nationalen Aktionsplans Gesundheitskompetenz in Deutschland (NAP-GK) darzustellen und zu fragen, welche Erfahrungen damit gesammelt wurden und welche Wirkungen der Plan entfaltet hat. Zuvor wird kurz die Ausgangssituation und die Entwicklung der Gesundheitskompetenzdiskussion in Deutschland betrachtet. Die Idee, einen NAP-GK zu erarbeiten, entstand im Jahr 2015 mit Bekanntwerden der ersten bevölkerungsbezogenen Studienergebnisse für Deutschland . Bis dahin fand das Thema Gesundheitskompetenz in Deutschland kaum Beachtung. Anders war die internationale Situation, insbesondere in den USA. Dort existiert bereits seit den 1990er-/2000er-Jahren eine umfangreiche Forschung zur Gesundheitskompetenz, die auf die Diskussion über die gesundheitlichen Folgen geringer Literalität (unzureichende Lese‑, Schreib- und Rechenfähigkeiten) zurückgeht. Denn bereits in den 1990er-Jahren hatten dort durchgeführte Studien wie der „National Adult Literacy Survey“ (NALS) gezeigt, dass geringe Literalität in den USA ein gravierendes gesellschaftliches Problem darstellt . Das ist es bis heute geblieben: Nach wie vor verfügen ca. 20 % der amerikanischen erwachsenen Bevölkerung nicht über grundlegende Lese- und Schreibfähigkeiten. In Deutschland sind es 12 % der Erwachsenen – das entspricht 6,2 Mio. Menschen . 
The studies also indicate that low literacy cuts across all social strata but affects the lower social strata particularly hard, and likewise that it has serious health consequences: it impedes access to the health system and hinders compliance because prescriptions, therapy instructions or information about an illness cannot be read and understood. As a rule, these studies, like those that followed, were based on a functional understanding of health literacy limited to literacy skills, which underwent numerous conceptual refinements in the following years. Particularly worth mentioning are the considerations on the relational character of health literacy, according to which health literacy is to be understood as an expression both of personal abilities and of the given environmental and structural conditions. The efforts to arrive at a broader, public-health-oriented understanding of health literacy should also be highlighted. Through these and other impulses, health literacy has meanwhile developed into a comprehensive, multidimensional, public-health-oriented concept that addresses not only compliance but also coping with illness and healthcare, prevention and health promotion, and in doing so aims to strengthen people's self-determined decisions about their health. This understanding also shaped the first European Health Literacy Survey (HLS-EU), which in 2012 effectively marked the starting signal for research on health literacy in Germany. The HLS-EU is also worth mentioning because it produced a new definition that continues to shape the European discussion to this day, together with a corresponding survey instrument, and because, no less importantly, it provided the first empirical findings for Germany. However, these were restricted to the federal state of North Rhine-Westphalia (NRW). Germany-wide data were thus lacking, and this motivated the first studies on health literacy in Germany. Among them is the first study on the health literacy of the population in Germany (Health Literacy Survey Germany 1, HLS-GER 1), which was conducted in 2014 using the same methodological approach as the HLS-EU. The results of this study brought about a lasting change in the attention paid to the topic in both the health sciences and health policy debates, for they showed that more than half of the population in Germany (specifically 54.3%) has low health literacy. They also made clear that health literacy is unequally distributed and that low health literacy is associated with, among other things, low education, low social status and older age. In other words, people with these characteristics belong to the so-called "vulnerable" groups who have particularly great difficulty dealing with health information and who should receive special attention when health literacy is promoted. This is all the more important because low health literacy is accompanied by numerous negative consequences, ranging from less healthy behaviours and lower use of preventive services to a higher risk of illness and more intensive use of the health system, and it causes considerable costs.
Similar tendencies are also apparent in other countries and in subsequent studies on the health literacy of the population and of individual population groups in Germany. Internationally, these findings on health literacy have led to the emergence of numerous strategies and national action plans to improve health literacy. In Germany, too, an initiative to develop a National Action Plan on Health Literacy formed in 2015 under the impression of the first study results. Unusually, it was not initiated by a political body but arose as a civil-society initiative under the lead of researchers at Bielefeld University and was composed of a group of recognized national and international experts from different scientific disciplines, healthcare institutions and health policy. Work on the NAP-GK began at the end of 2015. It was based on several working steps and started with a search for and analysis of existing action plans as well as a review of the available literature and existing study findings. Extensive discussions followed on the resulting consequences for the shape and content of the action plan to be developed for Germany, both in plenary sessions and in various working groups of the NAP-GK expert group. These culminated in a first draft of the NAP-GK. This was followed by an extensive discussion and consensus-building process: in a first step, a workshop was held with stakeholders and representatives of the associations in the health and education sectors as well as the members of the "Allianz für Gesundheitskompetenz" (an initiative of the Federal Ministry of Health, BMG) in order to discuss and supplement the plan's recommendations in the various fields of action. A second workshop was aimed specifically at self-help and patient organizations, with the goal of having them comment on the draft of the NAP-GK from their perspective. In addition, individual talks were held with further relevant interest and working groups and with individual stakeholders. All steps were documented and discussed, and important relevant results were subsequently incorporated. After several revisions of the draft version, the NAP-GK was finalized in late autumn 2017. Developed within barely two years, the NAP-GK contains 15 coordinated recommendations in four fields of action on how health literacy can be promoted in Germany and which measures should be initiated to that end. The recommendations target different areas of social life and are devoted in equal measure to strengthening personal health literacy and to improving environmental and structural conditions. The first field of action focuses on everyday living environments; its recommendations concentrate on strengthening health literacy in education and schooling, the residential environment, the world of work, the municipality, the media, and the leisure and consumer sectors. The second field of action is devoted to the health system, which, owing to its complexity, its multitude of actors and its lack of transparency, places very high demands on users.
The recommendation is to develop it further into a health-literate, user-friendly and patient-centred system and, to that end, to improve navigation, communication, information and participation. The third field of action deals with living with chronic illness, which continues to spread and confronts those affected and their families with numerous coping challenges. The recommendations aim to enable those affected to deal with the illness and its many accompanying problems in a health-literate way, to strengthen their capacity for competent self-management and to ease everyday life with a chronic illness. The fourth field of action concentrates on improving (and expanding) research on health literacy. The NAP-GK was handed over in February 2018 to the then Federal Minister of Health, who was also the patron of the project. The plan is addressed primarily to policymakers and to relevant actors in the health system and in other areas of society (the education system, the leisure sector, the world of work, etc.). Its overarching goal is, in line with the premise of "Health Literacy in all Policies", to initiate a cooperative alliance of all areas of society and thus enable a comprehensive approach to promoting health literacy.

With the publication of the NAP-GK in 2018, the work of the expert group entered a new phase and the implementation of the NAP-GK began. This was largely uncharted territory, for at that time there was little experience of how such action plans can be implemented within the complex structures of politics and practice. Nor did discussions about implementation strategies and experiences exist; they are only slowly emerging. Moreover, the action plans existing up to that point had often been developed in countries with more strongly state-run health systems, in which recommendations can in principle be implemented by higher-level authorities formulating guidelines whose implementation can be monitored and enforced. It is now doubted that such top-down strategies alone are promising. According to analyses of the implementation of political programmes and statutory regulations, it must be assumed that they are only of very limited effectiveness even in centrally governed systems. Because Germany does not have a centrally governed health system but, as a so-called "conservative" welfare state, has organized almost all of its social security and care systems according to the principle of subsidiarity, a top-down strategy was out of the question for the implementation in any case. In the German health system, the state can only formulate political principles, whose concretization and implementation is carried out by independent, self-organized, relatively autonomous organizations and associations. For the implementation of the NAP-GK, this gave rise to the need to involve not only political decision-makers but also the corresponding bodies, organizations, associations and self-governance actors in the health system and in other infrastructure areas in implementing and realizing the recommendations. A more bottom-up, cooperation-based implementation strategy therefore suggested itself in order to motivate institutions from as many areas as possible to implement the action plan.
This argued for a cooperation-based implementation strategy involving "upstream and downstream actors" from different areas. Drawing on considerations from implementation science, the implementation of the NAP-GK was moreover conceived not as a one-off event but as an ongoing process. This is supported by the fact that implementing action plans or political strategies almost always also involves introducing innovations, whose adoption rarely happens ad hoc but usually proceeds step by step; it also often meets with resistance at first, and overcoming this takes time. A further important feature of the NAP-GK implementation strategy is that it was designed as a continuum of three distinct, mutually building and overlapping steps: diffusion, dissemination and implementation (see Figure).

First implementation step: diffusion. The first step, "diffusion", aimed to spread the NAP-GK as widely as possible and to create a climate of interest in its implementation. To this end, the action plan was publicized through various channels. The starting point was a large event attended by actors from politics, self-governance bodies, the media and academia. The action plan was then distributed extensively by post and digitally and presented at conferences. The website in particular has proven to be an important medium for distributing the NAP-GK. It provides background information on the NAP-GK, the leadership team, the expert group and the coordination office, as well as on planned and completed events and other activities. In this way, it was possible to make the NAP-GK known in different networks and in many associations and societies. At the same time, numerous press reports and media items about the action plan were circulated.

Second implementation step: dissemination. The dissemination, initiated in parallel, involved the targeted distribution of the NAP-GK to selected key target groups and stakeholders, including politicians at federal and state level, leaders of associations and organizations in the health sector as well as of welfare organizations, institutions in the education sector and foundations. The aim was to inform them about the NAP-GK and its recommendations, to awaken their readiness to adopt it and to motivate them to commit themselves to promoting health literacy in their own sphere of action. To this end, publications appeared in relevant professional journals, and numerous presentations on the NAP-GK were given at conferences and at a wide range of specialist events and congresses in the health system and in other relevant areas of society. In addition, numerous articles on the NAP-GK were written and published in professional journals.

Third implementation step: implementation. A third step aimed at implementing the NAP-GK in fields of action considered important for promoting health literacy. At the centre of this step were (a) cooperative workshops with stakeholders and representatives from politics, science and practice and (b) collaboration with relevant networks in the spirit of "Health Literacy in all Policies".

a) Workshops. The workshops form the core element of the NAP-GK implementation strategy.
The aim of the workshops was to initiate a cooperative further elaboration of the NAP-GK recommendations into concrete, directly implementable action steps. The intention was in this way to achieve identification with the NAP-GK and its adoption and to motivate the participating actors to commit themselves to implementation within their sphere of influence. As a result of each workshop, strategy papers were developed and subsequently distributed. In total, nine workshops with different actors were organized by the NAP-GK coordination office, each involving about 25–30 participants. The workshops followed a fixed procedure (see Figure). Detailed minutes were taken of the workshops and served as data material for the subsequent cooperative development of a strategy paper. For this purpose, the most important results of the workshops were summarized and initial hypothesis-like key statements were formulated. On this basis, a strategy paper was drafted under the responsibility of the workshop leadership. It was sent to the workshop participants for comment and then usually revised several times until all agreed to publication. The strategy papers were made available to the participants for their networks and distributed in parallel via the NAP-GK website; some were additionally published in journals. As a rule, the strategy papers met with a strong response. In some organizations they led to intensive discussions. They also stimulated identification with the NAP-GK as well as readiness to implement it, which was reflected, among other things, in requests for presentations and in organizations' own project plans. To this day, after roughly seven years of the NAP-GK's existence, a broad range of practice initiatives can be found that refer to the action plan.

b) Cooperation with networks. A further component of the cooperative implementation strategy consisted of collaborating with networks and participating in committees and working groups. These included, for example, the working group on health literacy within the National Cancer Plan, a group of the same name in a nursing association, advisory boards of institutions, and the emerging health literacy networks such as the health literacy working groups in the Deutsches Netzwerk Versorgungsforschung and in the Deutsche Gesellschaft für Sozialmedizin und Prävention, or the Deutsches Netzwerk Gesundheitskompetenz (DNGK). Of particular importance is the cooperation with the "Allianz Gesundheitskompetenz", which was founded by the BMG in 2017 and aims to initiate practice projects to promote health literacy in the healthcare sector. With this agenda, the Allianz Gesundheitskompetenz was and remains particularly important for implementing the action plan. In return, the members of the Allianz could be supported in developing and realizing ideas for practice projects, since at the beginning the necessary expertise was often lacking.
Besonders die Website hat sich als wichtiges Medium zur Distribution des NAP-GK erwiesen. Sie liefert Hintergrundinformationen über den NAP-GK, das Leitungsteam, die Expertengruppe und die Koordinierungsstelle sowie geplante und durchgeführte Veranstaltungen und sonstige Aktivitäten. Auf diese Weise gelang es, den NAP-GK in unterschiedlichen Netzwerken auch in vielen Verbänden und Vereinen bekannt zu machen. Zugleich wurden zahlreiche Presseberichte und Mediennachrichten über den Aktionsplan gestreut. Bei der parallel eingeleiteten Dissemination ging es um eine gezielte Verbreitung des NAP-GK an ausgewählte, wichtige Zielgruppen und Stakeholder, so u. a. Politiker:innen auf Bundes- und Landesebene, Leitungskräfte von Verbänden und Organisationen des Gesundheitswesens sowie von Wohlfahrtsorganisationen oder Einrichtungen des Erziehungs- und Bildungssektors und Stiftungen. Ziel war es, sie über den NAP-GK und seine Empfehlungen zu informieren, bei ihnen Adoptionsbereitschaft zu wecken und sie zu motivieren, sich in ihrem Handlungsbereich für die Förderung von Gesundheitskompetenz zu engagieren. Dazu erfolgten Publikationen in entsprechenden Fachzeitschriften, ebenso zahlreiche Vorträge über den NAP-GK auf Tagungen, unterschiedlichsten Fachveranstaltungen und Kongressen im Gesundheitssystem und in anderen relevanten Gesellschaftsbereichen. Zudem wurden zahlreiche Beiträge in Fachzeitschriften zum NAP-GK erstellt und veröffentlicht. Ein dritter Schritt zielte auf die Implementation des NAP-GK in für die Förderung von Gesundheitskompetenz als wichtig erachtete Handlungsbereiche. Im Mittelpunkt dieses Schritts standen a) kooperative Workshops mit Stakeholdern und Vertreter:innen aus der Politik, Wissenschaft und Praxis und b) eine Zusammenarbeit mit relevanten Netzwerken nach dem Motto „Health Literacy in all Policies“. a) Workshops Die Workshops bilden das Kernelement der Umsetzungsstrategie des NAP-GK. Ziel der Workshops sollte es sein, eine kooperative Weiterbearbeitung der Empfehlungen des NAP-GK in konkrete, direkt umsetzbare Handlungsschritte zu initiieren. Intention war es, auf diese Weise eine Identifikation mit dem NAP-GK und dessen Adoption zu erreichen und die beteiligten Akteur:innen zu motivieren, sich in ihrem Wirkungsfeld für die Umsetzung zu engagieren. Als Ergebnis der Workshops wurden jeweils Strategiepapiere erarbeitet und anschließend distribuiert. Insgesamt wurden neun von der Koordinierungsstelle des NAP-GK organisierte Workshops mit unterschiedlichen Akteur:innen durchgeführt, an denen jeweils etwa 25–30 Teilnehmende mitwirkten. Die Workshops folgten dabei einem festen Ablaufmuster (Abb. ). Von den Workshops wurden ausführliche Protokolle erstellt, die als Datenmaterial für die anschließende kooperative Erarbeitung eines Strategiepapiers dienten. Dazu wurden die wichtigsten Ergebnisse der Workshops anschließend zusammengefasst und erste hypothesenartige Kernaussagen festgelegt. Auf dieser Basis wurde ein von der Leitung der Workshops verantwortetes Strategiepapier entworfen. Es wurde den Workshopteilnehmenden zur Kommentierung zugesendet und anschließend meist mehrfach überarbeitet, bis alle einer Publikation zustimmten. Die Strategiepapiere wurden den Teilnehmenden für ihre Netzwerke zur Verfügung gestellt und parallel über die Website des NAP-GK distribuiert. Teilweise wurden sie ergänzend in Zeitschriften veröffentlicht. Die Strategiepapiere stießen in der Regel auf große Resonanz. In einigen Organisationen führten sie zu intensiven Diskussionen. 
After a good three years, the NAP-GK had become a kind of reference work for the promotion of health literacy in Germany and had contributed to the emergence of numerous initiatives in science, in practice, and in politics. Then came the COVID-19 pandemic, which brought an abrupt rupture to almost all areas of social life. The implementation strategy of the NAP-GK also reached its limits and had to be modified, especially since the hitherto successful career of the topic of health literacy unexpectedly began to falter. This was surprising because the pandemic quickly showed how important it is in health crises to be able to find, understand, and appraise the right information on dealing with an unfamiliar situation at the right moment and to use it for one's own health behavior. During the pandemic, strengthening health literacy therefore became more important than ever, particularly as the population was confronted ad hoc with extensive informational challenges without being able to draw on established knowledge or to follow the usual information pathways. At the same time, the results of the second study on the health literacy of the population in Germany (HLS-GER 2) from 2019/2020 showed that health literacy in Germany had deteriorated since the first study, and the unequal distribution of health literacy had also become more pronounced . It could thus be assumed that health literacy in Germany was in poor shape at the time of the pandemic. Early studies on coronavirus-specific health literacy soon came to similar conclusions . For the NAP-GK expert group, this meant that the promotion of health literacy and the implementation of the NAP-GK had to be intensified. However, the conditions for implementation had changed completely as a result of the coronavirus pandemic and its accompanying circumstances, such as contact restrictions. Above all, the workshop strategy and the cooperation with committees and working groups were hardly feasible any longer, because in-person meetings were no longer possible. The implementation strategy was therefore changed: instead of workshops, the focus shifted to position papers and policy briefs, which were developed cooperatively and digitally in different teams. A total of five such documents were produced.
In parallel, an international working group on "Health Literacy Policies" and their implementation formed during this period within the WHO Action Network on Measuring Population and Organizational Health Literacy (M-POHL), in which representatives of the NAP-GK participated and contributed to the development of a corresponding policy guide . Toward the end of the pandemic, it became clear that a further modification of the NAP-GK implementation strategy is needed. It is currently being discussed and is being developed together with the German Network for Health Literacy (DNGK). Overall, in the seven years of the NAP-GK's existence, it has been possible to bring many of the recommendations into public discussion and to provide numerous impulses for agenda-setting and for the initiation of projects to promote health literacy. The most important experiences gained with the NAP-GK and its development and implementation strategy are presented and reflected on in summary below. 1. Civil society initiative. One of the special features of the German NAP-GK is that it was not developed by a commission appointed by the government but by a group of recognized actors convinced of the importance of the topic. It can thus serve as an example of how civil society initiatives can contribute decisively to the development of new policy fields and to building capacity for a new health-relevant topic. At the same time, this arrangement has advantages and disadvantages. On the one hand, it guaranteed independence and neutrality, as well as high motivation. On the other hand, it carries the risk of political distance and of weak political resonance and support. This risk could be circumvented by the early assumption of patronage of the NAP-GK by the Federal Minister of Health, which lent the NAP-GK a certain political legitimacy and corresponding standing, especially in the early period. This changed with the subsequent changes of minister. By then, however, the NAP-GK had become an established reference point, as shown, among other things, by the fact that it was explicitly taken up in a recently published report of the Organisation for Economic Co-operation and Development (OECD) presenting various initiatives to improve health literacy on the basis of selected country examples . 2. Cooperative approach and clarification of misunderstandings. The cooperative approach to development and implementation contributed substantially to the success of the NAP-GK: it made it possible to involve many important actors, stakeholders, and interest groups. In this way, the topic could be placed in the professional discussion and gain acceptance. At the same time, this approach helped to dispel initial reservations about the concept of health literacy. Above all, the translation of "health literacy" as "Gesundheitskompetenz" initially led to misunderstandings among those involved, because it collided with the long-running debate on health promotion and on the concept and measurement of competence, and required considerable clarification. At the beginning, for example, the term health literacy was frequently equated with health promotion and with the ability to behave in a health-conscious manner.
The aspect central to health literacy, the ability to deal with health-related information, was often overlooked. Such problems of fit with established concepts, and the resulting need for clarification, are not uncommon when new concepts are introduced and are not a national peculiarity . Internationally, too, the concept initially met with irritation and, in part, aversion . The same applies to the survey methodology underlying the population-based studies, which was also initially (and still is) viewed critically. It could be observed that many of these problems of fit wore off over time and that resistance to adopting the concept and the survey methodology diminished. Nevertheless, they still require attention, because the existing divergences and interfaces with adjacent concepts, and the debate about different survey methods, are by no means settled (e.g., ). 3. Considering implementation early on. Overall, the experiences with the NAP-GK show that it is not enough to develop action plans and/or other strategies to promote health literacy; it is important to consider and plan the subsequent step, implementation, at an early stage. This happens far too rarely. Whether for action plans, innovative policy strategies, laws, or recommendations from expert commissions, implementation is often simply left to take its own course. Yet this strategy is rarely successful and leads to numerous implementation deficits, as the literature also shows (e.g., ). The experiences gained with the NAP-GK confirm this once again and make clear that the available findings of implementation science evidently need to receive even greater attention. Looking at the steps underlying the implementation of the NAP-GK, the experiences gathered show how essential the steps of diffusion and dissemination are for making new topics and programs known and for generating interest in and motivation for implementation. At the same time, it became clear in retrospect that both steps, although a great deal of energy was devoted to them, need to be intensified. As studies now show, awareness of the health literacy concept, for example among the health professions and thus among important agents for the promotion of health literacy, remains limited , so that further and more broadly effective education is needed. 4. Cooperative workshops. The third step, implementation through cooperative workshops with politicians and stakeholders to operationalize the recommendations of the NAP-GK, formed the core of the implementation effort. It, too, proved successful, as shown by the response in the feedback rounds and by the unexpected degree of identification with the strategy papers. However, conducting the workshops, and in particular recruiting participants, at times turned out to be very laborious and demanding, for scheduling and organizational reasons. The same was true during the coronavirus pandemic for the digital meetings to develop the policy briefs and position papers. In addition, representatives of politics were difficult to win over for participation; stakeholders therefore made up the majority of participants. It also soon became apparent that one-off workshops are not sufficient and merely serve as an "appetizer".
They can stimulate the willingness to innovate and to implement, but they cannot produce effects that are stable over time. What is important, therefore, as the experiences confirmed, is a long-term, process-oriented approach with repetition loops and deeper engagement in different formats, especially since stimulating and implementing innovations in the participants' spheres of activity rarely proceeds smoothly and takes time . At the same time, the cooperative approach should definitely be maintained in all efforts. Although it is laborious, it has proven its worth and shown itself to be effective. The same applies to the cooperation with committees, working groups, and networks. 5. Areas to be involved more strongly. The dissemination and implementation of the NAP-GK concentrated primarily on the healthcare system. The education system and other important areas (municipalities, nutrition) tended to be involved too little. The reason for this is that such involvement requires resources that were not available to the NAP-GK project. The topics of "vulnerable groups" and, associated with them, "avoiding inequality" did play a significant role in the implementation: no fewer than four of the nine workshops were devoted to individual vulnerable groups, and inequality was in turn made an important cross-cutting theme. Nevertheless, these topics, too, should be taken up even more intensively in the future. 6. Enabling monitoring. A formative evaluation to determine the effects of the NAP-GK would also have been desirable; however, it likewise requires more resources and capacity. At the same time, existing population surveys can be used for evaluation, especially if they are designed as regular monitoring, as the WHO also emphasizes . Enabling such monitoring remains a task for the future, since to date there are hardly any repeated surveys of health literacy in Germany that would allow comparative trend analyses and could be drawn on for evaluation. The few comparable population-based data available already suggest, however, that such monitoring is likely to be important in order to observe changes in health literacy over time and to assess promotion measures.
In view of persistent old and new crises, with their varied societal and health-related consequences and uncertainties, the importance of health literacy continues to grow. While it initially looked as though health literacy would quickly become a new policy field, doubts are now warranted. The topic of health literacy now attracts remarkable attention in science and also in practice. At the political level, however, the resonance does not match the societal importance of the topic, even though limited health literacy in Germany remains a public health problem that should not be underestimated and that affects large parts of the population. It therefore remains a central task to continue working for sustainable agenda-setting and for the systematic development of interventions and research on health literacy, with sufficient financial resources. To initiate this and to place the further work with the National Action Plan on an even broader footing, central coordination will pass from Bielefeld University and the Hertie School, Berlin, to the German Network for Health Literacy as of 2025.
A two-stage deep-learning model for determination of the contact of mandibular third molars with the mandibular canal on panoramic radiographs | 9f25bdf1-e177-4f2f-b896-dc1a6ce22038 | 11562527 | Dentistry[mh] | The inferior alveolar nerve is a branch of the trigeminal nerve that travels through the mandibular canal (MC) and innervates the mandibular teeth and the lower lip . Mandibular third molars (MTMs) are among the most commonly impacted teeth . Surgical extraction of impacted MTMs is a common dental procedure that may be associated with complications such as pain, edema, mouth opening limitation, and lower lip paresthesia. The incidence of injury to the inferior alveolar nerve during MTM extraction ranges from 0.4 to 8.4%. Although spontaneous healing occurs in most cases, permanent nerve damage remains a risk (< 1% of the cases) . Anatomic proximity of the MC to the MTMs is the most important risk factor for nerve damage. Insufficient or inaccurate assessment of the anatomical relationship between MTMs and MC by dental clinicians can result in incorrect treatment planning and nerve damage. Accurate preoperative assessment of the location of the MC and its relationship with MTM is imperative . Artificial intelligence (AI) is defined as the capability of a machine to perform functions normally associated with human intelligence such as reasoning, learning, and self-improvement . Deep learning (DL), a subset of AI, uses artificial neural networks and has gained increasing popularity in medicine and dentistry . AI currently has several applications in dentistry for a variety of tasks, from detection of dental caries , periapical lesions , and periodontal diseases to estimation of patients’ dental age or clinical decision support . DL has also been used for the detection of the position and angulation of MTMs . Several previous studies have reported acceptably high accuracy of DL models for classification of the relationship between MTMs and MC as well as the level of difficulty of surgical removal of MTMs and the associated risk of nerve damage . Researchers have used various DL models, including U-nets, YOLO-driven models, and Retinanet, to detect and classify these anatomical structures. Notably, the majority of these studies used panoramic radiographs, i.e. two dimensional (2-D) imaging, to establish the reference test and define if there was an anatomical contact between MTMs and MC or not. While panoramic radiography is more accessible and shows lower radiation dose than three-dimensional (3-D) modalities such as cone beam computed tomography (CBCT), it is less accurate to assess the true relationship between MTMs and MC. Therefore, using 3-D CBCT images to establish the ground truth would significantly add to the existing evidence. Furthermore, in certain parts of the world, access to three-dimensional imaging resources may be limited . This is where the remarkable potential of DL shines, especially when applied to training using the gold standard provided by 3-D images. For DL, using 3-D imaging for establishing the reference test on which models will then be trained on the corresponding 2-D images is a valid strategy; the trained models may show superior capabilities on 2-D imaging as they ingested 3-D information during training . Therefore, in the present study, 3-D CBCT images have been used as the ground truth for classifying the contact/no contact of MTMs and MCs. 
This study aimed to assess the accuracy of a two-stage deep learning (DL) model for (1) detecting MTMs and the MCs, and (2) classifying their anatomical relationship (contact/no contact) on panoramic radiographs.
The study protocol was approved by the Research Ethics Committee at Isfahan University of Medical Sciences (#IR.MUI.REC.1402.007, approval date: 23/05/2023). Reporting of this study follows the "Artificial intelligence in dental research" checklist . Data and sampling This cross-sectional study was conducted using panoramic radiographs and CBCT scans of the same patients retrieved from the archives of an oral radiology clinic in Isfahan, Iran. The panoramic radiographs were obtained using a Promax scanner (Planmeca, Helsinki, Finland) and the CBCT scans were taken using a Cranex 3D CBCT scanner (Soredex, Tuusula, Finland). Exposure was tailored based on individual patient characteristics to optimize image quality and detail. The radiographs were obtained between April 2020 and May 2023. The inclusion criteria were (1) absence of radiographic evidence of maxillofacial fractures, (2) absence of radiographic evidence of maxillofacial abnormalities, and (3) availability of CBCT scans of the MTMs of the same patients taken within 1 month of the panoramic radiographs. The exclusion criteria were poor-quality panoramic radiographs and poor-quality CBCT scans. In total, 387 individuals were evaluated, of which 203 (55.3%) were females. The age of the studied individuals ranged from 18 to 59 years (mean 25 years). MTMs were present bilaterally on 232 and unilaterally on 155 panoramic radiographs. In total, 619 images containing MTMs and MCs were collected: 318 in the contact group and 301 in the no-contact group. All images were anonymized and coded before entering the study. The dataset was divided into training data, validation data, and test data (80:10:10). Data labeling Two oral and maxillofacial radiologists with 6 and 30 years of clinical experience, respectively, evaluated the relationship of MTMs with the MC on CBCT scans (on cross-sectional and axial views) and classified the panoramic images accordingly (contact or no contact). In case of disagreement, the images were evaluated simultaneously by the two observers until a consensus was reached (Fig. ). Labeling of panoramic radiographs for MTMs and MCs was conducted by a calibrated dental student using the LabelMe software (MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA), employing bounding boxes. Boxes used the occlusal surface of MTMs as the superior boundary and the inferior border of the mandible as the inferior boundary. The posterior end of the MC was the posterior boundary, and the mesiodistal half of the mandibular second molar comprised the anterior boundary of the bounding boxes, so that each bounding box contained the MTM and MC on one side (Fig. ). One experienced radiologist double-checked all boxes. The labeling process was critical to ensure accurate training of the DL models. The calibrated dental student positioned the bounding boxes around MTMs and MCs using predefined anatomical landmarks, which ensured consistency across all labeled images. As an extra quality control measure, an experienced radiologist reviewed the annotations, minimizing potential labeling errors. Image preprocessing The panoramic radiographs and CBCT scans were collected in DICOM format. Prior to analysis, the images were converted to PNG format to facilitate processing. The images were resized to a standardized resolution of 512 × 512 pixels to ensure consistency across the dataset.
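The labeling and preprocessing conventions above can be made concrete with a small, illustrative sketch (not the authors' actual pipeline) that resizes an exported image to 512 × 512 pixels and converts a LabelMe-style rectangle annotation into pixel coordinates on the resized image. It assumes the standard LabelMe JSON layout with a "shapes" list of "rectangle" entries; the file names and the label string are hypothetical.

```python
import json
from pathlib import Path

from PIL import Image

TARGET_SIZE = (512, 512)


def preprocess_image(src_path: Path, dst_path: Path) -> None:
    """Resize an exported panoramic image to the standardized 512 x 512 resolution."""
    img = Image.open(src_path).convert("L")  # panoramic radiographs are single-channel
    img.resize(TARGET_SIZE).save(dst_path)


def read_labelme_boxes(annotation_path: Path) -> list[dict]:
    """Read LabelMe rectangle annotations and rescale them to the 512 x 512 image."""
    data = json.loads(annotation_path.read_text())
    sx = TARGET_SIZE[0] / data["imageWidth"]
    sy = TARGET_SIZE[1] / data["imageHeight"]
    boxes = []
    for shape in data["shapes"]:
        if shape.get("shape_type") != "rectangle":
            continue
        (x1, y1), (x2, y2) = shape["points"]
        boxes.append({
            "label": shape["label"],  # e.g. "mtm_mc_region" (hypothetical label name)
            # [xmin, ymin, xmax, ymax] in resized-pixel coordinates
            "box": [min(x1, x2) * sx, min(y1, y2) * sy,
                    max(x1, x2) * sx, max(y1, y2) * sy],
        })
    return boxes


if __name__ == "__main__":
    preprocess_image(Path("sample_panoramic.png"), Path("sample_panoramic_512.png"))
    print(read_labelme_boxes(Path("sample_panoramic.json")))
```

Rescaling the box coordinates together with the image keeps the annotations aligned with the standardized 512 × 512 inputs.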
To artificially expand the training dataset and enhance the model's robustness, data augmentation techniques were employed, including random rotations of up to 10 degrees to the left and right, horizontal flipping, and brightness adjustments. Models and model parameters In the first step, we utilized a Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, underpinned by the ResNeXt architecture, for the extraction of regions of interest (ROIs). The Faster R-CNN model, a prominent two-stage object detection model, has gained substantial traction in medical imaging research. It integrates a region proposal network with a deep convolutional neural network, facilitating object localization within images, and has consistently yielded favorable results . To train the model, 2000 epochs with a batch size of 64 were used. In order to increase the overall accuracy of classification across the two stages of prediction, the predicted bounding boxes were cropped by the model and saved as the outputs of this stage. ResNeXt was used for this purpose. As a modification of ResNet, ResNeXt also uses residual blocks but additionally introduces a concept known as "grouped convolutions": each residual block in ResNeXt contains parallel convolutional pathways, the number of which is referred to as the "cardinality." These pathways process the input simultaneously, and their outputs are aggregated to produce the block's final output. This architecture is especially well suited to academic and medical imaging applications . Model training Following the detection phase, cropped ROIs were subjected to classification modeling against the ground truth label established on the CBCTs. The image preprocessing step for this model included normalization and data augmentation by rotation of 10° to the left and right. In total, 619 images were fed to the model: 500 for training, 61 for validation, and 58 for testing. Stochastic gradient descent was selected as the optimizer, with a batch size of 4 and 35 training epochs.
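As a rough illustration of this second-stage training setup (normalization, rotation of up to 10°, stochastic gradient descent, batch size 4, 35 epochs), a minimal PyTorch sketch could look as follows. The folder layout, classifier depth (ResNeXt-50), learning rate, and input size are assumptions rather than the study's exact configuration, and the torchvision calls follow the >= 0.13 API.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Stage-two preprocessing: normalization plus rotation of up to 10 degrees, as stated above.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),                 # classifier input size (assumption)
    transforms.Grayscale(num_output_channels=3),   # ImageNet-style backbones expect 3 channels
    transforms.RandomRotation(10),                 # random rotation, 10 degrees left/right
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: crops/train/contact and crops/train/no_contact.
train_set = datasets.ImageFolder("crops/train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=4, shuffle=True)   # batch size 4

model = models.resnext50_32x4d(weights=None)        # ResNeXt classifier (depth is an assumption)
model.fc = nn.Linear(model.fc.in_features, 2)       # two classes: contact / no contact

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # learning rate is an assumption

for epoch in range(35):                              # 35 epochs
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: mean loss {running_loss / len(train_set):.4f}")
```

A plain epoch loop like this is enough to reproduce the stated hyperparameters; in practice a validation pass per epoch would be added to monitor the validation loss reported in the results.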
Model performance evaluation The training and validation of the models were conducted using the Python programming language with the PyTorch framework. The images and annotations were stored and processed in JSON format, which includes information about the image size, object categories, and bounding box coordinates that is essential for training the model. AP50 is the average precision at an intersection over union of 0.5, while AP75 is the average precision at an intersection over union of 0.75. AP50 and AP75 were used as assessment indices for region of interest (ROI) extraction by the object detection model. The accuracy, loss, precision, recall, specificity, F1-score, and the confusion matrix were used to assess the model performance in the detection of structures and the classification of their relationship. The confusion matrix tabulates the classification results of the model against the actual data in terms of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) counts, from which four summary metrics (accuracy, precision, recall, and F1-score) are derived. Accuracy indicates to what extent the model predicts the output data correctly and reflects how well the model has been trained; it is calculated using the following formula: $$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$ Precision indicates what percentage of the positive diagnoses made by the algorithm is actually correct, and is calculated by the following formula: $$Precision = \frac{TP}{TP + FP}$$ Recall indicates what proportion of the actual positive cases is correctly identified, and is calculated as follows: $$Recall = \frac{TP}{TP + FN}$$ Specificity measures the proportion of actual negatives that are correctly identified as such, and is calculated as: $$Specificity = \frac{TN}{TN + FP}$$ F1-score is the harmonic mean of precision and recall, and is calculated by the following formula: $$F1 = 2 \times \frac{Precision \times Recall}{Precision + Recall}$$
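A small helper that mirrors these formulas can be used to recompute such values from a confusion matrix; this is an illustrative sketch rather than the evaluation code used in the study, and the example counts are made up.

```python
from dataclasses import dataclass


@dataclass
class ConfusionCounts:
    tp: int  # contact predicted and actually present
    tn: int  # no contact predicted and actually absent
    fp: int  # contact predicted but actually absent
    fn: int  # contact present but missed


def metrics(c: ConfusionCounts) -> dict[str, float]:
    accuracy = (c.tp + c.tn) / (c.tp + c.tn + c.fp + c.fn)
    precision = c.tp / (c.tp + c.fp)
    recall = c.tp / (c.tp + c.fn)
    specificity = c.tn / (c.tn + c.fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}


# Made-up counts for illustration only (not the study's confusion matrix).
print(metrics(ConfusionCounts(tp=24, tn=27, fp=2, fn=5)))
```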
In total, 387 individuals were evaluated. The age of the studied individuals ranged from 18 to 59 years (mean 25 years). MTMs were present bilaterally on 232 and unilaterally on 155 radiographs. In total, 619 images containing MTMs and MCs were collected: 318 in the contact group and 301 in the no-contact group. Detection of MTMs and MCs The model demonstrated high accuracy in identifying the ROIs in panoramic images, achieving an average precision of 0.99 at 50% overlap (AP50) and 0.90 at 75% overlap (AP75). This means that the model was able to correctly identify the area containing the MTM and MC within the specific boundaries detailed in the data labeling part of the article. Classification of the relationship of MTMs and MCs For the training set, the model achieved an accuracy of 0.86, meaning it correctly classified the contact between the MTM and MC in 86% of cases, with a loss of 0.37. The loss value represents the discrepancy between the model's predictions and the ground truth; the closer it is to zero, the better the model's performance. During the validation phase, the model's accuracy was 0.81 (81% correct classifications) with a loss of 0.48. In the test set, the model had an accuracy of 0.85 (85% correct classifications) and a loss of 0.41. Additionally, the precision of the model in the test group was 0.86, indicating that 86% of the contacts detected by the model were correct. The recall was 0.85, showing that the model successfully identified 85% of all actual contacts. The specificity was 0.93, meaning the model correctly identified 93% of non-contacts. The F1-score was 0.84, which balances precision and recall to give an overall measure of the model's performance. Finally, the area under the receiver operating characteristic curve (AUROC) was 0.90, indicating a strong ability of the model to distinguish between contact and no contact. Detailed evaluation metrics are provided in Table .
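To illustrate how the two stages fit together at prediction time, the following sketch runs a detector over a panoramic image, crops the sufficiently confident MTM/MC regions, and passes each crop to a contact/no-contact classifier. It instantiates untrained torchvision stand-ins (a stock ResNet-50-FPN Faster R-CNN and a ResNeXt-50 classifier) rather than the study's trained ResNeXt-based models, and the score threshold, crop size, and class indices are assumptions.

```python
import torch
from PIL import Image
from torchvision import models, transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Untrained stand-ins; in practice the trained detector and classifier weights would be loaded.
detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=2).eval()  # background + MTM/MC region
classifier = models.resnext50_32x4d(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)           # contact / no contact
classifier.eval()

to_tensor = transforms.ToTensor()
clf_tf = transforms.Compose([
    transforms.Resize((224, 224)),          # classifier input size (assumption)
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])


def predict_contact(panoramic_png: str, score_threshold: float = 0.5) -> list[str]:
    """Stage 1: locate MTM/MC regions. Stage 2: classify each cropped region."""
    image = Image.open(panoramic_png).convert("RGB")
    results = []
    with torch.no_grad():
        detections = detector([to_tensor(image)])[0]   # dict with 'boxes', 'labels', 'scores'
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_threshold:
                continue
            x1, y1, x2, y2 = (int(v) for v in box.tolist())
            roi = image.crop((x1, y1, x2, y2))          # cropped MTM/MC region
            logits = classifier(clf_tf(roi).unsqueeze(0))
            results.append("contact" if logits.argmax(1).item() == 1 else "no contact")
    return results
```

Because the detector and classifier were trained separately, this cropping step is the only coupling between the two stages, which keeps either component replaceable.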
This study assessed the performance of a two-stage DL algorithm for detecting MTMs and MCs on panoramic images and classifying their relationship (contact/no contact) against a reference test established on CBCTs. The detection performance of the Faster R-CNN model for MTMs and MCs was AP50 = 0.99 and AP75 = 0.90, while the classification yielded accuracy metrics between 0.84 (F1-score) and 0.93 (specificity). The developed model pipeline was able to predict 3-D relationships on 2-D imaging with high accuracy; the risk of falsely predicting a contact was low (< 7%, reflecting a specificity of 0.93), and the chance of flagging a true contact to clinicians was relatively high (85%). The findings of this study need some more in-depth discussion. Panoramic radiographs are routinely used for the initial assessment of MTMs. Although several signs in panoramic radiographs can be indicative of a contact between MTMs and the MC, including narrowing of the MC in the area of the third molar, narrowing of the roots, darkening of the roots, and deflection of the MC, the gold standard modality for determination of the true relationship is CBCT or other 3-D modalities . With the ever-increasing use of DL in dental diagnostics, 2-D images have been widely used as input data for various detection, classification, and prediction tasks . DL models trained on conventional radiographs complemented by reference tests established on 3-D data can enhance the diagnostic potential and yield superior performance. Such DL models, trained using reference 3-D images, can enhance 2-D images, effectively elevating them to a "super 2-D" level. A similar concept of enhancing the diagnostic capabilities of a specific imaging modality by leveraging another complementary imaging technique has been used in studies to improve the quality of CBCT images from learned computed tomography data or to generate 3-D imagery from 2-D data . Several studies have used DL for assessing MTMs and their relationship with MCs. Fukuda et al. compared AlexNet, GoogLeNet, and VGG-16 for classification of the relationship of MTMs with MCs. The training data in their study included 600 panoramic radiographs (300 radiographs showing no contact and 300 showing contact between MTMs and MCs). They reported accuracy values between 0.88 and 0.93, with no significant difference among the three models in the classification task. Lee et al. used Retinanet for detection of MTMs and Vision Transformers for predicting the level of difficulty of MTM extractions and the risk of nerve damage. They used 4903 panoramic radiographs and reported accuracies of 0.81 to 0.84 for their predictions. Liu et al. used 254 CBCT scans as inputs for a U-net-based model for detection of MCs and MTMs and a ResNet-34 model for classification of their relationship, again confirming high accuracies. Kempers et al. detected MTMs and MCs on 863 panoramic radiographs and classified their relationship by using a U-net with MobileNet V2, yielding an accuracy of 0.95. Zhu et al. evaluated the detection of MTMs and MCs on 503 panoramic radiographs using MM3-IANnet, a YOLOv4-based model, and reported an average precision of 0.83. Takebe et al. used a YOLO-driven DL model trained on panoramic radiographs complemented by CBCT information. Their model reached an accuracy of 0.89 in the original dataset of 579 panoramic radiographs and 0.93 in an external test dataset.
This suggested model combines the Faster R-CNN model with the ResNeXt architecture, providing several advantages over previous approaches. The Faster R-CNN framework, known for its robust object detection capabilities, was specifically chosen for its high performance in medical imaging tasks. The ResNeXt model, a modification of ResNet, uses grouped convolutions to enhance feature extraction, resulting in more accurate and efficient classification. Compared to other studies, the present approach offers: 1- Enhanced detection accuracy: The combination of Faster R-CNN and ResNeXt achieved higher detection accuracy (AP50 = 0.99, AP75 = 0.90) compared to other models such as AlexNet, GoogLeNet, and VGG-16 used by Fukuda et al. , which reported accuracies between 0.88 and 0.93. 2- Efficient Use of 3-D Data: By training on 2-D images with a reference test established on 3-D CBCT scans, the present model leverages the rich information from 3-D data to enhance the accuracy of 2-D image analysis, potentially surpassing human performance. Overall, the unique combination of these neural network architectures and methodologies resulted in a highly accurate and efficient model for detecting and classifying the relationship between MTMs and MCs on panoramic radiographs. The present study has some limitations. First, we only used a limited sample size, as finding pairs of panoramic and CBCT images fitting the inclusion criteria was challenging. Second, we used radiographic data from a single acquisition device (Promax scanner for panoramic radiographs and Cranex 3D CBCT scanner), which may limit the generalizability of the findings. Confirmation of the findings on other populations and using different radiographic devices should be performed to confirm generalizability. Differences in image quality and parameters from other devices could affect the model’s performance. Future studies should include data from multiple devices to validate the robustness of the present model. Third, we employed only one model for each of the two tasks of detection and classification, and used an ensembling modeling approach. Using different architectures and employing end-to-end modeling should be performed to comprehensively explore the subject. Fourth, a comparison of the accuracy of classification with that of clinicians is recommended to gauge the possible benefits of DL for this task. Last, it should be noted that a potential “super-2D” image is a challenge for the human user: If a certain feature is not assessable and hence explainable in a 2-D image, human autonomy and responsibility will be challenged. Future studies should consider introducing elements of explainable AI.
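To make the architectural combination discussed above more tangible, the sketch below assembles a Faster R-CNN detector on top of a ResNeXt feature pyramid backbone and shows, in isolation, the grouped 3 × 3 convolution through which ResNeXt expresses its "cardinality". The backbone depth, class count, and block sizes are assumptions, and the construction uses the torchvision >= 0.13 keyword API rather than the study's actual code.

```python
import torch
from torch import nn
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Faster R-CNN detector on a ResNeXt feature pyramid backbone.
backbone = resnet_fpn_backbone(backbone_name="resnext50_32x4d", weights=None)
detector = FasterRCNN(backbone, num_classes=2)   # background + one "MTM/MC region" class


class GroupedBottleneck(nn.Module):
    """A residual block whose 3x3 convolution is split into `cardinality` parallel groups."""

    def __init__(self, channels: int = 256, cardinality: int = 32) -> None:
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),   # the grouped convolution
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.block(x) + x)   # residual connection

# The grouped block keeps the spatial shape while processing 32 pathways in parallel.
print(GroupedBottleneck()(torch.randn(1, 256, 32, 32)).shape)   # torch.Size([1, 256, 32, 32])
```

The `groups` argument is what replaces the stack of explicitly parallel pathways in the original ResNeXt formulation, which is why the architecture adds representational capacity without a large increase in parameters.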
The suggested two-stage DL model, which combines a Faster R-CNN detector with the ResNeXt architecture, was trained on 2-D data against a reference test established on 3-D imagery. The model yielded high accuracy and could be supportive in clinical care. Future studies should gauge its true usefulness by comparing it against the performance of human users and should consider how to communicate a potentially super-human performance.
|
An efficient genotyper and star-allele caller for pharmacogenomics | 7cf9d68e-d00a-4ca2-a8f3-04542d830392 | 9977157 | Pharmacology[mh] | We have compared Aldy 4.3 (with PharmVar v5.2.3) against Astrolabe v0.8.7.2 , StellarPGx v1.2.5 , Stargazer v1.0.8 , and Cyrius v1.1.1 . Aldy 4 was also compared against Aldy v3.3 , the previous version of Aldy. The comparisons were performed on a sizeable GeT-RM set of publicly available samples and genes for which genotyping panel validations were available . These samples were sequenced with three technologies: (1) a PGRNseq v.3 Illumina-based pharmacogene-targeted panel (137 samples) , (2) Illumina WGS (70 samples), and (3) 10x Genomics sequencing (95 samples). In addition to these samples, Aldy 4 was also run on the set of 45 Coriell samples sequenced by a PacBio HiFi pharmacogene-targeted panel and validated by . The percentage of known alleles covered by these data sets for each evaluated gene is available in Supplemental Table S1 . Aldy 4 and other tools were run on the following 19 genes: CYP1A1 , CYP1A2 , CYP2A13 , CYP2A6 , CYP2B6 , CYP2C8 , CYP2C9 , CYP2C19 , CYP2D6 , CYP2J2 , CYP2S1 , CYP3A4 , CYP3A5 , CYP3A7 , CYP3A43 , CYP4F2 , DPYD , SLCO1B1 , and TPMT . Although Aldy 4 also supports additional 15 genes, their evaluation was omitted because we did not have the ground truth panel data for these genes. Note that not every tool supports all these genes: As a rule of thumb, Stargazer, Aldy 3, and Aldy 4 provide the broadest support, whereas the other tools are geared toward a small subset of these genes (typically CYP genes, such as CYP2D6 and CYP2C19 ). In an ideal world, we would have a “perfect” phase for each sample and would be able to evaluate each tool against such phase. Unfortunately, the available ground truth data are obtained through genotyping panels and assays designed to detect only the common major star-alleles (or core alleles , alleles defined solely by functional, or core, variants). These panels often cannot call minor star-alleles (or suballeles , alleles defined by nonfunctional variants, or subvariants, and functionally indistinguishable from the major star-alleles), as well as less common alleles. The low resolution of the available ground truth data and the differences in database specifications between the different tools necessitated a few accommodations within the evaluation process for the sake of fairness. First, we updated ground truth calls that missed the presence of less common variants and alleles. Updates were only performed if there was a consensus between the star-allele calling tools that differed from the ground truth data and if an updated call extended the validated allele definition (i.e., if the variants defining the validated allele also form a part of the consensus definition). Note that a similar approach was used by . Each updated call was further manually inspected to ensure that the variants missing from the ground truth calls are indeed present and not sequencing artifacts. In rare instances, it was hard to precisely distinguish the presence of the variant, especially if the variant allele frequency (VAF) was too low (alleles with lower VAFs are sometimes caused by the sequencing or read alignment bias, especially in the presence of pseudogenes, and are typically validated through Sanger sequencing). Samples with such variants were marked as “need validation” . 
For such samples, calls that either used or ignored such ambiguous variants were deemed "correct." Second, we have followed the common strategy used in clinical studies by only comparing the major (core) star-allele calls and ignoring the minor star-allele (suballele) designations. In other words, only the phasing of functional (core) variants was considered; subvariants and silent variants that do not alter the functionality of an allele were ignored (i.e., a *1A/*2B minor star-allele call was treated as a functionally equivalent *1/*2 major star-allele call). Note that major (core) star-alleles are typically distinguished by the number (e.g., *1 functionally differs from *2), whereas minor star-alleles (suballeles) are traditionally distinguished by a letter (e.g., *2A and *2B harbor different silent variants despite sharing common core variants) or by a numerical suffix in the recent PharmVar definitions (e.g., *2.001 instead of *2A). A further complication lies in the discrepancies among the databases themselves: Different tools ship with different databases and often augment such databases with custom entries. For that reason, identical alleles with differing names were treated equally, and ambiguities were resolved in the individual tool's favor (e.g., if a tool's call could be interpreted as correct, we called it as correct; this way, we ensured that custom database entries were accounted for). The exact criteria used for allele updates and allele comparisons are listed in the Supplemental Note S1.

Where possible, the CYP2D8 region was used as the copy number neutral region; exceptions include Aldy 4 using the F1 region for the PacBio data. Some tools, such as Astrolabe and Stargazer, required VCF files; where needed, VCFs were generated by BCFtools. All results were obtained on machines with Intel Xeon E5-2680v4 and 8260 CPUs. Each evaluated tool genotypes a single gene in a single sample within a few minutes, regardless of the sequencing technology used. However, note that Aldy 4 only needs a BAM/CRAM file to run; other tools often require VCF or GDF files that can take significant time to generate.

Overall, the best accuracy on short-read data sets (PGRNseq v.3, Illumina WGS, and 10x Genomics) was achieved by Aldy 4 (98.42%), followed by Aldy 3 (96.78%), StellarPGx (90.40%), Astrolabe (86.50%), Cyrius (82.63%), and Stargazer (76.47%).
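To make the comparison rules described above concrete, the following sketch reduces star-allele labels to their major (core) designation and compares diplotypes irrespective of order; the suffix conventions handled here (a trailing letter or a PharmVar-style numerical suffix) and the helper names are illustrative assumptions rather than the benchmark's actual code.

```python
import re

def core_allele(allele: str) -> str:
    """Reduce '*2A' or '*2.001' to the major star-allele '*2'.

    Copy-number suffixes such as '*2x2' are preserved because they
    change the meaning of the call.
    """
    m = re.match(r"\*(\d+)", allele.strip())
    if not m:
        raise ValueError(f"unrecognized star-allele label: {allele!r}")
    core = f"*{m.group(1)}"
    dup = re.search(r"x(\d+)$", allele, flags=re.IGNORECASE)
    return f"{core}x{dup.group(1)}" if dup else core

def normalize_diplotype(call: str) -> tuple:
    """Turn '*1A/*2B' or '*2.001/*1' into an order-free major-allele key."""
    return tuple(sorted(core_allele(a) for a in call.split("/")))

# A minor star-allele call is treated as its major-allele equivalent.
assert normalize_diplotype("*1A/*2B") == normalize_diplotype("*2/*1")
assert normalize_diplotype("*1/*4") != normalize_diplotype("*1/*2")
```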
PGRNseq v.3

Aldy 4, Aldy 3, Stargazer, and Astrolabe were run on 137 PGRNseq v.3 targeted sequencing samples from the GeT-RM collection. PGRNseq v.3 targets common pharmacogenes and sequences them at high depths (up to 1000× per locus). Note that we could not get either Stargazer or Astrolabe to run on targeted sequencing data natively; thus, VCF files were provided as an input for these tools. Because of the limited nature of VCF files, these tools were unable to call copy number changes and fusions on this data set. Although Stargazer has a mode for targeted data, we were unable to get good results with it; a detailed explanation is given in the Supplemental Notebook. The comparison with StellarPGx was omitted as it does not support targeted sequencing data. As can be seen in , Aldy 4 identifies nearly all of the alleles in all genes correctly—more than the other two tools—with a total accuracy of 98.45%. In some cases (e.g., failed cases in the genes CYP1A1 and CYP2B6), no caller was able to call correct star-alleles because the PGRNseq panel did not sequence the variant of interest (e.g., a nonexonic downstream variant rs4646903 that defines CYP1A1*2A was not covered by the panel at all). On this data set, Aldy 4's performance is only marginally better than Aldy 3's. This is expected, as neither of the model updates unique to Aldy 4 applies to the high-quality PGRNseq data set with stable coverage. Minor changes are mostly owing to the differences in the variant calling (e.g., Aldy 4's incorporation of quality scores and mapping qualities).

Illumina WGS

We have run all tools on 70 Illumina HiSeq-sequenced WGS samples from the GeT-RM sample collection. These samples were sequenced with an average depth of roughly 30×. The details are also available in . Here, Aldy 4 again calls nearly all star-alleles correctly and genotypes more samples than the competition for every considered gene. The only exception is CYP2D6, for which Cyrius genotypes two samples (NA21781) more than Aldy 4. In this case, Aldy 4 misses the nonfunctional *68 and identifies the *2 allele as *65; however, the *65 allele extends the *2 allele with a single variant (rs1065852), and it is unclear if this allele is indeed a *2 or a *65. Aldy 4 and other tools were able to correctly call alleles defined by intronic and downstream variants across the genes on these data. Note that the main reason behind Stargazer's lower accuracy on this data set was copy number calling: Although Stargazer often identified the star-allele correctly, it would often call it more times than needed (e.g., *1/*2 + *2 instead of *1/*2). Note that Aldy 4 only calls copy numbers and fusions on genes that are known to harbor such changes; otherwise, it assumes that two copies are present. Note that Astrolabe used a modified CYP4F2 database whose allele nomenclature differed from the other databases. Thus, the comparison with Astrolabe on CYP4F2 was omitted for the sake of consistency. We also observed a large number of mismatches in SLCO1B1 across all tools owing to the incomplete panel validation and inconsistent database specifications used by various tools. Finally, the improvements in the copy number model and more sensitive variant calling in Aldy 4 account for a few improved calls on more complex CYP2D6 and CYP2A6 samples.
10x Genomics

We have run all tools on 95 GeT-RM samples sequenced by a 10x Genomics WGS sequencer. The average depth of sequencing was roughly 40×. Because several important pharmacogenes reside within repeated regions of the human genome, the EMA aligner with the density-based optimization mode was used for improved alignment of the 10x reads to the reference genome (hg19 at the time of alignment; the same results should be expected when aligning the data to GRCh38 as well because the gene regions of interest did not undergo major changes between the two releases). The comparison details are available in . Although the 10x Genomics protocol uses Illumina HiSeq for sequencing, the read coverage is not as uniform as it is in an average Illumina WGS sample; 10x-specific biases also result in quite a few misaligned reads compared with the WGS data. For this reason, the overall allele calling accuracy is lower than on the WGS data set; this is especially evident in Stargazer, in which the accuracy of its copy number detection module is even lower than in WGS data. However, Aldy 4 still correctly calls the majority of alleles (with 97.11% accuracy), especially compared with the other tools. The most challenging genes for all tools were CYP2A6 and CYP2D6. Aldy's accuracy is lower in these genes, primarily owing to the occasional copy number mismatch (owing to the coverage unevenness) and sequencing artifacts (where many misidentified variants were either an artifact or were undersequenced). Note that Aldy 4 benefited from the novel phasing module that was able to successfully use 10x Genomics barcodes to link long-distance variants together. Finally, we observe significant improvements over Aldy 3 in CYP2D6 and CYP2A6 samples on this data set owing to an improved copy number model that better handles noisy coverage and ambiguous variants (a common case in 10x Genomics samples) and is, as such, able to improve the calling accuracy by up to 30% in these genes.

PacBio HiFi

Finally, Aldy 4 was run on two sets of PacBio HiFi samples sequenced by a custom targeted pharmacogenomics panel. The first set contained 24 samples, whereas the second set comprised 21 samples. The coverage of these data sets varies: it can be as low as 10× and at times exceeds even 200×. Aldy's calls were compared with those of Astrolabe. Although none of the other tools support PacBio long reads natively, we were able to at least run Astrolabe in VCF mode. The validation data were obtained from and . The call details are available in . Star-allele calls generated by Aldy 4 agree with the ground truth in all genes except for a few CYP2D6 calls and one CYP2C9 call. Furthermore, its calls augmented and phased many ground truth calls generated by panels with limited variant coverage with additional variants observed in the PacBio data. Aldy was also able to find and phase alleles that have not been cataloged in CYP2B6, CYP2C19, CYP3A4, DPYD, and SLCO1B1. Further validation of such novel calls, as well as of the calls that were deemed ambiguous, is needed to fully confirm and understand such alleles. When it comes to CYP2D6, Aldy 4's calls disagree with the ground truth data owing to differences in the predicted copy number. In two instances, Aldy 4 called an additional copy (i.e., *1 + *1 instead of *1, and *4 + *4 instead of *4), whereas in the other two instances, Aldy 4 did not call an existing copy (i.e., it called *2 instead of *2 + *2 and *10 instead of *10 + *10). In one instance, Aldy called *36 instead of *10 (note that these alleles are nearly identical, with the only difference being a conversion of exon 9 in *36); in the final instance, Aldy did not call the nonfunctional *68 fusion allele. In all these cases, the observed coverage was noisy, and further validation is needed to ascertain the exact copy number of these samples. Let us also point out that Astrolabe's calls in genes CYP2C19 and SLCO1B1, as well as CYP2D6 in the second data set, were highly ambiguous, often containing more than 10 functionally different solutions.

Other remarks

Many tools often confuse the CYP2B6*6 and CYP2B6*9 alleles, which differ only in the variant rs2279343. This variant is often either undersequenced or covered by ambiguous reads that potentially originate from the neighboring CYP2B7 pseudogene and is thus hard to call with high confidence in some technologies (e.g., PGRNseq v.3). When the true call was ambiguous, both possible calls were deemed "correct." Similar cases were also observed with the CYP2A6*1 and CYP2A6*35 alleles.
Further validation is needed to properly ascertain the true existence of these alleles in problematic samples. If multiple allele calls were generated by a tool for a given sample and gene combination, the call was deemed "correct" if at least one such multicall matched the ground truth. Note that the prevalence of multiple calls was overall low: ∼1.1% for Aldy 4, 2.7% for Aldy 3, 1.9% for Stargazer, 1.9% for StellarPGx, and 15.7% for Astrolabe. Aldy 4's new phasing model was a significant factor for a low multicall rate: Although the rate was 1.6% on PGRNseq v.3 and 1.8% on WGS samples owing to the short read lengths of such samples, it decreased to 0.5% on 10x and PacBio samples that allowed better phasing. The vast majority of ambiguous calls were observed when genotyping CYP4F2 and SLCO1B1. Finally, note that Aldy 4 generated no more than three diplotypes for each ambiguous call.
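As a concrete restatement of the multicall rule above, the following sketch counts a sample as correct when any reported diplotype matches the validated one and also tallies how often a tool returned more than one diplotype; the data layout and helper names are assumptions made for illustration, not the benchmark's scripts.

```python
import re

def _core(allele):
    # '*2A' or '*2.001' -> '*2' (illustrative major-allele reduction).
    return "*" + re.match(r"\*(\d+)", allele.strip()).group(1)

def _key(diplotype):
    return tuple(sorted(_core(a) for a in diplotype.split("/")))

def evaluate_tool(reported, validated):
    """reported: {sample: [diplotype, ...]}; validated: {sample: diplotype}.

    Returns (accuracy, multicall_rate) under the rule that at least one of
    the reported diplotypes has to match the validated major-allele call."""
    correct = sum(
        any(_key(c) == _key(truth) for c in reported.get(s, []))
        for s, truth in validated.items()
    )
    multi = sum(len(reported.get(s, [])) > 1 for s in validated)
    n = len(validated)
    return correct / n, multi / n

# Toy example: the second sample has an ambiguous two-diplotype call.
reported = {"S1": ["*1A/*2B"], "S2": ["*2/*4", "*2/*4.013"]}
validated = {"S1": "*1/*2", "S2": "*2/*4"}
print(evaluate_tool(reported, validated))   # (1.0, 0.5)
```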
Pharmacogenomics is becoming a key component of evidence-based medicine. Genes like CYP2D6 and CYP2C19 regulate a large portion of clinically prescribed drugs; other genes, such as those in the HLA or IGH gene clusters, are vital for understanding the immune response. As their function is dependent on their haplotype, it is of vital importance to genotype and haplotype these genes before administering medical treatment. HTS technologies are a natural candidate for this process, especially considering that the currently available clinical genotyping panels are often restricted only to the most common genotypes and struggle to detect more complex structural alterations within pharmacogenes.
In this work, we have presented Aldy 4, the first tool that can accurately and consistently call star-alleles in data from various sequencing technologies, including but not limited to long-read PacBio data, Illumina short-read sequencing in all of its flavors (i.e., whole-genome, whole-exome, and targeted capture data), as well as 10x Genomics barcoded data. Aldy 4 achieves this by using combinatorial optimization models to solve various challenges associated with calling pharmacogenetic haplotypes from sequencing data, such as copy number and structural variation detection, variant calling, and variant phasing, ultimately resulting in a star-allele decomposition of a gene of interest. We have shown the strength of Aldy 4's approach through a series of comparisons against the current state-of-the-art star-allele callers, in which Aldy 4 performed the best. We hope that Aldy 4 will be of vital importance to clinicians in tailoring prescription recommendations, thus leading to improved medical care.

There are still some open questions left that need to be answered in future work. Most importantly, the panel-validated calls improved by the star-allele callers through the use of HTS data—often containing novel alleles not previously cataloged in the existing databases—need to be validated in a wet-laboratory environment for all genes presented, as was performed recently for a selection of CYP2C genes. More tests are also needed on larger cohorts to accurately evaluate the precision of these tools, Aldy 4 included, on rare fusions. The incorporation of other highly polymorphic pharmacogenomic regions, such as HLA or IGH, should also be considered, as Aldy (and the other evaluated pharmacogenomics tools) are currently unable to handle the complexities of such regions. Finally, the complete characterization of minor star-alleles, accompanied by the careful characterization of noncoding variants, is also needed to understand the full effect of pharmacogenes on treatment and drug dosage decisions.

The goal of Aldy is to reconstruct the exact sequence content (or haplotype) of each gene copy of a given pharmacogene from an HTS data sample and assign a star-allele to each reconstructed haplotype present in the data set. This process is subsequently referred to as star-allele calling. To accurately call star-alleles, it is necessary to consult a database of known star-alleles that contains the exact sequence content of each pharmacogene allele. Suppose that a pharmacogene $G$ harbors variants $M = \{m_1, \ldots, m_{|M|}\}$, where any $m \in M$ is a single-nucleotide substitution or a small indel. Depending on their impact on the gene $G$, these variants are either deemed functional (core) variants or silent variants (subvariants); although core variants are typically nonsynonymous, they might also include UTR and intronic variants that affect drug metabolism. The reference allele of $G$ is an allele that harbors no variants at all. It is commonly known as the *1 star-allele. Notable exceptions to this rule include CYP2C19 (where CYP2C19*1 was recently renamed to CYP2C19*38) and wild-type alleles in IFNL3 and DPYD. Any other star-allele $S_i$ is defined by the subset of known variants $M$ that distinguish its sequence content from the reference *1 allele. In some genes, such as CYP2D6, star-allele identifiers are also assigned to fusions and other pseudogene-induced structural changes that affect the pharmacogene. For this reason, the definition of star-alleles is extended to also include their structural configuration. This configuration describes whether a pharmacogene is wholly present in the genome, is deleted, or is a gene–pseudogene hybrid. The set of valid configurations is denoted as $\mathcal{G}$. Note that each structural configuration can induce many distinct star-alleles depending on the choice of variants from $M$. Thus, we can formally define a star-allele $S_i$ as a tuple $(g_i, A_i)$, where $g_i \in \mathcal{G}$ and $A_i \subseteq M$. The star-allele database is formally a collection of all known structural configurations, variants, and known star-alleles $(\mathcal{G}, M, \{S_1, S_2, \ldots\})$, where $S_i = (g_i, A_i)$ such that $g_i \in \mathcal{G}$ and $A_i \subseteq M$. To call star-alleles of a given pharmacogene from a given sequencing sample, Aldy needs to perform the following steps: (1) analyze the aligned HTS reads in BAM/CRAM format and resolve incorrectly aligned reads, (2) detect structural configurations by calling copy number changes and gene–pseudogene fusions, and (3) use the read alignments from the BAM/CRAM to call star-alleles and phase the gene.
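A minimal sketch of these definitions as Python data structures is given below; it is illustrative only, and Aldy's internal representation and the actual gene-database format are considerably richer.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, Tuple

# A structural configuration is a binary presence vector over the 2r
# gene/pseudogene regions (see the copy-number section below).
StructuralConfig = Tuple[int, ...]

@dataclass(frozen=True)
class Variant:
    pos: int          # genomic position
    op: str           # e.g. "A>G" or a small-indel description
    functional: bool  # core (functional) variant vs. silent subvariant

@dataclass(frozen=True)
class StarAllele:
    name: str                     # e.g. "*2.001"
    config: StructuralConfig      # g_i: structural configuration
    variants: FrozenSet[Variant]  # A_i: variants relative to *1

    @property
    def core_variants(self) -> FrozenSet[Variant]:
        return frozenset(v for v in self.variants if v.functional)

@dataclass
class GeneDatabase:
    """The (G, M, {S_1, S_2, ...}) collection from the text."""
    configs: Tuple[StructuralConfig, ...]   # valid structural configurations
    variants: FrozenSet[Variant]            # M: all known variants
    alleles: Tuple[StarAllele, ...] = field(default_factory=tuple)
```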
BAM/CRAM analysis and alignment correction

Aldy begins by taking a SAM, BAM, or CRAM file generated by a read aligner (e.g., BWA, pbmm2 [ https://github.com/PacificBiosciences/pbmm2 ], or minimap2). It is recommended to postprocess these files with the GATK's "best practices" pipeline (the local indel realignment step is especially helpful for the subsequent variant calling). Aldy extracts the relevant variants that are present in a given pharmacogene from the alignment file, as well as the coverage information needed for the copy number and structural variation detection step. It also collects phasing information from long reads, barcoded fragments, and paired-end fragments where available.

The original version of Aldy relied on the assumption that read alignments produced by an off-the-shelf aligner are mostly correct. Although this assumption holds for short paired-end Illumina reads, it breaks for long reads such as PacBio HiFi reads. For example, if a sample harbors a gene duplication and if the highly similar pseudogene is located immediately next to this gene, any long read spanning two duplicated copies of the gene will get its second half incorrectly aligned to the pseudogene because the reference genome does not contain two copies of the gene in question. The correct alignment would perform a split mapping and align the second half again to the pharmacogene. These incorrect alignments are even more problematic in the presence of gene fusions: Any read that spans a gene–pseudogene fusion breakpoint will not be split-mapped but incorrectly aligned to either the pharmacogene or its pseudogene. Aldy 4 corrects such alignments by splitting any long read that spans the gene–pseudogene boundary into shorter gene-level segments and aligning each segment independently. Each segment is guaranteed to span only one gene (either the pharmacogene or a pseudogene) and thus avoids being misaligned in the manner described above. The size of each such segment is at most the size of the gene. Aldy 4 performs a further split-mapping of each segment that spans a potential fusion breakpoint to determine whether a read originates from a fusion event or not (a read is said to originate from a fusion event if its split-mapping alignment score is better than the original alignment score). Unlike previous versions, Aldy 4 considers base quality scores and read mapping qualities when calling the allele-specific variants. This ensures that low-quality variants in noisy and low-coverage samples are filtered out before the star-allele calling. Finally, Aldy 4 performs an indel-realignment step through indelpost to correct misalignments that often happen when aligning short reads to small indels.
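The read-splitting step can be illustrated roughly as follows; the coordinates are placeholders, and the snippet only cuts a boundary-spanning read into gene-sized pieces, whereas the per-segment realignment and the fusion-breakpoint scoring happen inside Aldy and are not reproduced here.

```python
import pysam

# Placeholder coordinates for a pharmacogene and its neighboring pseudogene
# (illustrative values in an hg19-like layout; the real ones come from
# Aldy's gene database).
CHROM, GENE_START, GENE_END = "chr22", 42_522_500, 42_526_900
PSEUDO_END = 42_539_750
GENE_LEN = GENE_END - GENE_START

def gene_level_segments(read):
    """Split a long read that crosses the gene/pseudogene boundary into
    segments no longer than the gene itself, so that each piece can be
    realigned independently against the gene of interest."""
    if read.is_unmapped or read.query_sequence is None:
        return []
    # Reads fully contained within one gene copy are left untouched.
    if not (read.reference_start < GENE_END < read.reference_end):
        return [read.query_sequence]
    seq = read.query_sequence
    return [seq[off:off + GENE_LEN] for off in range(0, len(seq), GENE_LEN)]

with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam.fetch(CHROM, GENE_START, PSEUDO_END):
        for segment in gene_level_segments(read):
            pass  # each segment would be realigned to the gene region here
```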
Copy number and structural variation analysis

In a typical scenario, a sample contains two parental copies of a pharmacogene of interest for which star-alleles need to be called. This is generally true for most of the pharmacogenes of interest. However, a few major pharmacogenes do not follow this pattern and are prone to various copy number changes and structural events. The most notable example is that of the CYP2D6 gene, one of the most important pharmacogenes, whose copies can undergo whole-gene deletions, duplications, and hybrid fusions (where a copy begins with the CYP2D6 sequence but switches to the pseudogene CYP2D7 sequence at a given breakpoint, or vice versa). Other prominent examples include CYP2A6, G6PD, and so on. Each copy—fusions included—yields its own star-allele. Thus, to correctly call star-alleles of such genes, it is necessary to correctly detect the total number of available gene copies as well as the configuration (i.e., structure) of each copy.

Each gene copy can be described by its structural configuration, represented as a binary vector $g \in \mathcal{G}$ that indicates the presence or absence of genic regions in a given configuration. Because each star-allele is defined by a matching structural configuration, such configurations must be found before the star-alleles can be accurately called. The size of the configuration vector depends on the number of gene segments that define various structural configurations. For example, the CYP2D6 gene is divided into $r = 20$ segments that correspond to its exons, introns, and flanking regions, because all structural variations are described at the level of whole exons and introns. The total length of the CYP2D6 configuration vector is $2r$ (i.e., 40) because the vector also includes segments from the neighboring CYP2D7 pseudogene. This vector can encode any known CYP2D6 structural configuration: for example, a single CYP2D6 copy ($r$ ones followed by $r$ zeros), a single CYP2D7 copy ($r$ zeros followed by $r$ ones), a CYP2D6–2D7 fusion in intron 1 (a one followed by $r-1$ zeros, in turn followed by a zero and $r-1$ ones), and so on. Once these vectors are established, any complex configuration within the CYP2D locus can be represented as an aggregate of individual configuration vectors (for an example, see A). Note that valid structural configuration vectors are obtained from the corresponding allele databases and that each such vector is typically assigned a star-allele identifier (e.g., CYP2D6*13A represents a CYP2D6 fusion with the breakpoint in exon 1). In a sequenced sample, we only observe the aggregate coverage vector $cn$ that describes the number of reads covering each genomic locus of interest within the sample (the formation of this vector is described in Supplemental Note S2; an example is shown in B). The goal of Aldy is to find a set of configuration vectors $\{g_1, \ldots, g_n\} \subseteq \mathcal{G}$ whose sum is closest to the observed aggregate coverage, where each structural configuration can be selected only once (an assumption made for the sake of clarity; in practice, Aldy allows selecting the same configurations multiple times).
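A toy version of the configuration vectors and their aggregation may look as follows; $r$ is reduced to four regions for readability, whereas the real CYP2D6 model uses $r = 20$ and takes its vectors from the gene database.

```python
import numpy as np

r = 4  # toy number of gene regions; the matching pseudogene adds r more

# Binary presence vectors over the 2r regions: [gene regions | pseudogene regions].
full_gene      = np.array([1, 1, 1, 1, 0, 0, 0, 0])
full_pseudo    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fusion_after_1 = np.array([1, 0, 0, 0, 0, 1, 1, 1])  # switches to the pseudogene after region 1

# A sample carrying one normal gene copy, one fusion copy, and one pseudogene
# copy produces (up to noise) this aggregate coverage vector:
aggregate = np.sum([full_gene, fusion_after_1, full_pseudo], axis=0)
print(aggregate)                            # [2 1 1 1 1 2 2 2]

# The copy-number step searches for a small set of database vectors whose sum
# is closest to the observed, noisy coverage vector:
observed = np.array([2.1, 0.9, 1.0, 1.1, 1.0, 2.0, 1.9, 2.1])
print(np.abs(observed - aggregate).sum())   # L1 distance that the ILP minimizes
```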
As there might be many such sets, Aldy only looks for the most parsimonious solution: a solution that selects the minimal number of such vectors. This problem, previously dubbed the copy number estimation problem (CNEP), can be efficiently solved via integer linear programming (ILP) as follows. Assume that a gene $G$ is segmented into $2r$ regions. Let $\mathcal{G} = \{g_1, \ldots, g_{|\mathcal{G}|}\}$ stand for the set of the available configuration vectors, where $g_i = [g_{i,1}, \ldots, g_{i,2r}]$ and $g_{i,j} \in \{0, 1\}$ for any $i$ and $j$. Let $cn$ be the aggregate coverage vector observed from the HTS data, and let us introduce a binary variable $z_i$ for each $g_i$ that indicates if $g_i$ is a part of the solution or not. The objective, the minimization of the difference between the observed aggregate coverage and the predicted solution, can be modeled as follows:

$$\min \sum_{j=1}^{2r} \Big| cn_j - \sum_{i=1}^{|\mathcal{G}|} z_i\, g_{i,j} \Big|.$$

Although this model performs well on WGS and targeted data, it is rather sensitive to deviations from the expected coverage distribution. It also cannot properly handle the cases in which the normalized aggregate coverage is not stable or uniform. For targeted panels with nonuniform coverage distributions, the aggregate coverage can be "normalized" by dividing it by the coverage of a control sample if it is stable across different samples. Aldy does this automatically for known targeted panels. Thus, Aldy 4 improves the original CNEP formulation by introducing additional optimization terms. This is performed by modifying the original objective term and extending it with two additional terms, resulting in a three-term optimization objective. The first term is the same as the original CNEP objective but focuses only on the regions associated with the pharmacogene (and not its pseudogene):

$$o_1 = \sum_{j=1}^{r} \Big| cn_j - \sum_{i=1}^{|\mathcal{G}|} z_i\, g_{i,j} \Big|.$$

The second term of the objective function considers the interaction between the pharmacogene and the corresponding pseudogene region by considering the changes between their respective region coverage. For example, if the coverage of exon 2 in CYP2D6 is three and in CYP2D7 is two, the resulting region coverage difference would be one. This difference can be further normalized (in this case, divided by three). Using normalized differences allows us to handle samples in which the observed aggregate coverage ($cn$) varies between the regions owing to various sequencing and alignment biases. Despite region-specific coverage variation, the relative abundances between the matching gene–pseudogene regions remain constant. This term can formally be expressed as

$$o_2 = \sum_{j=1}^{r} \left| \frac{cn_j - cn_{j+r}}{\nu_j} - \sum_{i=1}^{|\mathcal{G}|} z_i\, \frac{g_{i,j} - g_{i,j+r}}{\nu_j} \right|,$$

where $\nu_j = \max\{cn_j, cn_{j+r}\} + 1$ is the normalization factor. The final term of the objective function ensures that the ILP solver selects the most parsimonious solution:

$$o_3 = \sum_{i=1}^{|\mathcal{G}|} \mu_i\, z_i,$$

where $\mu_i$ is a parsimony parameter (by default set to $1/|\mathcal{G}|$). However, some unlikely configurations, such as left fusions, will have higher parsimony scores to reflect the observation that such configurations are rare. Aldy 4's modified CNEP model uses an ILP solver to minimize the sum of these three terms, $o_1 + o_2 + o_3$. These solutions are passed to the later steps that will decide the best overall solution.
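Assuming an off-the-shelf Python MILP library (PuLP with the bundled CBC solver is used here purely for illustration; Aldy's own implementation drives Gurobi or CBC directly), the three-term model can be written down as in the following sketch. The absolute values are linearized with auxiliary variables, the toy data reuse the vectors from the previous sketch, and this is a simplified restatement of the model rather than Aldy's solver code.

```python
import pulp

def solve_cnep(configs, cov, r, mu=None, max_copies=4):
    """Sketch of the three-term CNEP model: choose how many copies of each
    structural configuration are present so that their sum matches the
    observed coverage vector `cov` (length 2*r: gene regions, then pseudogene)."""
    n = len(configs)
    mu = mu if mu is not None else [1.0 / n] * n   # parsimony weights
    prob = pulp.LpProblem("cnep", pulp.LpMinimize)
    # z_i: number of copies of configuration i (repeats are allowed).
    z = [pulp.LpVariable(f"z_{i}", 0, max_copies, cat="Integer") for i in range(n)]

    def abs_term(name, expr):
        # Linearize |expr| with an auxiliary variable t >= expr, t >= -expr.
        t = pulp.LpVariable(name, lowBound=0)
        prob += t >= expr
        prob += t >= -expr
        return t

    o1, o2 = [], []
    for j in range(r):
        pred = pulp.lpSum(z[i] * configs[i][j] for i in range(n))
        o1.append(abs_term(f"o1_{j}", cov[j] - pred))              # gene-only term
        nu = max(cov[j], cov[j + r]) + 1                           # normalization factor
        pred_ratio = pulp.lpSum(
            z[i] * ((configs[i][j] - configs[i][j + r]) / nu) for i in range(n)
        )
        o2.append(abs_term(f"o2_{j}", (cov[j] - cov[j + r]) / nu - pred_ratio))
    o3 = pulp.lpSum(mu[i] * z[i] for i in range(n))                # parsimony term

    prob += pulp.lpSum(o1) + pulp.lpSum(o2) + o3
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {i: int(round(z[i].value())) for i in range(n) if z[i].value() > 0.5}

# Toy run with the r = 4 vectors from the previous sketch; one copy of each
# configuration should explain this coverage best.
configs = [
    (1, 1, 1, 1, 0, 0, 0, 0),   # full gene
    (0, 0, 0, 0, 1, 1, 1, 1),   # full pseudogene
    (1, 0, 0, 0, 0, 1, 1, 1),   # gene-pseudogene fusion after region 1
]
cov = [2.1, 0.9, 1.0, 1.1, 1.0, 2.0, 1.9, 2.1]
print(solve_cnep(configs, cov, r=4))
```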
Star-allele calling

Aldy now proceeds by assigning the exact star-allele identifier to each of the $n$ structural configurations obtained in the previous step. As stated in Methods, a star-allele $S_i$ is defined as a tuple $(g_i, A_i)$, where $g_i \in \mathcal{G}$ and $A_i \subseteq M$. The star-allele assignment problem can also be modeled through the ILP as follows. Let us indicate the presence of star-allele $S_i$ with a binary variable $a_i$. Our goal is to select a set of star-alleles $S_1, \ldots, S_n$ such that (1) the set of the structural configurations that describes the selected star-alleles is identical to the set of the structural configurations from the previous step, and (2) the difference between the predicted and observed coverage for each variant $m$ (denoted as $\mathrm{cov}(m)$) is minimized. In other words, we want to minimize

$$\sum_{m \in M} \Big| \mathrm{cov}(m) - \sum_{i:\, m \in A_i} a_i \Big|.$$

Although conceptually simple, this model does not account for cases in which database definitions are incomplete or incorrect. To account for these cases, the model must be allowed to alter star-allele definitions if needed. Aldy thus introduces new binary variables $p_{i,m}$ and $q_{i,m}$ that indicate if a variant $m$ is to be "removed" from the star-allele $S_i$ (while being present in the database definition $A_i$), or "added" to it (while being absent in $A_i$). Then it attempts to minimize the following expression for each variant $m$:

$$e_m = \Big| \mathrm{cov}(m) - \Big( \sum_{i:\, m \in A_i} a_i\, p_{i,m} + \sum_{i:\, m \notin A_i} a_i\, q_{i,m} \Big) \Big|.$$

As $a_i$, $p_{i,m}$, and $q_{i,m}$ are all binary variables, their product can be expressed as a set of linear constraints. The minimization objective can be expressed as

$$\min \sum_{m \in M} e_m + \sum_i a_i \Big[ \sum_m \alpha_{i,m} (1 - p_{i,m}) + \sum_m \beta_{i,m}\, q_{i,m} \Big].$$

Parameters $\alpha_{i,m}$ and $\beta_{i,m}$ are penalties for adding or removing the variant $m$ from allele $S_i$. Adding a variant is less common than missing a variant, so generally we use $\alpha_{i,m} = 2$ and $\beta_{i,m} = 1$ for any $i$ and $m$. Note that not all variants are the same: As functional (core) variants can fundamentally alter the behavior of a star-allele (and thus change its designation), Aldy disallows removing such variants from any star-allele and allows adding novel functional (core) variants to the allele if and only if no other assignment is possible. This is performed by setting the corresponding $\alpha_{i,m}$ to a very large value. Note that novel core variants are added only if no other decision can be made. The star-allele calling model also enforces other constraints: Each functional (core) variant must be expressed by at least one allele, and each structural configuration must be expressed by at least one allele compatible with it.

Finally, Aldy performs two rounds of star-allele calling for improved accuracy. In the first round, Aldy only considers functional (core) variants and identifies all major (core) star-allele combinations that explain the present functional variants. Being restricted solely to core variants, this step alone often produces multiple candidate solutions. Thus, Aldy then uses the second round to refine the candidate calls from the first round and break the ties by considering the silent variants (subvariants) as well. It finally selects the star-allele with the best second-round objective score as the final call. The formulation Aldy 4 uses for this step remains similar to the original model used in the older versions of Aldy. The single major difference is the change in the first (functional star-allele calling) round: Aldy 4 can now call star-alleles that contain novel functional (core) variants—a not uncommon event if a gene database is incomplete—if no other call can be made.
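The correction variables $p_{i,m}$ and $q_{i,m}$ and the linearized products can be sketched in the same illustrative PuLP style; the diploid constraint, the candidate set, and the variant names below are toy assumptions, and the full model's structural-configuration and core-variant constraints are omitted.

```python
import pulp

def call_star_alleles(cands, cov, alpha=2.0, beta=1.0):
    """Simplified sketch of the star-allele assignment ILP.

    cands: list of candidate allele definitions, each a set of variants (A_i).
    cov:   dict mapping a variant to its observed allelic copy number.
    For brevity, each candidate may be selected at most once."""
    prob = pulp.LpProblem("star_allele_calling", pulp.LpMinimize)
    a = [pulp.LpVariable(f"a_{i}", cat="Binary") for i in range(len(cands))]

    def product(x, y, name):
        # w = x*y for binary x, y, expressed with three linear constraints.
        w = pulp.LpVariable(name, cat="Binary")
        prob += w <= x
        prob += w <= y
        prob += w >= x + y - 1
        return w

    errors, penalties = [], []
    for m in cov:
        predicted = []
        for i, A in enumerate(cands):
            if m in A:
                p = pulp.LpVariable(f"p_{i}_{m}", cat="Binary")  # keep m in allele i
                keep = product(a[i], p, f"keep_{i}_{m}")
                predicted.append(keep)
                penalties.append(alpha * (a[i] - keep))          # a_i * (1 - p_{i,m})
            else:
                q = pulp.LpVariable(f"q_{i}_{m}", cat="Binary")  # add m to allele i
                add = product(a[i], q, f"add_{i}_{m}")
                predicted.append(add)
                penalties.append(beta * add)                     # a_i * q_{i,m}
        e = pulp.LpVariable(f"e_{m}", lowBound=0)                # e_m = |cov(m) - predicted|
        prob += e >= cov[m] - pulp.lpSum(predicted)
        prob += e >= pulp.lpSum(predicted) - cov[m]
        errors.append(e)

    prob += pulp.lpSum(errors) + pulp.lpSum(penalties)           # objective
    prob += pulp.lpSum(a) == 2                                   # diploid, copy-number-neutral gene
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(len(cands)) if a[i].value() > 0.5]

# Toy example: one heterozygous core variant is best explained by a *1-like
# and a *4-like candidate (indices 0 and 1).
cands = [set(), {"rs3892097"}, {"rs3892097", "rs1065852"}]
cov = {"rs3892097": 1.0, "rs1065852": 0.0}
print(call_star_alleles(cands, cov))
```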
Read-based phasing

The above-described model essentially performs a variant of statistical phasing: It uses the database knowledge to select the most likely haplotypes that best explain the given observations from the data. Although performing well in practice, there are nevertheless cases when the aforementioned model produces multiple equally likely calls. It is also unable to assign a novel variant to a particular star-allele unambiguously. Finally, in sporadic cases, the above model can produce incorrect results. These challenges can be resolved with long reads that provide long-range phasing information. Aldy 4 newly incorporates the handling of long-range phasing information into the star-allele calling model as follows.

Suppose that there are $z$ fragments $R_1, \ldots, R_z$, each fragment being defined by the set of variants that it spans: $R_j = \{m_1, \ldots\} \subseteq M$. Each sequenced fragment originated from a single star-allele and can thus be assigned to one of the star-alleles in the data set. This assignment can be controlled by introducing a binary variable $f_{i,j}$ that is set if and only if a fragment $R_j$ is assigned to $S_i$. Clearly, $\sum_i f_{i,j}$ must be one for every $R_j$ because each fragment originates from a single allele. Ideally, we want to assign $R_j$ to $S_i$ only if such an assignment agrees with the star-allele sequence as much as possible. In other words, we want to minimize the number of disagreements between allele $S_i$ and fragment $R_j$. Thus, the total disagreement of an assignment can be expressed as follows:

$$e_{i,j} = \sum_{m \in R_j} (1 - p_{i,m} - q_{i,m}) + \sum_{m \in \bar{R}_j} (p_{i,m} + q_{i,m}),$$

where $\bar{R}_j$ denotes the set of variants that are not present in read $R_j$ but are spanned by it. The total phasing error can be expressed as $\sum_{i,j} f_{i,j}\, e_{i,j}$. This expression can be added to the objective function of the star-allele calling model. Although the expanded version of this expression contains quadratic terms, each quadratic term is a product of two binary variables and, as such, can be trivially linearized. As a final remark, note that the number of binary variables in the phasing model depends on the total number of present reads and alleles. In some cases, it can exceed half a million variables, making the overall model very costly to solve. The model can be significantly improved by using a smaller random sample of fragments, where the size of the random sample depends on the number of present reads and alleles.
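Outside of the ILP, the disagreement term $e_{i,j}$ is just a count, as the following sketch shows; in Aldy the assignment variables $f_{i,j}$ are optimized jointly with the allele selection, whereas here each fragment is simply scored against fixed candidate haplotypes (all names and data are illustrative).

```python
def disagreement(fragment, spanned, haplotype):
    """e_{i,j}: variants the fragment carries but the haplotype lacks, plus
    variants the haplotype carries that the fragment spans but does not carry.

    fragment:  set of variants observed on the read/barcode group (R_j)
    spanned:   set of variants whose positions the fragment covers
    haplotype: set of variants assigned to star-allele S_i
    """
    missing_in_allele = len(fragment - haplotype)
    absent_in_fragment = len((spanned - fragment) & haplotype)
    return missing_in_allele + absent_in_fragment

def assign_fragments(fragments, haplotypes):
    """Greedy stand-in for the ILP's f_{i,j}: send each fragment to the
    haplotype it disagrees with the least; the sum is the phasing error."""
    total, assignment = 0, {}
    for j, (frag, spanned) in enumerate(fragments):
        scores = [disagreement(frag, spanned, h) for h in haplotypes]
        best = min(range(len(haplotypes)), key=scores.__getitem__)
        assignment[j] = best
        total += scores[best]
    return assignment, total

# Toy example: two haplotypes, two barcode groups.
hap1, hap2 = {"v1", "v3"}, {"v2"}
fragments = [({"v1"}, {"v1", "v2"}), ({"v2"}, {"v1", "v2", "v3"})]
print(assign_fragments(fragments, [hap1, hap2]))   # ({0: 0, 1: 1}, 0)
```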
Limitations

Aldy uses ILP solvers to solve the presented models. Although ILP solving is NP-hard even when restricted to the models mentioned above, all these models are solvable in practice in less than a minute thanks to state-of-the-art integer programming solvers such as Gurobi ( https://www.gurobi.com/ ) or CBC ( https://github.com/coin-or/Cbc/tree/releases/2.9.9 ). In some rare instances, Aldy cannot unambiguously call star-alleles from short-read data sets owing to the read length limitations and the lack of strand information. In these cases, Aldy will report all possible solutions. In some cases, this might be misleading; for example, a *68 + *4/*5 call can be reported as *68/*4 (where *5 stands for the deletion allele). However, both calls are functionally identical and should be treated as equal (as is performed here). Aldy also makes heavy use of the existing star-allele databases to call star-alleles and fusion breakpoints. Although it can handle cases in which the database is incomplete or lacking, it can theoretically report incorrect results if a present allele is wildly divergent from any allele in the database. Aldy 4's detection of structural configurations is highly dependent on the stability of coverage across different sequencing runs. Although this is not a significant issue for short-read WGS and targeted sequencing panels, the coverage might vary more than expected in PacBio samples. For this reason, Aldy 4 brings support for the exploration of a broader solution space when needed to account for potential noise. Finally, note that Aldy 4 does not cover all existing pharmacogenes: Genes from the IGH and HLA regions, ABC gene families (e.g., ABCG2), and the UGT1/2 gene clusters are prominently not included. Although some genes, such as ABCG2, can be easily supported with a corresponding database file (something that is planned for future releases), more complex clusters such as HLA or IGH require major changes to the core algorithm to account for the challenges posed by those regions and are better left to specialized tools such as ImmunoTyper-SR.

Software availability

Aldy 4 is available at GitHub ( https://github.com/0xTCG/aldy ) and also uploaded as Supplemental Code. The experimental procedure and results are available at GitHub ( https://github.com/0xTCG/aldy/tree/master/paper ) and are also uploaded as the Supplemental Notebook and Supplemental Experiments, respectively.
Aldy 4 performs a further split-mapping of each segment that spans a potential fusion breakpoint to determine whether a read originates from a fusion event or not (a read is said to originate from a fusion event if its split-mapping alignment score is better than the original alignment score). Unlike previous versions, Aldy 4 considers base quality scores and read mapping qualities when calling the allele-specific variants. This ensures that the low-quality variants in noisy and low-coverage samples are filtered out before the star-allele calling. Finally, Aldy 4 performs the indel-realignment step through indelpost to correct misalignments that often happen when aligning short reads to small indels . In a typical scenario, a sample contains two parental copies of a pharmacogene of interest for which star-alleles need to be called. This is generally true for most of the pharmacogenes of interest. However, a few major pharmacogenes do not follow this pattern and are prone to various copy number changes and structural events. The most notable example is that of the CYP2D6 gene, one of the most important pharmacogenes , whose copies can undergo whole-gene deletions, duplications, and hybrid fusions (where a copy begins with the CYP2D6 sequence but switches to the pseudogene CYP2D7 sequence at a given breakpoint, or vice versa) . Other prominent examples include CYP2A6 , G6PD , and so on. Each copy—fusions included—yields its own star-allele. Thus, to correctly call star-alleles of such genes, it is necessary to correctly detect the total number of available gene copies as well as the configuration (i.e., structure) of each copy. Each gene copy can be described by its structural configuration represented as a binary vector g ∈ G that indicates the presence or absence of genic regions in a given configuration . Because each star-allele is defined by a matching structural configuration, such configurations must be found before the star-alleles can be accurately called. The size of the configuration vector depends on the number of gene segments that define various structural configurations. For example, the CYP2D6 gene is divided into r = 20 segments that correspond to its exons, introns, and flanking regions, because all structural variations are described at the level of whole exons and introns . The total length of the CYP2D6 configuration vector is 2 r (i.e., 40) because the vector also includes segments from the neighboring CYP2D7 pseudogene. This vector can encode any known CYP2D6 structural configuration: for example, a single CYP2D6 copy ( r ones followed by r zeros), a single CYP2D7 copy ( r zeros followed by r ones), CYP2D6 – 2D7 fusion in intron 1 (one followed by r − 1 zeros, in turn followed by a 0 and r − 1 ones), and so on. Once these vectors are established, any complex configuration within CYP2D locus can be represented as an aggregate of individual configuration vectors (for an example, see A). Note that valid structural configuration vectors are obtained from the corresponding allele databases and that each such vector is typically assigned a star-allele identifier (e.g., CYP2D6*13A represents a CYP2D6 fusion with the breakpoint in exon 1). In a sequenced sample, we only observe the aggregate coverage vector cn that describes the number of reads covering each genomic loci of interest within the sample (the formation of this vector is described in Supplemental Note S2 ; an example is shown in B). 
The goal of Aldy is to find a set of configuration vectors { g 1 , … , g n } ⊆ G whose sum is closest to the observed aggregate coverage, where each structural configuration can be selected only once (an assumption made for the sake of clarity; in practice, Aldy allows selecting the same configurations multiple times). As there might be many such sets, Aldy only looks for the most parsimonious solution: a solution that selects the minimal number of such vectors. This problem, previously dubbed as the copy number estimation problem (CNEP) , can be efficiently solved via integer linear programming (ILP) as follows. Assume that a gene G is segmented into 2 r regions. Let G = { g 1 , … , g | G | } stand for the set of the available configuration vectors, where g i = [ g i ,1 , …, g i ,2 r ] and g i , j ∈ {0, 1} for any i and j . Let cn be the aggregate coverage vector observed from HTS data, and let us introduce a binary variable z i for each g i that indicates if g i is a part of the solution or not. The objective—minimization of difference between the observed aggregate coverage and predicted solution—can be modeled as follows: min ∑ j = 1 2 r | c n j − ∑ i = 1 | G | z i g i , j | . Although this model performs well on WGS and targeted data , it is rather sensitive to deviations from the expected coverage distribution. It also cannot properly handle the cases in which the normalized aggregate coverage is not stable or uniform. For targeted panels with nonuniform coverage distributions, aggregate coverage can be “normalized” by dividing it by the coverage of the control sample if it is stable across different samples. Aldy does this automatically for known targeted panels. Thus, Aldy 4 improves the original CNEP formulation by introducing additional optimization terms. This is performed by modifying the original objective term and extending it with two additional terms, resulting in a three-term optimization objective. The first term is the same as the original CNEP objective but focuses only on the regions associated with the pharmacogene (and not its pseudogene): o 1 = ∑ j = 1 r | c n j − ∑ i = 1 | G | z i g i , j | . The second term of the objective function considers the interaction between the pharmacogene and the corresponding pseudogene region by considering the changes between their respective region coverage. For example, if the coverage of the exon 2 in CYP2D6 is three and in CYP2D7 is two, the resulting region coverage difference would be one. This difference can be further normalized (in this case, divided by three). Using normalized differences allows us to handle samples in which the observed aggregate coverage ( cn ) varies between the regions owing to various sequencing and alignment biases. Despite region-specific coverage variation, the relative abundances between the matching gene–pseudogene regions remain constant. This term can formally be expressed as o 2 = ∑ j = 1 r | c n j − c n j + r ν j − ∑ i = 1 | G | z i g i , j − g i , j + r ν j | . Here ν j = max{ cn j , cn j + r } + 1 is the normalization factor. The final term of the objective function ensures that the ILP solver selects the most parsimonious solution: o 3 = ∑ i = 1 | G | μ i z i . μ i is parsimony parameter (by default set to 1 / | G | ). However, some unlikely configurations, such as left fusions, will have higher parsimony scores to reflect the observation that such configurations are rare . Aldy 4's modified CNEP model uses an ILP solver to minimize sum of these three terms o 1 + o 2 + o 3 . 
These solutions are passed to the later steps that will decide the best overall solution. Aldy now proceeds by assigning the exact star-allele identifier to each of the n structural configurations obtained in the previous step. As stated in Methods, a star-allele S i is defined as a tuple ( g i , A i ), where g i ∈ G and A i ⊆ M . The star-allele assignment problem can also be modeled through the ILP as follows. Let us indicate the presence of star-allele S i with a binary variable a i . Our goal is to select a set of star-alleles S 1 , …, S n such that (1) the set of the structural configurations that describes selected star-alleles is identical to the set of the structural configurations from the previous step, and (2) the difference between predicted and observed coverage for each variant m (denoted as cov( m )) is minimized. In other words, we want to minimize ∑ m ∈ M | cov ( m ) − ∑ i : m ∈ A i a i | . Although conceptually simple, this model does not account for cases in which database definitions are incomplete or incorrect. To account for these cases, the model must be allowed to alter star-allele definitions if needed. Aldy thus introduces new binary variables p i , m and q i , m that indicate if a variant m is to be “removed” from the star-allele S i (while being present in the database definition A i ), or “added” to it (while being absent in A i ). Then it attempts to minimize the following expression for each variant m : e m = | cov ( m ) − ( ∑ i : m ∈ A i a i p i , m + ∑ i : m ∉ A i a i q i , m ) | . As a i , p i , m , and q i , m are all binary variables, their product can be expressed as a set of linear constraints. The minimization objective can be expressed as min ∑ m ∈ M e m + ∑ i a i [ ∑ m α i , m ( 1 − p i , m ) + ∑ m β i , m q i , m ] . Parameters α i , m and β i , m are penalties for adding or removing the variant m from allele S i . Adding a variant is less common than missing a variant, so generally we use α i , m = 2 and β i , m = 1 for any i and m . Note that not all variants are the same: As functional (core) variants can fundamentally alter the behavior of a star-allele (and thus change its designation), Aldy disallows removing such variants from any star-allele and allows adding novel functional (core) variants to the allele if and only if no other assignment is possible. This is performed by setting the corresponding α i , m to a very large value. Note that novel core variants are added only if no other decision can be made. The star-allele calling model also enforces other constraints: Each functional (core) variant must be expressed by at least one allele, and each structural configuration must be expressed by at least one allele compatible with it. Finally, Aldy performs two rounds of star-allele calling for improved accuracy. In the first round, Aldy only considers functional (core) variants and identifies all major (core) star-allele combinations that explain the present functional variants . Being restricted solely to core variants, this step alone often produces multiple candidate solutions. Thus, Aldy then uses the second round to refine the candidate calls from the first round and break the ties by considering the silent variants (subvariants) as well. It finally selects the star-allele with the best second-round objective score as the final call. The formulation Aldy 4 uses for this step remains similar to the original model used in the older versions of Aldy. 
The single major difference is the change in the first (functional star-allele calling) round: Aldy 4 can now call star-alleles that contain novel functional (core) variants—a not uncommon event if a gene database is incomplete—if no other call can be made. The above-described model essentially performs a variant of statistical phasing: It uses the database knowledge to select the most likely haplotypes that best explain the given observations from the data. Although performing well in practice , there are nevertheless cases when the aforementioned model produces multiple equally likely calls. It is also unable to assign a novel variant to a particular star-allele unambiguously. Finally, in sporadic cases, the above model can produce incorrect results. These challenges can be resolved with long reads that provide long-range phasing information. Aldy 4 newly incorporates the handling of long-range phasing information to the star-allele calling model as follows. Suppose that there are z fragments R 1 , …, R z , each fragment being defined by a set of variants that it spans: R j = { m 1 , … , } ⊆ M . Each sequenced fragment originated from a single star-allele and can thus be assigned to one of the star-alleles in the data set. This assignment can be controlled by introducing a binary variable f i , j that is set if and only if a fragment R j is assigned to S i . Clearly, ∑ i f i , j must be one for every R j because each fragment originates from a single allele. Ideally, we want to assign a R j to S i only if such an assignment agrees with the star-allele sequence as much as possible. In other words, we want to minimize the number of disagreements between allele S i and fragment R j . Thus, the total disagreement of an assignment can be expressed as follows: e i , j = ∑ m ∈ R j ( 1 − p i , m − q i , m ) + ∑ m ∈ R j ( p i , m + q i , m ) , where R ¯ j denotes the set of variants that are not present in read R j but are spanned by it. The total phasing error can be expressed as ∑ i , r f r , i e r , i . This expression can be added to the objective function of the star-allele calling model. Although the expanded version of this expression contains quadratic terms, each quadratic term is a product of two binary variables and, as such, can be trivially linearized. As a final remark, note that the number of binary variables in the phasing model is dependent on the total number of present reads and alleles. In some cases, it can exceed half a million variables, making the overall model very costly to solve. The model can be significantly improved by using a smaller random sample of fragments, where the size of the random sample depends on the number of present reads and alleles. Aldy uses ILP solvers to solve the presented models. Although ILP solving is NP-hard even when restricted to the models mentioned above , all these models are solvable in practice in less than a minute thanks to the state-of-the-art integer programming solvers used by Gurobi ( https://www.gurobi.com/ ) or CBC ( https://github.com/coin-or/Cbc/tree/releases/2.9.9 ) solvers. In some rare instances, Aldy cannot unambiguously call star-alleles from short-read data sets owing to the read length limitations and lack of strand information. In these cases, Aldy will report all possible solutions. In some cases, this might be misleading; for example, a *68 + *4/*5 call can be reported as *68/*4 (where *5 stands for deletion allele). However, both calls are functionally identical and should be treated as equal (as is performed here). 
Aldy also makes heavy use of the existing star-allele databases to call star-alleles and fusion breakpoints. Although it can handle cases in which the database is incomplete, it can in theory report incorrect results if a present allele is wildly divergent from every allele in the database. Aldy 4's detection of structural configurations is highly dependent on the stability of coverage across different sequencing runs. Although this is not a significant issue for short-read WGS and targeted sequencing panels, coverage might vary more than expected in PacBio samples. For this reason, Aldy 4 can explore a broader solution space when needed to account for potential noise. Finally, note that Aldy 4 does not cover all existing pharmacogenes: genes from the IGH and HLA regions, the ABC gene families (e.g., ABCG2), and the UGT1/2 gene clusters are notably absent. Although some genes, such as ABCG2, can easily be supported with a corresponding database file (something that is planned for future releases), more complex clusters such as HLA or IGH require major changes to the core algorithm to account for the challenges posed by those regions and are better left to specialized tools such as ImmunoTyper-SR. Aldy 4 is available at GitHub ( https://github.com/0xTCG/aldy ) and is also uploaded as Supplemental Code. The experimental procedure and results are available at GitHub ( https://github.com/0xTCG/aldy/tree/master/paper ) and are also uploaded as the Supplemental Notebook and Supplemental Experiments, respectively.
Associations between plasma metabolism-associated proteins and future development of giant cell arteritis: results from a prospective study

GCA is the most common large-vessel vasculitis in persons aged >50 years in the western world, affecting medium- to large-sized arteries, with a female predominance. The highest incidences have been reported from Scandinavian countries and Minnesota, USA, which have rates of ∼20–30/100 000 among persons aged over 50 years. Several factors that may predict GCA years before clinical onset have been described. In a nested case–control study from our group, individuals who subsequently developed GCA had lower blood glucose, cholesterol and triglycerides at baseline, a median of 21 years before disease onset, compared with controls. Similar results have been reported in a retrospective survey of another population. Several studies have found an association between low BMI prior to diagnosis and subsequent GCA. In addition to a lower BMI, a retrospective case–control study from Gothenburg found that current smoking and multiple hormone-related factors were associated with increased risk of GCA. Incident GCA cases have been identified as having a significantly lower prevalence of diabetes mellitus compared with controls at the time of diagnosis. Taken together, subsequent GCA cases seem to have better metabolic control and a lower BMI, and to be less likely to suffer from diabetes prior to diagnosis. Furthermore, T cell checkpoint dysregulation may play a part in the pathogenesis of GCA, as higher expression of programmed cell death receptor-1 (PD-1) on T cells and lower expression of programmed death ligand-1 (PD-L1) on dendritic cells (DCs) have been reported in temporal artery biopsies (TABs) from GCA patients. PD-1/PD-L1 interaction normally leads to inhibition of the T cell receptor activating cascade, resulting in an attenuated immune response. A connection between the PD-1/PD-L1 checkpoint and glucose metabolites has been proposed, as a study by Watanabe et al. showed a positive association between mitochondrial pyruvate and the expression of PD-L1 on macrophages. This indicates that glucose levels might play a part in the pathogenesis of GCA. In a previous study with a similar design, our group found that some biomarkers associated with inflammation were elevated in pre-GCA cases compared with controls. In particular, elevated levels of IFN-γ and MCP3 were found years prior to diagnosis in individuals who subsequently developed GCA. Several other proteins known to be important for T cell function were also associated with GCA in these analyses, e.g. CXCL9, IL-2, CD40 and CCL25. These results suggest that activation of the adaptive immune system may precede the clinical onset. In this study, we aimed to investigate metabolism-associated plasma proteins prior to onset of GCA. To our knowledge, this has not been done previously.

Source population and exposure information

The Malmö Diet Cancer Study (MDCS) is a community-based health survey performed in Malmö in 1991–1996. All women born 1923–50 and all men born 1923–45 who were residents of Malmö were invited to participate. Exclusion criteria were insufficient Swedish language skills and mental incapacity. The total source population was 74 138 persons, and the participation rate was 40.8%. A total of 30 447 participants (12 121 men and 18 326 women) were included.
The mean age at screening was 58 years in women and 59 years in men. Using a self-administered questionnaire, information on lifestyle factors and current health status was collected from all participants. Non-fasting blood samples were obtained at the time of inclusion in the health survey in a standardized manner and stored at –80°C. Further details on the MDCS are described elsewhere.

Cases and controls

All cases had participated in the MDCS prior to being diagnosed with GCA. Patients were identified based on ICD diagnosis codes indicating GCA in the local outpatient clinic administrative register for Malmö University Hospital or the National Patient Register through 31 December 2011. A structured review of medical records was performed, and cases were classified according to the 1990 ACR criteria for GCA. Some cases with typical clinical features were included, based on expert opinion, even if they did not fulfil the classification criteria. In addition, data on visual manifestations, initial dose of glucocorticoids, large-vessel involvement, and other disease characteristics were collected. For each validated case, one control who was alive and free from GCA when the index person was diagnosed with GCA was randomly selected from the MDCS. The controls were matched for sex, year of birth and year of screening. We identified 100 cases with corresponding controls. After excluding those who had no preserved plasma samples, or insufficient sample volume, data were available for 95 cases and 97 controls. The regional research ethics committee for southern Sweden approved the study (registration number 308/2007). When included in the MDCS, all participants gave their written informed consent for future use of their collected information and samples for research purposes. No additional consent for participation in this study was obtained. Neither patients nor the public were involved in the study design, recruitment, or dissemination of results.

Plasma proteomic biomarkers

A large panel of 92 metabolic proteins (Olink® Metabolic panel) was used to investigate potential biomarkers associated with metabolism prior to clinical disease onset in patients developing GCA. All biomarkers analysed are presented in , available at Rheumatology online. Plasma levels of proteins were analysed by the Proximity Extension Assay (PEA) technique, using a multiplex reagent kit (Olink Bioscience, Uppsala, Sweden), and the analysis has been described in detail elsewhere. The results are presented in arbitrary units. As the platform provides relative protein quantification as log2 normalized protein expression (NPX), every unit increase corresponds to a doubling of the relative protein concentration.

Statistics

The statistical analyses were specified in a study protocol written before obtaining the data. The analyses were separated into two categories: analyses involving biomarkers with an a priori hypothesis formulated by the authors were handled separately from analyses involving all biomarkers. The latter were regarded as hypothesis-generating analyses. Variables that were not normally distributed were log-transformed using the natural logarithm. Normality of distribution was assessed by visual inspection of histograms and the Shapiro-Wilk test. Eight biomarkers, all with Shapiro-Wilk statistics of <0.85, were log-transformed. To allow for logarithmic computation without censoring individuals with negative NPX values, the smallest possible constant was added to the arbitrary values.
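As a concrete illustration of this normality screen and transformation, the sketch below applies the same rule (natural-log transform of biomarkers with a Shapiro-Wilk W below 0.85, after shifting negative NPX values above zero) to a table of NPX values. It assumes pandas and SciPy; the DataFrame layout, the function name and the size of the offset are illustrative choices, not the study's actual code.

```python
# Sketch of the normality screen and log-transformation described above.
# Assumes pandas/SciPy; layout, names and the epsilon offset are illustrative.
import numpy as np
import pandas as pd
from scipy.stats import shapiro

def log_transform_skewed(npx: pd.DataFrame, w_cutoff: float = 0.85) -> pd.DataFrame:
    """Natural-log transform biomarkers whose Shapiro-Wilk W falls below w_cutoff."""
    out = npx.copy()
    for col in out.columns:
        values = out[col].dropna()
        w_stat, _p = shapiro(values)          # Shapiro-Wilk W statistic
        if w_stat < w_cutoff:                 # clearly non-normal distribution
            # Shift so the minimum is just above zero before taking the log,
            # mirroring "the smallest possible constant was added".
            shift = max(0.0, -float(values.min())) + 1e-6
            out[col] = np.log(out[col] + shift)
    return out

# Usage (hypothetical): transformed = log_transform_skewed(npx_table)
```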
Biomarkers with a priori hypotheses

Six biomarkers were selected for evaluation of a priori hypotheses ( , available at Rheumatology online). Five of them were assumed to be elevated: meteorin-like protein (Metrnl), fructose-1,6-bisphosphatase 1 (FBP1), galanin peptides (GAL), adhesion G protein-coupled receptor E2 (ADGRE2) and nectin-2; one was assumed to be reduced: the appetite-regulating hormone ghrelin (GHRL). These proteins were selected based on previous knowledge of function and/or association with other inflammatory diseases. To examine potential biomarker predictors, we used conditional logistic regression, with case status as the outcome. A group number connecting each case and its corresponding control was entered in the logistic regression models as a categorical variable. Odds ratios (ORs) were calculated per S.D. to enable comparisons of effect sizes. Further, the analyses were stratified by time from screening to GCA diagnosis (by quartiles) in years. Associations across quartiles (P for trend) were assessed by examining the interactions between quartile of time to diagnosis and biomarker levels in separate logistic regression models. Multiple hypothesis testing was handled using the Holm correction approach, and both corrected and original P-values are presented.

Hypothesis-generating analyses

To identify groups of proteins that explained variance in the proteome, we used principal component analysis (PCA). Before inclusion in the PCA, z-scores were computed for all biomarkers to enable comparability. Assumptions were fulfilled for the Kaiser–Meyer–Olkin (KMO) test for sampling adequacy and Bartlett's test for sphericity, respectively. We set the preliminary cut-off for the Eigenvalue to >2.0 to reduce the number of components, resulting in seven components. Of these, components 5–7 were excluded from the analyses of associations between individual contributing proteins and GCA, based on the limited number of variables with high factor loadings ( , available at Rheumatology online) and because they were considered to add minor contributions based on the scree plot ( , available at Rheumatology online). The seven components were included as independent variables in logistic regression models evaluating their relation to the risk of subsequent GCA. In the next step, the biomarkers with a loading of >0.7 within the selected components were investigated using the same statistical protocol as for the a priori analyses, except that correction for multiple testing was not applied in the hypothesis-generating approach. All statistical analyses were performed using the Statistical Package for Social Sciences (version 24.0; IBM, Armonk, NY, USA).
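The sketch below shows one way the per-S.D. odds ratios and the Holm correction described above could be computed for matched case–control pairs, using statsmodels' conditional logistic regression. It assumes a long-format table with one row per participant and hypothetical column names ("case", "pair", and one column per biomarker); it illustrates the type of analysis, not the study's actual code.

```python
# Illustrative sketch (not the study's code): per-S.D. odds ratios from
# conditional logistic regression on matched case-control pairs, with Holm
# correction across the tested biomarkers. Column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit
from statsmodels.stats.multitest import multipletests

def per_sd_odds_ratios(df: pd.DataFrame, biomarkers: list) -> pd.DataFrame:
    rows = []
    for marker in biomarkers:
        # Scale to S.D. units so that exp(beta) is the OR per S.D. increase.
        z = ((df[marker] - df[marker].mean()) / df[marker].std()).to_frame(marker)
        fit = ConditionalLogit(df["case"], z, groups=df["pair"]).fit()
        beta = float(np.asarray(fit.params)[0])
        low, high = np.asarray(fit.conf_int())[0]
        rows.append({"biomarker": marker,
                     "OR_per_SD": np.exp(beta),
                     "CI_low": np.exp(low),
                     "CI_high": np.exp(high),
                     "p": float(np.asarray(fit.pvalues)[0])})
    res = pd.DataFrame(rows)
    # Holm correction; the uncorrected p-values are kept alongside.
    res["p_holm"] = multipletests(res["p"], method="holm")[1]
    return res

# Usage with the six a priori biomarkers (hypothetical column names):
# per_sd_odds_ratios(df, ["Metrnl", "FBP1", "GAL", "ADGRE2", "NECTIN2", "GHRL"])
```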
Incident cases

A total of 95 cases were identified with a confirmed incident diagnosis of GCA and available results from the plasma proteome analyses. Of the total cohort, 78 (82%) of the cases were female, 64% had a positive temporal artery biopsy (TAB) and 90.5% fulfilled the ACR 1990 classification criteria for GCA. At the time of diagnosis, the median age was 73.5 years (range 56.1–85.8) and the median time from screening to diagnosis was 12.0 years (range 0.3–19.1). Only one case was screened <1 year before diagnosis. Proportions with a self-reported history of comorbidities or anti-hypertensive treatment at the time of screening were similar in cases and controls. No cases or controls were treated with glucocorticoids, and <10% were on anti-diabetic or lipid-lowering drugs ( and , available at Rheumatology online).

Testing of a priori hypotheses

ADGRE2 was higher in cases compared with controls (mean NPX 4.88 vs 4.79; OR per S.D. 1.67; 95% CI 1.08–2.57, P = 0.022). In analyses stratified by time from screening to diagnosis (quartiles), the highest OR was found in the subset sampled closer to GCA diagnosis (0.32–8.49 years) (OR 3.91; 95% CI 1.45–10.60).
Although the wide CI indicates uncertainty, the association in this subset was significant even after correction for multiple testing (P = 0.042). There was a decreasing trend by quartile 3, but the trend over all four quartiles did not reach significance (P = 0.257). FBP1 levels were lower in cases compared with controls (mean NPX 0.95 vs 1.10; OR per S.D. 0.59; 95% CI 0.35–0.99, P = 0.044). In the analysis stratified by time from screening to diagnosis, the lowest OR was found in the quartile sampled closest to GCA diagnosis (OR 0.29; 95% CI 0.06–1.39), with an increasing trend by quartile 3. Overall, there was no significant difference between levels of Metrnl in cases compared with controls (OR per S.D. 1.42; 95% CI 0.90–2.23, P = 0.135). However, in the stratified analysis there was a significant trend (P = 0.030), with higher ORs in those sampled closer to GCA diagnosis, i.e. quartile 1 (OR per S.D. 2.40; 95% CI 0.98–5.85, P = 0.055) and quartile 2 (8.50–11.96 years before diagnosis; OR per S.D. 3.13; 95% CI 0.92–10.63, P = 0.067), respectively, compared with those sampled with a longer duration to diagnosis. For GAL, GHRL and nectin-2, there were no significant associations with subsequent development of GCA.

Hypothesis-generating results

For the seven components with Eigenvalues above 2.0 in the PCA, factor loadings for every protein are shown in , available at Rheumatology online. Descriptive data for these components are shown in . They were further investigated as potential predictors of GCA, using each component as an individual variable in logistic regression models. Component number 4 significantly predicted subsequent GCA (OR 2.06; 95% CI 1.21–3.49, P = 0.008). With an Eigenvalue of 3.88, component number 4 explained 4.21% of the variance in the proteome, mainly driven by FBP1, the only protein with a factor loading of >0.7 within this component ( , available at Rheumatology online). Components 1–4 included 26 variables that met the factor loading cut-off and were therefore further investigated. Of these 26 variables, two showed a significant association per S.D. with subsequent GCA. FBP1, found to be significant in the a priori hypothesis testing described above, was also identified as a potential predictor of GCA through the hypothesis-generating analysis. In addition, higher ROR1 concentrations were associated with an increased risk of GCA (OR per S.D. 1.61; 95% CI 1.05–2.46). Stratification by time from screening to diagnosis for FBP1 is described above; for ROR1, no significant trend depending on sampling time was observed.
In this study, we investigated proteins selected on the basis of known associations with metabolism. Our results indicated that a subset of these biomarkers was elevated or decreased in subsequent GCA cases compared with controls, suggesting that these proteins might play a part in GCA pathogenesis and should be further investigated. We found that levels of ADGRE2 were significantly higher in cases compared with controls overall, and in particular in those sampled close to diagnosis, with a decreasing trend with increasing time to diagnosis. Metrnl also had higher ORs in those sampled closer to diagnosis, with a significantly decreasing trend in those sampled with a longer time to diagnosis.
Metrnl was initially identified as a protein with neurotrophic functions. Thereafter it was described as an adipokine produced by white adipose tissue, with a potential role in insulin sensitization. However, clinical studies on the relationship between circulating Metrnl levels and type 2 diabetes mellitus have yielded conflicting results. Recent data suggest a role for Metrnl in inflammation. For example, associations between Metrnl and inflammatory diseases such as psoriasis, atopic dermatitis and RA have been described. A study comparing biopsies and samples from patients with RA, PsA and OA found higher levels of Metrnl in both the synovium and SF in RA and PsA. More recently, further knowledge on Metrnl's role in inflammation has come to light. It is expressed on activated macrophages, and in vivo Metrnl levels increase with the inflammatory response. Knockout mice that do not express Metrnl exhibit multiple immune system abnormalities, for example lower levels of IgG in plasma and downregulation of chemokine production. ADGRE2, also known as CD312 and EMR2, is a receptor initially identified as a myeloid-restricted transcript expressed in monocytes, macrophages, myeloid dendritic cells, and granulocytes. Distinct myeloid populations have differential expression patterns of EMR2, suggesting a regulatory role in neutrophil function. Kuan-Yu et al. showed that the EMR2 receptor is a surface marker for macrophage differentiation, and that EMR2 activation eventually leads to MAPK and NF-κB signalling. Through this pathway, it induces the expression of pro-inflammatory mediators, including IL-8, TNF-α and MMP9. Kop et al. found that EMR2 expression in the synovial sublining was significantly higher in RA patients compared with OA patients and ReA control patients. Most EMR2-positive cells were macrophages and dendritic cells. ADGRE2/EMR2 signalling has also been implicated in ANCA-associated vasculitis (AAV). Irmscher et al. demonstrated that human serum factor H-related protein (FHR1) induces the NLRP3 inflammasome in vitro through EMR2, independent of complement. This induction results in the secretion of pro-inflammatory cytokines such as IL-1β, TNF-α, IL-18 and IL-6. Furthermore, in vitro FHR1 selectively binds to necrotic cells in necrotic glomerulae of AAV patients, and in atherosclerotic plaques. Circulating FHR1 concentrations in AAV patients were correlated with levels of inflammation and progressive disease. Taken together, both Metrnl and ADGRE2 are associated with macrophage activation, and their expression and circulating levels seem to vary depending on the degree of inflammation. They have independently been associated with other inflammatory diseases, such as RA. Macrophage polarization is an important feature in GCA pathogenesis, in which a distinct subset of CD206+ cells may stimulate tissue destruction and remodelling through the production of MMP9; further studies of the possible effects of these markers on macrophage activation and polarization in a GCA context would be of interest. Levels of FBP1 were found to be significantly lower in cases compared with controls overall, with the lowest ORs in those sampled closest to diagnosis. FBP1 is a key enzyme in gluconeogenesis. It directly catalyses the hydrolysis of fructose-1,6-bisphosphate to fructose-6-phosphate, i.e. reversing the PFK1-catalysed phosphorylation of fructose-6-phosphate to fructose-1,6-bisphosphate in glycolysis.
Our results suggest that FBP1 may have some protective effect against GCA. FBP1 is mainly expressed in the kidneys and the liver and plays a critical role in maintaining blood glucose levels. In animal models, inhibition of FBP1 leads to inhibited gluconeogenesis and increased glucose sensitivity. An upregulation of FBPs (mainly FBP1) occurs in diabetes-susceptible obese mice, suggesting that it is important in type II diabetes mellitus (T2DM). Moreover, transgenic mice overexpressing human FBP1 specifically in pancreatic islet β-cells show reduced insulin secretion. FBP1 inhibition has been investigated as a treatment for T2DM, but no such drug has been licensed to date. Animal model studies indicate that FBP provides negative feedback that limits weight gain; given the observed negative association between BMI and GCA, this would argue for FBP1 as a biomarker of increased GCA risk. It may also be, however, that FBP1 activity, leading to increased availability of pyruvate, downregulates PD-L1 in macrophages, a pathway that would be protective against GCA. As this pattern goes against our original hypothesis, it should be interpreted with caution, and this new concept should be supported by additional studies and other experimental data. In our hypothesis-generating analysis, the receptor tyrosine kinase ROR1 was identified through the PCA and was found to be significantly higher in cases compared with controls. ROR1 is a receptor shown to be significant in embryonic development and cancer. In chronic lymphocytic leukaemia (CLL), a distinct expression of ROR1 on tumour cells has been identified, and ROR1 is currently one of six markers recommended internationally for refining the diagnosis of CLL. Surface expression of ROR1 has also been associated with other haematologic malignancies. When activated, ROR1 signals via transcription factors, including NF-κB. The role of this pathway in the preclinical phase of GCA should be further explored. The present findings, together with the previously reported upregulation of T cell–related proteins, suggest activation of a broad range of pro-inflammatory mechanisms in the preclinical phase of GCA. The potential roles of macrophage polarization and activation, potentially regulated by glycolysis, and of NF-κB signalling should be taken into account when developing treatment strategies aiming at early and sustained remission in patients with GCA. Limitations of our study include the relatively small sample size, which affected the precision of the estimates, as shown by the CIs for the stratified analyses. Changes in biomarkers over time could not be assessed, as the samples were obtained at a single time point. It would be of interest to examine these markers over time, and to investigate how they relate to active vasculitis at the time of diagnosis. NPX values are reported in arbitrary units and are therefore not convertible to absolute concentrations. They allow comparison of the relative abundance of each biomarker across the sample cohort, but should not be used to rank the abundance of different biomarkers within a sample. This makes it difficult to compare the findings with those of previous studies on biomarkers in active disease. Moreover, plasma levels of biomarkers do not necessarily reflect levels in relevant tissues, i.e. lymphoid organs or arteries.
As there are no previous studies on pre-diagnostic samples, except for the one from our group, the candidate biomarkers were selected, based on knowledge about established GCA, from within the group of biomarkers included in the Olink panel. Aspects of pre-analytical handling and analytical procedures, such as blood sample collection, storage conditions and storage time in the freezer, can always be sources of bias. The risk of such bias was minimized by matching cases and controls at the time point of inclusion. Our results can only be generalized to populations of mainly Scandinavian ethnicity in the investigated age groups. As no validation cohort was accessible in this study set-up, further investigations in other cohorts are needed to confirm generalizability. Strengths of our study include the well-defined cohort, in which cases were validated through a structured review process, and the matching of controls on age and sex, which enabled comparisons of biomarker levels, as concentrations of various biomarkers may differ between the sexes and are known to change during the lifetime. Another strength is the standardized manner in which the blood samples were obtained and stored, which was appropriate for later analysis. Mass spectrometry (MS) would have been the main alternative for plasma protein biomarker discovery. However, the two approaches are complementary. While MS-based methods are biased towards quantifying the most abundant plasma proteins, they give direct evidence for a protein through the specific mass spectra of the protein or peptide. In contrast, when using Olink and other antibody-based platforms, defined protein groups are quantified in targeted analyses, including proteins with low concentrations. In conclusion, in this nested case–control study, elevated levels of ADGRE2 and Metrnl prior to diagnosis suggested activation of the innate immune system, possibly through macrophage and neutrophil differentiation and activation. The association between ROR1 levels and subsequent GCA might reflect an early inflammatory process. Furthermore, the possible protective role of FBP1 needs to be further investigated.
Medicinal Chemistry Strategies for the Modification of Bioactive Natural Products

Natural bioactive compounds are structurally unique metabolites produced by a variety of organisms, including animals, plants, and microorganisms, that possess exceptional physiological activities. These compounds are valuable resources for the development of novel drugs, particularly in the ongoing battle against infectious diseases and cancer. However, most natural bioactive compounds exhibit certain limitations in terms of their biological properties. These limitations include low activity, limited specificity, significant toxicity, and unfavorable pharmacokinetic profiles, which hinder their direct utilization as pharmaceutical agents. Therefore, it is imperative to optimize and modify their molecular structures to enhance their biological properties and achieve safety, efficacy, and control in drug applications. Natural active compounds are highly regarded as valuable resources for discovering structurally innovative lead compounds. By modifying and optimizing the molecular structure, diverse libraries of compounds can be generated, harnessing the potential of natural product resources and bolstering the prospects of new drug development. The field of drug screening and development based on natural products has significantly advanced with the advent of bioinformatics technologies, including artificial intelligence (AI) and advanced computing. Techniques such as target prediction, metabolite profiling of natural products, and investigations into the dynamics and thermodynamics of pharmacophores have greatly facilitated the identification of lead compounds from natural sources. These approaches enable the exploration of structural transformations, commencing from the intact natural product, progressing to its fragments, and ultimately culminating in structural optimization. Compared to chemically synthesized drugs, natural products are characterized by structural diversity and complexity, more chiral centers, fewer nitrogen and halogen atoms or aromatic rings, and other characteristics that will be discussed below.

2.1. Diversity and Complexity of Natural Product Structures

Natural products exhibit a remarkable diversity and complexity in their structures. Take artemisinin ( 1 , ), for example, which contains a unique combination of a peroxide bond, lactone, and a bridged tricyclic system. These structural features not only preserve chemical reactivity but also ensure molecular stability. Similarly, paclitaxel ( 2 , ) possesses a fused tetracyclic framework with a 6-8-6-4 arrangement of functional groups, which contributes to its potent inhibition of microtubule proteins. However, the complex structures of natural products pose challenges in their chemical synthesis. In some cases, natural product structures may contain "redundant" atoms that do not participate in target binding. This presence of redundant atoms can have negative implications on the physicochemical, pharmacokinetic, and pharmaceutical properties of the compounds. Therefore, it becomes crucial to remove these redundant atoms and fragments during the process of structural modification in order to enhance the efficiency of ligand binding.

2.2. High sp3 Carbon Content and Few Aromatic Rings

Natural product structures often exhibit a high proportion of sp3-hybridized carbon atoms, which are commonly found in aliphatic chains or cyclic compounds.
This unique feature imparts flexibility to these structures. For example, the immunosuppressant tacrolimus ( 3 , ) and the antitumor agent epothilone B ( 4 , ) possess large macrocyclic lactone structures that provide them with considerable flexibility. The active component ISP-1 ( 5 , ) in Cordyceps militaris, an immunomodulator, is a flexible linear compound. Nature seems to prefer aliphatic rings over aromatic rings, as only 38% of known natural products contain aromatic systems. Aromatic rings play a crucial role in interacting with drug targets through phenomena such as π-π stacking, hydrogen bonding, and van der Waals forces, thereby influencing pharmacological properties and bioactivity. Additionally, the introduction of various substituents or functional groups on aromatic rings can effectively modulate the properties, selectivity, and solubility of drug molecules.

2.3. Low Nitrogen and Halogen Content

Most natural products primarily consist of carbon, hydrogen, and oxygen, with a relatively low abundance of nitrogen atoms. When nitrogen atoms are present, their quantity is often limited. Nitrogen atoms can display nucleophilic characteristics and can exist in trivalent or pentavalent states. They can appear as basic salts or neutral amides, participate in ring formation, contribute to aromatization and fusion reactions, act as terminal or linking groups, and function as both hydrogen bond donors and acceptors. These properties enhance the binding efficiency of small-molecule ligands to their targets and can also have an impact on a drug's solubility and bioavailability. With the exception of bromine atoms commonly found in marine organisms, natural products generally contain a low amount of halogens. Approximately 20% of small molecule anticancer lead compounds incorporate iodine, bromine, or chlorine. Halogens provide enhancements in lipophilicity and membrane permeability, and the electronegativity of halogens can augment the biological activity of the central molecule. For example, the inclusion of potent electron-withdrawing groups like fluorine can improve binding affinity, metabolic stability, physical properties, and selective activity.

2.4. Chirality and Stereochemistry

The generation of natural products involves a series of enzymatic reactions, wherein the stereospecificity of these reactions determines the stereochemical attributes of the resultant products, including chiral centers, axes, and cis-trans isomerism. For instance, morphine ( 6 , ) consists of 21 non-hydrogen atoms, forms five fused rings, and possesses five chiral centers (red dots). Lovastatin ( 7 , ), on the other hand, comprises 28 non-hydrogen atoms, has eight chiral centers (red dots), and two conjugated trans double bonds. Dealing with chirality and stereochemistry can be as challenging as handling complex structures. Consequently, during chemical synthesis, it is advisable to minimize unnecessary chiral elements while maintaining activity and pharmacokinetic properties.
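The structural tendencies described in this section (sp3 fraction, aromatic ring count, nitrogen and halogen content, stereocentres) can also be profiled computationally. The sketch below shows one way to do this with RDKit; the two example SMILES are generic illustrative molecules chosen for brevity, not the numbered compounds discussed in the text.

```python
# Sketch of profiling the properties discussed in Section 2 (sp3 fraction,
# aromatic rings, nitrogen/halogen content, stereocentres) with RDKit.
# The SMILES are generic illustrative molecules, not compounds 1-7.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

examples = {
    "caffeine (a natural product)": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
    "aspirin (a synthetic drug)": "CC(=O)Oc1ccccc1C(=O)O",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    nitrogens = sum(atom.GetSymbol() == "N" for atom in mol.GetAtoms())
    halogens = sum(atom.GetSymbol() in ("F", "Cl", "Br", "I") for atom in mol.GetAtoms())
    stereocentres = len(Chem.FindMolChiralCenters(mol, includeUnassigned=True))
    print(name,
          f"Fsp3={rdMolDescriptors.CalcFractionCSP3(mol):.2f}",
          f"aromatic_rings={rdMolDescriptors.CalcNumAromaticRings(mol)}",
          f"N={nitrogens}", f"halogens={halogens}",
          f"stereocentres={stereocentres}")
```

Run over larger sets of natural products and synthetic drugs, descriptors of this kind would be expected to reproduce the contrasts discussed above: higher sp3 fractions and stereocentre counts for natural products, and more aromatic rings and nitrogen atoms for synthetic drugs.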
The aforementioned characteristics present a range of options for structural modification and transformation, allowing for novel discoveries in research and development. Compared with complex natural drugs, the structure of chemical drug molecules is relatively simple, with a higher content of aromatic rings, especially nitrogen-containing aromatic heterocycles, resulting in a higher overall nitrogen content in the molecule. This design provides additional hydrogen bond donors and acceptors. The ring systems are usually connected in a simple manner, often through short linkers such as amide bonds or methylene groups. Some structural fragments frequently appear in multiple drug molecules and have specific structures and functions, which are closely related to the activity and characteristics of drugs. Privileged fragments refer to small molecular fragments or scaffolds that are highly represented in bioactive compounds and exhibit a wide range of biological activities. In medicinal chemistry, they have several advantages: (1) Activity and affinity: the utilization of privileged fragments in multiple drugs is primarily driven by their demonstrably higher activity and affinity, facilitating specific interactions with biological macromolecules, such as proteins and enzymes. These fragments have exhibited favorable biological activities in several drug compounds, thereby enhancing the probability of uncovering pharmaceutically active compounds; (2) Highly optimized properties: given the recurrent presence of privileged fragments in multiple pharmaceuticals, their structural and functional characteristics have undergone rigorous validation and verification within an extensive repertoire of drug design and optimization endeavors. Consequently, these fragments have garnered considerable attention and refinement, offering distinct advantages in enhancing the pharmacokinetic properties, pharmacological profiles, and selectivity of drug candidates; (3) Flexibility in structural modification: privileged fragments assume a pivotal role as core scaffolds in medicinal agents, providing ample opportunities for subsequent structural modifications to fine-tune their specific characteristics. This inherent flexibility enables chemists to personalize these fragments by manipulating side chains and incorporating additional functional groups, fostering improved drug performance; (4) Rich structural diversity: although privileged fragments possess predetermined skeletal frameworks, their amenability to modification at diverse positions and orientations engenders a wealth of structural diversity. In doing so, privileged fragments facilitate the integration of molecular diversity, effectively expanding the scope of medicinal chemistry investigations to encompass numerous potential targets and pathways. In conclusion, privileged fragments exhibit high activity and affinity, and, through optimization and research, they have demonstrated significant advantages in medicinal chemistry. Due to their controllable structure, privileged fragments offer relative simplicity in synthesis compared to complex natural compounds, allowing for further optimization through structural modifications.
This flexibility enables chemists to personalize these fragments by adjusting side chains, introducing additional functional groups, and establishing abundant structure-activity relationships, ultimately resulting in rich structural diversity. This facilitates the attainment of improved drug performance. In medicinal chemistry, "simplifying complexity" is an important remodeling strategy. It involves simplifying the structural core of complex active natural products, partially or completely transforming them into privileged scaffold structures that are easier to synthesize and possess stronger pharmacological effects. Additionally, it can involve deconstructing active natural products into smaller molecular fragments, reassembling and optimizing the entire new scaffold using fragment-based drug design principles, and further modifying it through local structural modifications to enhance its activity and pharmacological efficacy. Over the past 20 years, with the widespread application of high-throughput screening techniques and computer-aided drug design, as well as the increasing availability of protein structure databases and natural product databases, a foundation has been established for structure-based remodeling of active natural products. By integrating techniques such as virtual screening, high-throughput screening, large databases, structural biology information, and computational chemistry, it is possible to employ structure-based drug design strategies such as scaffold hopping and privileged scaffold replacement to optimize certain active natural lead compounds.

4.1. From ISP-1 to Siponimod

The ultimate goal of modifying natural products is to develop active compounds into medicines. The transformation process from ISP-1 to siponimod ( 11 , a) serves as an example of natural product modification. The strategies and methods used in this process have significant implications for modifying other natural compounds ( a). ISP-1 is a natural product that exhibits immunomodulatory effects by modulating sphingosine-1-phosphate (S1P) signalling. ISP-1 possesses a complex structure with three chiral centers, a trans double bond, and both an amino group and a carboxyl group. However, due to its high toxicity and low solubility, ISP-1 cannot be used as a medication without further modifications or adaptations. To simplify the structure, reduce or eliminate chiral centers, improve activity, and enhance pharmacokinetics, compound 8 was selected as a lead compound for structural modifications. Through a series of transformations and optimizations, fingolimod ( 9 , a) was eventually developed. Fingolimod is a symmetrical molecule that has undergone modifications such as removal of the ketone group, the trans double bond, and the chiral carbons. This molecule lacks chiral and stereoisomeric factors and incorporates a benzene ring into the long chain, which reduces the number of saturated carbons and facilitates synthesis and conformational rigidity. Fingolimod was introduced to the market in 2010 for the treatment of multiple sclerosis. The success of fingolimod can be attributed to the substitution of the alkyl chain with an aromatic ring. Alkyl chains, being flexible in nature, can exist in various conformations, which may not favor the attainment of a "high concentration" of active conformations.
Hence, incorporating factors that restrict conformational flexibility in the chain, such as replacing a portion of the saturated carbon chain with a phenyl ring, can offer advantages in terms of potency, pharmacokinetics, safety, and physicochemical properties. Fingolimod functions as a prodrug that is transformed into its active form by sphingosine kinase 2 in the liver after oral absorption. In light of this, a new compound called phosphorylated fingolimod ( 10 , a) was designed. In the subsequent stages of development, siponimod, a novel S1P1 receptor agonist developed by Novartis, was identified. This compound was designed by replacing the flexible lipid chain of fingolimod with a rigid aromatic ring and a cyclohexane, and by introducing a trifluoromethyl group onto the aromatic ring. These structural modifications were implemented to enhance the selectivity of siponimod in its interaction with specific receptors, thereby potentially influencing its pharmacological activity. In initial structure-activity studies, the trifluoromethyl group on the benzene ring was found to significantly impact activity, boosting it by over 30 times compared to the unsubstituted hydrogen atom. Despite the structural differences between siponimod and fingolimod, they share similar molecular sizes and pharmacophore features, and their distribution patterns exhibit resemblances ( b). In terms of potency, siponimod exhibits an EC 50 value of 0.4 nM, while its EC 50 for the S1P3 receptor, where its activity is undesirable, is 5 μM, demonstrating a high level of selectivity. Furthermore, studies conducted on monkeys have revealed an oral bioavailability of 71% for siponimod, with a plasma half-life (T 1/2 ) of 19 h. From ISP-1 to siponimod, the transformation process involves simplifying the complex structure of a natural product with multiple chiral centers and a long flexible chain, which has poor pharmacological properties, into a small-molecule chemical drug with a clearly defined drug-like structure. Several transformative strategies can be summarized as follows: (1) In the initial stage of remodeling, the structure is simplified by removing chiral centers as much as possible to reduce the difficulties in chemical synthesis. This allows the synthesis of a controllable drug scaffold for systematic structure-activity relationship studies; (2) For the long flexible linear alkyl chain, the advantage of introducing a pharmaceutically strong aromatic core is that it increases molecular rigidity, and the aromatic ring structure enables further structural modifications; (3) The flexible tail of fingolimod is further replaced with a benzene ring and a cyclohexane while maintaining its hydrophobic properties. This not only enhances rigidity but also allows for additional modifications by introducing functional groups onto the benzene ring; (4) The optimization can be rationalized by molecular modeling based on the solved S1P1 crystal structure (PDB: 3V2Y). Based on the predicted binding mode, the carboxylic acid headgroup of siponimod appears to form salt bridges with Lys34 and Arg120 and a hydrogen bond with Tyr29. These electrostatic interactions are strong and serve to anchor the ligand molecule in its binding pocket. Although there is a significant difference in structure between ISP-1 and siponimod, the remodeling process follows the principles of simplification and advantageous fragment replacement, leading to the generation of simplified lead compounds.
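As an illustration of the structure-guided step in point (4), the sketch below lists the protein residues that lie close to a bound ligand in the deposited S1P1 structure, which is the kind of contact inspection that underlies the binding-mode rationale above. It assumes Biopython; the 4 Å cutoff, the treatment of every non-water hetero residue as "ligand", and the residue numbering of the deposited fusion construct are illustrative simplifications, not the modeling protocol of the cited work.

```python
# Sketch (assuming Biopython): list protein residues near hetero (ligand)
# residues in PDB entry 3V2Y. Cutoff and selection logic are illustrative.
from Bio.PDB import PDBList, PDBParser, NeighborSearch

pdb_path = PDBList().retrieve_pdb_file("3V2Y", pdir=".", file_format="pdb")
model = PDBParser(QUIET=True).get_structure("3V2Y", pdb_path)[0]

# Standard residues have hetfield " "; non-water hetero residues start with "H_".
protein_atoms = [a for a in model.get_atoms() if a.get_parent().id[0] == " "]
ligand_residues = [r for r in model.get_residues() if r.id[0].startswith("H_")]

search = NeighborSearch(protein_atoms)
contacts = set()
for residue in ligand_residues:
    for atom in residue:
        for neighbour in search.search(atom.coord, 4.0):   # atoms within 4 Å
            parent = neighbour.get_parent()
            contacts.add((parent.get_resname(), parent.id[1]))

for resname, resnum in sorted(contacts, key=lambda c: c[1]):
    print(resname, resnum)
```

In practice, a docking or molecular-modeling package would be used for the actual rationalization; this sketch only shows how residues such as those named above can be located around the bound ligand.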
Through structure-activity relationship studies, guided by structure-based drug design principles and with the assistance of structural biology, the structure is further elaborated. These strategies demonstrate an effective approach to the transformation of active natural products by combining scaffold hopping, privileged fragment replacement, and guidance from structural biology information.

4.2. The "Statin" Drugs

Mevastatin ( 12 , ), which is isolated from the fermentation broth of the fungus Penicillium citrinum, was the first inhibitor discovered that targets hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase, the rate-limiting enzyme in cholesterol synthesis in the body. The chemical structure of lovastatin ( 13 , ) includes eight chiral carbons, with two within the upper lactone ring and six on the lower hexahydronaphthalene ring. It was introduced to the market in 1987 as a medication for lowering cholesterol levels. The lactone ring, formed by a dihydroxy acid, is an important pharmacophore feature, while the hexahydronaphthalene structure serves as a backbone and hydrophobic fragment crucial for enzyme binding, although the presence of chiral centers is not necessary. Subsequent "statin" drugs that entered the market retained the dihydroxy acid structure but underwent significant changes in the lower part of the molecule, which lacks chiral centers. Enzyme binding is primarily governed by hydrophobic-hydrophobic interactions. For instance, fluvastatin ( 15 , ), pitavastatin ( 16 , ), atorvastatin ( 17 , ), and rosuvastatin ( 18 , ) all share the same spatial orientation of the two hydroxyl groups in the dihydroxy acid fragment, but their structural backbones are transformed into indole, quinoline, pyrrole, and pyrimidine rings, respectively. These structural variations contribute to their lipid-lowering effects. Interestingly, despite the different structural types of synthetic statin drugs, they share similar binding modes. ( a) illustrates the interactions of simvastatin and rosuvastatin with the amino acid residues. Compared to simvastatin, rosuvastatin utilizes aromatic rings with halogen and nitrogen atoms to replace the hexahydronaphthalene ring. The additional fluorophenyl motif and nitrogen atoms enhance the binding affinity between rosuvastatin and the target protein ( b). The spatial arrangement and structure of compounds are critical for their biological activity. However, the construction of complex stereogenic structures in synthetic medicinal molecules differs fundamentally from the formation of stereoisomers in natural products. Stereogenic structures in natural products are formed via complex biosynthetic pathways, relying on several endogenous reactions and selective enzymes. In contrast, medicinal chemists can construct and control the stereogenic structure of synthetic molecules via chemical synthesis and purification techniques. The stereogenic structure of natural medicinal molecules is often constructed with unique stereochemistry through chiral sp3-hybridized carbon atoms or complex ring systems that happen to achieve binding specificity with proteins, generating biological activity. For example, the hexahydronaphthalene structure of type 1 "statin" drugs is formed by five chiral carbon atoms with sp3 hybridization, creating a unique stereoconfiguration of fused rings.
It is difficult to synthesize analogs of this structure or to modify or substitute it, and comprehensive structure-activity relationship studies on this ring system are likewise challenging. In type 2 molecules, the transformation of the hexahydronaphthalene structure into privileged structures such as quinolines and indoles proceeds by substituting the ester group with a benzene ring and incorporating a fluorine atom onto that ring. Additionally, the methyl group can be substituted with an isopropyl or cyclopropyl group as desired. In atorvastatin, the spatial configuration of the four aromatic ring systems is achieved by connecting different aromatic rings and adjusting the dihedral angles between them, forming a unique stereostructure. Fragments are connected either through direct aromatic ring connections or by using amide bonds as linkers, which is a common connection method in the synthesis of drug molecules. Many classic named organic reactions, such as the Suzuki reaction, Buchwald–Hartwig reaction, Ullmann reaction, and others, efficiently enable the coupling between aromatic rings. Compared to the ester bond commonly found in natural products, the amide bond exhibits higher stability and inertness, remaining relatively stable across changes in temperature and pH. It is less susceptible to acid-base hydrolysis and therefore exhibits good stability in biological systems. The stereoelectronic and donor-acceptor interactions of the amide bond can modulate the pharmacokinetic properties of the drug. The incorporation of amide bonds into the structure of drug molecules can tune the balance between lipophilicity and hydrophilicity, allowing for better control of absorption, distribution, and metabolic behavior. In the structure of rosuvastatin, more nitrogen atoms are introduced. Nitrogen atoms are typically capable of acting as hydrogen bond acceptors or donors. Due to their electronegative nature, nitrogen atoms are ideal candidates for forming hydrogen bonds with positively charged hydrogen atoms. Hydrogen bonding is one of the key ways in which drug molecules interact with biomacromolecules such as receptors or enzymes. It not only influences the activity, specificity, and affinity of the drug, but also alters its physicochemical and metabolic properties. Compared to atorvastatin, rosuvastatin displays enhanced hydrophilicity, resulting in reduced penetration of the blood-brain barrier; this characteristic limits central nervous system stimulation and does not interfere with patients' sleep. In summary, privileged fragment concatenation is a powerful technique for constructing active drug skeletons and accelerating the synthesis and optimization of drug molecules. These methods allow for the replacement of natural scaffolds with controllable synthetic structures. Such advantageous structures often consist of standardized chemical building blocks, enabling efficient creation of active compound libraries with similar structural features. The coupling of aromatic fragments is commonly used in chemical synthesis because it offers a fast and highly selective method for constructing versatile scaffolds in high yields. Aromatic pharmacophores are particularly suitable for substitution and fragment growth, especially when combined with small molecule-protein crystallography information.
This combination enables precise adjustments in fragment growth and substitution, making it easier to explore structure-activity relationships. Such substitutions and modifications have the potential to enhance therapeutic efficacy and reduce adverse reactions. 4.3. From Phloridzin to Dapagliflozin For 150 years, phlorizin ( 19 , ), a phloroglucinol glucoside, has been known to be present in the roots, stems, and fruit peels of fruit trees. Extensive research has been conducted to explore its potential as a medicinal agent and pharmacological tool. It has been discovered that phlorizin exerts a hypoglycemic effect by inhibiting the sodium-glucose co-transporter 2 (SGLT2) in the renal tubules, leading to the excretion of glucose in the urine and reductions in blood glucose levels. However, phlorizin also inhibits the sodium-glucose co-transporter 1 (SGLT1) in the intestinal mucosa, which limits its use as a drug and may cause side effects. Nevertheless, phlorizin serves as a valuable lead compound for further research. Structural modifications have been made to achieve the following objectives: (1) elimination of the inhibitory effect on SGLT1 while improving selective inhibition of SGLT2; (2) reduction or removal of phenolic hydroxyl groups to decrease phase II metabolism and prolong in vivo retention time; and (3) enhancement of the in vivo stability of the glycosidic bond. Between the two aromatic rings of the dihydrochalcone there are four rotatable bonds, and reducing the number of freely rotatable single bonds can help maintain an active conformation and enhance activity. Through a series of explorations, it was discovered that the benzene rings could be connected by a methylene linker. Compared to phlorizin, the structure of sergliflozin ( 20 , ) has fewer freely rotating atoms and a more rigid molecular framework, thereby enhancing the selectivity of the scaffold. However, sergliflozin still has stability problems and is not available as a drug. The glycosyl group is a pivotal pharmacophore; however, the O-glycosidic linkage exhibits poor metabolic stability, as it is susceptible to cleavage by β-glucosidase enzymes. Through a series of explorations, compound 21 was disclosed, characterized by a C-aryl glucoside and a meta-substituted diarylmethane. This transformation sustained activity and selectivity while bolstering metabolic stability. Starting with compound 21 , a series of structure-activity relationship studies were conducted. Among the C-aryl glucoside compounds, dapagliflozin ( 22 , ) demonstrated exceptional stability and selectivity, with IC50 values for SGLT2 and SGLT1 of 1.1 and 1390 nmol·L−1, respectively, corresponding to more than a thousand-fold selectivity. Jointly developed by BMS and AstraZeneca, it advanced through Phase III clinical trials and earned approval from the European Union in 2012, making it the first SGLT2-targeting drug for type 2 diabetes. At around the same time, a series of SGLT2 inhibitors were launched, such as canagliflozin ( 23 , ), empagliflozin ( 24 , ), and ipragliflozin ( 25 , ), developed independently by different companies. These molecules have almost the same pharmacophore features and similar scaffolds. Tofogliflozin ( 26 , ) is a non-glucoside SGLT2 inhibitor and possesses high selectivity and good absorption characteristics.
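The selectivity figures quoted above can be sanity-checked directly; the short Python sketch below simply re-expresses the reported potencies as fold-selectivities. It is a back-of-the-envelope illustration, not an analysis taken from the original papers.

```python
# Fold-selectivity implied by the potencies quoted in the text (illustrative only).
def fold_selectivity(on_target_nm: float, off_target_nm: float) -> float:
    """Ratio of off-target to on-target potency; higher means more selective."""
    return off_target_nm / on_target_nm

# dapagliflozin: IC50 = 1.1 nM (SGLT2) vs 1390 nM (SGLT1)
print("dapagliflozin, SGLT1/SGLT2:", round(fold_selectivity(1.1, 1390)))  # ~1264-fold
# siponimod: EC50 = 0.4 nM (S1P1) vs 5 uM = 5000 nM (S1P3)
print("siponimod, S1P3/S1P1:", round(fold_selectivity(0.4, 5000)))        # 12500-fold
```

The roughly 1264-fold ratio for dapagliflozin is what the text summarizes as "more than a thousand-fold" selectivity.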
During the early stages of development, because no crystal structure of a phlorizin-protein complex was available, the specific binding mode between the target protein and the compound was not well understood. Researchers relied primarily on traditional medicinal chemistry optimization methods, such as scaffold hopping and structure-activity relationship studies, to guide compound modifications. Although the details of the interaction between phlorizin and the target protein could not be directly elucidated, researchers were still able to achieve excellent therapeutic effects through compound improvements. Since 2022, however, several reports have emerged describing crystal structures of complexes between the SGLT2 protein and small molecules, revealing the binding modes of these compounds with the protein. The latest small molecule-protein co-crystallization analyses show that, despite the structural similarities between the natural compound phlorizin and the marketed drugs, they are not located in the same active pocket ( a). This work provides a framework for understanding the mechanism of SGLT2 inhibitors and lays a foundation for the future rational design and optimization of new inhibitors targeting these transporters. The finding provides new directions and opportunities for further research and enables scientists to explore non-glycoside SGLT2 inhibitors with different structural scaffolds. While glycoside-based SGLT2 inhibitors derived from root extracts have shown promising efficacy in medical practice, the development of non-glycoside SGLT2 inhibitors remains of significant scientific and commercial value. Compounds 29 , 30 , and 31 were identified as structurally novel non-glycoside SGLT2 inhibitors through a ligand-based virtual screening strategy combined with pharmacophore models and structural clustering analysis. Exploration of the structure optimization and structure-activity relationships of compounds 29 , 30 , and 31 is currently underway, and more researchers are expected to contribute to the development of novel SGLT2 inhibitors in the future. In summary, these advanced technologies significantly enhance the speed and efficiency of drug development. Advanced structural biology techniques such as cryo-electron microscopy can be used to determine the co-crystal structures of natural compounds and proteins, thereby identifying their binding sites, binding strength, binding modes, and effects on organisms. This contributes to our understanding of the mechanisms of action of active molecules in vivo and provides guidance for drug design and discovery. For compounds or targets that are difficult to co-crystallize, AlphaFold can be used to predict the binding modes of small molecules with proteins. While there may be some deviation between the predicted results and the actual binding state, such predictions still provide relevant information for each stage of modifying active natural products. 4.4. Structure Simplification Structure simplification serves as a strategic approach to optimizing natural products. By reducing the complexity of natural product structures while maintaining their activity (or accepting only a modest loss of it), simplification makes them easier to synthesize and facilitates comprehensive exploration of structure-activity relationships for further studies. Take morphine, for example: none of the five chiral centers in morphine is essential for binding to opioid receptors.
Although methadone ( 32 , ) and pethidine ( 33 , ) contain one chiral carbon, there is no difference in activity between their enantiomers. Fentanyl ( 34 , ) is an achiral molecule. Compared to the complex skeleton of morphine, fentanyl is simpler and easier to synthesize. The elimination of chiral centers and stereochemical constraints is characteristic of this class of drugs with simplified frameworks and has made systematic investigation of structure-activity relationships, solubility, absorption, and metabolism feasible. Similarly, the ring system of the alkaloid cocaine ( 35 , ) was cleaved, simplifying its structure and allowing the synthesis of non-chiral local anesthetics such as procaine ( 36 , ), tetracaine ( 37 , ), and lidocaine ( 38 , ). Physostigmine ( 39 , ) is a parasympathomimetic alkaloid and a reversible cholinesterase inhibitor. Due to its chemical instability in the body, modified compounds such as pyridostigmine bromide ( 40 , ) and neostigmine bromide ( 41 , ) have been developed. Unlike physostigmine, which is a tertiary amine, these synthetic quaternary ammonium salts have limited penetration into the central nervous system, resulting in a lower likelihood of adverse effects such as orthostatic hypotension. However, they still effectively improve muscle tone in patients with myasthenia gravis.
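To make the point about removing stereocenters concrete, the following RDKit sketch (not taken from the review) counts potential stereocenters and rotatable bonds for several of the simplified analogues named above. The SMILES strings are standard structures written from memory and should be treated as assumptions to verify before any real use.

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# Simplified synthetic analogues discussed above (SMILES assumed; verify before reuse).
analogues = {
    "methadone": "CCC(=O)C(c1ccccc1)(c2ccccc2)CC(C)N(C)C",
    "fentanyl":  "CCC(=O)N(c1ccccc1)C2CCN(CCc3ccccc3)CC2",
    "procaine":  "CCN(CC)CCOC(=O)c1ccc(N)cc1",
    "lidocaine": "CCN(CC)CC(=O)Nc1c(C)cccc1C",
}

for name, smiles in analogues.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # guard against a mistyped SMILES
        print(f"{name}: SMILES failed to parse")
        continue
    stereocentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    rotatable = rdMolDescriptors.CalcNumRotatableBonds(mol)
    print(f"{name:10s} potential stereocentres: {len(stereocentres)}  rotatable bonds: {rotatable}")
```

Methadone is expected to show a single potential stereocentre and the others none, which is the property that makes exhaustive analogue synthesis and SAR work on these frameworks so much easier than on morphine itself.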
The scaffolds derived from natural products can be dissected into simpler and more easily synthesized fragment-like scaffolds, either manually or computationally. These scaffolds inherit distinct conformational and physicochemical features from the original natural product templates, making them suitable for exploring chemically relevant space with biological activity. The general design strategy follows a flow-process diagram, involving the gradual simplification of the structural complexity of the parent compound (natural product) into virtual fragments, resulting in the formation of small and chemically appealing scaffolds ( a). Natural products have provided inspiration, while computer programs have enhanced the efficiency of rational molecular design. Koch et al. designed a compound library from the marine product dibromo-dysidiolide, where 19% of the compounds showed activity as inhibitors of 11β-hydroxysteroid dehydrogenase type I ( b). Similarly, adopting the concept of extracting scaffolds, Wetzel released the Scaffold Hunter software ( https://scaffoldhunter.sourceforge.net ) in 2009, which employs deconvolution analysis of structurally complex natural products to obtain virtual skeletal trees, thus making the chemical structure data of complex bioactive substances more intuitive. The structural classification of natural products (SCONP) is an organizing principle for charting the known chemical space explored by nature. SCONP arranges the scaffolds of natural products in a tree-like fashion and provides a viable analysis and hypothesis-generating tool for the design of natural product-derived compound collections. Compared to high-throughput screening, this approach has a higher hit rate. However, the activity of compounds obtained through molecular design based on natural product fragments often ranges from weak to moderate, making subsequent structural optimization an indispensable step in improving activity. The discovery of early natural active drugs occurred before a complete understanding of disease mechanisms, and researchers initially relied on animal pathological models for drug screening. Later, with advancements in molecular pathology, studies began to focus on specific enzymes or proteins as targets for drug screening.
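As a minimal illustration of the computational scaffold dissection described above (and not the Scaffold Hunter or SCONP procedure itself), RDKit's Bemis-Murcko utilities can strip a molecule down to its ring-plus-linker framework. The caffeine SMILES is used here only as a convenient stand-in input.

```python
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

# Stand-in input molecule; replace with the natural product of interest.
mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")  # caffeine

scaffold = MurckoScaffold.GetScaffoldForMol(mol)        # ring systems plus linkers, side chains removed
generic = MurckoScaffold.MakeScaffoldGeneric(scaffold)  # same framework with atom and bond types erased

print("Murcko scaffold:   ", Chem.MolToSmiles(scaffold))
print("generic framework: ", Chem.MolToSmiles(generic))
```

Applying this kind of reduction across a natural product collection, and then arranging the resulting frameworks hierarchically, is the basic idea behind the scaffold trees mentioned above.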
Due to the development of computer technology, bioinformatics, and structural genomics, an increasing number of important protein targets and their crystal structures, such as adrenergic receptors, potassium ion channels, and sodium-calcium exchangers, have been resolved. This has made it possible to use crystallographic structures for the screening and design of natural active compounds. Unlike traditional high-throughput screening methods based on cells and enzymes, virtual screening based on protein structures can greatly shorten the time and reduce the cost of obtaining active natural products. After active compounds are obtained through virtual screening, further activity experiments can be conducted to verify the results. Combining computational simulations with experimental results, especially the mutual verification of structure-activity relationships and computational predictions, can guide the next step of drug design. With a clear structure of the drug target protein, computer simulations can be used to model the binding of drug molecules to target proteins, and corresponding complex crystal structures can even be obtained directly. This provides information for the targeted modification of drug molecules. Therefore, virtual screening based on protein structures and the resolution of complex crystal structures provide a faster and more effective way to obtain and optimize active natural compounds. Currently, the main structural biology techniques for obtaining protein target structures include X-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy, and cryo-electron microscopy (cryo-EM) three-dimensional reconstruction. However, these methods may not be applicable to some complex proteins, which limits the study of many important proteins and hampers drug design. To overcome this limitation, DeepMind has developed protein structure prediction software based on neural networks, known as AlphaFold ( https://alphafold.com ). AlphaFold is trained on large-scale protein structure databases and can accurately predict protein folding, secondary structure, domain contacts, and other information. This technology can quickly and accurately predict protein structures, which is very useful for drug design and virtual screening. By predicting protein structures, researchers can use computer simulations to predict the interactions between drug molecules and target proteins, and thus predict the inhibitory activity, affinity, and selectivity of drugs. This approach can accelerate the drug development process and provide more targeted drug design strategies. AlphaFold's high accuracy in protein structure prediction has demonstrated its advantages; however, for some complex proteins and protein complexes, challenges remain and further improvement is needed. Protein structure databases can also provide references for drug design and virtual screening, helping scientists quickly find drug candidates related to specific targets. There are multiple protein structure databases worldwide; the Protein Data Bank (PDB) was established by the Brookhaven National Laboratory in the United States in 1971. It is an open-access molecular structure database that is currently maintained by the Research Collaboratory for Structural Bioinformatics (RCSB) and is considered the most important database in the field of structural biology.
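As a small practical aside on the PDB discussion, experimentally determined entries can be pulled programmatically for docking or inspection. The sketch below assumes the standard RCSB file-download endpoint and uses the S1P1 structure 3V2Y cited earlier in this review purely as an example.

```python
import urllib.request

pdb_id = "3V2Y"  # S1P1 receptor structure referenced earlier; swap in any PDB ID
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

with open(f"{pdb_id}.pdb", "w") as handle:
    handle.write(pdb_text)

# Quick sanity check: count coordinate records in the downloaded file.
n_records = sum(line.startswith(("ATOM", "HETATM")) for line in pdb_text.splitlines())
print(pdb_id, "coordinate records:", n_records)
```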
The majority of its data comes from experimentally determined three-dimensional structures of biomolecules, including proteins as well as some nucleic acids, sugars, and complexes formed by nucleic acids and proteins. In the drug screening process, the quantity and quality of compounds in the library are crucial; these compounds need to be representative and able to cover as many active chemical skeleton types as possible. Some common natural product databases are shown in , several of which are well known and some of which overlap with each other. The initial compounds obtained from virtual screening are subsets of the compound databases used in the screening process and require further experimental validation, such as confirmation of compound activity and of binding affinity to the target protein. Molecular docking prediction involves estimating the binding free energy, i.e., the binding affinity, between small-molecule compounds and target proteins. However, for enzymes and receptors, predicting binding affinity alone is insufficient, and more specific experiments are needed to determine whether the small-molecule compounds act as inhibitors (antagonists) or activators. Several techniques directly measure the binding affinity between small molecules and target proteins, for example, microscale thermophoresis (MST), isothermal titration calorimetry (ITC), and surface plasmon resonance (SPR). MST measures the strength and rate of binding between small-molecule compounds and target proteins using a microscale temperature gradient. ITC determines binding constants and thermodynamic parameters by measuring the heat released or absorbed during the interaction between small-molecule compounds and target proteins. SPR quantifies binding affinity and kinetic parameters by monitoring changes in reflected light caused by the binding of small-molecule compounds to target proteins. Natural products are secondary metabolites produced by organisms for their own growth and propagation, rather than being specifically designed for the treatment of human diseases. Because their inherent activity is often accompanied by pharmaceutical limitations, structural modifications are necessary. These modifications should be personalized and tailored to address the specific properties and limitations of the natural products under study, with the aim of optimizing therapeutic efficacy, pharmacokinetics, safety, and biopharmaceutical characteristics. Looking at successful examples of natural products evolving into drugs, the extent of structural change can vary from drastic transformation to minor alterations involving only a few atoms or functional groups. While there is no fixed pattern, the underlying principles and concepts of structural modification remain consistent, and refining structures through such modifications aims to enhance potency and selectivity. Modern technologies such as cryo-electron microscopy (cryo-EM), AlphaFold, and natural compound databases have revolutionized the field of drug discovery and development, offering exciting new possibilities for modifying bioactive natural products. By utilizing these technologies and resources, researchers can apply structure-based drug design in medicinal chemistry to optimize bioactive natural products and improve various aspects of their properties, ultimately developing safer and more effective natural drugs.
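To relate the binding constants produced by MST, ITC, or SPR to an energy scale, the dissociation constant can be converted to a standard binding free energy via ΔG° = RT ln(Kd/c°) with c° = 1 M. The snippet below is a generic worked example, not taken from any cited study, and treating an IC50 or EC50 as a Kd is only a rough approximation.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K

def binding_free_energy_kj(kd_molar: float) -> float:
    """Standard binding free energy in kJ/mol for a dissociation constant in mol/L."""
    return R * T * math.log(kd_molar) / 1000.0

for kd in (1e-6, 1e-9):  # micromolar and nanomolar binders
    dg = binding_free_energy_kj(kd)
    print(f"Kd = {kd:.0e} M  ->  dG ~ {dg:.1f} kJ/mol ({dg / 4.184:.1f} kcal/mol)")
```

A nanomolar binder thus corresponds to roughly -51 kJ/mol (about -12 kcal/mol) under these assumptions, which is a useful yardstick when comparing virtual screening hits with measured affinities.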
Association Between Cancer Screening Patterns and Carer Literacy in Individuals With Cognitive Decline: An Observational Study

Background The incidence rates of dementia, mild cognitive impairment (MCI), and cancer increase with age, presenting a complex challenge for individuals with cognitive impairment and their families. In particular, as cognitive decline progresses, questions may arise regarding the value of continued cancer screening. Given the significant correlation between an individual's level of health literacy and that of their family members, we hypothesized that health literacy might serve as a social network resource that influences health-related behaviors among individuals with cognitive impairment. This association underscores the importance of addressing the psychological and practical aspects of cancer screening in this population, in order to balance the benefits of early detection with the nuanced needs of individuals with cognitive impairment and their carers. In Japan, the Ministry of Health, Labor, and Welfare (MHLW) recommends specific cancer screening schedules based on sex and age. Men are advised to undergo gastric, lung, and colorectal cancer screenings from the ages of 50, 40, and 40 years, respectively, while women are recommended to undergo cervical and breast cancer screenings from the ages of 20 and 40 years, respectively. Annual colorectal and lung cancer screenings are advised, whereas screenings for all other cancer types are recommended biennially. Figure provides an overview of the different settings where cancer screenings are typically conducted in Japan—including community health checkups, workplace health checkups, and other types of screenings. However, guidelines for individuals with dementia, particularly those with severe dementia, remain inconsistent and unclear. For example, while some studies suggest limited benefits of such screening in persons with severe dementia, there is less consensus regarding persons with MCI or mild dementia. Individuals with dementia generally have a shorter life expectancy than the general population; however, recent data suggest that life expectancy post-diagnosis may be increasing, with individuals with MCI experiencing longer survival times. In our cohort of patients in a Japanese memory clinic, those with MCI showed a median survival time of > 3000 days, indicating lower mortality rates than those of individuals with dementia. While pneumonia is reportedly a common cause of death, our cohort had more deaths attributed to cancer, indicating the importance of cancer screening in memory clinic patients. As cognitive function declines, affected individuals rely on carers for decision-making—particularly as their ability to perform daily activities decreases. Carers play a critical role in navigating complex healthcare decisions, including cancer screenings. This relationship underpins our focus on carers' health literacy when examining patient screening behaviors in this study. Cognitive decline can make more complex breast cancer screenings, such as mammography, particularly challenging. Carers often experience decisional conflict and increased burden, potentially owing to a lack of both knowledge and confidence regarding making decisions related to mammography for relatives with Alzheimer's disease and related dementias. Ishikawa et al.
indicated that health literacy within a family is interrelated, highlighting the significance of health literacy in carers. Therefore, this study focused on health literacy in carers, which has been shown to be associated with better health-related decision-making and outcomes, as well as a reduced burden of care. We hypothesized that health literacy in carers influences cancer screening behaviors in patients and that adequate information and support for carers might encourage better participation in cancer screening. This study specifically aimed to (1) compare cancer screening attendance rates between the general Japanese population and a patient cohort from a Japanese memory clinic and (2) analyze the impact of carer health literacy on the screening behaviors of the individuals they care for.
Methods 2.1 Study Cohort This study was a follow‐up survey to the Japanese National Center for Geriatrics and Gerontology (NCGG)—Life STORIES of Individuals with Dementia study . This represents a comprehensive dataset that includes clinical records and prognostic data from patients diagnosed with various types of dementia or MCI who visited the NCGG's memory clinic between July 2010 and September 2018. In December 2022, we expanded the target period and number of participants to include patients who visited the clinic between July 2010 and July 2022—allowing us to incorporate more recent data concerning cancer screening behaviors within this population, focusing specifically on the influence of carer health literacy. Ethical approval for this study was obtained from the institutional review board (IRB) of the NCGG (approval number: 1661). 2.2 Participants Participants were drawn from a cohort of 5148 patients aged ≥ 65 years who visited the NCGG's memory clinic between July 2010 and July 2022. Primary carers were surveyed to report on their patients' cancer‐screening behaviors. We excluded responses from the patients themselves, non‐primary carers, and individuals with unspecified relationships. We also excluded carers of patients who were deceased or had moderate‐to‐severe dementia, as the study focused only on patients with MCI or mild dementia, where cancer screening remains relevant. Responses with missing information concerning the outcome or main explanatory variables were also excluded, resulting in a final analytical cohort of 826 primary carers. 2.3 Measurements 2.3.1 Outcome The primary outcome was the cancer screening attendance rate among the patients over the preceding 2 years, as reported by the primary carers through their responses to a questionnaire. This approach ensured accurate data collection from the population with cognitive decline, who might not have accurately recalled their screening attendance rates. The questionnaire mirrored the screening items recommended by the Japanese MHLW, including gastric, lung, colorectal, cervical, and breast cancer screenings . Prostate cancer, although prevalent among males in Japan, was not included in this study because it is not part of the MHLW‐recommended screenings for this target population. To ensure comparability with national data, these screenings were aligned with the Comprehensive Survey of Living Conditions conducted by the MHLW, which excludes individuals with dementia. Each cancer screening type was defined using the following specific diagnostic methods: (1) barium X‐ray radiography or endoscopy (gastroscopy or fibroscopy) for gastric cancer; (2) chest X‐ray radiography and sputum tests for lung cancer; (3) fecal occult blood tests for colorectal cancer; (4) Pap smear for cervical cancer; and (5) mammography or breast ultrasonography for breast cancer. This approach facilitated accurate recall and reporting by carers, enhancing data reliability. Our subsequent comparison of the cohort's screening attendance rates with publicly available national data aimed to contextualize our findings within the broader landscape of cancer screening in Japan—while noting the exclusion of individuals with dementia from the national survey . As part of our questionnaire, the primary carers were asked to indicate any reasons they had for not enforcing regular cancer screenings in their patients, by selecting from 14 possible options. These options were consistent with those used in the Comprehensive Survey of Living Conditions. 
They consisted of “Didn't know about it”; “No time”; “Location was too far”; “Costs money”; “Anxious about tests (bloodwork, endoscopy)”; “Was going to/admitted at a medical facility at the time”; “Don't feel need to get it annually”; “Confident in health, don't feel need”; “Can go to the doctor anytime if needed”; “Worried about results, don't want to get it”; “Too much hassle”; “Concerned about COVID‐19 infection”; “Missed opportunity due to COVID‐19 impact”; and “Other.” The carers were instructed to select all applicable reasons. 2.3.2 Exposure The carers' levels of communicative and critical health literacy (CCHL) were assessed using a questionnaire that focused on their levels of confidence regarding managing health‐related information. The CCHL scale is based on a validated questionnaire developed by Ishikawa et al. in 2008 for the Japanese population . It has been extensively validated and is widely used in public health research in Japan, thus indicating its external validity in this context . Communicative health literacy involves the ability to seek, understand, and use health‐related information to address health problems—including the skills to collect, comprehend, and communicate health‐related needs effectively . Critical health literacy entails higher‐level skills for analyzing health information, evaluating source credibility, and making informed decisions based on critical assessments . The scale consists of five items, with three measuring communicative health literacy and two measuring critical health literacy, rated on a 5‐point scale from 1 (never) to 5 (strongly agree). It has demonstrated high internal consistency (Cronbach's alpha = 0.86), with all item‐total correlations being positive (0.77–0.85). The CCHL scores of our cohort were dichotomized into low and high groups, based on the median scores. This decision to use median scores follows the methodology established by Ishikawa et al. and subsequent studies that used the CCHL scale . The cutoffs of 11 for communicative health literacy (items i–iii) and 7 for critical health literacy (items iv and v) were determined based on these median values . 2.3.3 Patient‐Related Variables Information regarding the patients—such as age at the study's commencement, sex, and dementia type at initial diagnosis—was extracted from the NCGG memory center's database. Dementia types were determined by specialist physicians using neuropsychological assessments to classify cognitive functions. The MCI and dementia definitions were based on the criteria set by the US National Institute on Aging‐Alzheimer's Association. Dementia was categorized as Alzheimer's disease , dementia with Lewy bodies or Parkinson's disease , vascular dementia , and frontotemporal lobar degeneration . Besides initial diagnosis data, carers were asked to evaluate the current cognitive states of their patients, selecting from “suspected dementia (no daily life impairment but a feeling of cognitive dysfunction)”; “mild (occasional reminders or supervision needed, with some forgetfulness concerning recent events or upcoming tasks)”; “moderate (assistance needed with personal care, difficulty with dressing or eating independently)”; or “severe (constant care required, significant communication difficulties)”; to provide an up‐to‐date reflection of the patient's condition. This carer‐reported cognitive state was crucial for determining the inclusion and exclusion of participants, as is shown in Figure . 
The specialist‐determined dementia type was used primarily as a reference for initial diagnosis (Table ). The level of care required by the patients was assessed according to the Japanese long‐term care (LTC) system, which categorizes care needs into seven levels: “Support Required” (Levels 1 and 2) and “Long‐Term Care Required” (Levels 1–5), with Level 1 being the least disabled and Level 5 the most . Eligibility was determined using a standardized 74‐item questionnaire focused on activities of daily living (ADLs). This initial assessment was followed by a decision made by a long‐term care approval board, which included healthcare and social welfare experts. The board's decision was based on computer‐generated results, a home‐visit report, and a medical doctor's opinion. In this study, the carers reported the LTC level assigned to their patients through the questionnaire. Based on prior research, we grouped “‘Support Required” and “Long‐Term Care Level 1” into one category and “Long‐Term Care Level 2 or Higher” into another, with the latter indicating severe functional disability . 2.3.4 Carer‐Related Information Carer information such as sex, age, and relationship to the patient was collected through the questionnaire. 2.4 Statistical Analysis Statistical analysis commenced with descriptive statistics for continuous variables (expressed as means and standard deviations) and categorical variables (expressed as frequencies and percentages). Continuous variables were analyzed using Student's t ‐tests if they had normal distributions, and Mann–Whitney U tests for any with non‐normal distributions—based on the Shapiro–Wilk test for assessing data distribution normality. Specifically, the patients' ages, carers' ages, and follow‐up periods from initial diagnosis to survey were analyzed using the Mann–Whitney U test, because these variables did not follow normal distributions. Categorical variables were analyzed using chi‐squared tests for comparisons between sexes. Given that the Japanese MHLW provides different cancer screening recommendations for males and females, we separated the data by sex to ensure that our findings accurately reflected these differences. Cancer screening attendance rates across different age groups (65–74, 75–84, and ≥ 85 years) were calculated and compared to national data using Chi‐squared tests. The relationship between cancer screening attendance and carer health literacy was assessed using odds ratios, with health literacy classified into low and high groups based on CCHL median scores. We first calculated unadjusted odds ratios (ORs) to evaluate the association between health literacy and screening attendance. Binary logistic regression was then used to estimate adjusted ORs (AORs) and 95% confidence intervals (CIs) while accounting for potential confounders. Each literacy component was added to the model independently, adjusting for potential confounders such as the patient's age, required level of care, and dementia type at initial diagnosis, as well as the carers' age, sex, and relationship to the patient. The theoretical justification for these adjustments was based on the well‐established influence of these factors on healthcare access and decision‐making in patients with cognitive impairment. Age and level of care have been shown to be critical determinants of both cognitive function and the ability to engage in healthcare activities . 
Dementia type has been shown to influence the progression of cognitive decline and the associated burden on carers, which in turn can impact health‐related decision‐making . Similarly, the demographic characteristics of the carers, such as their age, sex, and relationship to the patient, represent key variables that affect the dynamics of caregiving and the decision‐making process regarding cancer screenings . We initially considered including the patient's required level of care, dementia type, and the carer's relationship to the patient as potential covariates. However, the level of care did not significantly influence carer health literacy and appeared to function more as an intermediary factor. Similarly, dementia type was not associated with carer health literacy. Therefore, both variables were excluded from the final model. By including these covariates, our analysis aimed to isolate the specific impact of carer health literacy on cancer screening attendance, ensuring that the observed associations were not confounded by these critical factors. Separate analyses were performed for each cancer screening type.
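To make the analysis plan above concrete, the following Python sketch shows how an unadjusted odds ratio and a covariate-adjusted odds ratio for screening attendance by carer health literacy could be computed. It is illustrative only and is not the authors' code: the data are simulated, and the variable names (screened, high_communicative, and so on) are placeholders standing in for the actual study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 826  # size of the analytical cohort of primary carers described above

# Simulated stand-in data; in the real analysis these come from the questionnaire.
df = pd.DataFrame({
    "screened": rng.integers(0, 2, n),            # any screening attended in the past 2 years (0/1)
    "high_communicative": rng.integers(0, 2, n),  # carer CCHL items i-iii, dichotomised at the median (11)
    "patient_age": rng.normal(80, 6, n).round(),
    "carer_age": rng.normal(62, 10, n).round(),
    "carer_female": rng.integers(0, 2, n),
})

# Unadjusted odds ratio from the 2x2 cross-tabulation.
tab = pd.crosstab(df["high_communicative"], df["screened"])
odds_high = tab.loc[1, 1] / tab.loc[1, 0]
odds_low = tab.loc[0, 1] / tab.loc[0, 0]
print("unadjusted OR:", odds_high / odds_low)

# Adjusted odds ratio via binary logistic regression with the stated covariates.
model = smf.logit(
    "screened ~ high_communicative + patient_age + carer_age + carer_female",
    data=df,
).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```

Separate models of this form, fitted for each screening type and each literacy component, mirror the approach described in the Methods.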
Study Cohort This study was a follow‐up survey to the Japanese National Center for Geriatrics and Gerontology (NCGG)—Life STORIES of Individuals with Dementia study . This represents a comprehensive dataset that includes clinical records and prognostic data from patients diagnosed with various types of dementia or MCI who visited the NCGG's memory clinic between July 2010 and September 2018. In December 2022, we expanded the target period and number of participants to include patients who visited the clinic between July 2010 and July 2022—allowing us to incorporate more recent data concerning cancer screening behaviors within this population, focusing specifically on the influence of carer health literacy. Ethical approval for this study was obtained from the institutional review board (IRB) of the NCGG (approval number: 1661).
Participants Participants were drawn from a cohort of 5148 patients aged ≥ 65 years who visited the NCGG's memory clinic between July 2010 and July 2022. Primary carers were surveyed to report on their patients' cancer‐screening behaviors. We excluded responses from the patients themselves, non‐primary carers, and individuals with unspecified relationships. We also excluded carers of patients who were deceased or had moderate‐to‐severe dementia, as the study focused only on patients with MCI or mild dementia, where cancer screening remains relevant. Responses with missing information concerning the outcome or main explanatory variables were also excluded, resulting in a final analytical cohort of 826 primary carers.
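The cohort selection described above is essentially a chain of exclusion filters. A minimal sketch of that logic is given below; the column names (respondent, relationship, patient_status, carer_reported_severity, and the outcome and exposure fields) are hypothetical placeholders rather than the survey's actual variable names.

import pandas as pd

def select_analytic_cohort(survey: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion criteria described above to a hypothetical survey table."""
    keep = survey.copy()
    # Keep only questionnaires completed by a primary carer with a specified relationship.
    keep = keep[keep["respondent"] == "primary_carer"]
    keep = keep[keep["relationship"].notna()]
    # Restrict to living patients whose carer-reported state is MCI or mild dementia.
    keep = keep[keep["patient_status"] == "alive"]
    keep = keep[keep["carer_reported_severity"].isin(["suspected_dementia", "mild"])]
    # Drop rows missing the outcome or the main exposure variables.
    keep = keep.dropna(subset=["screening_attended", "communicative_hl", "critical_hl"])
    return keep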
Measurements
2.3.1 Outcome The primary outcome was the cancer screening attendance rate among the patients over the preceding 2 years, as reported by the primary carers through their responses to a questionnaire. This approach ensured accurate data collection from the population with cognitive decline, who might not have accurately recalled their screening attendance rates. The questionnaire mirrored the screening items recommended by the Japanese MHLW, including gastric, lung, colorectal, cervical, and breast cancer screenings. Prostate cancer, although prevalent among males in Japan, was not included in this study because it is not part of the MHLW-recommended screenings for this target population. To ensure comparability with national data, these screenings were aligned with the Comprehensive Survey of Living Conditions conducted by the MHLW, which excludes individuals with dementia. Each cancer screening type was defined using the following specific diagnostic methods: (1) barium X-ray radiography or endoscopy (gastroscopy or fibroscopy) for gastric cancer; (2) chest X-ray radiography and sputum tests for lung cancer; (3) fecal occult blood tests for colorectal cancer; (4) Pap smear for cervical cancer; and (5) mammography or breast ultrasonography for breast cancer. This approach facilitated accurate recall and reporting by carers, enhancing data reliability. Our subsequent comparison of the cohort's screening attendance rates with publicly available national data aimed to contextualize our findings within the broader landscape of cancer screening in Japan, while noting the exclusion of individuals with dementia from the national survey. As part of our questionnaire, the primary carers were asked to indicate any reasons they had for not enforcing regular cancer screenings in their patients, by selecting from 14 possible options. These options were consistent with those used in the Comprehensive Survey of Living Conditions. They consisted of “Didn't know about it”; “No time”; “Location was too far”; “Costs money”; “Anxious about tests (bloodwork, endoscopy)”; “Was going to/admitted at a medical facility at the time”; “Don't feel need to get it annually”; “Confident in health, don't feel need”; “Can go to the doctor anytime if needed”; “Worried about results, don't want to get it”; “Too much hassle”; “Concerned about COVID-19 infection”; “Missed opportunity due to COVID-19 impact”; and “Other.” The carers were instructed to select all applicable reasons.
2.3.2 Exposure The carers' levels of communicative and critical health literacy (CCHL) were assessed using a questionnaire that focused on their levels of confidence regarding managing health-related information. The CCHL scale is based on a validated questionnaire developed by Ishikawa et al. in 2008 for the Japanese population. It has been extensively validated and is widely used in public health research in Japan, thus indicating its external validity in this context. Communicative health literacy involves the ability to seek, understand, and use health-related information to address health problems, including the skills to collect, comprehend, and communicate health-related needs effectively. Critical health literacy entails higher-level skills for analyzing health information, evaluating source credibility, and making informed decisions based on critical assessments.
The scale consists of five items, with three measuring communicative health literacy and two measuring critical health literacy, rated on a 5-point scale from 1 (never) to 5 (strongly agree). It has demonstrated high internal consistency (Cronbach's alpha = 0.86), with all item-total correlations being positive (0.77–0.85). The CCHL scores of our cohort were dichotomized into low and high groups, based on the median scores. This decision to use median scores follows the methodology established by Ishikawa et al. and subsequent studies that used the CCHL scale. The cutoffs of 11 for communicative health literacy (items i–iii) and 7 for critical health literacy (items iv and v) were determined based on these median values.
2.3.3 Patient-Related Variables Information regarding the patients—such as age at the study's commencement, sex, and dementia type at initial diagnosis—was extracted from the NCGG memory center's database. Dementia types were determined by specialist physicians using neuropsychological assessments to classify cognitive functions. The MCI and dementia definitions were based on the criteria set by the US National Institute on Aging-Alzheimer's Association. Dementia was categorized as Alzheimer's disease, dementia with Lewy bodies or Parkinson's disease, vascular dementia, and frontotemporal lobar degeneration. Besides initial diagnosis data, carers were asked to evaluate the current cognitive states of their patients, selecting from “suspected dementia (no daily life impairment but a feeling of cognitive dysfunction)”; “mild (occasional reminders or supervision needed, with some forgetfulness concerning recent events or upcoming tasks)”; “moderate (assistance needed with personal care, difficulty with dressing or eating independently)”; or “severe (constant care required, significant communication difficulties)”; to provide an up-to-date reflection of the patient's condition. This carer-reported cognitive state was crucial for determining the inclusion and exclusion of participants, as is shown in Figure .
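For concreteness, a minimal sketch of the CCHL scoring and median-based dichotomization described in the Exposure subsection is shown below. The item column names and example responses are hypothetical, the cutoffs of 11 and 7 are the medians reported above, and the handling of scores equal to the cutoff is an assumption, since the text does not specify it.

import pandas as pd

# Hypothetical item-level responses (1-5) for the five CCHL items; not study data.
responses = pd.DataFrame(
    [[4, 3, 4, 3, 4],
     [2, 2, 3, 2, 2],
     [5, 4, 4, 4, 3]],
    columns=["item_i", "item_ii", "item_iii", "item_iv", "item_v"],
)

# Communicative HL = sum of items i-iii; critical HL = sum of items iv-v.
responses["communicative"] = responses[["item_i", "item_ii", "item_iii"]].sum(axis=1)
responses["critical"] = responses[["item_iv", "item_v"]].sum(axis=1)

# Dichotomize at the reported median cutoffs; treating scores above the cutoff as
# "high" is an assumption, since the text does not state where ties fall.
COMM_CUTOFF, CRIT_CUTOFF = 11, 7
responses["comm_group"] = (responses["communicative"] > COMM_CUTOFF).map({True: "high", False: "low"})
responses["crit_group"] = (responses["critical"] > CRIT_CUTOFF).map({True: "high", False: "low"})
print(responses[["communicative", "critical", "comm_group", "crit_group"]])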
Results
3.1 Participant Inclusion Of the initial 5148 patients identified, 1439 did not respond to the survey invitation (“non-responses”) and 146 declined to participate (“non-consent”), leaving 3563 responses. After excluding 899 responses from patients, 385 from non-primary carers, and 43 from individuals with unspecified relationships, 2236 primary carer responses were retained. Further exclusions were made for cases where the patient was deceased (n = 317) or had moderate-to-severe dementia (n = 1000), as well as for responses with missing key data (n = 70). This resulted in a final analytical cohort of 826 primary carers (Figure ).
3.2 Baseline Characteristics Among the 826 patients analyzed, 517 and 309 were women (62.6%, average age = 83.9 years) and men (37.4%, average age = 81.5 years), respectively. The median age of the primary carers was 67.0 years, and 65.9% were women. The most common relationship to their patient was biological child (47.5%), followed by spouse (40.7%; Table ). Biological children were significantly more common among female patients (59.0%), while spouses were more prevalent among male ones (65.4%). The most frequent care requirement categories were support required at Levels 1–2 and Care Level 1 (48.1%). The most common initial diagnosis was MCI (35.7%), followed by Alzheimer's disease (34.6%). Health literacy variables were assessed using the CCHL scale. The median score for communicative health literacy was 11 (interquartile range [IQR]: 9–13), and that for critical health literacy was 7 (IQR: 6–8). Based on these median scores, the carers were dichotomized into low and high groups for each literacy component. In the communicative health literacy domain, 44.6% of the carers fell into the low group, while 55.4% were in the high one. For critical health literacy, 47.6% were in the low group and 52.4% were in the high one.
3.3 Cancer Screening Attendance and Its Association With Carer Health Literacy The cancer screening attendance rates varied according to age and sex (Figure ). For female patients, the rates decreased with age across all recommended screenings. Gastric cancer screenings were attended by 24% of women aged 65–74 years, which dropped to 8% in those aged > 85 years. Lung and colorectal cancer screenings exhibited similar age-related declines. Cervical and breast cancer screening attendance in the memory clinic cohort was substantially lower than the national averages, particularly for women aged 65–74 years, of whom only 11% were screened for breast cancer (a significant drop from the 32% national average). Although this disparity decreased in the oldest age groups, it remained significant. The male patients exhibited higher gastric, lung, and colorectal cancer screening rates than the female ones, particularly those aged 65–74 years. Compared to the national data from the Comprehensive Survey of Living Conditions, significantly lower attendance rates were observed in the memory clinic cohort among women across all age groups, particularly for gastric, cervical, and breast cancers. For men, significant differences were noted in the older age groups for gastric cancer screening. The most commonly reported reasons for not attending cancer screenings were “Can go to the doctor anytime if needed” (46.8%), “Was going to/was admitted at a medical facility at the time” (16.4%), “Concerned about COVID-19 infection” (10.9%), and “No time” (8.9%).
Our analysis of the association between the health literacy levels of carers (subdivided into communicative and critical) and cancer screening attendance rates of their patients revealed higher attendance rates in the high group vs. the low one across both subscales (Tables and ). For female patients, those whose carers had higher levels of communicative health literacy were significantly more likely to attend screenings for gastric (AOR, 1.77; 95% CI, 1.03–3.04), colorectal (AOR, 1.70; 95% CI, 1.08–2.70), and breast (AOR, 3.08; 95% CI, 1.40–6.76) cancers. In male patients, a higher likelihood of attending lung cancer screenings was seen in the high health literacy group vs. the low one (AOR, 1.82; 95% CI, 1.11–2.99).
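The age-group comparisons with national figures reported above can be illustrated with a one-sample chi-squared test, as in the following sketch. The 11% cohort and 32% national proportions echo the breast cancer example in the text, but the cohort denominator is an arbitrary placeholder and the study's exact test construction may have differed.

from scipy.stats import chisquare

# Hypothetical cell counts for women aged 65-74; the denominator is a placeholder.
n_cohort = 120                       # assumed number of women aged 65-74 in the cohort
attended = round(0.11 * n_cohort)    # roughly 11% attended breast cancer screening
not_attended = n_cohort - attended

# Expected counts under the national attendance rate of about 32%.
national_rate = 0.32
expected = [n_cohort * national_rate, n_cohort * (1 - national_rate)]

stat, p_value = chisquare(f_obs=[attended, not_attended], f_exp=expected)
print(f"chi-squared = {stat:.2f}, p = {p_value:.4f}")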
Discussion This study compared cancer screening rates among Japanese individuals with cognitive impairment, by aligning data from a memory clinic cohort with national statistics. Significant differences were observed between the memory clinic cohort and the national average, particularly among women, who exhibited lower screening rates for gastric, cervical, and breast cancers across all age groups. Men aged > 75 years demonstrated a similar pattern for gastric cancer screenings. While individuals with dementia are generally less likely to undergo cancer screenings, our findings indicate age- and sex-specific trends, highlighting the complex factors affecting screening decisions for those with MCI and early-stage dementia. These factors included logistical challenges, inability to comprehend the importance of screenings, and varying progression of cognitive decline. This complexity necessitates targeted information and enhanced support within medical facilities to promote screening behaviors, particularly for less invasive screenings, such as lung and colorectal cancer, which may present fewer logistical challenges. Given these disparities, we recommend implementing targeted interventions to improve cancer screening rates among patients with cognitive impairment, with a particular focus on tailored communication strategies that consider both the cognitive abilities of patients and the health literacy levels of their carers.
In this study, the variation in cancer screening attendance by type may be attributable to the fact that more complex procedures, such as those conducted during gastric and breast cancer screenings, impose a greater burden on individuals with cognitive impairment and their carers, potentially leading to reduced participation rates. For example, while mammograms are often perceived as complex, owing to the discomfort and anxiety associated with the procedure, the colorectal cancer screening method analyzed in this study refers to fecal occult blood tests, which are less invasive than procedures such as colonoscopies. This distinction is significant, as it highlights the fact that the screening methods analyzed in this study may vary in complexity and burden. Addressing issues of procedure complexity and carer burden is pivotal to improving participation rates in cancer screenings. Enhanced support within medical settings to facilitate attendance at screenings and strong advocacy for early detection are also essential. Emphasizing the relative ease of certain screenings may also encourage carers and patients alike.
In this study, health literacy in carers, particularly communicative health literacy, significantly influenced the cancer screening behaviors of patients with cognitive impairment. Broader trends, socioeconomic factors, and resource availability are also critical in shaping these behaviors, particularly in Japan, where the rates are substantially lower than in countries with more robust cancer screening cultures. Consistent with the collateral care concept discussed by Yoshida et al., familial factors such as financial instability and loneliness can also exacerbate barriers to seeking medical care. Their research emphasizes considering the broader familial and social contexts when addressing the health needs of patients with cognitive decline.
In the present study, high health literacy levels among carers correlated with increased patient screening attendance and enhanced recognition of screening importance, indicating that health literacy is a critical factor influencing screening behaviors. Our study emphasizes carer involvement in promoting attendance, suggesting that decision-making support structures in healthcare settings should cater to the needs of those with cognitive decline and their carers to achieve more effective cancer screening decisions. In the context of carer health literacy and its relationship with cancer screening behaviors, we observed that communicative health literacy played a significant role, particularly in terms of promoting breast cancer screening, for which the AOR was notably high. This finding underscores the importance of ensuring that female spouses, who are often primary carers, understand the necessity and procedures involved in breast cancer screening. The communicative aspect of health literacy is particularly relevant, and enhancing the communication skills of carers, particularly spouses, can lead to more informed decisions and higher screening rates.
The absence of established cancer screening guidelines for individuals with cognitive decline highlights the importance of this study in the context of public health policies. With Japan's cancer screening rates lagging behind those of countries such as the US and UK by 30%–40%, the need for intervention has become crucial. The lower rates in Japan have also been attributed to systemic differences in health insurance, methods used for encouraging attendance, and potentially to societal stigma that may undervalue the health needs of individuals with dementia. In our study, common reasons for not screening included lack of time, confidence in personal health, and the belief that medical consultation is available when needed, suggesting a gap in health literacy regarding the importance of cancer screening.
Our findings support the need for tailored educational campaigns and screening initiatives for individuals with cognitive impairment and their carers, to enhance both patient and family health literacy. Furthermore, as Fowler et al. suggested, developing decision-making support frameworks for healthcare services is crucial. Moreover, our study advocates for patient-centered communication strategies, emphasizing the roles of healthcare systems in facilitating easier access to cancer screenings. Challenging stigma and improving awareness concerning the capabilities and needs of individuals with cognitive decline is essential to fostering equitable care. Based on our findings, scheduling cancer screenings in conjunction with routine care visits may help reduce barriers for patients with cognitive impairment, potentially making information more accessible and comprehensible while also minimizing logistical obstacles to attendance. This approach may merit further exploration in future studies. Through these recommendations, we aim to enhance dialog among carers, patients, and healthcare providers. Additionally, given the particular challenges related to health literacy in Japan, it is important to address misconceptions and enhance the understanding of cancer screening significance. Future policies should also consider screening approaches for individuals with disabilities, including those with dementia, to ensure equitable healthcare access and informed decision-making.
4.1 Study Limitations This study included data from a single memory clinic in Aichi Prefecture, Japan, with participants from both urban and rural areas. While this focus provides valuable longitudinal insights, the use of national data rather than a matched control group introduces potential hidden biases and limits the ability to assess health literacy at a baseline level. Furthermore, the study did not account for motivational factors or other reasons that might influence a patient's decision not to undergo cancer screenings. These elements may provide a more nuanced understanding of screening behaviors if explored in future research. Additionally, although the national data used do not include regional specifics, limiting the examination of geographical disparities, our findings still offer a critical perspective on cancer screening behaviors among patients with cognitive impairment in Japan.
4.2 Clinical Implications This study revealed significant clinical implications for enhancing cancer screening participation among individuals with cognitive decline. First, targeted strategies that improve health literacy in carers are required, as this directly influences screening behaviors. Moreover, our study showed that logistical challenges significantly impact participation, particularly in screenings requiring complex preparatory steps. Streamlined processes and carer assistance, such as scheduling screenings during regular care visits, providing transportation, and simplifying preparatory instructions, may mitigate these barriers. Healthcare policies should address the specific needs of patients with cognitive decline, including developing clear guidelines for cancer screening, interventions focused on carer support, better communication strategies, and logistical simplification. Lastly, our data underscore the necessity of nuanced approaches that consider the severity of each patient's cognitive decline. For those with mild impairment, regular screening benefits outweigh the risks, given their extended life expectancy. A comprehensive healthcare framework supporting patient-centric, informed decision-making might bridge the screening gap for this vulnerable population.
Conclusions While our findings suggest a gap in cancer screening rates between individuals with cognitive impairment and the general population, the effect sizes for the association between health literacy and screening rates were small and often not statistically significant. This indicates that further research is warranted to better understand the specific barriers and develop tailored cancer screening strategies that effectively address both individual and systemic factors for this population.
Yujiro Kuroda: conceptualization (lead), data curation (equal), formal analysis (lead), funding acquisition (lead), investigation (lead), methodology (lead), project administration (lead), software (lead), writing – original draft (lead), writing – review and editing (lead). Aya Goto: investigation (equal), methodology (equal), supervision (lead), writing – original draft (supporting), writing – review and editing (equal). Kazuaki Uchida: investigation (equal), project administration (equal), writing – review and editing (equal). Taiki Sugimoto: data curation (equal), investigation (equal), writing – review and editing (equal). Kosuke Fujita: investigation (equal), project administration (equal), writing – review and editing (equal). Yoko Yokoyama: investigation (equal), project administration (equal), writing – review and editing (equal). Takeshi Nakagawa: investigation (equal), project administration (equal), writing – review and editing (equal). Tami Saito: conceptualization (equal), data curation (equal), funding acquisition (equal), investigation (equal), project administration (equal), supervision (equal), writing – review and editing (equal). Taiji Noguchi: data curation (equal), investigation (equal), writing – review and editing (equal). Ayane Komatsu: data curation (equal), investigation (equal), writing – review and editing (equal). Hidenori Arai: funding acquisition (equal), project administration (equal), supervision (equal), writing – review and editing (equal). Takashi Sakurai: conceptualization (equal), data curation (equal), funding acquisition (equal), investigation (equal), project administration (equal), supervision (equal), writing – review and editing (equal).
This study adhered to the ethical standards required for research involving human subjects. Approval was obtained from the institutional review board of the National Center for Geriatrics and Gerontology prior to data collection. All of the participants, their legal guardians, or primary carers (in cases of participants with cognitive decline) provided informed consent to participate in this study. Data confidentiality and participant anonymity were ensured throughout the research process.
The authors declare no conflicts of interest.
Longitudinal changes in factors affecting postoperative patient satisfaction after robot-assisted radical prostatectomy: an assessment using a patient-reported questionnaire | 9d10a24d-6cbb-4652-a9e2-37d03f247adb | 11756060 | Surgical Procedures, Operative[mh] | Long-term survival with a life expectancy more than 10 years can be achieved in patients diagnosed with localized prostate cancer (PCa). Therefore, maintaining postoperative quality of life (QOL) and treatment satisfaction are important. In Japan, robot-assisted radical prostatectomy (RARP) has been widely performed, as in many other countries around the world, as a treatment for localized PCa. Systematic reviews and meta-analyses have revealed that RARP is superior to open or laparoscopic radical prostatectomy (RP) in terms of postoperative physiological recovery with respect to urinary incontinence and sexual dysfunction . However, even with robot-assisted surgery, postoperative disease-specific QOL and treatment satisfaction have not been sufficiently improved. Previous studies have focused on the factors affecting postoperative treatment satisfaction and/or regret. Lindsay et al. demonstrated that higher regret was observed in around one third of patients and was associated with worse disease-specific QOL and sexual function measures after RARP . Another study reported that urinary bother (UB) was the independent factor influencing treatment satisfaction after RARP . The longitudinal time course and the degree of disease-specific QOL recovery are reported to vary by the kinds of functional and bother domains . For example, the urinary domain immediately deteriorates with RP, but then improves with time. On the other hand, sexual function, which is worsened postoperatively, rarely improves . Thus, factors affecting treatment satisfaction are also expected to change with time. However, to the best of our knowledge, there are few studies investigating the longitudinal changes in factors that affect treatment satisfaction in patients following RARP. Therefore, this study examined longitudinal treatment satisfaction changes based on an assessment by patient-reported questionnaire.
Patients The participants in this prospective observational clinical cohort study were 612 consecutive patients who underwent RARP at Fukushima Medical University Hospital, Fukushima, Japan, between April 2013 and May 2020. Technical supervision was done by Y.K. For RARP, a combination of posterior and anterior intraperitoneal approaches and early exposure of the seminal vesicles and vasa deferentia were carried out using a four-arm da Vinci Si surgical system (Intuitive Surgical, Inc., Sunnyvale, CA, USA). The indwelling urethral catheter was typically removed on postoperative day 6–7.
Evaluation of disease-specific QOL before and after RARP Patients were asked to answer the Japanese version of the Expanded Prostate Cancer Index Composite (EPIC) questionnaire, which was hand-delivered by nurses or outpatient clinicians preoperatively and at 1, 3, 6, 9, and 12 months postoperatively as we previously reported. The EPIC questionnaire is a robust, self-administered, and validated questionnaire that is designed to evaluate PCa-specific QOL. The Japanese version of EPIC has also been validated and used as an indicator to measure disease-specific QOL for large-scale studies in Japan. It consists of 50 items comprising urinary, bowel, sexual, and hormonal domains. Each domain is further divided into function and bother subdomains. Treatment satisfaction is evaluated based on the question, “Overall, how satisfied are you with the prostate treatment you received?” Based on this question, satisfaction status is expressed on a five-point Likert scale ranging from “extremely dissatisfied” to “extremely satisfied”. These answers are then converted to scores of 0, 25, 50, 75, or 100. All EPIC scores, including treatment satisfaction, range from 0 to 100, with higher scores indicating better disease-specific QOL. With regard to satisfaction, the question asked about satisfaction with the PCa treatment that patients had already received. Therefore, in this study, the satisfaction score before RARP was excluded because patients had not yet received any treatment.
Data collection Of the 612 patients, one patient in whom RARP was converted to open surgery due to severe adhesion to the rectum and 35 patients who received salvage and/or adjuvant therapies such as radiotherapy or hormonal therapy within 12 months after RARP were excluded. In addition, 4 patients who stopped visiting our outpatient clinic within 12 months after RARP and 164 patients who had not responded at least once in all postoperative evaluation timepoints regarding satisfaction were excluded. Thus, a total of 408 patients were analyzed in this study. At each follow-up timepoint, patients who indicated a score of 75 or 100 were classified into the satisfied group, whereas patients who showed a score of 0, 25, or 50 were classified into the non-satisfied group (Fig. ). To investigate the factors related to treatment satisfaction, the following preoperative, perioperative, and postoperative parameters were retrieved from the electronic health record system at our hospital: patient age at surgery, initial prostate-specific antigen (iPSA) values, neoadjuvant hormonal therapy (NHT) or not, body mass index (BMI), HbA1c values, clinical T stage, biopsy Gleason score (GS), D'Amico risk classification, nerve-sparing or not, console time, and resected prostate weight. This study was reviewed and approved by the Ethics Committee of Fukushima Medical University (clinical approval no. C-T2023-0258).
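As a minimal sketch of the satisfaction scoring and grouping described above: the mapping to 0, 25, 50, 75, or 100 and the 75-point threshold follow the text, whereas the labels for the three middle response options are assumptions, since only the two anchors are quoted.

# Map the five Likert responses to EPIC satisfaction scores and group patients.
LIKERT_TO_SCORE = {
    "extremely dissatisfied": 0,
    "dissatisfied": 25,        # assumed label for the second response option
    "uncertain": 50,           # assumed label for the middle response option
    "satisfied": 75,           # assumed label for the fourth response option
    "extremely satisfied": 100,
}

def satisfaction_group(response: str) -> str:
    """Classify a response as 'satisfied' (score 75 or 100) or 'non-satisfied' (0, 25, or 50)."""
    score = LIKERT_TO_SCORE[response]
    return "satisfied" if score >= 75 else "non-satisfied"

print(satisfaction_group("extremely satisfied"))  # satisfied
print(satisfaction_group("uncertain"))            # non-satisfied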
Statistical analysis Age at surgery, initial PSA values, BMI, HbA1c values, resected prostate weight, and EPIC scores were presented as medians with ranges (minimum to maximum). The Friedman test for multiple comparisons was applied to verify whether longitudinal changes in EPIC scores showed any significant differences among the 6 evaluation periods. Logistic regression analysis was performed as a multivariate analysis, with parameters that were reported or considered to be clinically relevant used as independent variables. Statistical analyses were performed using STATCEL® version 3 software (add-in software for Microsoft Excel; OMS Publishing Inc., Saitama, Japan) or the SPSS® software package version 26 for Windows® (Statistical Package for Social Science, Chicago, IL). Values of p < 0.05 were considered statistically significant.
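A minimal sketch of the Friedman comparison across the six evaluation timepoints is shown below, using simulated scores in place of the actual EPIC data.

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n_patients = 40  # placeholder sample size

# Simulated repeated EPIC subdomain scores (0-100) at the six timepoints; not study data.
baseline = rng.uniform(60, 100, n_patients)
followups = [np.clip(baseline - rng.uniform(0, 30, n_patients) / (i + 1), 0, 100) for i in range(5)]
timepoints = [baseline] + followups

# Friedman test for differences across the six related measurements per patient.
stat, p_value = friedmanchisquare(*timepoints)
print(f"Friedman chi-squared = {stat:.2f}, p = {p_value:.4f}")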
Patient characteristics Table shows the patient characteristics in this cohort (n = 408). Median age at surgery was 67.5 years and the median initial PSA value was 7.3 ng/mL. In this study, 65 patients (15.9%) received NHT. Uni- or bilateral nerve-sparing surgery was performed in 108 patients (26.5%).
Longitudinal changes in EPIC sub-domain and satisfaction scores Figure shows the longitudinal changes in EPIC subdomain scores for all patients. The median score for urinary function (UF) before RARP was 100. The UF score worsened immediately postoperatively. It gradually recovered, but it had not reached the same level as the preoperative score by 12 months after surgery (Fig. a). Similarly, the UB score also worsened significantly at 1 month after RARP. However, UB at 9 and 12 months after RARP recovered to the baseline level (Fig. b). Scores for bowel function (BF) and bowel bother (BB) were significantly decreased at 1 month postoperatively and gradually increased thereafter (Fig. c and d). As for sexual function (SF), it was already low preoperatively, and statistically significant improvement was not observed after RARP (Fig. e). The sexual bother (SB) scores at 3, 6, and 9 months postoperatively were significantly worse than that at 1 month postoperatively (Fig. f). Hormonal function (HF) scores were slightly decreased at 1 month postoperatively but showed statistically significant improvement by 12 months postoperatively (Fig. g). Postoperative hormonal bother (HB) scores did not show significant differences compared with those at baseline (Fig. h).
Factors related to the satisfaction at each postoperative evaluation timepoint As shown in Table , multivariate logistic regression analysis revealed that the factors significantly related to treatment satisfaction changed with time. UB (odds ratio (OR) = 1.024; 95% confidence interval (CI) = 1.007–1.041; p < 0.001) and SF (OR = 0.944; 95% CI = 0.906–0.984; p = 0.004) were the significant factors associated with treatment satisfaction at 1 month postoperatively; UB (OR = 1.041; 95% CI = 1.018–1.065; p < 0.001) and SB (OR = 1.018; 95% CI = 1.008–1.028; p < 0.001) at 3 months; UF (OR = 1.026; 95% CI = 1.006–1.046; p = 0.01), UB (OR = 1.036; 95% CI = 1.009–1.064; p = 0.009), SB (OR = 1.012; 95% CI = 1.003–1.022; p = 0.009), and HB (OR = 1.064; 95% CI = 1.009–1.123; p = 0.023) at 6 months; UF (OR = 1.028; 95% CI = 1.009–1.047; p = 0.004) and SB (OR = 1.013; 95% CI = 1.003–1.022; p = 0.009) at 9 months; and UF (OR = 1.019; 95% CI = 1.001–1.038; p = 0.036) at 12 months. Age, BMI, NHT or not, nerve-sparing or not, resected prostate weight, BF, BB, and HF were not significantly associated with treatment satisfaction at any of the postoperative evaluation timepoints.
The purpose of this study was to clarify the longitudinal changes in disease-specific QOL and treatment satisfaction in patients who underwent RARP. First of all, we observed that the postoperative changes of disease-specific QOL greatly differed depending on the EPIC subdomain. Furthermore, we demonstrated that the factors affecting longitudinal postoperative treatment satisfaction differed depending on the time of evaluation. In this study, scores of UF and UB worsened immediately postoperatively and gradually improved over the course of about 12 months after RARP. These results are consistent with those reported in previous studies.
In the present study, both BF and BB scores significantly worsened at 1 month postoperatively compared with those at baseline (p < 0.001 for both) and then gradually improved. The EPIC bowel domain is mainly meant to evaluate the impact of radiotherapy on QOL. While patients who received radiotherapy were not included in this cohort, intrapelvic surgery can cause bowel symptoms. For example, Bishoff et al. examined the incidence of fecal incontinence after radical prostatectomy and reported that fecal incontinence occurred more frequently than previously recognized.
The low preoperative SF scores further worsened after RARP and showed no improvement trend up to 12 months postoperatively. The SB score, however, was high preoperatively and did not worsen after RARP. Other studies have reported similar results in Japanese PCa patients who underwent RARP.
Nakagawa et al. reported that HF and HB scores that were high at baseline remained consistently high up to 12 months following RARP. In their cohort, 16.3% of patients received NHT. Similar to their study, HF and HB also remained high throughout the pre- and postoperative periods in the present study, in which 15.9% of patients received NHT. In summary, disease-specific QOL in this cohort was consistent with that previously reported for Japanese patients who underwent RARP.
In the present study, UF was found to be one of the significant factors influencing treatment satisfaction from 6 to 12 months postoperatively, while UB was shown to be a factor significantly affecting treatment satisfaction from 1 to 6 months postoperatively. Attention has been focused on not only urinary incontinence but also other lower urinary tract symptoms (LUTS) as postoperative complications after RP. Even though robot-assisted surgery has made delicate and precise manipulation possible, some patients experience prolonged postoperative LUTS that worsens over time. LUTS other than urinary incontinence or events related to LUTS can also affect QOL and treatment satisfaction. In fact, Nakagawa et al. reported that UB was related to 12-month postoperative treatment satisfaction in Japanese PCa patients evaluated using EPIC. We have also previously reported that the timing of pad exchanges was the most important factor affecting QOL, and the amount of urinary incontinence was not associated with decreased QOL. From another point of view, Schroeck et al. reported that, in the RARP era, patients' expectations that oncological control will be achieved after RARP and that LUTS will be managed and sexual function will be maintained better than with other surgical modalities are too high. Based on these reports, some patients in this cohort might not be satisfied with their own conditions with respect to LUTS.
In this study, only three patients had an EPIC SF score of 65 or higher, which is the threshold for intact SF as reported in a previous report. Therefore, we included all cases in the analysis without separating them based on preoperative SF . A study of longitudinal postoperative QOL after RP as evaluated using EPIC has shown that sexual domain scores in Japanese patients with PCa are known to have high SB scores despite of low SF scores . Nevertheless, in this study, SB was found to be a significant factor affecting treatment satisfaction from 3 to 9 months postoperatively. Intraoperative nerve preservation is considered to be the most effective treatment for erectile dysfunction (ED), which is a complication of RP, but its effectiveness cannot be said to be sufficient. In recent years, penile rehabilitation for patients with RP-induced ED has received a lot of attention. Nakano et al. showed that significant recovery of postoperative SF was observed in Japanese patients who underwent nerve-sparing RARP following penile rehabilitation with low-dose phosphodiesterase (PDE) 5 inhibitors . Inoue et al. reported that the introduction of low-intensity shock wave therapy (LIESWT) improved SF in Japanese patients who underwent RP . Penile rehabilitation is not offered to patients who have undergone RARP at our hospital. However, if patients obtain information about the effectiveness of penile rehabilitation leading to higher expectations for recovery of sexual function after RARP, SF and SB could be factors affecting treatment satisfaction due to the gap between patient expectations for recovery and their real-life trajectory. With regard to hormonal domains, this study demonstrated that HB was significantly associated with treatment satisfaction only at 6 months postoperatively. NHT or not was not a significant factor affecting treatment satisfaction, possibly due to the fact that the analysis in this study did not extend to the treatment duration and the type of drugs used for NHT. Murthy et al. reported that serum testosterone concentrations did not always recover to baseline levels even if administration of luteinizing hormone-releasing hormone agonist had been stopped . Pickles et al. stated that completion of hormonal therapy does not necessarily mean that serum testosterone levels immediately improve to pre-treatment levels . Decreased serum testosterone concentrations are known to cause a variety of signs and symptoms including those involving sexual dysfunction, such as decreased libido and ED . We have previously reported that vintage NHT brought about adverse hormonal symptoms after RARP . In general, NHT before RP is not recommended in current guidelines outside of its use in clinical trials . The reason why HB was only significantly related to treatment satisfaction at 6 months postoperatively was uncertain. However, from these results, NHT before RARP should not be recommended from the standpoint of not only oncological outcome, but also patient postoperative satisfaction. Several limitations to this study should be taken into account. First, factors other than disease-specific QOL as assessed using EPIC were not examined. The financial burden of healthcare negatively affects patient well-being and QOL, especially in the setting of a cancer diagnosis and associated treatment choice. For instance, annual household income was also associated with lower satisfaction scores . 
Marital or employment status in patients with malignant tumors has also been reported to be related to patient QOL . Moreover, surgical complications are associated with poor QOL . Based on these reports, there is a possibility that patient satisfaction in this cohort might be affected by not only disease-specific QOL but also those social factors. Second, baseline disease-specific QOL was not evaluated to find the factors affecting treatment satisfaction. We have demonstrated that patients with preoperative urinary incontinence had lower urinary QOL scores after RARP . This suggests that baseline conditions can affect postoperative QOL and treatment satisfaction. Third, the follow-up duration was relatively short. Trends toward improvement in subdomain and treatment satisfaction scores are also observed at 12 months after RARP . A study with longer follow-up is thus warranted for exploration of the time-dependent factors affecting treatment satisfaction. Even considering these limitations, our findings are nevertheless useful for patients in decision-making and subsequent treatment choice. Further investigation is required to elucidate the longer-term QOL, taking into account social background and other preoperative factors.
Treatment satisfaction in patients who underwent RARP changed over time. Our results suggest that providing sufficient information before treatment selection supports patients’ decision-making and may lead to improved patient QOL and treatment satisfaction.
|
Evaluating if ChatGPT Can Answer Common Patient Questions Compared With OrthoInfo Regarding Rotator Cuff Tears | 8235c90e-44b0-40dd-8586-5e88c7113f9e | 11905972 | Patient Education as Topic[mh] | We selected the eight frequently asked questions from the OrthoInfo rotator cuff tear web page. OrthoInfo questions and responses were used as a “benchmark” for orthopaedic patient information because this source is both peer reviewed and updated by experts. The OrthoInfo questions were then systematically inputted to ChatGPT 3.5 on January 9, 2024, without prompting, and responses were subsequently recorded. To determine a sixth-grade reading level response, the question was inputted with a qualifier afterward. For example, “What is the rotator cuff, and what does it do?” was inputted to ChatGPT, and the response was recorded. It was then posed the exact same question and asked to answer at a sixth-grade reading level. For example, “What is the rotator cuff, and what does it do? Explained at a sixth grade reading level.” This was determined to be the “sixth-grade” response. To ensure impartiality and eliminate potential biases, each question was independently input through distinct queries, ensuring that ChatGPT responses were solely influenced by the specific question without any contextual information from previous questions. After the queries were completed, there were eight responses to the frequently asked questions at the standard level and at a sixth-grade reading level. To assess the accuracy and appropriateness of the responses, two fellowship-trained Shoulder and Elbow surgeons, two fellowship-trained Orthopaedic Sports surgeons, and one Orthopaedic Sports fellow analyzed each response. They were provided the chatbot responses at both reading levels, as well as a printout of the OrthoInfo Information page about rotator cuff tears. They were then asked to rate each response using a 1 to 5 Likert scale (Figure ) for both accuracy and appropriateness, where a score of 5 was considered “excellent”. Mika et al and Johns et al used a similar methodology; however, they used a 4-point Likert scale, where a score of 1 was considered “excellent.” Surgeons were asked to compare ChatGPT's responses using OrthoInfo as the benchmark reference for patient-facing information. As the benchmark, OrthoInfo was rated a 5/5 with respect to all questions. We considered an average score of 4/5 to be adequate as a source of patient-facing information. This was chosen based on our Likert scale, where 4/5 represents “mostly accurate/appropriate with some inconsistencies.” We determined this to be the minimum acceptable score where a response sufficiently provided basic information. The Flesch-Kincaid (FK) Grade Level score was used to determine the readability of each response. This was calculated for each standard ChatGPT response, sixth-grade response, and OrthoInfo response. Statistical analysis included descriptive statistics, the paired t -test, and interrater reliability. We compared each physician's rating of the same question at the sixth-grade level and standard level, for both accuracy and appropriateness, using the paired t -test. We also compared the average score of each question at the sixth-grade level and standard level using the paired t -test. The average scores at both reading levels were then compared with OrthoInfo using the paired t -test. The average FK Grade Level for each group of responses was compared using the paired t -test. 
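The Flesch-Kincaid Grade Level used above is a fixed formula based on average sentence length and average syllables per word. The snippet below is a self-contained, approximate illustration of that formula; the simple syllable heuristic is an assumption and is not the specific readability tool used in this study.

```python
# Approximate Flesch-Kincaid Grade Level: 0.39*(words/sentences)
# + 11.8*(syllables/words) - 15.59. The syllable counter is a rough heuristic.
import re

def count_syllables(word: str) -> int:
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    if word.endswith("e") and count > 1:
        count -= 1  # crude adjustment for a silent trailing "e"
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)

sample = "The rotator cuff is a group of muscles and tendons that surround the shoulder joint."
print(round(flesch_kincaid_grade(sample), 1))
```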
Interrater reliability was evaluated by intraclass correlation coefficient (ICC; Cronbach α), percent agreement, and descriptive statistics.
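For readers unfamiliar with these reliability measures, the sketch below computes Cronbach's alpha (treating the five raters as items) and a simple pairwise percent-agreement statistic on hypothetical Likert ratings. The exact ICC model and agreement definition applied in the study may differ.

```python
# Hedged sketch of the reliability metrics named above, on made-up Likert data:
# rows are rated responses, columns are the five raters.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1).sum()
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def percent_agreement(ratings: np.ndarray) -> float:
    """Share of pairwise rater comparisons giving identical scores."""
    agree, total = 0, 0
    for row in ratings:
        for i in range(len(row)):
            for j in range(i + 1, len(row)):
                agree += int(row[i] == row[j])
                total += 1
    return agree / total

scores = np.array([[5, 5, 4, 5, 5],
                   [4, 4, 4, 5, 4],
                   [3, 4, 3, 3, 4]])  # hypothetical ratings for three responses
print(round(cronbach_alpha(scores), 3), round(percent_agreement(scores), 3))
```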
The eight questions from the OrthoInfo website can be found in Figure . Complete responses provided by ChatGPT can be found in Appendix 1 ( http://links.lww.com/JG9/A396 ). Standard ChatGPT responses were rated markedly more accurate and appropriate compared with the sixth-grade ChatGPT responses (Table ). On average, standard ChatGPT responses were above the adequacy cutoff of 4/5. Conversely, responses provided at the sixth-grade level were below the adequacy cutoff. OrthoInfo responses were rated markedly more accurate (5.0 ± 0.0 vs. 4.7 ± 0.47; P = 0.004) and appropriate (5.0 ± 0.0 vs. 4.5 ± 0.57; P = 0.016) compared with standard ChatGPT responses. OrthoInfo responses were also rated markedly more accurate (5.0 ± 0.0 vs. 3.6 ± 0.76; P < 0.001) and appropriate (5.0 ± 0.0 vs. 3.7 ± 0.98; P < 0.001) compared with sixth-grade ChatGPT responses. Reliability statistics can be found in Table . Physicians' ratings of sixth-grade ChatGPT responses showed an ICC of 0.823 for accuracy and 0.666 for appropriateness, indicating good reliability of response gradings. Standard ChatGPT response groups tended to have low variability between raters, causing paradoxically lower ICC of −0.532 for accuracy and 0.598 for appropriateness. As a result, percent agreement and descriptive statistics were also used to provide a more authentic analysis of reliability. Physicians' ratings of standard ChatGPT responses showed a 75% agreement for accuracy and 67.5% agreement for appropriateness, indicating favorable reliability. Descriptive statistics for standard ChatGPT responses consistently approached the rating 5/5 and had low variability between mean, median, and mode. Table shows the FK Grade Level for each question and average score for each reading level. Table shows the statistical comparison between each group; markedly lower reading grade levels were observed between sixth-grade ChatGPT answers compared with standard ChatGPT answers (sixth-grade FK mean = 7.71, standard FK mean = 14.3; P < 0.001). Sixth-grade ChatGPT responses were also markedly lower compared with OrthoInfo (sixth-grade FK mean = 7.71, OrthoInfo FK mean = 10.33; P < 0.001). In addition, the average reading grade level of standard ChatGPT responses was markedly higher when compared with that of OrthoInfo (standard FK mean = 14.3, OrthoInfo FK mean = 10.33; P = 0.002).
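A paired comparison of the kind reported here (for example, the standard versus sixth-grade Flesch-Kincaid means) can be reproduced in a few lines; the eight values per group below are placeholders rather than the study data.

```python
# Paired t-test sketch with placeholder readability scores for the eight questions.
from scipy import stats

standard_fk = [14.1, 15.0, 13.8, 14.9, 14.2, 13.5, 15.3, 13.6]
sixth_grade_fk = [7.9, 8.1, 7.2, 7.6, 8.0, 7.4, 7.8, 7.7]

t_stat, p_value = stats.ttest_rel(standard_fk, sixth_grade_fk)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```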
Overall, standard ChatGPT responses provided mostly accurate and appropriate patient information regarding rotator cuff tears; however, they were statistically significantly worse than OrthoInfo responses while having a markedly higher reading grade level. When prompting for sixth-grade reading level, the response scores were written at a markedly lower reading level; however, they were below the acceptable threshold of accuracy and appropriateness and were markedly less accurate and appropriate compared with the standard ChatGPT responses and OrthoInfo responses. With a mean accuracy of 4.7 and a mean appropriateness of 4.5, standard ChatGPT responses can be considered moderately accurate and appropriate. This coincides with published literature, which previously found ChatGPT's DISCERN score between 55 and 60, where scores above 50 indicate its ability to provide acceptable information. , Of note, only Question 5 received an average appropriateness score below the cutoff, rated 3.6/5. It asks, “Can a rotator cuff tear be healed or strengthened without surgery?” Surgeons found this response to be more inappropriate due to its failure to specify the inability to heal a rotator cuff tear through conservative management alone. In addition, ChatGPT suggested ultrasonography therapy as a method of improving blood flow and healing. There is sparse evidence suggesting that ultrasonography therapy may be beneficial for pain management in rotator cuff tendinopathy, specifically calcific tendinitis. However, ultrasonography therapy has not shown any benefit over placebo for other causes of rotator cuff tendinopathy, especially rotator cuff tears. As patients increasingly turn to online sources for information about their health conditions, a concern for patient's health literacy arises because much of the available patient education materials surpass the reading comprehension level of the average American. Previous analyses of online orthopaedic patient education materials found the mean grade level of 10.5 to be far higher than the recommended level set by the National Institutes of Health and American Medical Association. , This trend in reading level is extremely common, because 97% of articles by the American Academy of Orthopaedic Surgeons were found to be above a sixth-grade reading level, and 81% had a readability score above the eighth-grade level. With the rising popularity of ChatGPT, it is reasonable to assume that patients will turn to it for medical educational purposes. Our findings showcase the chatbot's familiar struggle that other orthopaedic patient-education materials face. Standard ChatGPT responses were deemed accurate and appropriate but provided information at a grade level of 14.3. Not only does this far surpass the average American reading level, but this reading level is also greater than what is presented online, showcased by its notable difference when compared with the reading level for OrthoInfo. This reading level is more appropriate for readers with college- or graduate-level education. Consequently, our analysis suggests that although standard ChatGPT may provide accurate medical responses, it may struggle to maintain a balance between readability and accurate information. Conversely, ChatGPT failed to adequately provide information at a sixth-grade level. It often omitted crucial details and occasionally presented incorrect information. This led to lower scores for accuracy and appropriateness, causing it to fall below the adequacy cutoff. 
Upon prompting ChatGPT to respond at a sixth-grade level, a notable trend emerged. Answers 3, 4, 5, and 6 each fell below the adequacy threshold. Each of these responses primarily addressed decision-making aspects related to managing rotator cuff tears. These questions, such as “When should I see a doctor for a rotator cuff tear?” or “At what point does a rotator cuff tear require surgery to fix?” inherently involved subjective judgments. Conversely, questions 1, 2, 7, and 8 centered more on objective information, offering less opportunity for subjective interpretations regarding management. For example, “What is a rotator cuff tear?” or “What causes a rotator cuff tear?” This discrepancy showcases a notable inability for ChatGPT to provide information regarding medical decision making. Therefore, it is reasonable to conclude that the lower average scores for the more subjective questions at the sixth-grade level may indicate ChatGPT's challenge in providing accurate and appropriate information regarding medical management using simplified language. Both the standard and sixth grade accuracy and appropriateness ratings fell short when compared with OrthoInfo. OrthoInfo was chosen as our benchmark because it is the American Academy of Orthopaedic Surgeons' designated website for patient-facing information. It provides handouts for clinicians to give their patients and covers many topics including disease information, potential treatments, and guides to recovery. In addition, information and recommendations provided by OrthoInfo are peer reviewed and updated by surgical experts, further strengthening its reliability and credibility. It is because of this that we gave responses provided by OrthoInfo 5/5 for accuracy and appropriateness. Multiple statistical methodologies were done to ensure reliability of physicians' responses. The rating physicians generally agreed that responses at the sixth-grade level provided inaccurate and inappropriate information, which can be seen by the ICC above 0.60 for both sixth-grade ChatGPT accuracy and appropriateness. This can be confirmed by looking at the mean, median, and mode for sixth-grade responses, which mostly fell below the adequacy cutoff of 4/5. Conversely, physicians tended to agree that responses at the standard ChatGPT level provided accurate and appropriate information. There is one outlier of Cronbach alpha for standard accuracy, being −0.532. However, this error is likely due to the low variance between physicians' scores, which is a limitation of ICC analysis and may artificially deflate this reliability metric. Of note, the median and mode for standard accuracy were both 5/5. This suggests that the physicians generally gave high ratings, and their ratings are consistently close to each other without notable outliers. Because of this, it is safe to assume that the physicians generally agreed on high accuracy and appropriateness for standard ChatGPT responses. Our study's results only partially supported our hypothesis regarding ChatGPT's effectiveness in providing accurate and appropriate patient information concerning rotator cuff tears. Although the standard responses exhibited markedly higher accuracy and appropriateness scores compared with the sixth-grade level responses, both fell short when compared with the benchmark set by OrthoInfo. 
Notably, our analysis highlighted ChatGPT's challenge in providing comprehensive information, particularly in addressing subjective aspects of medical management, while maintaining simplicity in language suitable for a sixth-grade reading level. Moreover, the broader context of patient education materials exceeding the average American's reading comprehension level emphasizes the importance of achieving a balance between readability and accuracy in artificial intelligence (AI)–driven healthcare communication. As technology continues to evolve, further research and refinement are necessary to harness the full potential of AI in enhancing patient education while ensuring accessibility and comprehension for diverse patient populations. First, as AI technology advances rapidly, the conclusions drawn from this study may not be definitive, given its relatively novel nature. Future updates and developments in AI algorithms could potentially alter the effectiveness and accuracy of AI-driven healthcare communication, thereby affecting our conclusions. Our study focused solely on the default ChatGPT 3.5 version and did not incorporate the premium ChatGPT4 version, which may offer more suitable responses. This was not chosen because it is limited to paid subscribers, which is a minority of ChatGPT users. Another limitation is the subjective nature of each physician's gradings. Although the ChatGPT responses were to be compared with OrthoInfo, there is some amount of subjectivity due to diversity in background and prior training for each physician. Surveys given to surgeon raters were not blinded, which may introduce some level of bias. Another explanation for the discrepancy between standard ChatGPT and sixth-grade ChatGPT responses is the effect of the additional prompt “explain at a sixth-grade level.” The initial prompting of a large language model, such as ChatGPT, is paramount in the quality of responses it provides. A proper prompt sets the context of the conversation and determines what information should be considered important. Kaarre et al conducted a similar study in which ChatGPT questions were crafted using “prompt engineering,” with guidelines provided by White et al They prompted ChatGPT 4.0 to provide information as an expert orthopaedic surgeon to two target groups: patients and medical doctors. Specific criteria for each target group were listed, such as length of AI response, use of medical jargon, and knowledge of anatomy. This study found ChatGPT was able to provide accurate responses 65% of the time, for both target demographics. However, it also found that without either prompt, ChatGPT provided much longer answers, reduced adaptability for both groups, and increased the possibility of providing misinformation. This suggests ChatGPT may provide accurate and appropriate information for the target demographic, but it requires greater responsibility on the user to provide detailed and extensive prompting. They conclude, with which we agree, that it is reasonable to assume the average patient would not use such extensive prompting when asking medical questions, which may lead to subpar responses. Wright et al analyzed ChatGPT prompting with regard to THA and TKA questions. They found that prompting the chatbot by telling it to make it “easier to understand” maintained accuracy and decreased FK reading level, although still at a reading level far above American Medical Association recommendations. 
Although our prompt specified "Explained at a sixth-grade reading level," it is entirely possible that ChatGPT interpreted this prompt as "explain to a sixth grader." Further investigation is needed to identify prompting strategies that improve readability without compromising accuracy.
Standard ChatGPT responses were less accurate and appropriate, with worse readability compared with OrthoInfo responses. Despite being easier to read, sixth-grade level ChatGPT responses compromised on accuracy and appropriateness. At this time, ChatGPT is not recommended as a standalone source for patient information on rotator cuff tears but may supplement information provided by orthopaedic surgeons.
|
Measuring the quality of care in metastatic colorectal cancer: a scoping review of quality indicators | fb68f04e-46cc-42fb-8b7e-05a4b31125e0 | 11487155 | Internal Medicine[mh] | Literature search strategy This scoping review was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews guidelines . A literature search was conducted between February 8 and 16, 2024, searching Web of Science, PubMed, and CINAHL for publications in English between January 1, 2014, and December 31, 2023. Search terms were constructed using Medical Subject Headings and free-text terms for CRC and quality indicators; the full search strategy can be seen in the (available online). Inclusion and exclusion criteria We included articles examining the development, selection, review, or validation of quality indicators, including retrospective comparative cohort studies, noncomparative reviews of population-based data, data-linkage feasibility studies, expert opinion studies, and associated systematic reviews and meta-analyses. We excluded studies of generic quality indicators for cancer care not in specific reference to CRC or if they focused solely on the screening, diagnosis, and management of early-stage CRC. Studies focused on nonclinical indicators, such as measures of cost, resource availability, or workload, were also excluded. Data extraction and data synthesis Articles were searched for unique quality indicators in metastatic CRC across the spectrum of disease, from diagnosis of metastatic disease to death, recording a short description, relevant discipline (radiology, pathology, surgical, radiation oncology, medical oncology, supportive or palliative care), domain of care (diagnosis, staging and treatment planning, medical management, surgical management, supportive care or terminal care), the numerator, denominator, data sources, rationale, risk adjustment, potential barriers to operationalization, and the donabedian classification (structural measure, process measure, or outcome measure). Similar quality indicators were grouped and given a single descriptor and the most representative numerator and/or denominator. Articles underwent citation and reference searching, and we reviewed the relevant gray literature from national bodies, including ASCO and ASCO-QOPI, the National Comprehensive Cancer Network (NCCN), and the National Initiative of Cancer Care Quality. Results were downloaded and imported into Covidence software (Covidence, Melbourne, Victoria, Australia) to remove duplicates, screened first by title and abstract, and retrieved for full text review. Papers were screened by first author C.D., with queries subject to a discussion with co-author D.S. until consensus was reached. In cases where consensus could not be reached, a third reviewer (P.G.) was consulted to provide an independent and final assessment. Each paper describing the development or implementation of quality indicators was reviewed against the Standards for QUality Improvement Reporting Excellence 2.0 appraisal tool for methodological rigor, including review of sample size, data completeness, bias, reliability and replicability. The systematic reviews of quality indicators were assessed against the Assessing the Methodological Quality of Systematic Reviews 2 tool to assess their search strategy and data-extraction techniques . 
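To make the extraction schema above concrete, the sketch below models a single extracted indicator as a structured record containing the fields listed in the data extraction description. The record layout and the example values are illustrative assumptions, not an instrument used by the review.

```python
# Illustrative extraction record for one quality indicator; field names are
# paraphrased from the text and the example values are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Donabedian(Enum):
    STRUCTURE = "structural measure"
    PROCESS = "process measure"
    OUTCOME = "outcome measure"

@dataclass
class QualityIndicatorRecord:
    description: str
    discipline: str                    # e.g., radiology, pathology, medical oncology
    domain_of_care: str                # e.g., diagnosis/staging, medical management
    numerator: str
    denominator: str
    data_sources: str
    rationale: str
    risk_adjustment: str
    barriers_to_operationalization: str
    classification: Donabedian

example = QualityIndicatorRecord(
    description="RAS testing before first-line systemic therapy",
    discipline="medical oncology",
    domain_of_care="diagnosis, staging and treatment planning",
    numerator="patients with metastatic CRC who had RAS testing",
    denominator="all patients with metastatic CRC starting first-line therapy",
    data_sources="clinical registry or chart abstraction",
    rationale="molecular status guides selection of targeted therapy",
    risk_adjustment="none proposed",
    barriers_to_operationalization="molecular results often held as unstructured text",
    classification=Donabedian.PROCESS,
)
print(example.classification.value)
```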
These studies were reviewed for the number of quality indicators included in each paper; the proportion dedicated to CRC compared with other solid organ tumor types; those describing quality in metastatic vs early-stage disease; and accompanying data regarding quality indicator measurability, clinical utility, and barriers to use. Included studies The literature search yielded 2036 articles, with a further 27 identified from citation and reference searching. After removing duplicates, screening, and full text review, 11 articles were included in the final analysis . Of those 11 articles, 5 were systematized reviews: 3 reviewed quality indicators specific to CRC; 1 reviewed quality indicators in lung cancer, breast cancer, prostate cancer, and CRC; and 1 evaluated quality indicators of systemic anticancer treatment across all solid organ malignancies. The remaining 6 articles concerned the development, validation, or operationalization of quality indicators . All articles were from high-income countries, including from the United States (n = 2 [18%]), Canada (n = 2 [18%]), the United Kingdom (n = 2 [18%]), Australia (n = 3 [27%]), the Netherlands (n = 1 [9%]), and an international collaboration from the International Consortium for Health Outcomes Measurement (n = 1 [9%]). Data sources Three major approaches were seen in the 6 studies of quality indicator development and implementation. Data linkage studies Of the 6 studies detailing the development and implementation of quality indicators, 3 detailed linkage of robust, preexisting, population-based datasets, including linking of cancer registry data, admission data, International Statistical Classification of Diseases, Tenth Revision codes, and mortality data . These studies compared quality indicators between institutions to identify outliers. A strength of these studies was their large numbers, allowing case mix adjustment and robust analysis of associations between the quality indicator and both patient-level factors (eg, age, gender, insurance status, and socioeconomic status) and hospital-level factors (eg, public and private, rurality, caseload). An important limitation to each of these studies, however, was the uncertainty about the completeness and accuracy of data, which may differ between institutions. Manual chart abstraction Two studies outlined approaches dependent on manual chart abstraction for complex clinical and pathological data, both using purpose-built computerized data-abstraction tools administered by trained abstractors outside of existing administrative datasets or workflows. Dedicated cancer registries Finally, 1 paper outlined the development of consensus-based quality indicators specific to the care of patients with metastatic CRC using data routinely captured in a comprehensive clinical registry . This study was limited by a smaller sample size than population-based efforts, and the authors described limitations in data completeness for some metrics (including timing of administration of chemotherapy after diagnosis and before death and referral to palliative care). Therefore, some proposed quality indicators were not evaluable and would require data linkage with larger administration datasets. Studies examining quality indicator development or implementation Of the 6 studies examining the development or implementation of quality indicators , 5 (83%) focused purely on CRC and 1 described a quality indicator of chemotherapy toxicity across all cancer types . 
Of those studies just examining CRC, 3 included quality indicators across all phases of the disease, 1 study detailed quality indicators using data captured in a metastatic CRC registry , and 1 study described quality indicators specific to the terminal phase . The studies were heterogenous in their contexts and quality indicator development strategy. The size of the studies varied from 1000 to 7683 patients (median = 1723 [IQR = 1207-7088]), and the number of institutions involved varied from 10 to 106 (median = 22 [IQR = 10-105]). All were retrospective cohort studies or based on expert consensus (Oxford level 4 and level 5 evidence) . The studies were diverse regarding both geography and funding sources, which made extrapolation across clinical settings challenging. One study exclusively examined outcomes for US veterans , and another studied outcomes for insured patients across institutions in Florida . In contrast, papers from the United Kingdom and Canada examined settings with universal health care, and the 2 Australian papers incorporated public and private health-care settings . One study used an expert panel to identify potential “evidence-based” or “best practice–based” quality indicators that could be captured from an existing clinical registry . This study used existing clinical CRC cancer data-collection efforts, incorporating clinicopathological characteristics of all patient throughout the spectrum of the disease. Another study piloted the use of a computerized data-abstraction system to evaluate performance against 31 quality indicators based on the NCCN guidelines across 28 institutions in the United States, detailing important challenges in the manual abstraction of clinical data to populate complex or nuanced quality indicator items and the difficulties of reliably capturing events occurring in external institutions. Modifying existing quality indicators for lung cancer, breast cancer, prostate cancer, and CRC from ASCO-QOPI , the Florida Initiative for Quality Cancer Care reported the process of manual chart abstraction for 1000 patients across 10 sites, assessing interval change in quality indicator adherence across 2 reporting periods . These data included all eligible CRC patient across sites within the 2-year period, with comprehensive data-abstraction processes, quality control, and case mix adjustment. Finally, 3 large retrospective cohort studies reported implementation of quality indicators using comprehensive population-based datasets, using complex data linkage between clinical, administrative, and registry data to enable analysis of patient-level and hospital-level factors associated with variable quality indicator adherence or outcomes. Systematic reviews of quality indicators Of the 5 prior papers detailing comprehensive literature reviews or performing a complete systematic review , the 2017 meta-analysis from Keikes et al. defined 389 discrete quality indicators from 41 studies across all stages of CRC. This meta-analysis categorized quality indicators by donabedian category and discipline and analyzed their scientific credibility (consensus-based, evidence-based, or validated in well-conducted cohort studies) but notably did not identify any established evidenced-based or validated indicators for metastatic CRC. The second systematic review analyzed existing quality indicators, evaluating systemic anticancer treatment across all cancer types . 
This review identified 63 unique quality indicators, mostly process-based indicators focused on the appropriateness and safety of prescribed therapies based on NCCN and ASCO-QOPI guidelines . Notably, this study focused on the care of the 4 most common tumor types: breast cancer, lung cancer, prostate cancer, and CRC. Of the 63 quality indicators in this meta-analysis, 7 were generic indicators for metastatic disease, but there were no quality indicators specifically for metastatic CRC. Finally, 3 (60%) of the extracted studies detailed a systematic review of the literature to identify potential quality indicators before an expert panel or focus group refined them, using a modified Delphi approach to reach a consensus set for use in their dedicated health-care setting . Each of these approaches constructed a comprehensive list of quality indicators, but none incorporated feasibility as a prerequisite for selection. One such study determined that the data required for its proposed indicator set was not currently collected by existing administrative datasets and would be variably documented in medical records, thus limiting manual abstraction. Moreover, the International Consortium for Health Outcomes Measurement explicitly acknowledges that its indicator set requires complex structured clinicopathological data that would “stretch the capabilities of most institutions” . Extracted quality indicators in metastatic CRC Thirty-five distinct quality indicators for the metastatic setting were extracted from the 11 studies across 6 domains of care: 1) diagnosis, staging, and treatment planning; 2) medical oncology and systemic anticancer treatment; 3) radiation oncology; 4) surgical approaches; 5) supportive care; and 6) palliative and end-of-life care, with a general quality indicator of overall survival. Of the 35 quality indicators extracted, 8 (23%) were unique to metastatic CRC and 27 (77%) were generic quality indicators across different tumor types but applicable to metastatic CRC . The details of the extracted quality indicators, including description, rationale, numerator, denominator, and potential data sources, are outlined in . Systemic anticancer treatment made up the largest group of quality indicators (n = 12 [34%]), none of which were specific to CRC. The majority (10/12 [83%]) were process measures, such as recording body surface area, consent, blood results, and administration of supportive medications such as antiemetics and granulocyte colony stimulating factor. These quality indicators reflect accepted minimum standards for safe chemotherapy administration in ASCO and NCCN guidelines and are established quality indicators in the ASCO-QOPI . None addressed specific chemotherapeutic approaches, targeted therapies, or immune therapies used in metastatic CRC. The next-largest group pertained to palliative and end-of-life care, all generic outcome measures of the use of high-acuity health-care resources toward the end of life (n = 9), such as chemotherapy, radiation therapy, emergency department visits, intensive care unit admissions, and use of acute-care beds . Quality indicators pertaining to diagnosis, staging, and treatment planning comprised both process and outcome measures. These measures included generic process measures across solid organ malignancies (documentation of pathology, staging, and multidisciplinary team discussion) as well as process measures unique to CRC (2 quality indicators measuring rates of molecular testing for mismatch repair and RAS variations). 
Notably, these quality indicators do not reflect the breadth of molecular diagnostics now used routinely. Two quality indicators described generic time-based outcome measures designed to capture efficiency of care and the patient experience (time between diagnosis and treatment and time between histological diagnosis and informing the patient). There were smaller numbers of quality indicators for radiation oncology (n = 2) and surgical oncology (n = 2), all specific to metastatic CRC. This finding likely pertains to the exclusion of quality indicators intended to assess screening and the definitive treatment of early-stage disease.
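As an illustration of how an indicator defined by a numerator and denominator translates into a measurable proportion, the sketch below computes adherence to a hypothetical process measure (RAS testing in metastatic CRC) from a toy registry extract; the column names and indicator wording are assumptions rather than items drawn from the reviewed studies.

```python
# Toy example: adherence to a numerator/denominator process indicator.
import pandas as pd

registry = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "metastatic_crc": [True, True, True, True, False],
    "ras_tested": [True, False, True, True, None],
})

denominator = registry[registry["metastatic_crc"]]          # eligible patients
numerator = denominator[denominator["ras_tested"] == True]  # eligible and tested

adherence = len(numerator) / len(denominator)
print(f"Indicator adherence: {adherence:.0%} ({len(numerator)}/{len(denominator)})")
```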
Health services research is conducted across diverse settings, with variable contexts, funding sources, and stakeholders, making comparison and extrapolation challenging . Nevertheless, measuring the quality of clinical cancer care is of the utmost importance, and the scope of the review and the inclusion criteria were purposefully broad to capture all possible quality indicators across a range of settings. The majority of quality indicators found were process measures focused on discrete markers of safety and quality (eg, was compliance with a specific recommendation—such as chemotherapy consent or documentation of body surface area—seen in the clinical notes?).
Although endorsed by national bodies such as ASCO and NCCN, the relationship between such process measures and clinical outcomes is uncertain, and it is challenging to determine whether such indicators reflect improved performance or simply improved documentation . In contrast, outcome measures capture health states, conditions, or patient-reported measures, but it is difficult to determine what variability in these outcome measures may represent. For example, outcome-based quality indicators focused on health-care utilization at the end of life, such as time spent in hospital, may reflect inappropriately aggressive anticancer care in the terminal phases of the disease or the availability of inpatient palliative care systems or patient preferences for their use. The interpretation of such quality indicators is highly dependent on the clinical context. Next, there is the difficulty of examining the quality of care in sufficient detail and granularity, with limitations on available data and frequent lags in data reporting and analysis. For example, the quality indicators specific to CRC (rates of standard molecular testing, identification of genetic syndromes, ablative approaches to oligometastatic disease) currently encompass only a fraction of the nuance and complexity of the modern multidisciplinary approach . This finding reflects the delays in quality indicator development, integration into data management systems, and eventual quality reporting. Moreover, the routine use of these quality indicators is limited because they require complex data extraction by dedicated personnel or comprehensive cancer registries to capture such clinicopathological detail. This challenge is only expanding as cancer care is increasingly personalized and relevant to ever-smaller patient subsets, which limits robust comparison. For example, to clarify whether patients are receiving appropriate, guideline-directed therapy, the population studied must be refined by primary site, molecular subtype, line of therapy, performance status, and patient preference for or against therapy . Similarly, the quality indicators capturing patient-reported outcomes, such as health-related quality of life, although valuable and desired by consumers, require questionnaires not usually incorporated into busy clinical settings, as outlined in the paper from the CRC working group of the International Consortium for Health Outcomes Measurement . Finally, there is the challenge of capturing sufficient data to detect notable differences in outcomes between populations or practices. Outcome measures can be reliably captured only by large population-based datasets or large prospective studies, routinely and systematically collected and amenable to case mix adjustment—such as those of the UK National Health Service; the Surveillance, Epidemiology and End Results Program or National Cancer Database in the United States; or similarly large population-based cancer registries seen in Canada or Europe . Such data may lack the granularity to identify the cause of any disparity between sites, however, or to enable dedicated improvement efforts. Given the multiple health-care settings and stakeholders involved, selection of appropriate quality indicators will necessarily differ by scenario. Our search excluded studies not published in English and that captured only health services research in high-income countries, which is a limitation of our study and reduces generalizability. 
Although this scoping review has identified the heterogeneity of existing quality indicator research in CRC, it has also highlighted the fundamental lack of quality indicators to capture and evaluate the nuance of the metastatic phase of the disease in necessary detail. For example, only 2 CRC-specific quality indicators examined the breadth of information required to adequately characterize the molecular status of a patient needed to guide treatment selection—likely a substantial deficit given the expanding treatment armamentarium, including chemotherapy, immunotherapy, and targeted therapies . It is essential that optimal treatment be given to well-defined patient subsets to ensure high-quality care, with minimization of cost and toxicity . Emerging CRC quality indicators must therefore focus on patient selection, sequencing of treatments, and integration of modern diagnostic aids—imaging and omics included—to truly reflect best practice and value-based care. Finally, in an era of increasing patient input and collaboration, the design and implementation of quality indicators should involve effective consumer consultation. In summary, this review identified an abundance of generic process measures for CRC, clustered around the diagnostic and palliative phases of care. Renewed effort is essential to develop appropriate and meaningful quality indicators focused on the nuance and detail of clinical practice in this complex and costly phase of the disease. Although a major challenge of health services research has been capturing the requisite data to reliably measure health-care outcomes and empower future quality endeavors, there is every reason to be optimistic that big data will be transformative and that future work will focus on barriers and enablers to routine data collection. As sophisticated informatics is increasingly integrated into routine service delivery, including much-needed elements such as patient-reported outcomes, we can develop standardized quality indicators that truly reflect the quality of cancer care, benchmark across sites, and identify meaningful opportunities for improvement. pkae073_Supplementary_Data |
30+ years of media analysis of relevance to chronic disease: a scoping review

Chronic, non-communicable diseases (hereafter 'chronic diseases'), such as cancer, diabetes mellitus and cardiovascular disease, are a major contributor to the global burden of disease and are responsible for over 40 million deaths per year. Despite increasing recognition of the urgent need to tackle chronic diseases and growing evidence on both the effectiveness and cost-effectiveness of prevention, significant progress has not yet been made. Chronic diseases are a complex problem, with multifactorial causes that extend beyond individual behaviours and include the social, environmental and socio-economic aspects of the environments in which people live, work and play. Chronic disease prevention therefore requires coordinated, inter-sectoral efforts at the individual, community and population levels. For example, addressing childhood obesity is likely to require a range of interventions, including restricting junk food advertising to children, teaching cookery skills to new parents, providing nutritional information on food labels, changing school canteen menus, improving pricing and availability of fresh food, and reformulating processed foods. Garnering public and political support and momentum for such actions requires a shift away from thinking at the individual level to an appreciation of the social, environmental and cultural drivers for behaviour, and an understanding of the interrelated nature of chronic disease causes, risk factors and solutions.

The public is continually exposed to mass media, including news, entertainment and advertising media, through channels such as television, radio, movies, newspapers, magazines and the internet. Such exposure is likely to play a key role in shaping attitudes and behaviours of relevance to chronic disease prevention. News media lies at the nexus of the public and policy agenda, and news coverage of issues and events both shapes and reflects public and political opinion. While print newspapers are considered to be something of a 'dying industry', online news media exposure continues to increase, with much of the population having direct access 24 hours a day, 7 days a week, from almost any location. Thus, the news media continues to be a vital social institution, and digital technologies have reshaped this industry in recent years. In particular, the emergence of an array of new actors, such as BuzzFeed, The Huffington Post and The Conversation, along with the growth of social media platforms and blogs, has resulted in significant changes in who and what constitutes the news media institution. Further, the ease of sharing content across social networks, as well as the so-called 'echo-chamber' effect, has changed the flow of information, including what gets amplified and how. Understanding how these shifts in the media landscape affect the public and political agenda-setting process will therefore be of increasing importance going forward.

The study of news media communication occurs within a multidisciplinary paradigm with roots in sociology and political science, and draws heavily on framing theory, which concerns the "holistic study of media effects on individuals and audiences" (p. 423), focusing on four elements of the communication process: the sender, the receiver, the (informative) message and culture.
Framing theory posits that messages are packaged in particular ways to emphasize certain pieces of information and de-emphasise others, and particular framings will "promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation" (p.53). Research within this paradigm has revealed that the nature of information conveyed through the media, including what gets reported, the amount of coverage received and the way in which it is represented, can have a powerful effect on knowledge, attitudes, and behaviours. In addition to shaping societal attitudes towards issues, media coverage is a societal product in itself, such that issue framing is constrained by social structures, values and norms. Thus, understanding how issues are framed can provide insights into wider trends in society.

Other forms of media, including entertainment, commercial advertising, and social marketing, are also likely to play a role in influencing public attitudes, opinions and behaviours of relevance to chronic disease. For example, commercial advertising through television commercials, online advertising campaigns, and point-of-sale advertising is often used to influence consumer behaviours that may increase the risk of chronic diseases, such as by encouraging consumption of unhealthy foods or alcohol (e.g., ), and may also encourage the purchase of products or services that promote health, such as commercial weight loss programs or meal plans. Social marketing campaigns may employ mass media channels to encourage healthy behaviours, such as smoking cessation, responsible alcohol consumption, and cancer screening (see, for example, for a review of mass media campaigns to change health behaviour). Entertainment media, such as films, television shows and music videos, may influence attitudes and behaviours of relevance to chronic disease, for example through plotlines that raise awareness of issues related to chronic disease or by modelling behaviours such as smoking and alcohol consumption.

In recent years there has been a proliferation of media research on issues of relevance to chronic disease (including disease risks, causes and solutions). While such a growth in research is promising, both in terms of interest in this field and the potential for new and useful knowledge to emerge, the volume and breadth of evidence can be overwhelming for those who need to access the key messages from this research, such as policy makers and practitioners. In particular, both original research articles and reviews have tended to 'zoom in' on specific issues, such as how obesity is portrayed within news media (e.g. ) or the framing of arguments around smoking restrictions (e.g. ), and to date, no comprehensive synthesis or mapping of the area as a whole exists. Within this paper we aim to provide an initial mapping of media research on topics of relevance to chronic disease. In particular, we explore the scope and nature of research on how issues related to chronic disease prevention have been portrayed across various forms of media in order to provide an overview of the key focus areas and highlight gaps and opportunities for future investigation. In doing so we seek to address the following research questions: What are the key trends in research on media coverage of chronic diseases? How has research on media coverage of chronic diseases changed over time? What are the key gaps and opportunities for further research on media coverage of chronic diseases?
Aim

To map existing research examining mass media content of relevance to chronic disease.

Design

A scoping review was selected as it allows for rapid mapping of the key concepts underpinning a research area and the main sources and types of evidence available, and is most appropriate when endeavouring to: examine the extent, range, and nature of research activity; summarize and disseminate research findings; and/or identify gaps in the existing research. The methodology for this scoping review was based on the framework outlined by Arksey and O'Malley and ensuing recommendations made by Levac, Colquhoun and O'Brien. For the purpose of this study, a scoping review is defined as a type of research synthesis that aims to "map the literature on a particular topic or research area and provide an opportunity to identify key concepts; gaps in the research; and types and sources of evidence to inform practice, policymaking, and research" (p.2). The review included the following five key phases: (1) identifying the research question, (2) identifying relevant studies, (3) study selection, (4) data extraction, and (5) collating, summarizing, and reporting the results. The review was completed in accordance with the PRISMA-ScR checklist and a copy of the completed checklist can be found in Additional file.

Search strategy

We searched three electronic databases: MEDLINE (1946–), PsycINFO (1967–), and Global Health (1973–) via OVID in July 2016, to identify studies published in English. As the purpose of this review was to provide an overview of media research of relevance to lifestyle-related chronic diseases (e.g. cardiovascular disease, cancer, and diabetes) and their risk factors (e.g. smoking, alcohol consumption, obesity, physical activity), search terms were constructed across three concepts: topics and issues related to chronic disease (including search terms related to chronic diseases, risk factors, and public health), types of media (including advertising, news, entertainment and social media), and content or framing (see Table). Search terms were piloted and refined prior to use, including consultation with experts and checking for capture of studies that the authors expected to be included.
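To illustrate how a three-concept search of this kind can be assembled, the sketch below joins terms within each concept block with OR and joins the blocks with AND. The terms shown are illustrative placeholders rather than the review's actual search terms, which are reported in the Table.

```python
# Illustrative construction of a three-concept boolean search string.
# The terms below are placeholders, not the review's actual search terms (see Table).
concepts = {
    "chronic_disease": ["neoplasms", "diabetes mellitus", "cardiovascular diseases",
                        "obesity", "smoking", "alcohol drinking"],
    "media_type": ["newspapers", "television", "advertising", "social media", "mass media"],
    "content_or_framing": ["framing", "content analysis", "portrayal", "coverage"],
}

def block(terms):
    # OR together the terms within one concept.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND the three concept blocks together to form the final query.
query = " AND ".join(block(terms) for terms in concepts.values())
print(query)
```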
Study selection

In line with the recommendations of Levac, Colquhoun and O'Brien, the criteria for study inclusion were refined through discussion amongst the research team in an iterative manner as the reviewers became more familiar with the research. Studies were included if they reported original research related to media representations of chronic disease, including how issues are framed, the impact or effects of media representations (e.g. on public opinion or behaviour), and factors that influence media representations. Chronic diseases were defined as non-communicable conditions, for which there are a range of lifestyle-related risk factors, and included cardiovascular disease, cancer, and diabetes. Studies were included if they focused on any issues related to chronic disease, including prevalence, causes and risk factors (e.g. obesity, high blood pressure, diet, physical inactivity, alcohol, smoking, social/economic inequality), and/or prevention (including policies and programs). Although mental health issues were not a key focus of our search, a number of articles related to mental health were captured within our search terms. These were included as they represent an important group of chronic conditions for which media coverage is likely to impact on public and political attitudes towards prevention and treatment. Only original research articles were included; other types of publications, including systematic reviews, meta-analyses, letters, and guidelines, were not included within this review. Due to the volume of results returned by the database searches, further searching of grey literature and hand searching of reference lists and journals was beyond the scope of the study. We included published articles that focused on any form of public media, including news media (e.g. newspapers, magazines, TV news), social media (e.g. Twitter, Facebook, blogs), entertainment media (e.g. TV sitcoms, movies, music videos), and/or advertising and marketing (including commercial advertisements and social marketing) (see Table for definitions of media types). Conference abstracts, dissertations and other unpublished materials were not included within the review. One reviewer (SR) screened article titles and abstracts for eligibility and reviewed the full-text of articles identified as 'eligible' or 'unclear'. For reliability purposes, a second reviewer (TAB) reviewed a random subset of articles on the basis of titles and abstracts (n = 100) and full-texts (n = 30). There was a good level of agreement at both stages (title and abstract: 86% agreement; Cohen's k = .71; full-text: 93% agreement; Cohen's k = .84) and all disagreements were discussed and resolved. Figure outlines the flow of articles through the review process.
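For readers less familiar with the screening statistics reported above, the following sketch shows how percent agreement and Cohen's kappa can be computed from two reviewers' independent include/exclude decisions. The decision lists are invented for illustration and do not reproduce the screening data from this review.

```python
# Illustrative computation of percent agreement and Cohen's kappa for
# dual screening decisions. The decisions below are invented examples.
from collections import Counter

reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "include"]

n = len(reviewer_1)
observed = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / n

# Chance agreement derived from each reviewer's marginal proportions.
c1, c2 = Counter(reviewer_1), Counter(reviewer_2)
expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")
```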
Data extraction

A data extraction template was developed in Microsoft Excel to extract key details about included studies. Extracted data included study characteristics, research focus, sample and methods, media types and topics covered. The data extraction form was initially reviewed by the research team and pretested by SR and TAB before use, and was continually refined during the early stages of data extraction. The characteristics of each full-text article were extracted by one reviewer (SR or TAB), while a second reviewer (TAB or SR) performed data extraction on a randomly selected subset of full-text articles to check for consistency in information extracted. Comparison of extracted data indicated a high level of consistency and all disagreements were discussed and resolved.

Data synthesis

Extracted data were imported into NVivo qualitative data analysis software for additional coding and data synthesis. Following the process outlined by Arksey and O'Malley, this began with a quantitative, descriptive analysis of the studies included within the review, including the distribution of studies over time, media type and health topic, in order to identify the dominant areas of research and any significant gaps. Following this, a thematic approach was employed, in which data were coded inductively to identify key themes in the focus areas and research questions of the included studies, attending to similarities and differences within and across the main media types in a way which accounted for the heterogeneity across studies. Data synthesis was performed by one reviewer (SR) and refined through ongoing discussion with the research team. Due to the volume of studies identified, a comprehensive synthesis of findings across all studies was beyond the scope of the current paper. Instead, we have sought to categorise studies according to common themes and present examples of studies and key findings to highlight these.
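As an illustration of the descriptive step described above, the sketch below shows one way the extracted records could be tabulated by media type and health topic and converted into cumulative counts of studies per topic over time. The file name and column names are hypothetical stand-ins for the Excel extraction template.

```python
# Illustrative descriptive synthesis of the extraction spreadsheet.
# File and column names ("media_type", "health_topic", "year") are hypothetical.
import pandas as pd

studies = pd.read_csv("extraction_template.csv")

# Distribution of studies by media type and health topic.
counts = pd.crosstab(studies["media_type"], studies["health_topic"])
print(counts)

# Cumulative number of published studies per health topic over time.
per_year = (studies.groupby(["health_topic", "year"]).size()
                   .unstack("health_topic", fill_value=0)
                   .sort_index())
cumulative = per_year.cumsum()
print(cumulative.tail())
```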
Study characteristics

Four hundred and ninety-nine studies were included in the review. Table provides a description of the included studies and details of the key characteristics of each included study are also provided (see Additional File). The majority of studies (n = 297; 60%) were conducted in the USA, followed by Australia (n = 52; 10%), Canada (n = 37; 7%), and the United Kingdom (n = 31; 6%), and only 13 (3%) studies took a multi-country approach (e.g. a comparative analysis of media coverage across countries). News and information media were the most frequent focus of studies, followed by marketing media. Studies were categorised according to the approach taken. Descriptive studies were those that involved an analysis (whether qualitative, quantitative or both) of media content, and were the most common study type within the sample (n = 446). Descriptive studies were most often cross-sectional in nature, i.e., the analysis of news coverage of a particular issue at a particular point in time, although some studies took a longitudinal approach, for example examining patterns in media coverage over time. A smaller number of studies (n = 60) employed an experimental approach, seeking to test the impact of differences in how chronic diseases were portrayed on a specified variable, e.g. testing the effect of presenting different framings of a news story on public attitudes to chronic disease, and included both lab-based and naturalistic studies. For studies of media content, the sample timeframe was most often between 0 and 5 years in duration, with a small proportion sampling over a duration exceeding 10 years. Newspapers were the most common media channel examined within our sample, followed by television and online media. The number of studies increased over time. Studies covered a range of health topics related to chronic disease prevention, with the majority of studies (n = 342; 69%) focusing on behavioural risk factors related to chronic disease, particularly smoking and nutrition. Just over a quarter of studies (n = 134; 27%) focused on specific chronic diseases, including cancer (n = 93; 19%), type 2 diabetes (n = 15; 3%), cardiovascular disease (n = 16; 3%), and other chronic diseases (e.g. chronic kidney disease, hypertension; n = 9; 2%). Eighty-three studies (17%) focused on other health topics relevant to chronic disease prevention, such as oral health, mental health, and child and maternal health. The cumulative frequency of studies for each health topic over time is displayed in Fig.

Synthesis of included studies

Due to the volume of studies in our sample, for the purpose of synthesis we have grouped studies according to four broad media categories: 1) news media, 2) entertainment media, 3) social media, and 4) marketing media (see Table 2 for definitions of the media types used within this study). Mapping of the cumulative frequency of studies over time (see Fig.) revealed that news media has remained the most frequent focus of studies, followed by studies of marketing media (including both commercial marketing, e.g. of unhealthy products such as cigarettes, and social marketing, e.g. smoking cessation campaigns). However, in recent years, there has been an increase in the number of studies examining entertainment media such as television dramas, music and film, as well as an increase in studies of social media, such as Facebook and Twitter. The distribution of health topics varied across the categories of media examined (see Fig.). While chronic diseases, obesity and other health topics were most frequently examined in the context of news media, nutrition was considered most often in relation to marketing media, and smoking, alcohol, and physical activity were considered at a similar rate in both news and marketing media.
News media

A total of 264 studies reported research on news media. Studies of news media included descriptive analyses of news content, studies of audience exposure to news, and investigation of factors that influence news reporting. Figure provides an overview of the main themes and sub-themes of research within the news media category, and these are summarised in more detail below, along with example studies to illustrate.

Content of news media

A large proportion of studies (n = 244) focused on the content of news media, particularly in terms of the amount and/or type of news coverage of health issues (n = 207), and the characteristics of such coverage (n = 191). The majority of studies used content analysis approaches (e.g. ), with a smaller proportion of studies using other qualitative approaches, such as discourse analysis, to explore the patterns and trends in news media coverage (e.g. ). Of those studies examining the amount and/or type of news coverage, a key focus was on news coverage over time (n = 71), particularly in terms of changes in the amount of coverage received and key themes within the coverage (e.g. ). For example, studies have found that the amount of news coverage of obesity, cancer, and smoking-related harms has increased over time. Other studies examined how the nature of news coverage had changed over time, for example demonstrating temporal changes in predominant themes and framing of tobacco, alcohol use, obesity, social and racial disparities in health, and mental health issues. Other studies have used critical analysis methods to track how issues such as second-hand smoke have emerged over time. Another focus area was the impact of events or actions (e.g. implementation of interventions and policies) on news coverage (n = 15; e.g. ). For example, one study considered how the framing of obesity shifted over the course of a sugar-sweetened beverage reduction media campaign, while another considered how news coverage of skin cancer changed following the release of a key public health report on cancer. Nineteen studies compared the amount of news coverage received by different health topics (e.g. ) and/or whether the amount of news coverage received was proportionate to the burden of the problem (e.g. ). For example, two studies demonstrated that news coverage of a range of cancers is underrepresented relative to their population burden. Finally, studies have also considered how coverage differs across news media, including differences across news media aimed at different cultural or language groups (e.g. ), geographical regions (e.g. ), and news media types, such as middle market versus quality newspapers (e.g. ). Studies focusing on characteristics of news coverage predominantly considered the framing of issues related to chronic disease prevention (n = 147). The synthesis revealed that the most frequent focus was on valence of coverage (i.e. whether issues were framed positively or negatively) (e.g. ), and responsibility for causes and solutions (e.g. individual versus government or industry responsibility) (e.g. ), with studies focusing on obesity being particularly prevalent here (e.g. ). Studies of valence and framing included those examining news coverage of particular behaviours of relevance to chronic disease, such as breastfeeding and smoking, as well as those examining support for policy actions, such as regulation to limit sales of sugar-sweetened beverages, an 'alcopop tax' on ready-to-drink spirits in Australia, and legislation for plain packaging of tobacco. Examples of other specific types of frames studied included gain versus loss frames (e.g. ), thematic frames (which focus on the broader context) versus episodic frames (which focus on the immediate event or incident and give little or no context) (e.g. ), and health versus appearance frames (e.g. ). Twenty-six studies considered the quality of news media content, including how well content aligned with guidelines or recommendations (e.g. ). For example, one study examined the accuracy of information and level of stigmatisation around obesity in newspaper articles, while another considered the relationship between the amount of news coverage of food groups and the recommended amount of consumption of these foods. Finally, a number of studies also considered structural characteristics of news media, including the prominence of articles and use of images (e.g. ), while others considered the actors, evidence or sources used within news articles (e.g. ).
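One of the comparisons described above, whether news coverage is proportionate to disease burden, can be illustrated with a simple ratio of each condition's share of news items to its share of burden. All numbers below are invented placeholders rather than values from the cited studies.

```python
# Illustrative coverage-to-burden comparison. All numbers are invented placeholders.
news_items = {"breast cancer": 420, "lung cancer": 150, "bowel cancer": 90}
deaths = {"breast cancer": 3000, "lung cancer": 8500, "bowel cancer": 4000}

total_items, total_deaths = sum(news_items.values()), sum(deaths.values())
for condition in news_items:
    coverage_share = news_items[condition] / total_items
    burden_share = deaths[condition] / total_deaths
    # A ratio below 1 suggests under-representation relative to burden.
    print(f"{condition}: coverage/burden ratio = {coverage_share / burden_share:.2f}")
```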
Factors that influence news reporting

Five studies examined factors that influence health reporting in the news media. Two used surveys to examine associations between journalist characteristics, such as gender, age, ethnicity and experience, and news story characteristics, such as framing, source utilisation, and news priorities. A third study explored how journalists judge the newsworthiness of stories that report race-specific health disparities and whether informing journalists of audience reactions to different kinds of framing influences these judgements. The remaining two used interviews to explore the barriers faced by journalists when covering health disparities in the media, and to seek the opinions of health experts on the problems of dominant obesity-prevention frames (personal responsibility and the environment) and explore alternative frames.

Exposure to news media

Of the 39 studies examining audience exposure to news media, eight focused on awareness of and/or attitudes towards news media, including investigations of public awareness of news coverage of chronic disease topics and health promotion campaigns (e.g. ), attitudes towards news coverage of issues such as obesity, factors that drive audience interest in prevention, and sociodemographic influences on exposure to news media. A number of studies considered the effects of exposure to news media on, or its association with, actual or intended behaviours (n = 9; e.g. ), or on knowledge, attitudes, and beliefs about the causes, consequences and solutions to a range of health issues (n = 25; e.g. ). Such studies often employed experimental designs to test the impact of differences in framing (e.g. negative versus positive, thematic versus episodic, and gain versus loss frames), evidence use, and message salience (e.g. ). For example, one study found that participants who read a news article in which obesity was framed in societal terms (i.e. highlighting the role of the environment), rather than individual terms, were more likely to attribute obesity to social conditions and to identify the government, food industry, and marketing sector as responsible for solving the problem. Other studies examined the relationship between community-level news exposure and individual attitudes and behaviours using a combination of content analysis, surveys, interviews, and community-level health data (e.g. ). For example, using content analysis of local news media coverage of tobacco and community survey data, Smith and colleagues found an association between the volume of tobacco-related newspaper articles and perceived harms of smoking, perceived peer smoking, disapproval of smoking, and smoking within the past 30 days. Eight studies considered the impact of news exposure on attitudes towards public policies to tackle chronic disease. For example, one study found that thematic framing (i.e. incorporating information on context, risk factors, prevention strategies, and social attributions of responsibility) increases support for policy change across a range of health issues, including obesity, smoking and diabetes, while another found that a taste-engineering frame (i.e. highlighting strategies used by the food industry to increase consumption) increases support for food and beverage policies. In contrast, individualising the problem of obesity by identifying an individual child within a news story was associated with reduced support for obesity policies, regardless of how causes of obesity were framed. Finally, a study in the US demonstrated that the effect of framing on policy support is mediated by political opinion, with Democrats expressing a higher level of support for a range of public health policies after exposure to a social determinants of health frame, while Republicans expressed a lower level of support following exposure to the same message.
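As a sketch of how the experimental framing studies summarised above are often analysed, the example below compares the proportion of participants supporting a policy between a thematic-frame arm and an episodic-frame arm using a two-proportion z-test. The counts are invented, and real studies typically report richer models, for example regression with covariates and interaction terms.

```python
# Illustrative analysis of a framing experiment: does a thematic frame
# increase policy support relative to an episodic frame? Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

supporters = [68, 49]      # participants endorsing the policy in each arm
group_sizes = [120, 118]   # thematic frame, episodic frame

z_stat, p_value = proportions_ztest(count=supporters, nobs=group_sizes)
print(f"thematic support = {supporters[0] / group_sizes[0]:.2f}, "
      f"episodic support = {supporters[1] / group_sizes[1]:.2f}, "
      f"z = {z_stat:.2f}, p = {p_value:.3f}")
```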
Entertainment media

Forty-five studies examined entertainment media, with most focusing on televised entertainment (including reality shows, drama, soaps and documentaries). The majority of studies involved descriptive analyses of entertainment media, and/or investigations into the effects of exposure to entertainment media. Figure provides an overview of the main themes within this media category.

Content of entertainment media

Thirty-four studies considered the amount of coverage received by health topics (e.g. chronic disease prevention), products (e.g. alcohol, cigarettes, unhealthy food) and behaviours (e.g. eating, drinking, smoking, weight stigmatization) within entertainment media. One study considered the impact of regulation on the frequency of tobacco placement in movies. Over half of the studies considered the characteristics of coverage in entertainment media (n = 23), for example whether behaviour is portrayed in positive or negative terms (e.g. ), or using message appeal strategies such as sexualisation, glamour or humour (e.g. ). For example, one study found that depictions of alcohol in popular music were associated with wealth, sex, and luxury. Four studies considered whether portrayals of food and drinks within entertainment media aligned with health recommendations, finding that they often do not. Nine studies examined the attributes of the characters involved in entertainment media representations, for example in terms of gender, ethnicity, and age (e.g. ).

Exposure to entertainment media

Of the nine studies that considered exposure to entertainment media, the main focus areas were audience awareness of the issues portrayed through entertainment media (n = 3; e.g. ), audience attitudes towards portrayal of these issues (n = 5; e.g. ), and the effects of exposure to entertainment media on attitudes and behaviours (n = 6; e.g. ). For example, one study explored audience awareness of and attitudes towards an online social marketing campaign coupled with a popular TV series which aimed to reduce harmful alcohol consumption, while another examined the impact of alcohol portrayals in a television soap on adolescents' attitudes towards alcohol.
Social media

Forty-nine studies examined social media channels including Twitter and YouTube, social networking sites such as Facebook and MySpace, blogs, and online discussion boards. Studies of social media primarily examined the content of social media (n = 48) and/or factors related to social media exposure (n = 14), including levels of social media engagement and the effects of exposure to messages via social media. Figure provides an overview of the main themes of research within this media category.

Analysis of social media content

Of the 48 studies that examined the content of social media messages, 28 focused on the amount of coverage of issues related to chronic disease, and included studies of the number of tweets, blog posts or online comments about a particular issue or topic (e.g. smoking regulation, e-cigarettes, or alcohol use) (e.g. ). For example, one study examined the number of tweets related to hookah smoking, while another examined the frequency of health-related tweets by health professionals on Twitter. Thirty-five studies examined characteristics of social media content. These included considerations of how issues such as smoking, alcohol use, cancer and eating disorders are depicted, for example in terms of the key themes in coverage of health topics (e.g. ), the use of message appeal strategies and images (e.g. ), and studies of the quality of information conveyed through social media, including whether the information aligned with health recommendations (e.g. ). For example, one study examined how responsibility and solutions for obesity are framed within YouTube videos. Other studies considered how users talk about issues on social media (e.g. ), including the valence of messages, such as public sentiment towards policy and regulation (e.g. ) and health promotion campaigns (e.g. ).

Exposure to social media

There were three main sub-themes identified within studies of exposure to social media coverage. The first examined audience awareness of or attitudes towards social media coverage of issues related to chronic disease (e.g. ). For example, one study used focus groups and surveys to explore women's attitudes towards healthy eating blogs and their beliefs and attitudes towards using such blogs to improve their dietary habits, while another examined how friends react to adolescents' portrayals of alcohol on Facebook. The second sub-theme contained studies that examined the factors associated with exposure to and/or engagement with social media coverage of issues related to chronic disease (e.g. ). These included a study of the demographic factors associated with display of alcohol references on MySpace, and another examining whether exposure to tobacco content online was associated with smoking status. Finally, one study examined the effect of exposure to social media messages on behaviour.
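The simplest of the content analyses described above, counting posts that mention a topic, can be sketched as a keyword filter over a corpus of posts. The posts and keyword list below are invented, and published studies typically add deduplication, language filtering and human coding of the retrieved posts.

```python
# Illustrative keyword-based count of social media posts mentioning a topic.
# The posts and keywords are invented examples.
import re

posts = [
    "Trying to quit smoking this month, wish me luck",
    "New vape flavours just dropped",
    "Great run along the river today",
]
keywords = ["smoking", "cigarette", "vape", "e-cig"]

pattern = re.compile("|".join(map(re.escape, keywords)), flags=re.IGNORECASE)
matching = [p for p in posts if pattern.search(p)]
print(f"{len(matching)} of {len(posts)} posts mention the topic")
```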
Marketing media

Overall, 159 studies focused on marketing media, of which the majority concerned commercial marketing (n = 110), with a smaller proportion concerning social marketing (e.g. health promotion campaigns) (n = 58). Figure provides an overview of the main themes within this media category.

Commercial marketing

Of the 109 studies focused on commercial marketing media, the majority (n = 107) focused on examining product portrayals within commercial advertisements and product packaging, including the frequency of advertisements and the content and characteristics of marketing strategies (e.g. ), with the majority of studies focusing on tobacco and food advertising. For example, one study explored cigarette marketing strategies in India by examining cigarette advertising on billboards, storefronts and at point of sale as well as in films, magazines and newspapers, while another examined how tobacco companies increase magazine advertising in January and February to pre-empt quitting by providing cues to smoking. Other studies examined how marketing strategies such as physical activity references (e.g. ), personal attributes (e.g. ), emotional appeals (e.g. ), and sexual imagery (e.g. ) were used to market products. Nearly a quarter of studies (n = 21) focused on marketing regulations, with the majority of these considering the impact of regulation on advertising practices (e.g. ). For example, one study evaluated the impact of industry self-regulation on television marketing of food to children, while another examined adherence to federal and voluntary standards for alcohol advertising in magazines. Both studies found that while advertising regulations resulted in fewer advertisements, industries find ways to circumnavigate such restrictions. Other studies examined industry counter-strategies in response to advertising regulation, such as the use of brand imagery to promote tobacco use in the face of advertising restrictions (e.g. ).

Social marketing

Fifty-eight studies explored social marketing for health promotion. Of these, 34 involved an analysis of the content and characteristics of social marketing media, such as content analysis of the characteristics of antismoking or physical activity advertisements. Other studies explored the impact of social marketing strategies on consumers' attitudes and behaviours, for example using experimental approaches to examine the impact of message framing (e.g. gain- versus loss-framing) on health-related attitudes and behaviours such as seeking smoking cessation support, visiting the dentist, healthy snack choice, and chronic disease risk perception. Eleven studies used focus groups, interviews and/or surveys to explore public perceptions of social marketing strategies (e.g., awareness, recall, liking, and perceived effectiveness of health promotion campaigns).
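To illustrate the kind of compliance check described above for advertising placement standards, the sketch below flags advertisement placements whose underage audience share exceeds a stated threshold. The placements and the 30% threshold are hypothetical and do not correspond to any specific regulation assessed in the cited studies.

```python
# Illustrative compliance check for advertising placement standards.
# Placements and the 30% threshold are hypothetical examples.
placements = [
    {"magazine": "Title A", "underage_share": 0.18},
    {"magazine": "Title B", "underage_share": 0.34},
    {"magazine": "Title C", "underage_share": 0.27},
]
THRESHOLD = 0.30

violations = [p for p in placements if p["underage_share"] > THRESHOLD]
rate = len(violations) / len(placements)
print(f"{len(violations)} of {len(placements)} placements "
      f"({rate:.0%}) exceed the audience-share threshold")
```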
Four hundred and ninety-nine studies were included in the review. Table provides a description of the included studies and details of the key characteristics of each included study are also provided (see Additional File ). The majority of studies ( n =297; 60%) were conducted in the USA, followed by Australia ( n =52; 10%), Canada ( n =37; 7%), and the United Kingdom ( n =31; 6%), and only 13 (3%) studies took a multi-country approach (e.g. a comparative analysis of media coverage across countries). News and information media were the most frequent focus of studies followed by marketing media. Studies were categorised according to the approach taken. Descriptive studies were those that involved an analysis (whether qualitative, quantitative or both) of media content, and were the most common study type within the sample ( n =446). Descriptive studies were most often cross-sectional in nature, i.e., the analysis of news coverage of a particular issue at a particular point in time, although some studies took a longitudinal approach, for example examining patterns in media coverage over time. A smaller number of studies ( n =60) employed an experimental approach, seeking to test the impact of differences in how chronic diseases were portrayed on a specified variable, e.g. testing the effect of presenting different framings of a news story on public attitudes to chronic disease, and included both lab-based and naturalistic studies. For studies of media content, the sample timeframe was most often between 0 and 5 years in duration, with a small proportion sampling over a duration exceeding 10 years. Newspapers were the most common media channel examined within our sample, followed by television and online media. The number of studies increased over time. Studies covered a range of health topics related to chronic disease prevention, with the majority of studies ( n =342; 69%) focusing on behavioural risk factors related to chronic disease, particularly smoking and nutrition. Just over a quarter of studies ( n =134; 27%) focused on specific chronic diseases, including cancer ( n =93; 19%), type 2 diabetes ( n =15; 3%), cardiovascular disease ( n =16; 3%), and other chronic diseases (e.g. chronic kidney disease, hypertension; n =9; 2%). Eighty-three studies (17%) focused on other health topics relevant to chronic disease prevention, such as oral health, mental health, and child and maternal health. The cumulative frequency of studies for each health topic over time is displayed in Fig .
Due to the volume of studies in our sample, for the purpose of synthesis we have grouped studies according to four broad media categories: 1) news media, 2) entertainment media, 3) social media, and 4) marketing media (see Table 2 for definitions of the media types used within this study). Mapping of the cumulative frequency of studies over time (see Fig. ) revealed that news media has remained the most frequent focus of studies, followed by studies of marketing media (including both commercial marketing, e.g. of unhealthy products such as cigarettes, and social marketing, e.g. smoking cessation campaigns). However, in recent years, there has been an increase in the number of studies examining entertainment media such as television dramas, music and film, as well as an increase in studies of social media, such as Facebook and Twitter. The distribution of health topics varied across the categories of media examined (see Fig. ). While chronic diseases, obesity and other health topics were most frequently examined in the context of news media, nutrition was considered most often in relation to marketing media, and smoking, alcohol, and physical activity were considered at a similar rate in both news and marketing media.
A total of 264 studies reported research on news media. Studies of news media included descriptive analyses of news content, studies of audience exposure to news, and investigation of factors that influence news reporting. Figure provides an overview of the main themes and sub-themes of research within the news media category, and these are summarised in more detail below, along with example studies to illustrate. Content of news media A large proportion of studies ( n =244) focused on the content of news media , particularly in terms of the amount and/or type of news coverage of health issues ( n =207), and the characteristics of such coverage ( n =191). The majority of studies used content analysis approaches (e.g. ), with a smaller proportion of studies using other qualitative approaches, such as discourse analysis, to explore the patterns and trends in news media coverage (e.g. ). Of those studies examining the amount and/or type of news coverage, a key focus was on news coverage over time ( n =71) , particularly in terms of changes in the amount of coverage received and key themes within the coverage (e.g. ) . For example, studies have found that the amount of news coverage of obesity , cancer , and smoking-related harms have increased over time. Other studies examined how the nature of news coverage had changed over time, for example demonstrating temporal changes in predominant themes and framing of tobacco , alcohol use , obesity , social and racial disparities in health , and mental health issues . Other studies have used critical analysis methods to track how issues such as second hand-smoke have emerged over time . Another focus area was the impact of events or actions (e.g. implementation of interventions and policies) on news coverage ( n =15; e.g. ). For example, one study considered how the framing of obesity shifted over the course of a sugar-sweetened beverage reduction media campaign , while another considered how news coverage of skin cancer changed following the release of a key public health report on cancer . Nineteen studies compared the amount of news coverage received by different health topics (e.g. ) and/or whether the amount of news coverage received was proportionate to the burden of the problem (e.g. ). For example, two studies demonstrated that news coverage of a range of cancers is underrepresented relative to their population burden . Finally, studies have also considered how coverage differs across news media , including differences across news media aimed at different cultural or language groups (e.g. ), geographical regions (e.g. ), and news media types, such as middle market versus quality newspapers (e.g. ). Studies focusing on characteristics of news coverage predominantly considered the framing of issues related to chronic disease prevention ( n =147). The synthesis revealed that the most frequent focus was on valence of coverage (i.e. whether issues were framed positively or negatively) (e.g. ), and responsibility for causes and solutions (e.g. individual versus government or industry responsibility) (e.g. ), with studies focusing on obesity being particularly prevalent here (e.g. ). 
Studies of valence and framing included those examining news coverage of particular behaviours of relevance to chronic disease, such as breastfeeding and smoking , as well as those examining support for policy actions, such as regulation to limit sales of sugar sweetened beverages , an ‘alcopop tax’ on ready-to-drink spirits in Australia , and legislation for plain packaging of tobacco . Examples of other specific types of frames studied included gain versus loss frames (e.g. ), thematic (which focus on the broader context) versus episodic frames (which focus on the immediate event or incident and give little or no context) (e.g. ) and health versus appearance frames (e.g. ). Twenty-six studies considered the quality of news media content , including how well content aligned with guidelines or recommendations (e.g. ). For example, one study examined the accuracy of information and level of stigmatisation around obesity in newspaper articles , while another considered the relationship between the amount of news coverage of food groups compared with the recommended amount of consumption of these foods. Finally, a number of studies also considered structural characteristics of news media, including the prominence of articles and use of images (e.g. ), while others considered the actors, evidence or sources used within news articles (e.g. ). Factors that influence news reporting Five studies examined factors that influence health reporting in the news media. Two used surveys to examine associations between journalist characteristics such as gender, age, ethnicity and experience, and news story characteristics, such as framing, source utilisation, and news priorities . A third study explored how journalists judge the newsworthiness of stories that report race-specific health disparities and whether informing journalists of audience reactions to different kinds of framing influences these judgements . The remaining two used interviews to explore the barriers faced by journalists when covering health disparities in the media , and to seek the opinions of health experts on the problems of dominant obesity-prevention frames (personal responsibility and the environment) and explore alternative frames . Exposure to news media Of the 39 studies examining audience exposure to news media, eight focused on awareness of and/or attitudes towards news media, including investigations of public awareness of news coverage of chronic disease topics and health promotion campaigns (e.g. ), attitudes towards news coverage of issues such as obesity , factors that drive audience interest in prevention , and sociodemographic influences on exposure to news media . A number of studies considered the effects of exposure to news media on or association with actual or intended behaviours ( n =9; e.g. ), or on knowledge, attitudes, and beliefs about the causes, consequences and solutions to a range of health issues ( n =25, e.g. ). Such studies often employed experimental designs to test the impact of differences in framing (e.g. negative versus positive, thematic versus episodic, and gain versus loss frames), evidence use, and message salience (e.g. ). For example, one study found that participants who read a news article in which obesity was framed in societal (i.e. highlighting the role of the environment), rather than individual terms, were more likely to attribute obesity to social conditions and identify the government, food industry, and marketing sector to be responsible for solving the problem . 
Other studies examined the relationship between community level news exposure and individual attitudes and behaviours using a combination of content analysis, surveys, interviews, and community-level health data (e.g. ). For example using content analysis of local news media coverage of tobacco and community survey data, Smith and colleagues found an association between volume of tobacco related newspaper articles and perceived harms of smoking, perceived peer smoking, disapproval of smoking, and smoking within the past 30 days. Eight studies considered the impact of news exposure on attitudes towards public policies to tackle chronic disease . For example, one study found that thematic framing (i.e. incorporating information on context, risk factors, prevention strategies, and social attributions of responsibility), increases support for policy change across a range of health issues, including obesity, smoking and diabetes , while another found that a taste-engineering frame (i.e. highlighting strategies used by the food industry to increase consumption), increases support for food and beverage policies . In contrast, individualising the problem of obesity by identifying an individual child within a news story was associated with reduced support for obesity policies, regardless of how causes of obesity were framed . Finally, a study in the US demonstrated that the effect of framing on policy support is mediated by political opinion, with Democrats expressing a higher level of support for a range of public health policies after exposure to a social determinants of health frame, while Republicans expressed a lower level of support following exposure to the same message .
A large proportion of studies ( n =244) focused on the content of news media , particularly in terms of the amount and/or type of news coverage of health issues ( n =207), and the characteristics of such coverage ( n =191). The majority of studies used content analysis approaches (e.g. ), with a smaller proportion of studies using other qualitative approaches, such as discourse analysis, to explore the patterns and trends in news media coverage (e.g. ). Of those studies examining the amount and/or type of news coverage, a key focus was on news coverage over time ( n =71) , particularly in terms of changes in the amount of coverage received and key themes within the coverage (e.g. ) . For example, studies have found that the amount of news coverage of obesity , cancer , and smoking-related harms have increased over time. Other studies examined how the nature of news coverage had changed over time, for example demonstrating temporal changes in predominant themes and framing of tobacco , alcohol use , obesity , social and racial disparities in health , and mental health issues . Other studies have used critical analysis methods to track how issues such as second hand-smoke have emerged over time . Another focus area was the impact of events or actions (e.g. implementation of interventions and policies) on news coverage ( n =15; e.g. ). For example, one study considered how the framing of obesity shifted over the course of a sugar-sweetened beverage reduction media campaign , while another considered how news coverage of skin cancer changed following the release of a key public health report on cancer . Nineteen studies compared the amount of news coverage received by different health topics (e.g. ) and/or whether the amount of news coverage received was proportionate to the burden of the problem (e.g. ). For example, two studies demonstrated that news coverage of a range of cancers is underrepresented relative to their population burden . Finally, studies have also considered how coverage differs across news media , including differences across news media aimed at different cultural or language groups (e.g. ), geographical regions (e.g. ), and news media types, such as middle market versus quality newspapers (e.g. ). Studies focusing on characteristics of news coverage predominantly considered the framing of issues related to chronic disease prevention ( n =147). The synthesis revealed that the most frequent focus was on valence of coverage (i.e. whether issues were framed positively or negatively) (e.g. ), and responsibility for causes and solutions (e.g. individual versus government or industry responsibility) (e.g. ), with studies focusing on obesity being particularly prevalent here (e.g. ). Studies of valence and framing included those examining news coverage of particular behaviours of relevance to chronic disease, such as breastfeeding and smoking , as well as those examining support for policy actions, such as regulation to limit sales of sugar sweetened beverages , an ‘alcopop tax’ on ready-to-drink spirits in Australia , and legislation for plain packaging of tobacco . Examples of other specific types of frames studied included gain versus loss frames (e.g. ), thematic (which focus on the broader context) versus episodic frames (which focus on the immediate event or incident and give little or no context) (e.g. ) and health versus appearance frames (e.g. ). 
Twenty-six studies considered the quality of news media content , including how well content aligned with guidelines or recommendations (e.g. ). For example, one study examined the accuracy of information and level of stigmatisation around obesity in newspaper articles , while another considered the relationship between the amount of news coverage of food groups compared with the recommended amount of consumption of these foods. Finally, a number of studies also considered structural characteristics of news media, including the prominence of articles and use of images (e.g. ), while others considered the actors, evidence or sources used within news articles (e.g. ).
Five studies examined factors that influence health reporting in the news media. Two used surveys to examine associations between journalist characteristics such as gender, age, ethnicity and experience, and news story characteristics, such as framing, source utilisation, and news priorities . A third study explored how journalists judge the newsworthiness of stories that report race-specific health disparities and whether informing journalists of audience reactions to different kinds of framing influences these judgements . The remaining two used interviews to explore the barriers faced by journalists when covering health disparities in the media , and to seek the opinions of health experts on the problems of dominant obesity-prevention frames (personal responsibility and the environment) and explore alternative frames .
Of the 39 studies examining audience exposure to news media, eight focused on awareness of and/or attitudes towards news media, including investigations of public awareness of news coverage of chronic disease topics and health promotion campaigns (e.g. ), attitudes towards news coverage of issues such as obesity , factors that drive audience interest in prevention , and sociodemographic influences on exposure to news media . A number of studies considered the effects of exposure to news media on or association with actual or intended behaviours ( n =9; e.g. ), or on knowledge, attitudes, and beliefs about the causes, consequences and solutions to a range of health issues ( n =25, e.g. ). Such studies often employed experimental designs to test the impact of differences in framing (e.g. negative versus positive, thematic versus episodic, and gain versus loss frames), evidence use, and message salience (e.g. ). For example, one study found that participants who read a news article in which obesity was framed in societal (i.e. highlighting the role of the environment), rather than individual terms, were more likely to attribute obesity to social conditions and identify the government, food industry, and marketing sector to be responsible for solving the problem . Other studies examined the relationship between community level news exposure and individual attitudes and behaviours using a combination of content analysis, surveys, interviews, and community-level health data (e.g. ). For example using content analysis of local news media coverage of tobacco and community survey data, Smith and colleagues found an association between volume of tobacco related newspaper articles and perceived harms of smoking, perceived peer smoking, disapproval of smoking, and smoking within the past 30 days. Eight studies considered the impact of news exposure on attitudes towards public policies to tackle chronic disease . For example, one study found that thematic framing (i.e. incorporating information on context, risk factors, prevention strategies, and social attributions of responsibility), increases support for policy change across a range of health issues, including obesity, smoking and diabetes , while another found that a taste-engineering frame (i.e. highlighting strategies used by the food industry to increase consumption), increases support for food and beverage policies . In contrast, individualising the problem of obesity by identifying an individual child within a news story was associated with reduced support for obesity policies, regardless of how causes of obesity were framed . Finally, a study in the US demonstrated that the effect of framing on policy support is mediated by political opinion, with Democrats expressing a higher level of support for a range of public health policies after exposure to a social determinants of health frame, while Republicans expressed a lower level of support following exposure to the same message .
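The community-level studies described above combine a content-analysis-derived exposure measure with individual survey responses. The following Python sketch illustrates the general shape of such an analysis; the data frames, column names, and model are hypothetical examples rather than the design or data of any cited study, and a full analysis would also need to account for clustering of respondents within communities.

```python
# Illustrative sketch only: linking community-level news coverage (from content
# analysis) to individual survey responses. All inputs and names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs:
#   articles: one row per coded newspaper article, with a 'community' column
#   survey:   one row per respondent, with 'community', 'age',
#             and a 0/1 outcome 'smoked_past_30_days'
def news_volume_vs_smoking(articles: pd.DataFrame, survey: pd.DataFrame):
    # Community-level exposure measure: number of tobacco-related articles coded.
    volume = articles.groupby("community").size().rename("article_volume").reset_index()
    merged = survey.merge(volume, on="community", how="left").fillna({"article_volume": 0})

    # Logistic regression of individual behaviour on community news volume,
    # adjusting for a respondent-level covariate. A real analysis would use a
    # multilevel model to account for clustering within communities.
    model = smf.logit("smoked_past_30_days ~ article_volume + age", data=merged).fit(disp=False)
    return model.summary()
```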
Forty-five studies examined entertainment media, with most focusing on televised entertainment (including reality shows, drama, soaps and documentaries). The majority of studies involved descriptive analyses of entertainment media, and/or investigations into the effects of exposure to entertainment media. Figure provides an overview of the main themes within this media category.

Content of entertainment media

Thirty-four studies considered the amount of coverage received by health topics (e.g. chronic disease prevention ), products (e.g. alcohol, cigarettes, unhealthy food ) and behaviours (e.g. eating, drinking, smoking, weight stigmatization ) within entertainment media. One study considered the impact of regulation on the frequency of tobacco placement in movies . Over half of the studies considered the characteristics of coverage in entertainment media ( n =23), for example whether behaviour is portrayed in positive or negative terms (e.g. ), or using message appeal strategies such as sexualisation, glamour or humour (e.g. ). For example, one study found that depictions of alcohol in popular music were associated with wealth, sex, and luxury . Four studies considered whether portrayals of food and drinks within entertainment media aligned with health recommendations , finding that they often do not . Nine studies examined the attributes of the characters involved in entertainment media representations, for example in terms of gender, ethnicity, and age (e.g. ).

Exposure to entertainment media

Of the nine studies that considered exposure to entertainment media, the main focus areas were audience awareness of the issues portrayed through entertainment media ( n =3; e.g. ), audience attitudes towards portrayal of these issues ( n =5; e.g. ), and the effects of exposure to entertainment media on attitudes and behaviours ( n =6; e.g. ). For example, one study explored audience awareness of and attitudes towards an online social marketing campaign coupled with a popular TV series which aimed to reduce harmful alcohol consumption , while another examined the impact of alcohol portrayals in a television soap on adolescents' attitudes towards alcohol .
Forty-nine studies examined social media channels including Twitter and YouTube, social networking sites such as Facebook and MySpace, blogs, and online discussion boards. Studies of social media primarily examined the content of social media ( n =48) and/or factors related to social media exposure ( n =14), including levels of social media engagement and the effects of exposure to messages via social media. Figure provides an overview of the main themes of research within this media category.

Analysis of social media content

Of the 48 studies that examined the content of social media messages, 28 focused on the amount of coverage of issues related to chronic disease, and included studies of the number of tweets, blog posts or online comments about a particular issue or topic (e.g. smoking regulation, e-cigarettes, or alcohol use) (e.g. ). For example, one study examined the number of tweets related to hookah smoking , while another examined the frequency of health-related tweets by health professionals on Twitter . Thirty-five studies examined characteristics of social media content . These included considerations of how issues such as smoking, alcohol use, cancer and eating disorders are depicted, for example in terms of the key themes in coverage of health topics (e.g. ), the use of message appeal strategies and images (e.g. ), and studies of the quality of information conveyed through social media, including whether the information aligned with health recommendations (e.g. ). For example, one study examined how responsibility and solutions for obesity are framed within YouTube videos . Other studies considered how users talk about issues on social media (e.g. ), including the valence of messages, such as public sentiment towards policy and regulation (e.g. ) and health promotion campaigns (e.g. ).

Exposure to social media

There were three main sub-themes identified within studies of exposure to social media coverage. The first examined audience awareness of or attitudes towards social media coverage of issues related to chronic disease (e.g. ). For example, one study used focus groups and surveys to explore women’s attitudes towards healthy eating blogs and their beliefs and attitudes towards using such blogs to improve their dietary habits , while another examined how friends react to adolescents’ portrayals of alcohol on Facebook . The second sub-theme contained studies that examined the factors associated with exposure to and/or engagement with social media coverage of issues related to chronic disease (e.g. ). These included a study of the demographic factors associated with display of alcohol references on MySpace , and another examining whether exposure to tobacco content online was associated with smoking status . Finally, one study examined the effect of exposure to social media messages on behaviour .
Overall, 159 studies focused on marketing media, of which the majority concerned commercial marketing ( n =110), with a smaller proportion concerning social marketing (e.g. health promotion campaigns) ( n =58). Figure provides an overview of the main themes within this media category.

Commercial marketing

Of the 109 studies focused on commercial marketing media, the majority ( n =107) focused on examining product portrayals within commercial advertisements and product packaging, including frequency of advertisements and content and characteristics of marketing strategies (e.g. ), with the majority of studies focusing on tobacco and food advertising. For example, one study explored cigarette marketing strategies in India by examining cigarette advertising on billboards, storefronts and at point of sale as well as in films, magazines and newspapers , while another examined how tobacco companies increase magazine advertising in January and February to pre-empt quitting by providing cues to smoking . Other studies examined how marketing strategies such as physical activity references (e.g. ), personal attributes (e.g. ), emotional appeals (e.g. ), and sexual imagery (e.g. ) were used to market products. Nearly a quarter of studies ( n =21) focused on marketing regulations, with the majority of these considering the impact of regulation on advertising practices (e.g. ). For example, one study evaluated the impact of industry self-regulation on television marketing of food to children , while another examined adherence to federal and voluntary standards for alcohol advertising in magazines . Both studies found that while advertising regulations resulted in fewer advertisements, industries find ways to circumnavigate such restrictions . Other studies examined industry counter-strategies in response to advertising regulation, such as the use of brand imagery to promote tobacco use in the face of advertising restrictions (e.g. ).

Social marketing

Fifty-eight studies explored social marketing for health promotion. Of these, 34 involved an analysis of the content and characteristics of social marketing media, such as content analysis of the characteristics of antismoking or physical activity advertisements . Other studies explored the impact of social marketing strategies on consumers’ attitudes and behaviours, for example using experimental approaches to examine the impact of message framing (e.g. gain- versus loss-framing) on health-related attitudes and behaviours such as seeking smoking cessation support , visiting the dentist , healthy snack choice , and chronic disease risk perception . Eleven studies used focus groups, interviews and/or surveys to explore public perceptions of social marketing strategies (e.g. awareness, recall, liking, and perceived effectiveness of health promotion campaigns) .
We aimed to explore the scope and nature of research on media coverage of issues related to chronic disease. Research in this area has proliferated over the last three decades, with a particularly steep increase in the number of studies published since 2000. Across the sample, behavioural risk factors for chronic disease, tobacco smoking and nutrition especially, have received the most research attention. The volume of research on media portrayals of nutrition appears to be driven by research on advertising media, where there has been considerable focus on how unhealthy foods are marketed, particularly to children. In contrast, the volume of articles related to smoking seems to be driven by a combination of studies of cigarette marketing and news media representations of smoking. The large proportion of research articles examining media portrayals of smoking is unsurprising when considered in light of the huge shifts in public and political opinion in relation to tobacco control legislation, policy, and program support in recent decades. For example, since the 1970s in Australia, tobacco control advocacy, which is often enacted through news and other media coverage, has resulted in significant gains including advertising bans, increased taxation and banning of smoking in indoor spaces . Much of the pioneering work in media advocacy and framing of public health issues therefore originated in tobacco control, and has paved the way for research into media portrayals of other public health issues . The findings revealed a tendency for studies to focus on single health topics, with those studies that did consider multiple health topics tending to either examine closely related topics, such as nutrition and obesity, or focus on the amount of coverage across different topics . Comparative analyses, such as those considering similarities and differences in media coverage of policies to encourage different health behaviours, such as smoking cessation and weight control or considering the differential effects of framing effects on audience attitudes depending on health topic were few and far between. In addition, there was only a handful of multi-country studies, for example, exploring how obesity was framed within news media in France and the US , and the impact of policies around online marketing of food to children across three countries . Comparative approaches across countries and settings allow for exploration of the various contextual and cultural factors that influence media portrayal of issues related to chronic disease prevention, and allow broader insights and generalisations to be drawn. While such approaches may be challenging to undertake (not least when there are language differences to take into account), cross-country policy approaches to chronic disease prevention, such as those within the European Union or driven by the World Health Organisation require cross-country understanding of the media landscape. The majority of studies in this review have focused on analyses of the content of media, with a large proportion of studies in each media category considering the content and characteristics of media coverage of a variety of issues (news = 92%, entertainment = 85%, social media = 98%, marketing = 88%). In contrast, a much smaller proportion of studies in each media category were concerned with the impact of exposure to media (news = 15%, entertainment = 20%, social media = 31%, marketing = 16%). 
This difference may reflect the relative ease of describing and analysing media content compared with assessing the impact of exposure to content on factors such as audience attitudes and behaviours. However, while studies of media content are valuable in demonstrating what issues are likely to gain traction within the media and provide important insights into the way that issues are being communicated to the public, it is also critical to understand the impact that such communications have on audiences’ attitudes and behaviours. The effects of message framing on audiences cannot be taken for granted as audiences are not passive receptacles for information. Instead, individuals actively engage with messages to a greater or lesser extent, and may accept, reject or negotiate how they interpret information, particularly in light of their existing knowledge, attitudes, beliefs, biases and previous experiences . Understanding the factors that influence message interpretation is crucial in thinking about audience segmentation and targeting, and the range of potential impacts that a single message could have on different groups and across contexts of contrasting social and physical geographies. A good example of this is a study of differences in Republican vs. Democrat voter attitudes towards policy following presentation of the same message . However, studying the effects of exposure to media is challenging, particularly as the social nature of interpreting media messages is difficult to capture through experimental methods, and reactions studied under artificial settings may not provide insights that are generalisable to community-based settings . However, social media platforms may provide us with a natural laboratory in which these kinds of effects could be studied (see below for a discussion of this).

There were also very few studies that considered the factors that influence media reporting of issues related to chronic disease. News reporting can be shaped by personal and professional biases , and understanding these biases is vital if we are to move beyond simple description of news stories towards strategies to change the way that issues related to chronic disease are portrayed.

In terms of the types of media that have been studied, news and marketing media have been the most frequent focus of research across the time period, with comparatively fewer studies of social media, such as Twitter, Facebook and YouTube. This is likely to be a historical bias which reflects the relatively recent growth of social media and advances in techniques for the analysis of social media data. In recent years the media landscape has changed, and continues to change rapidly, as people increasingly use social media platforms to access news and entertainment media, as well as to interact with others . An understanding of how issues related to chronic disease are being portrayed and discussed within these social media spaces will be crucial going forwards. In particular, social media platforms represent a more interactive form of media engagement than traditional channels such as newspapers and radio, allowing audiences to share and discuss information in real time, while algorithms such as those used by Facebook use a range of information to target the content that users are exposed to.
Social media platforms therefore provide fertile ground for research examining the diffusion and reverberation of information within and across networks, audience discussion and opinions about a range of issues, and provide opportunities for experiments to test how audiences react to and interact with different kinds of messages related to chronic disease. There is already pioneering work happening within this space, and we would expect to see a rapid growth in research in these areas in the coming years.

Limitations

Within this scoping review we have provided a snapshot of the current landscape of research on media portrayals of issues related to chronic disease, highlighting the key focus areas across the field as a whole, and thus going further than previous reviews which have tended to focus on media portrayals of single health topics or media types (e.g. ). As a result, this review was necessarily broad and our search strategy reflects this, for example in the decision to use a select subset of key MeSH headings to capture articles in each of the topic areas rather than an exhaustive list of key words. As pointed out by one of the reviewers of this article, this may have resulted in the omission of relevant papers that used different terms from those contained in our search strategy. For example, it was noted that the work by Emery and colleagues on Twitter content related to tobacco use was not picked up within our search. However, a post-hoc deployment of our search strategy in Medline with the inclusion of additional search terms related to the original search terms for ‘smoking’ (additional terms: Tobacco Smoking/ OR Tobacco/ OR Tobacco, Smokeless/ OR Electronic Nicotine Delivery Systems/ OR Tobacco Products/ OR Vaping/ OR e-cig*.mp OR cigarette.mp OR juul.mp) and ‘social media’ (additional terms: facebook.mp OR twitter.mp OR Instagram.mp OR youtube.mp) only returned an additional 26 and 9 articles, respectively (prior to any screening to assess whether these additional studies met the inclusion criteria). Similarly, we recognise that the decision to use ‘content analysis’ as a search term (see Table ) may have resulted in the omission of studies using different approaches such as discourse or textual analysis. However, the use of ‘frame’ and ‘framing’ alongside ‘content analysis’ (see Table ) meant that articles that examined framing of chronic disease issues using approaches other than content analysis were still captured within our search. Indeed, a post-hoc re-run of our search strategy with the addition of ‘discourse analysis’ and ‘text analysis’ in Medline only returned an additional 34 results prior to any screening. As such, while a minority of papers may indeed have been missed as a result of our search strategy, this review still serves as a useful and novel snapshot of the literature, as intended when we set out to undertake a scoping review, and the current search strategy is unlikely to have significantly biased the findings.

The breadth of this review, spanning media coverage of a range of non-communicable diseases and their risk factors, meant that there was an extremely high volume of search results returned and articles included, which had implications for our handling of the data. First, due to the volume of results returned from the database searches, and the intention for this review to be a ‘rapid mapping’ of key themes in this area, we did not extend the search to include unpublished literature or hand-searching of journals, and recognise that this may have led to some studies being missed. Second, while it would have been desirable to have a second reviewer check all references for inclusion and data extraction, the volume of literature precluded this. Instead, we engaged in frequent discussions within the research team to ensure consistency and discuss uncertainties as they arose, and additional reviewers checked randomly selected subsets of data and demonstrated a high level of agreement (see ‘Study selection’). Finally, while more applicable to systematic reviews than scoping reviews, the large number of studies included within our sample meant that critical appraisal of the evidence and assessment of study quality was beyond the scope of this review.

The volume of studies identified within this review also presented challenges to data synthesis. For example, while we have identified a number of studies examining media portrayals of different policy interventions such as smoking regulation and sugar taxes, a more in-depth synthesis of these papers to draw out similarities and differences in how different policies are framed within the news media and how this influences public opinion will be a valuable next step. Another insight that would be important to follow up is how risks, causes and solutions of chronic diseases have been framed across the topic areas in order to identify similarities and differences and the impacts of different framings across topics.
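As a small illustration of the post-hoc search extension described above, the additional Ovid-style terms quoted in the text can be assembled into OR-combined search lines programmatically; the helper below is a hypothetical convenience for readers, not part of the review's documented workflow.

```python
# Illustrative sketch only: assembling the additional Ovid Medline terms mentioned
# above into OR-combined search lines. Term lists are copied from the text; the
# helper function is a hypothetical example.
extra_smoking_terms = [
    "Tobacco Smoking/", "Tobacco/", "Tobacco, Smokeless/",
    "Electronic Nicotine Delivery Systems/", "Tobacco Products/", "Vaping/",
    "e-cig*.mp", "cigarette.mp", "juul.mp",
]
extra_social_media_terms = ["facebook.mp", "twitter.mp", "Instagram.mp", "youtube.mp"]

def or_block(terms):
    """Join a list of Ovid-style terms into a single OR-combined search line."""
    return " OR ".join(terms)

print("1.", or_block(extra_smoking_terms))       # extended 'smoking' block
print("2.", or_block(extra_social_media_terms))  # extended 'social media' block
```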
This scoping review provides a high-level overview of the key topics, approaches and themes across existing research on media coverage of issues related to chronic disease spanning more than thirty years. Taken together, the findings of this review indicate that while there has been a considerable body of research on the amount and type of media coverage of issues related to chronic disease prevention, there has been less focus on the factors that influence the amount and type of media coverage, and on the effects of media coverage on public attitudes and behaviours. While an understanding of how issues are framed within the news media is vital to understanding how stories around chronic disease are being told, greater understanding of the factors that influence how issues related to chronic disease prevention get reported and what audiences do with the information is needed going forwards. Further synthesis of study findings across different risk factors, causes and solutions is also an important next step in order to demonstrate the key insights from the field as a whole that can be applied to aid understanding of future actions. For example, we recently conducted a synthesis of studies of the content and effects of media framing of a range of policy interventions for chronic disease prevention to inform an understanding of how future policies might be portrayed in the media and responded to by the public . Finally, while not the main focus of our search, we noted a steady increase in recent years in the number of articles considering the social determinants of health in relation to chronic disease prevention, which may represent an important shift towards recognising the key role that such factors play in shaping health.
Additional file 1. Characteristics of included papers. Additional file 2. Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist.
Standard operating procedure combined with comprehensive quality control system for multiple LC-MS platforms urinary proteomics
In particular, we propose a scoring system for individual DIA experiment evaluation and optimization. Based on this comprehensive QC system, we develop an SOP for high-throughput urinary proteomics. We systematically investigate the consistency and reproducibility of the urinary proteome across 20 LC-MS platforms with and without the unified SOP. Then, benchmarking samples, consisting of tryptic digests of human urine, yeast, and E. coli proteins in defined proportions, are generated to mimic differentially expressed biological samples and to investigate the quantitative accuracy, precision, and robustness of the different platforms, as well as their capability to detect differentially expressed proteins (DEPs) that could serve as biomarkers. Furthermore, the above SOP is applied to colorectal cancer (CRC) urinary samples across 3 different LC-MS platforms to further demonstrate that the combination of a comprehensive QC system and a unified SOP can guarantee high consistency and reproducibility in cohort clinical proteome analyses (Fig. ). Taken together, our study provides a comprehensive QC system and reference SOP for large-scale urine proteomic analysis spanning different platforms, which could benefit the application of urinary proteomics to clinical disease research. The MSCohort software tool is accessible for download from Github ( https://github.com/BUAA-LiuLab/MSCohort ).
Design of MSCohort for comprehensive data quality control

We developed the MSCohort QC system, which provides more comprehensive quality control metrics (Supplementary Note ) and more extensive quality control functionality (Supplementary Note ) than existing quality control software. It assists users in assessing and optimizing individual experimental procedures, evaluating system stability across multiple experiments, and identifying outlier experiments (Fig. ). The primary phase of quality control involves the extraction of QC metrics. The MSCohort QC system summarizes and integrates metrics extracted by existing quality control software and introduces 26 new metrics, totaling 81 QC metrics (Supplementary Note ): (1) For individual experiment data quality evaluation, we investigated the QC metrics proposed by NIST MSQC , QuaMeter , DO-MS , RawBeans , Spectronaut , and MSRefine , and added 10 new metrics for individual DIA experiments. This comprehensive set, amounting to a total of 58 metrics, is termed intra-experiment metrics in this study (Supplementary Data ); (2) For evaluating data performance across experiments, we investigated the QC metrics proposed by PTXQC , QCloud , QC-ART , MSstatsQC 2.0 , and QuiC , and added 16 new metrics. This yields a set of 23 metrics encompassing the precursor, peptide, and protein group levels, termed inter-experiment metrics in this study (Supplementary Data ). We strive to provide the most comprehensive set of proteomics QC metrics.

The quality control software tool extracts the relevant metrics tailored to specific data types (e.g. DDA or DIA, individual experiments or cohort experiments) and then conducts the corresponding quality evaluation to generate detailed analysis reports. For individual experiments, MSCohort extracts comprehensive metrics that map to the whole LC-MS workflow, illustrates the relationship between the extracted metrics and the identification results, scores the metrics, and reports visual results, assisting users in evaluating the workflow and locating problems. We have previously proposed a quality evaluation system for individual DDA experiments , and here we developed a quality evaluation system for individual DIA experiments. We designed a scoring formula to characterize different kinds of DIA experiments:

$$N_{\mathrm{identified\_precursors}} = N_{\mathrm{acquired\_MS2}} \times Q_{\mathrm{MS2}} \times \left( N_{\mathrm{precursor\_per\_MS2}} / R_{\mathrm{precursor}} \right) \quad (1)$$

where N_identified_precursors is the number of identified peptide precursors, N_acquired_MS2 is the number of acquired MS2 scans, Q_MS2 is the identification rate of the MS2 scans, N_precursor_per_MS2 is the spectral complexity of the MS2 scans, R_precursor is the precursor duplicate identification rate, and N_precursor_per_MS2/R_precursor is the utilization rate of the MS2 scans. MSCohort scores the relevant metrics, reports a metric-score diagram, and flags the metrics with low scores to assist experimenters in assessing the quality of the data directly, enabling systematic evaluation and optimization of individual DIA experiments (see Methods and Supplementary Fig. for details).

For cohort proteomics data, it is imperative not only to conduct meticulous quality control analysis of individual experiments but also to perform longitudinal tracking to evaluate performance over time across the cohort experiments. MSCohort reports a corresponding score for each of the inter-experiment metrics and provides a heatmap overview, which yields an assessment of quality at a glance and facilitates pinpointing low-quality experiments (Supplementary Fig. ). MSCohort also incorporates an unsupervised machine learning algorithm (isolation forest) to detect potential outlier experiments. Furthermore, to guarantee the reliability of subsequent statistical analyses, it incorporates various normalization methods to remove systematic bias in peptide/protein abundances that could mask true biological discoveries or give rise to false conclusions (see Methods for details).

Notably, MSCohort serves as a potent tool that offers comprehensive support for data originating from diverse vendor platforms, including the Thermo Scientific Orbitrap, Bruker timsTOF, and SCIEX ZenoTOF systems. This tool facilitates rigorous quality assessment and optimization of both DDA and DIA experiments, as well as inter-experimental quality control analysis across DDA, DIA, and PRM studies. MSCohort applies not only to urinary proteomics analysis but also to the analysis of other sample types (such as cells, tissues, and blood). The MSCohort software tool and the user manual are available from Github: https://github.com/BUAA-LiuLab/MSCohort .

Optimization and establishment of SOP for urinary proteomics with MSCohort QC system

To meet the demands of extensive clinical urinary proteomics analysis, we have developed an SOP for urinary proteomics that demonstrates high throughput, sensitivity, and reproducibility (see Fig. and Supplementary Note for details). Herein, we underscore the importance of incorporating a data quality control step in the development and implementation of the SOP. Previous studies have reported SOPs for other proteomics applications , , , yet they primarily rely on manual inspection of identification results and limited metrics to assess experimental quality. Consequently, it becomes difficult to promptly identify and address issues, or to propose specific strategies for further experimental refinement, when identification results are not satisfactory. Supplementary Note shows how the comprehensive MSCohort QC system was used to optimize the LC-MS methods and establish the SOP for urinary proteomics. Through the DIA scoring formula (1) and the comprehensive metrics analysis reports, MSCohort provides direct explanations for the underlying causes of the observed results. By conducting a limited number of experiments instead of iterating through all parameter combinations, we achieved optimization of the whole urinary proteomics procedure, including sample preparation (Figure S.Note ), chromatography (Figure S.Note ), and mass spectrometry (Figure S.Note ), and established the SOP for urinary proteomics.
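To make the role of the DIA scoring formula (1) in this kind of troubleshooting concrete, the sketch below decomposes a run's identification output into the four factors of the formula and flags obviously weak factors. It is an illustration of the idea only, not the MSCohort implementation: the input numbers and thresholds are hypothetical, and the factor definitions shown are simply one consistent way to instantiate formula (1) so that the decomposition is exact.

```python
# Minimal sketch (not MSCohort code): decompose formula (1) for one DIA run.
# All inputs, names, and thresholds below are hypothetical examples.

def score_dia_run(n_ms2_acquired, n_ms2_identified, n_precursors_identified,
                  n_precursor_spectrum_matches):
    """Return the factors of formula (1) for a single DIA run.

    n_precursor_spectrum_matches counts every precursor-to-MS2 assignment, so a
    precursor identified from several MS2 scans is counted once per scan. With
    these definitions the reconstruction below reproduces the identified
    precursor count exactly.
    """
    q_ms2 = n_ms2_identified / n_ms2_acquired                        # MS2 identification rate
    complexity = n_precursor_spectrum_matches / n_ms2_identified     # precursors per identified MS2
    r_dup = n_precursor_spectrum_matches / n_precursors_identified   # duplicate identification rate
    utilization = complexity / r_dup                                 # MS2 utilization rate
    reconstructed = n_ms2_acquired * q_ms2 * utilization             # equals n_precursors_identified
    return {"Q_MS2": q_ms2, "complexity": complexity, "R_precursor": r_dup,
            "utilization": utilization, "reconstructed_precursors": reconstructed}

factors = score_dia_run(n_ms2_acquired=40000, n_ms2_identified=28000,
                        n_precursors_identified=52000,
                        n_precursor_spectrum_matches=78000)

# Flag weak factors against arbitrary placeholder targets.
if factors["Q_MS2"] < 0.6:
    print("Low MS2 identification rate (Q_MS2) is limiting identifications.")
if factors["utilization"] < 1.0:
    print("Low MS2 utilization: duplicate identifications exceed spectrum complexity.")
print(factors)
```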
This SOP integrates the optimal strategies at each step, including the 96DRA-Urine (Direct reduction/alkylation in urine) high-throughput sample preparation method, a stable and efficient chromatography system, a highly sensitive and high-throughput DIA-based MS method , and the comprehensive MSCohort QC system. As illustrated in Supplementary Note , the adoption of this SOP yields the following benefits: (i) pretreatment of nearly 200 samples in a single day, meeting the demands of large-scale analysis; (ii) identification of over 3000 protein groups in a single sample within a 30-min gradient (Figure demonstrates that the SOP increases protein identification numbers per unit time by 3 to 90 times compared with representative methods); and (iii) excellent inter-experimental stability, with a retention time deviation of less than 0.2 minutes for the same peptides over 7 days (Figure S.Note ). To the best of our knowledge, this SOP presents the deepest urinary proteome coverage for single-run analysis with a short gradient, a promising basis for the discovery of urinary biomarkers in large-scale sample cohorts (Fig. ).

Comprehensive and comparative analysis of urinary proteome data from multi-platform study

To validate the performance of this SOP across different LC-MS platforms, we performed urinary proteomics experiments across multiple LC-MS platforms, including different types of mass spectrometers. We prepared a urine peptide QC sample and distributed aliquots to 20 LC-MS platforms, which were classified into two groups: 1) ten platforms employed the unified SOP developed above (numbered U01-U10), termed with LC-SOP in this study (Supplementary Note ); 2) another ten platforms did not use the unified SOP (without LC-SOP) and were encouraged to use their individually optimized experimental parameters (numbered M01-M10) . The detailed data acquisition parameters are provided in Supplementary Data .

Firstly, the qualitative results showed clear differences among the 10 LC-MS platforms without LC-SOP (M01-M10): the number of identified proteins ranged from 2371 (M03) to 3695 (M07), with a relative standard deviation (RSD) of 8% (Fig. and Supplementary Data ). Among them, four platforms (M03, M04, M06, and M09) showed markedly lower identification results than the others. We investigated the possible causes in detail based on MSCohort. As illustrated in Supplementary Fig. and Supplementary Data , the metric scores related to the identification rate of MS2 scans were low for M03-E480F (Exploris 480 with a High Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) device). Its MS1 and MS2 peak intensities and peak counts were significantly lower than those of the other instrument platforms, resulting in a low MS2 identification rate (49%). Consequently, the final number of identified proteins was also reduced. M04-E480F also showed a similarly low MS2 identification rate (64%). Additionally, the chromatographic peak width and full width at half maximum (FWHM) of M04 were significantly wider than those of the other instrument platforms, which also affected the LC separation efficiency and identification rate. For M06-Eclipse, the precursor duplicate identification rate was higher than the complexity of the MS2 scans, which led to a low utilization rate of the MS2 scans (N_precursor_per_MS2/R_precursor = 0.74). M06 also showed a short MS2 ion injection time and a low signal-to-noise ratio, which was correlated with its relatively low AGC target setting (50000). The invalid chromatographic acquisition time (LC delay time) for M09-E240 (Exploris 240) was 12 minutes, leading to over 40% of spectra being wasted without identifying precursors; consequently, the overall identification rate was only 50%. The above metric-score results demonstrated that MSCohort can effectively and accurately locate potential problems and provide clear insights for DIA optimization. Furthermore, we also assessed the consistency of identification results among the 10 instrument platforms. Only 2045 proteins were found to overlap across all platforms (Fig. ), constituting 55%–86% of the identified proteins on individual instrument platforms, which indicates relatively low consistency of identification results. After removing the 4 platforms with lower identification results, the number of overlapping proteins increased to 2876, showing improved qualitative consistency (Supplementary Fig. ). We also analyzed the precision and reproducibility of the quantitative results among the 10 platforms without LC-SOP using MSCohort. As illustrated in Fig. , the median coefficients of variation (CV) of protein intensity for each LC-MS platform were below 20% and the Pearson correlation coefficients of protein intensity were greater than 0.96, demonstrating that DIA approaches achieved high quantitative precision and reproducibility at the intra-instrument level. For inter-instrument analysis, the Pearson correlations of M03 and M04 with the other LC-MS platforms were low (0.6–0.73). The Pearson correlation among the other 6 Orbitrap instrument platforms was 0.9–0.94, and the Pearson correlation between the timsTOF and Orbitrap instruments was 0.83–0.89.

In contrast, the qualitative and quantitative results showed higher consistency and reproducibility among the 10 LC-MS platforms with LC-SOP (U01-U10): the number of identified proteins ranged from 3346 to 3752, with an RSD of the identified numbers of less than 4% (Fig. ). Within 30 minutes, each LC-MS platform was able to identify more than 3300 proteins. Among them, 3080 proteins overlapped across all platforms, showing high qualitative consistency (Fig. ). In addition, the MSCohort analysis report for the 10 platforms with LC-SOP also showed good consistency (Fig. and Supplementary Data ). In particular, the Pearson correlation of protein intensity among the 7 Orbitrap instruments was 0.93–0.97, and the Pearson correlation between the timsTOF and Orbitrap instruments was 0.86–0.92 (Fig. ). In addition, the average sequence coverage and the dynamic range of the proteins identified on the 10 LC-MS platforms with LC-SOP were also improved compared with those on the 10 LC-MS platforms without LC-SOP. Furthermore, we sought to use the data collected by the multiple Orbitrap instrument platforms to investigate the effects of different LC and MS conditions on the reproducibility of the results. We classified the platforms into 3 groups based on these conditions: A (same MS condition, same LC condition), B (different MS condition, same LC condition), and C (different MS condition, different LC condition) (Supplementary Fig. ); M03, M04, M06, and M09 were not included in the grouping.
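The platform comparisons in this section, including the grouped comparison that follows, rest on a small number of reproducibility summaries, chiefly the RSD of identified protein numbers and the pairwise Pearson correlation of protein intensities within a set of runs. A minimal sketch of how such summaries could be computed from a protein-by-run intensity matrix is given below; the `intensities` DataFrame, the `groups` mapping, and the function name are hypothetical, and this is not the MSCohort implementation.

```python
# Illustrative sketch only (not MSCohort code): group-wise reproducibility summaries
# from a hypothetical protein-by-run intensity matrix (positive intensities, NaN = missing).
import itertools
import numpy as np
import pandas as pd

def group_reproducibility(intensities: pd.DataFrame, groups: dict) -> pd.DataFrame:
    """intensities: rows = proteins, columns = runs.
    groups: run name -> group label, e.g. {"U01": "A", "U02": "A", "U03": "B", ...}."""
    rows = []
    for label in sorted(set(groups.values())):
        runs = [r for r, g in groups.items() if g == label]
        sub = intensities[runs]
        # RSD of the number of quantified proteins per run.
        n_ids = sub.notna().sum(axis=0)
        rsd_ids = n_ids.std(ddof=1) / n_ids.mean() * 100
        # Pairwise Pearson correlations of log2-transformed intensities.
        log_sub = np.log2(sub)
        corrs = [log_sub[a].corr(log_sub[b], method="pearson")
                 for a, b in itertools.combinations(runs, 2)]
        rows.append({"group": label,
                     "runs": len(runs),
                     "median_proteins": int(n_ids.median()),
                     "RSD_identifications_%": round(float(rsd_ids), 2),
                     "median_pairwise_pearson": round(float(np.median(corrs)), 3)})
    return pd.DataFrame(rows)

# Example call with hypothetical run labels:
# summary = group_reproducibility(intensities, {"U01": "A", "U02": "A", "U03": "B", "U04": "B"})
# print(summary)
```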
The results showed that the data collected with the same LC condition (groups A and B) outperformed the data without LC-SOP (group C), and that group A, with the same LC and MS conditions, showed the highest reproducibility. Furthermore, group B, with the same LC condition and different MS types, also showed good qualitative and quantitative reproducibility, with an RSD of the qualitative data of <2% and a Pearson correlation of the quantitative data of > 0.9. The above group A and group B results were comparable to previous inter-instrument reproducibility results obtained from harmonized instruments .

Performance evaluation of proteome quantification and detection of differentially expressed proteins from multi-platform study

To evaluate the quantitative accuracy, precision, and sensitivity of the urinary proteome from different platforms under the unified SOP, we prepared benchmarking samples by adding yeast and E. coli peptides in specified distinct ratios to a complex urine peptide background. Sample A was 65% human urine peptides, 15% yeast, and 20% E. coli , and Sample B was 65% human urine peptides, 30% yeast, and 5% E. coli , similar to previous benchmarks . We analyzed samples A and B in technical triplicates on three LC-MS platforms, Orbitrap Fusion Lumos, Orbitrap Exploris 480, and timsTOF Pro 2 (Lumos, E480, and TIMS for short), using the with LC-SOP data acquisition methods described above. Firstly, we conducted quality assessment of the data generated from the three platforms using MSCohort. The three technical repetitions within each platform demonstrated high qualitative and quantitative repeatability, with an RSD of the identified numbers of less than 1% and a median CV of protein intensity below 15% (Supplementary Fig. ). For inter-instrument analysis, within 30 minutes, the total numbers of identified proteins were 4953, 5224, and 6006 in Lumos, E480, and TIMS, respectively. Among them, 4667 proteins overlapped across the 3 platforms, showing high qualitative consistency (Supplementary Fig. ). The Pearson correlation of protein intensity among the 3 LC-MS platforms was 0.93–0.97, which was comparable with that of the above 10 platforms with LC-SOP (Supplementary Fig. ). Next, a more precise assessment of quantitative performance was performed based on the benchmarking samples. The number of quantified proteins from the different species among the 3 LC-MS platforms is shown in Fig. (Supplementary Data ). Proteins from the different species correspond to different theoretical quantitative ratios (1:1, 1:2, and 4:1, i.e. log2 ratios of 0, -1, and 2 for human, yeast, and E. coli , respectively). As illustrated in Fig. , the median values of the log2 ratios were 0.02 – 0.07, (-0.86) – (-0.74), and 1.65 – 1.91, and the medians of the relative deviation from the theoretical ratio were 0.09 (0.05 – 0.11), 0.15 (0.11 – 0.2), and 0.18 (0.15 – 0.18) for human, yeast, and E. coli proteins, respectively. The median CV values were typically below 15% for the Lumos and E480 data and below 10% for the TIMS data (Fig. ). These results demonstrated that excellent label-free quantitative accuracy and precision were achieved across different LC-MS platforms.

As the ultimate goal of most proteomic analyses is to detect differentially expressed proteins (DEPs) between different conditions, we assessed the sensitivity and specificity of the different LC-MS platforms in DEP detection using our benchmarking data sets. DEPs were extracted using the widely applied criteria of a fold change >1.5 and an adjusted p -value < 0.05. Under these criteria, both the yeast and E. coli proteins in urine could be considered significantly different, as their expected ratios are 2-fold and 4-fold, respectively. The three LC-MS platforms showed high sensitivity of DEP detection for the Lumos data (67.5% of yeast and 94.5% of E. coli quantified proteins detected as DEPs), the E480 data (74.7% of yeast and 94.0% of E. coli quantified proteins detected as DEPs), and the TIMS data (80.4% of yeast and 96.0% of E. coli quantified proteins detected as DEPs) (Fig. ). Moreover, in the pairwise comparison of human proteins with an expected ratio of 1:1, the three LC-MS platforms resulted in detection of false DEPs at comparable rates (0.2%, 0.2%, and 1.1% false DEP rates for the Lumos, E480, and TIMS data, respectively) (Fig. ). Furthermore, we assessed the robustness of DEP detection based on receiver operating characteristic (ROC) curve analysis, which led to a similar conclusion (Fig. ). In summary, all three LC-MS platforms could detect DEPs from the benchmarking dataset with high sensitivity and specificity. In addition, we provided further proof by applying this workflow to benchmark samples mixed with human HEK 293 cells, yeast, and E. coli , labeled Sample A-H and Sample B-H (Supplementary Fig. ). Collectively, these results demonstrated that, under the unified SOP, most of the DEPs could be recalled accurately in complex mixed-species samples with high quantitative accuracy and precision, even on different types of LC-MS platforms, which would broadly increase the confidence in DIA-based urinary proteomics as a reproducible method for large-cohort protein quantification.

Analyses of cohort clinical proteomics from multi-platform study

We further illustrated the generalizability of the workflow by applying the above urinary proteomics SOP to clinical biomarker discovery research on different platforms. Herein, we collected a clinical cohort comprising 80 urine samples from colorectal cancer (CRC) patients and 80 samples from matched healthy controls (HC) (the detailed clinical information is shown in Supplementary Data ). Based on the above SOP, LC-MS data collection for these 160 samples was conducted on Lumos, E480, and TIMS, respectively (see Methods). QC samples were randomly analyzed during the collection process for systematic evaluation of reproducibility. In total, the three platforms generated 527 DIA experiments, including 47 QC experiments. During cohort data collection, an intra-experiment analysis based on the MSCohort QC system was performed for each newly collected experiment to evaluate the individual data quality. Once more than 2 runs had been collected, an inter-experiment analysis based on the MSCohort QC system was performed to evaluate the stability and reproducibility of the instrument system. Finally, after all the samples had been collected on a given instrument platform, an inter-experiment analysis based on the MSCohort QC system was performed to evaluate the cohort data quality and detect low-quality experiments. This cohort experiment is a time-course study with at least five consecutive days of acquisition on each instrument platform. We analyzed the overall cohort data quality based on the MSCohort QC system. First, the QC samples demonstrated good technical repeatability, with a median Pearson correlation of protein intensity > 0.94 for each of the 3 LC-MS platforms (Supplementary Fig. ), indicating good LC-MS system stability.
The results showed that the overall chromatographic retention time was stable (the average retention time deviation <0.25 min) for 7 consecutive days (Figure S.Note ). In addition, MSCohort detected and reported low-performance experiments based on isolation forest algorithm, as shown in Supplementary Fig. , 8 low-quality samples were reported in TIMS. Among them, 7 and 6 samples were also reported in Lumos and E480, respectively. The corresponding heatmap in MSCohort report indicated that there were significant differences between these 8 samples and other samples in multiple inter-experiment metrics (at least 7 of 23 metrics showed a variation more than two standard deviations (SD) from its median). Among them, 5 samples (D943, D1116, D1412, H771, D994) showed higher ratios of contaminants (erythrocytes, cellular debris, or serum high abundance proteins) than the other samples, which resulted in the sample-to-sample variability compared to regular urinary proteins (Methods). Another 3 samples (H349, D1036, D1069) showed lower Pearson correlation, and higher robust standard deviation at precursor, peptide, and protein groups intensity with the other samples, indicating these samples were heterogeneous compared with other samples (Supplementary Fig. ). Thus, these 8 experiments were excluded from further analysis (Supplementary Fig. ). A total of 6780, 7089, and 6620 protein groups were identified in Lumos, E480, and TIMS, respectively. And 5227 proteins overlap across the three LC-MS platforms (Fig. , Supplementary Fig. ). Subsequently, we analyzed the quantitative consistency among the three LC-MS platforms. The unsupervised learning t-Distributed Stochastic Neighbor Embedding (t-SNE) results showed that the results from three different LC-MS platforms for the same sample cluster together, and no platform effect was observed (Fig. , Supplementary Fig. ). In the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) model, the CRC and Control groups can be clearly separated, and 100-fold cross-validation experiments indicate that the model did not overfit (Supplementary Fig. ). These results demonstrated the good consistency and reproducibility of the 3 LC-MS platforms. Different expressed proteins analysis showed that a total of 455, 539, and 679 proteins were reported as DEPs (Benjamini-Hochberg adjusted p -value < 0.05) in Lumos, E480, and TIMS, respectively (Fig. and Supplementary Data ). And 215 DEPs were overlapped across 3 platforms. CRC/HC fold changes of these 215 proteins were highly correlated with Pearson correlation coefficients at r = 0.99, r = 0.95, and r = 0.95 for the comparisons of E480 and Lumos, E480 and TIMS, and Lumos and TIMS, respectively (Fig. ). Above results showed the high quantitative reproducibility of three platforms. According to function annotation, the top enriched pathways for DEPs of 3 LC-MS platforms showed a high degree of consistency, with upregulation of cell proliferation, inflammatory response, and metabolism pathways (actin cytoskeleton signaling, acute phase response signaling, complement system, etc.) and downregulation of cell death- and apoptosis-related pathways (FAK Signaling, FAT10 Cancer Signaling Pathway, etc.) (Fig. , Supplementary Data ). These results were consistent with the previous reports that tumor proliferation, migration, and metabolism modules were activated and cell death and apoptosis were inhibited in CRC patients . We also compared our results with previous CRC tissue proteomics analysis study . 
GO enrichment analysis (Supplementary Fig. ) showed that extracellular matrix proteins were both enriched in urine and tissue and ribosome proteins were only enriched in tissue. The proteins involved in complement activation, immune response, and cell growth/development process were highly enriched in urine, and the proteins involved in glucose metabolic, amino acid metabolic, and ribonucleotide metabolic processes were highly enriched in tissue. In addition, the proteins involved in cell adhesion, coagulation, and regulation of peptidase activity were both enriched in urine and tissue. These results suggested that urinary proteome changes could reflect not only the changes of tissue , but also the changes of body immune systems, etc. Among them, the protein related to complement activation was significantly upregulated and showed a high degree of consistency (Supplementary Fig. ). Complement is a key player in the innate immune defense against pathogens and the maintenance of host homeostasis . In the tumor-immune interaction, complement-associated proteins play a vital role whether directly or indirectly by regulating tumorigenesis, development, and metastasis . In CRC, tumor cells were found to produce Complement C3 (C3) component thus leading to modulation of the response of macrophages and its anti-tumor immunity, via the C3a-C3aR axis and PI3K signaling pathways. The complement C5a/C5aR pathway was found to induce cell proliferation, motility, and invasiveness . Complement components C5b, C6, C7, C8, and C9 form the membrane attack complex (MAC), MAC accumulation on the cell membrane promotes cell proliferation and differentiation, inhibits apoptosis, and protects cells against complement-mediated lysis in a sublytic density , – . Complement factor H (CFH) and Complement factor I (CFI) modulated the fundamental processes of the tumor cell, promoting proliferation and tumor progression when tested in animal models , . Collectively, these highlighted the value of the complement system in tumor progression, especially that of CRC. Next, we evaluated these 67 upregulated proteins of 215 proteins as input variables and investigated the classification performance in the CRC/HC stratification. The top 15 proteins with the highest area under the curve (AUC) of each protein in 3 LC-MS platforms were chosen as candidates (Supplementary Fig. ) and 8 common highly ranked DEPs in 3 platforms were submitted to further machine learning model building (Supplementary Data ). We compared six machine learning classifiers and performed cross-validation by training the model on one platform and testing it on the other two platforms. Finally, a 5-protein panel (C9, CFI, CFH, RELT, GDF15) showed the highest and reproducible AUC values in 3 different LC-MS platforms (Supplementary Fig. ). On average, the Support Vector Machines (SVM) model showed the highest AUC values of 0.88, 0.93, and 0.89 for the classification of CRC and HC in Lumos, E480, and TIMS, respectively (Fig. ). A previous study reported that C9 was significantly upregulated in colorectal cancer plasma . Growth/differentiation factor 15 (GDF15) is a divergent member of the transforming growth factor-b (TGF-b) superfamily. Experimental evidence shows that GDF15 enhances tumor growth, stimulates cell proliferation, and promotes distant metastases . Previous blood and colorectal tumor samples from 2 large studies also found high plasma levels of GDF15 before diagnosis of CRC are associated with greater CRC specific mortality . 
Taken together, three different LC-MS platform data indicated consistent and excellent performance for biomarker discovery and patient stratification in CRC. These results demonstrated the generalization of urinary proteomics to support clinical discovery proteomics research under the condition of a unified SOP and MSCohort QC system.
We developed the MSCohort QC system, which provides more comprehensive quality control metrics (Supplementary Note ) and more extensive quality control functionality (Supplementary Note ) than existing quality control software. It assists users in assessing and optimizing individual experimental procedures, evaluating system stability across multiple experiments, and identifying outlier experiments (Fig. ). The primary phase of quality control is the extraction of QC metrics. The MSCohort QC system summarizes and integrates metrics extracted by existing quality control software and introduces 26 new metrics, totaling 81 QC metrics (Supplementary Note ): (1) for individual experiment data quality evaluation, we investigated the QC metrics proposed by NIST MSQC, QuaMeter, DO-MS, RawBeans, Spectronaut, and MSRefine, and added 10 metrics for individual DIA experiments; this comprehensive set of 58 metrics is termed intra-experiment metrics in this study (Supplementary Data ); (2) for evaluating data performance across experiments, we investigated the QC metrics proposed by PTXQC, QCloud, QC-ART, MSstatsQC 2.0, and QuiC, and added 16 new metrics; this yields a set of 23 metrics covering the precursor, peptide, and protein group levels, termed inter-experiment metrics in this study (Supplementary Data ). We strive to provide the most comprehensive set of proteomics QC metrics. The quality control software tool extracts the metrics relevant to the specific data type (e.g., DDA or DIA, individual experiments or cohort experiments) and then conducts the corresponding quality evaluation to generate detailed analysis reports. For individual experiments, MSCohort extracts comprehensive metrics that map to the whole LC-MS workflow, illustrates the relationship between the extracted metrics and the identification results, scores the metrics, and reports visual results, assisting users in evaluating the workflow and locating problems. We previously proposed a quality evaluation system for individual DDA experiments, and here we developed a quality evaluation system for individual DIA experiments. We designed a scoring formula to characterize different kinds of DIA experiments:

$${N}_{identified\_precursors}={N}_{acquired\_MS2}\times {Q}_{MS2}\times ({N}_{precursor\_per\_MS2}/{R}_{precursor}) \quad (1)$$

where $N_{identified\_precursors}$ is the number of identified peptide precursors, $N_{acquired\_MS2}$ is the number of acquired MS2 scans, $Q_{MS2}$ is the identification rate of the MS2 scans, $N_{precursor\_per\_MS2}$ is the spectral complexity of the MS2 scans, $R_{precursor}$ is the precursor duplicate identification rate, and $N_{precursor\_per\_MS2}/R_{precursor}$ is the utilization rate of the MS2 scans. MSCohort scores the relevant metrics, reports a metric-score diagram, and flags the metrics with low scores to help experimenters assess data quality directly, enabling systematic evaluation and optimization of individual DIA experiments (see Methods and Supplementary Fig. for details). For cohort proteomics data, it is imperative not only to conduct meticulous quality control analysis of individual experiments but also to perform longitudinal tracking to evaluate performance over time across the cohort. MSCohort reports a score for each of the inter-experiment metrics and provides a heatmap overview, which yields an assessment of quality at a glance and facilitates pinpointing low-quality experiments (Supplementary Fig. ). MSCohort also incorporates an unsupervised machine learning algorithm (isolation forest) to detect potential outlier experiments. Furthermore, to guarantee the reliability of subsequent statistical analyses, it incorporates various normalization methods to remove systematic bias in peptide/protein abundances that could mask true biological discoveries or give rise to false conclusions (see Methods for details). Notably, MSCohort offers comprehensive support for data originating from diverse vendor platforms, including Thermo Scientific Orbitrap, Bruker timsTOF, and SCIEX ZenoTOF systems, and facilitates rigorous quality assessment and optimization of both DDA and DIA experiments, as well as inter-experiment quality control analysis across DDA, DIA, and PRM studies. MSCohort applies not only to urinary proteomics but also to the analysis of other sample types (such as cells, tissue, and blood). The MSCohort software tool and the user manual are available from GitHub: https://github.com/BUAA-LiuLab/MSCohort .
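To make the role of each term concrete, the relationship in Eq. (1) can be expressed in a few lines of code. This is a minimal illustration, not MSCohort's implementation; the function name and the example numbers are hypothetical.

```python
# Minimal sketch (not MSCohort code): how the quantities in Eq. (1) combine for a DIA run.
# All numbers below are hypothetical; in practice they are derived from the raw file and
# the search-engine report.

def identified_precursors(n_acquired_ms2: int,
                          ms2_id_rate: float,        # Q_MS2: fraction of MS2 scans yielding identifications
                          precursors_per_ms2: float, # N_precursor_per_MS2: average spectral complexity
                          dup_id_rate: float) -> float:  # R_precursor: precursor duplicate identification rate
    """Expected number of identified precursors according to Eq. (1)."""
    utilization = precursors_per_ms2 / dup_id_rate   # utilization rate of the MS2 scans
    return n_acquired_ms2 * ms2_id_rate * utilization

# Example: 40,000 MS2 scans, 70% MS2 identification rate, 2.1 precursors per identified
# MS2 scan, and each precursor identified in ~2.8 scans on average.
print(identified_precursors(40_000, 0.70, 2.1, 2.8))  # -> 21000.0 precursors
```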
To meet the demands of extensive clinical urinary proteomics analysis, we developed an SOP for urinary proteomics that demonstrates high throughput, sensitivity, and reproducibility (see Fig. and Supplementary Note for details). Here, we underscore the importance of incorporating a data quality control step into the development and implementation of the SOP. Previous studies have reported SOPs for other proteomics applications, yet they primarily rely on manual inspection of identification results and limited metrics to assess experimental quality. Consequently, it is difficult to promptly identify and address issues, or to propose specific strategies for further experimental refinement, when the identification results are not satisfactory. Supplementary Note shows how the comprehensive MSCohort QC system was used to optimize the LC-MS methods and establish the SOP for urinary proteomics. Through the DIA scoring formula (1) and the comprehensive metrics analysis reports, MSCohort provides direct explanations for the underlying causes of the observed results. By conducting a limited number of experiments instead of iterating through all parameter combinations, we could optimize the whole urinary proteomics procedure, including the sample (Figure S.Note ), chromatography (Figure S.Note ), and mass spectrometry (Figure S.Note ) steps, and thereby establish the SOP for urinary proteomics. This SOP integrates the optimal strategy at each step, including the 96DRA-Urine (direct reduction/alkylation in urine) high-throughput sample preparation method, a stable and efficient chromatography system, a highly sensitive and high-throughput DIA-based MS method, and the comprehensive MSCohort QC system. As illustrated in Supplementary Note , the adoption of this SOP yields the following benefits: (i) pretreatment of nearly 200 samples in a single day, meeting the demands of large-scale analysis; (ii) identification of over 3000 protein groups in a single sample within a 30-min gradient; Figure demonstrates that the SOP increases the number of protein identifications per unit time by 3 to 90 times compared with representative methods; and (iii) excellent inter-experiment stability, with a retention time deviation of less than 0.2 minutes for the same peptides over 7 days (Figure S.Note ). To the best of our knowledge, this SOP provides the deepest urinary proteome coverage for single-run analysis with a short gradient, a promising basis for the discovery of urinary biomarkers in large-scale sample cohorts (Fig. ).
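The stability criterion in point (iii), a retention-time deviation below 0.2 min for the same peptides across runs, is straightforward to monitor from exported identification results. The following is a minimal sketch, not part of MSCohort; the long-format table and its column names ('peptide', 'run', 'rt_min') are assumptions for illustration.

```python
# Minimal sketch (assumed column names): flag peptides whose retention-time spread
# across repeated QC runs exceeds the SOP criterion of 0.2 min.
import pandas as pd

def rt_stability(report: pd.DataFrame, max_dev_min: float = 0.2) -> pd.DataFrame:
    """report columns (assumed): 'peptide', 'run', 'rt_min'."""
    spread = (report.groupby("peptide")["rt_min"]
                    .agg(rt_min_range=lambda s: s.max() - s.min(), n_runs="count"))
    spread["within_sop"] = spread["rt_min_range"] <= max_dev_min
    return spread.sort_values("rt_min_range", ascending=False)

# Toy example: two peptides measured in three QC runs.
toy = pd.DataFrame({
    "peptide": ["AAGK", "AAGK", "AAGK", "LLSDR", "LLSDR", "LLSDR"],
    "run":     ["d1",   "d2",   "d3",   "d1",    "d2",    "d3"],
    "rt_min":  [10.05,  10.12,  10.08,  21.40,   21.75,   21.52],
})
print(rt_stability(toy))
```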
To validate the performance of this SOP across different LC-MS platforms, we performed urinary proteomics experiments on multiple LC-MS platforms, including different types of mass spectrometers. We prepared a urine peptide QC sample and distributed aliquots to 20 LC-MS platforms, which were classified into two groups: (1) ten platforms employed the unified SOP developed above (numbered U01-U10), termed "with LC-SOP" in this study (Supplementary Note ); (2) another ten platforms operated without LC-SOP and were encouraged to use their individually optimized experimental parameters (numbered M01-M10). The detailed data acquisition parameters are provided in Supplementary Data . First, the qualitative results showed clear differences among the 10 LC-MS platforms without LC-SOP (M01-M10): the number of identified proteins ranged from 2371 (M03) to 3695 (M07), with a relative standard deviation (RSD) of 8% (Fig. and Supplementary Data ). Among them, four platforms (M03, M04, M06, and M09) showed markedly lower identification results than the others. We investigated the possible causes in detail using MSCohort. As illustrated in Supplementary Fig. and Supplementary Data , the metric scores related to the identification rate of MS2 scans were low for M03-E480F (Exploris 480 with high-field asymmetric waveform ion mobility spectrometry (FAIMS)). Its MS1 and MS2 peak intensities and peak counts were significantly lower than those of the other instrument platforms, resulting in a low MS2 identification rate (49%) and, consequently, a reduced number of identified proteins. M04-E480F showed a similarly low MS2 identification rate (64%). Additionally, the chromatographic peak width and full width at half maximum (FWHM) of M04 were significantly wider than those of the other instrument platforms, which also reduced the LC separation efficiency and identification rate. For M06-Eclipse, the precursor duplicate identification rate was higher than the spectral complexity of the MS2 scans, which led to a low utilization rate of the MS2 scans ($N_{precursor\_per\_MS2}/R_{precursor}$ = 0.74). M06 also showed a short MS2 ion injection time and a low signal-to-noise ratio, which correlated with its relatively low AGC target threshold (50000). The chromatographic invalid acquisition time (LC delay time) for M09-E240 (Exploris 240) was 12 minutes, so over 40% of spectra were wasted without identifying precursors and the overall identification rate was only 50%. These metric-score results demonstrated that MSCohort can effectively and accurately locate potential problems and provide clear insights for DIA optimization. Furthermore, we assessed the consistency of the identification results among the 10 instrument platforms. Only 2045 proteins overlapped across all platforms (Fig. ), constituting 55%-86% of the identified proteins on individual instrument platforms, which indicates relatively low consistency of identification results. After removing the 4 platforms with lower identification results, the number of overlapping proteins increased to 2876, showing improved qualitative consistency (Supplementary Fig. ). We also analyzed the precision and reproducibility of the quantitative results among the 10 platforms without LC-SOP using MSCohort. As illustrated in Fig. , the median coefficient of variation (CV) of protein intensity for each LC-MS platform was below 20% and the Pearson correlation coefficients of protein intensity were greater than 0.96, demonstrating that the DIA approaches achieved high quantitative precision and reproducibility at the intra-instrument level. For the inter-instrument analysis, the Pearson correlations of M03 and M04 with the other LC-MS platforms were low (0.6-0.73). The Pearson correlation among the other 6 Orbitrap instrument platforms was 0.9-0.94, and the Pearson correlation between the timsTOF and Orbitrap instruments was 0.83-0.89. In contrast, the qualitative and quantitative results showed higher consistency and reproducibility among the 10 LC-MS platforms with LC-SOP (U01-U10): the number of identified proteins ranged from 3346 to 3752, with an RSD of the identified numbers of less than 4% (Fig. ). Within 30 minutes, each LC-MS platform was able to identify more than 3300 proteins, of which 3080 proteins overlapped across all platforms, showing high qualitative consistency (Fig. ). In addition, the MSCohort analysis report across the 10 platforms with LC-SOP also showed good consistency (Fig. and Supplementary Data ). In particular, the Pearson correlation of protein intensity among the 7 Orbitrap instruments was 0.93-0.97, and the Pearson correlation between the timsTOF and Orbitrap instruments was 0.86-0.92 (Fig. ). Moreover, the average sequence coverage and the dynamic range of the proteins identified on the 10 LC-MS platforms with LC-SOP were also improved compared with those on the 10 platforms without LC-SOP. Furthermore, we used the data collected on multiple Orbitrap instrument platforms to investigate the effects of different LC and MS conditions on the reproducibility of the results. We classified the platforms into 3 groups based on their conditions: A (same MS condition, same LC condition), B (different MS condition, same LC condition), and C (different MS condition, different LC condition) (Supplementary Fig. ) (M03, M04, M06, and M09 were not included in the grouping).
The results showed that the data collected under the same LC condition (groups A and B) outperformed the data collected without LC-SOP (group C), and group A, with the same LC and MS conditions, showed the highest reproducibility. Group B, with the same LC condition but different MS types, also showed good qualitative and quantitative reproducibility, with an RSD of the qualitative results of <2% and a Pearson correlation of the quantitative results of >0.9. The group A and group B results were comparable to previously reported inter-instrument reproducibility results obtained from harmonized instruments.
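The intra- and inter-platform summaries used above, the median per-protein CV within a platform and the Pearson correlation of protein intensities between platforms, reduce to a few lines of code on a protein-by-run intensity matrix. A minimal sketch with simulated data; the layout and numbers are illustrative, not the study's actual matrices.

```python
# Minimal sketch (assumed layout): proteins as rows, runs as columns, raw intensities.
import numpy as np
import pandas as pd

def median_cv(intensities: pd.DataFrame) -> float:
    """Median per-protein CV (%) across replicate runs of one platform."""
    cv = intensities.std(axis=1, ddof=1) / intensities.mean(axis=1) * 100
    return float(cv.median())

def platform_correlation(a: pd.Series, b: pd.Series) -> float:
    """Pearson correlation of log2 protein intensities between two platforms."""
    shared = a.index.intersection(b.index)            # compare proteins quantified on both
    return float(np.corrcoef(np.log2(a[shared]), np.log2(b[shared]))[0, 1])

# Toy example: three replicate runs on each of two simulated platforms.
rng = np.random.default_rng(0)
base = pd.Series(rng.lognormal(10, 1, 500), index=[f"P{i}" for i in range(500)])
plat1 = pd.DataFrame({f"run{j}": base * rng.normal(1, 0.05, 500) for j in range(3)})
plat2 = pd.DataFrame({f"run{j}": base * rng.normal(1, 0.08, 500) for j in range(3)})
print(median_cv(plat1), platform_correlation(plat1.mean(axis=1), plat2.mean(axis=1)))
```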
To evaluate the quantitative accuracy, precision, and sensitivity of the urinary proteome on different platforms under the unified SOP, we prepared benchmarking samples by adding yeast and E. coli peptides at specified, distinct ratios to a complex urine peptide background. Sample A was 65% human urine peptides, 15% yeast, and 20% E. coli, and Sample B was 65% human urine peptides, 30% yeast, and 5% E. coli, similar to previous benchmarks. We analyzed samples A and B in technical triplicate on three LC-MS platforms, Orbitrap Fusion Lumos, Orbitrap Exploris 480, and timsTOF Pro 2 (Lumos, E480, and TIMS for short), using the with-LC-SOP data acquisition methods described above. First, we conducted a quality assessment of the data generated on the three platforms using MSCohort. Three technical repetitions within each platform demonstrated high qualitative and quantitative repeatability, with an RSD of the identified numbers of less than 1% and a median CV of protein intensity below 15% (Supplementary Fig. ). For the inter-instrument analysis, within 30 minutes, the total numbers of identified proteins were 4953, 5224, and 6006 in Lumos, E480, and TIMS, respectively. Among them, 4667 proteins overlapped across the 3 platforms, showing high qualitative consistency (Supplementary Fig. ). The Pearson correlation of protein intensity among the 3 LC-MS platforms was 0.93-0.97, comparable with that of the 10 platforms with LC-SOP described above (Supplementary Fig. ). Next, a more precise assessment of quantitative performance was performed based on the benchmarking samples. The numbers of quantified proteins from the different species on the 3 LC-MS platforms are shown in Fig. (Supplementary Data ). Proteins from different species correspond to different theoretical quantitative ratios (A:B ratios of 1:1, 1:2, and 4:1, i.e., expected log2 ratios of 0, -1, and 2 for human, yeast, and E. coli, respectively). As illustrated in Fig. , the median values of the log2 ratios were 0.02 to 0.07, -0.86 to -0.74, and 1.65 to 1.91, and the medians of the relative deviation from the theoretical ratio were 0.09 (0.05-0.11), 0.15 (0.11-0.2), and 0.18 (0.15-0.18) for human, yeast, and E. coli proteins, respectively. The median CV values were typically below 15% for Lumos and E480 data and below 10% for TIMS data (Fig. ). These results demonstrated that excellent label-free quantitative accuracy and precision were achieved on the different LC-MS platforms. As the ultimate goal of most proteomic analyses is to detect differentially expressed proteins (DEPs) between conditions, we assessed the sensitivity and specificity of DEP detection on the different LC-MS platforms using our benchmarking data sets. DEPs were extracted using the widely applied criteria of a fold change >1.5 and an adjusted p-value < 0.05. Under these criteria, both yeast and E. coli proteins in urine should be called as significantly different, as their expected ratios are 2-fold and 4-fold, respectively. The three LC-MS platforms showed high sensitivity of DEP detection in the Lumos data (67.5% of yeast and 94.5% of E. coli quantified proteins called as DEPs), E480 data (74.7% of yeast and 94.0% of E. coli quantified proteins), and TIMS data (80.4% of yeast and 96.0% of E. coli quantified proteins) (Fig. ). Moreover, in the pairwise comparison of human proteins with an expected ratio of 1:1, the three LC-MS platforms detected false DEPs at comparably low rates (0.2%, 0.2%, and 1.1% for Lumos, E480, and TIMS data, respectively) (Fig. ).
Furthermore, we assessed the robustness of DEP detection using receiver operating characteristic (ROC) curve analysis, which led to similar conclusions (Fig. ). In summary, all three LC-MS platforms could detect DEPs from the benchmarking dataset with high sensitivity and specificity. In addition, we provided further proof by applying this workflow to benchmark samples composed of human HEK 293 cells, yeast, and E. coli, labeled Sample A-H and Sample B-H (Supplementary Fig. ). Collectively, these results demonstrated that, under the unified SOP, most of the DEPs could be recalled accurately in complex mixed-species samples with high quantitative accuracy and precision even on different types of LC-MS platforms, which broadly increases confidence in DIA-based urinary proteomics as a reproducible method for large-cohort protein quantification.
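The DEP benchmark above (call a protein differentially expressed when the fold change exceeds 1.5 and the adjusted p-value is below 0.05, then score sensitivity on the spiked species and the false-call rate on the flat human background) can be reproduced with standard statistics libraries. A minimal sketch with simulated log2 intensities; it is not the exact statistical pipeline used in the study.

```python
# Minimal sketch: per-protein two-sample t-test between conditions A and B,
# Benjamini-Hochberg correction, and DEP calling at |fold change| > 1.5 and adjusted p < 0.05.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_background, n_spiked = 300, 100                     # simulated proteins; spiked set differs 2-fold
base = rng.normal(20, 2, n_background + n_spiked)     # per-protein baseline abundance (log2)
log2_a = base[:, None] + rng.normal(0, 0.1, (base.size, 3))   # 3 replicates of condition A
log2_b = base[:, None] + rng.normal(0, 0.1, (base.size, 3))   # 3 replicates of condition B
log2_b[n_background:] += 1.0                          # spiked proteins: |log2 ratio| = 1 (2-fold)

log2_fc = log2_b.mean(axis=1) - log2_a.mean(axis=1)
pvals = stats.ttest_ind(log2_b, log2_a, axis=1).pvalue
adj_p = multipletests(pvals, method="fdr_bh")[1]
is_dep = (np.abs(log2_fc) > np.log2(1.5)) & (adj_p < 0.05)

sensitivity = is_dep[n_background:].mean()            # spiked proteins recalled as DEPs
false_rate = is_dep[:n_background].mean()             # flat background proteins called as DEPs
print(f"sensitivity={sensitivity:.2f}, false DEP rate={false_rate:.3f}")
```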
We further illustrated the generalization by applying the above urinary proteomics SOP to clinical biomarker discovery research in different platforms. Herein, we collected a clinical cohort comprising 80 urine samples from colorectal cancer (CRC) patients and 80 samples from matched healthy controls (HC) (The detailed clinical information is shown in Supplementary Data ). Based on the above SOP, LC-MS data collection for these 160 samples was conducted on Lumos, E480, and TIMS, respectively (see Methods). QC samples were randomly analyzed during the collection process for systematic evaluation of reproducibility. In total, the three platforms generated 527 DIA experiments including 47 QC experiments. In the process of cohort experiments collection, an intra-experiment analysis based on MSCohort QC system was performed for each newly collected experiment to evaluate the individual data quality. After the number of experiments collected exceeded 2 runs, an inter-experiment analysis based on MSCohort QC system was performed to evaluate the stability and reproducibility of the instrument system. Finally, after all the samples have been collected on one instrument platform, inter-experiment analysis based on MSCohort QC system was performed to evaluate cohort data quality and detect low-quality experiments. This cohort experiment is a time course study with at least five consecutive days of acquisition on each instrument platform. We analyzed the overall cohort data quality based on the MSCohort QC system. First, the QC samples demonstrated good technical repeatability, with median Pearson correlation of protein intensity > 0.94 for each of the 3 LC-MS platforms (Supplementary Fig. ), indicating good LC-MS system stability. The results showed that the overall chromatographic retention time was stable (the average retention time deviation <0.25 min) for 7 consecutive days (Figure S.Note ). In addition, MSCohort detected and reported low-performance experiments based on isolation forest algorithm, as shown in Supplementary Fig. , 8 low-quality samples were reported in TIMS. Among them, 7 and 6 samples were also reported in Lumos and E480, respectively. The corresponding heatmap in MSCohort report indicated that there were significant differences between these 8 samples and other samples in multiple inter-experiment metrics (at least 7 of 23 metrics showed a variation more than two standard deviations (SD) from its median). Among them, 5 samples (D943, D1116, D1412, H771, D994) showed higher ratios of contaminants (erythrocytes, cellular debris, or serum high abundance proteins) than the other samples, which resulted in the sample-to-sample variability compared to regular urinary proteins (Methods). Another 3 samples (H349, D1036, D1069) showed lower Pearson correlation, and higher robust standard deviation at precursor, peptide, and protein groups intensity with the other samples, indicating these samples were heterogeneous compared with other samples (Supplementary Fig. ). Thus, these 8 experiments were excluded from further analysis (Supplementary Fig. ). A total of 6780, 7089, and 6620 protein groups were identified in Lumos, E480, and TIMS, respectively. And 5227 proteins overlap across the three LC-MS platforms (Fig. , Supplementary Fig. ). Subsequently, we analyzed the quantitative consistency among the three LC-MS platforms. 
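Before turning to the cross-platform consistency analysis, the low-quality-experiment screening described above, an isolation forest over the inter-experiment metrics combined with a rule flagging experiments in which at least 7 of the 23 metrics deviate by more than two standard deviations from their medians, can be sketched as follows. The metric table, its column names, and the contamination setting are placeholders, and combining the two criteria with a logical AND is an illustrative choice rather than MSCohort's exact rule.

```python
# Minimal sketch: flag outlier experiments from an experiments x metrics table.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_outlier_experiments(metrics: pd.DataFrame, contamination: float = 0.05,
                             sd_threshold: float = 2.0, min_deviant_metrics: int = 7) -> pd.DataFrame:
    """metrics: rows = experiments, columns = inter-experiment QC metrics (placeholder names)."""
    z = (metrics - metrics.median()) / metrics.std(ddof=1)    # deviation from the cohort median
    n_deviant = (z.abs() > sd_threshold).sum(axis=1)          # metrics more than 2 SD from the median
    iso = IsolationForest(contamination=contamination, random_state=0).fit(metrics)
    iso_outlier = pd.Series(iso.predict(metrics) == -1, index=metrics.index)
    return pd.DataFrame({"isolation_forest_outlier": iso_outlier,
                         "n_deviant_metrics": n_deviant,
                         "flagged": iso_outlier & (n_deviant >= min_deviant_metrics)})

# Toy usage: 50 experiments, 23 metrics, with two deliberately degraded runs.
rng = np.random.default_rng(2)
m = pd.DataFrame(rng.normal(0, 1, (50, 23)),
                 index=[f"run{i:02d}" for i in range(50)],
                 columns=[f"metric_{j:02d}" for j in range(23)])
m.iloc[[5, 17], :10] += 6                                     # simulate degraded experiments
print(flag_outlier_experiments(m).query("flagged"))
```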
The unsupervised learning t-Distributed Stochastic Neighbor Embedding (t-SNE) results showed that the results from three different LC-MS platforms for the same sample cluster together, and no platform effect was observed (Fig. , Supplementary Fig. ). In the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) model, the CRC and Control groups can be clearly separated, and 100-fold cross-validation experiments indicate that the model did not overfit (Supplementary Fig. ). These results demonstrated the good consistency and reproducibility of the 3 LC-MS platforms. Different expressed proteins analysis showed that a total of 455, 539, and 679 proteins were reported as DEPs (Benjamini-Hochberg adjusted p -value < 0.05) in Lumos, E480, and TIMS, respectively (Fig. and Supplementary Data ). And 215 DEPs were overlapped across 3 platforms. CRC/HC fold changes of these 215 proteins were highly correlated with Pearson correlation coefficients at r = 0.99, r = 0.95, and r = 0.95 for the comparisons of E480 and Lumos, E480 and TIMS, and Lumos and TIMS, respectively (Fig. ). Above results showed the high quantitative reproducibility of three platforms. According to function annotation, the top enriched pathways for DEPs of 3 LC-MS platforms showed a high degree of consistency, with upregulation of cell proliferation, inflammatory response, and metabolism pathways (actin cytoskeleton signaling, acute phase response signaling, complement system, etc.) and downregulation of cell death- and apoptosis-related pathways (FAK Signaling, FAT10 Cancer Signaling Pathway, etc.) (Fig. , Supplementary Data ). These results were consistent with the previous reports that tumor proliferation, migration, and metabolism modules were activated and cell death and apoptosis were inhibited in CRC patients . We also compared our results with previous CRC tissue proteomics analysis study . GO enrichment analysis (Supplementary Fig. ) showed that extracellular matrix proteins were both enriched in urine and tissue and ribosome proteins were only enriched in tissue. The proteins involved in complement activation, immune response, and cell growth/development process were highly enriched in urine, and the proteins involved in glucose metabolic, amino acid metabolic, and ribonucleotide metabolic processes were highly enriched in tissue. In addition, the proteins involved in cell adhesion, coagulation, and regulation of peptidase activity were both enriched in urine and tissue. These results suggested that urinary proteome changes could reflect not only the changes of tissue , but also the changes of body immune systems, etc. Among them, the protein related to complement activation was significantly upregulated and showed a high degree of consistency (Supplementary Fig. ). Complement is a key player in the innate immune defense against pathogens and the maintenance of host homeostasis . In the tumor-immune interaction, complement-associated proteins play a vital role whether directly or indirectly by regulating tumorigenesis, development, and metastasis . In CRC, tumor cells were found to produce Complement C3 (C3) component thus leading to modulation of the response of macrophages and its anti-tumor immunity, via the C3a-C3aR axis and PI3K signaling pathways. The complement C5a/C5aR pathway was found to induce cell proliferation, motility, and invasiveness . 
Complement components C5b, C6, C7, C8, and C9 form the membrane attack complex (MAC); at sublytic densities, MAC accumulation on the cell membrane promotes cell proliferation and differentiation, inhibits apoptosis, and protects cells against complement-mediated lysis. Complement factor H (CFH) and complement factor I (CFI) modulate fundamental processes of the tumor cell, promoting proliferation and tumor progression in animal models. Collectively, these findings highlight the value of the complement system in tumor progression, especially in CRC. Next, we used the 67 upregulated proteins among the 215 shared DEPs as input variables and investigated their classification performance for CRC/HC stratification. The top 15 proteins with the highest per-protein area under the curve (AUC) on each of the 3 LC-MS platforms were chosen as candidates (Supplementary Fig. ), and the 8 DEPs highly ranked on all 3 platforms were submitted to further machine learning model building (Supplementary Data ). We compared six machine learning classifiers and performed cross-validation by training the model on one platform and testing it on the other two platforms. Finally, a 5-protein panel (C9, CFI, CFH, RELT, GDF15) showed the highest and most reproducible AUC values on the 3 different LC-MS platforms (Supplementary Fig. ). On average, the support vector machine (SVM) model showed the highest AUC values, 0.88, 0.93, and 0.89, for the classification of CRC versus HC in Lumos, E480, and TIMS, respectively (Fig. ). A previous study reported that C9 was significantly upregulated in colorectal cancer plasma. Growth/differentiation factor 15 (GDF15) is a divergent member of the transforming growth factor-β (TGF-β) superfamily, and experimental evidence shows that GDF15 enhances tumor growth, stimulates cell proliferation, and promotes distant metastases. Two previous large studies of blood and colorectal tumor samples also found that high plasma levels of GDF15 before CRC diagnosis were associated with greater CRC-specific mortality. Taken together, the data from three different LC-MS platforms indicated consistent and excellent performance for biomarker discovery and patient stratification in CRC. These results demonstrated the generalizability of urinary proteomics for supporting clinical discovery proteomics research under a unified SOP and the MSCohort QC system.
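The cross-platform modeling strategy described above, training a classifier on the panel intensities measured on one platform and testing it on another before reporting the AUC, can be outlined as follows. The 5-protein panel names come from the text, but the simulated intensities, the scaling step, and the SVM settings are illustrative assumptions rather than the study's exact model.

```python
# Minimal sketch: train an SVM on one platform's panel intensities, test on another platform.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

panel = ["C9", "CFI", "CFH", "RELT", "GDF15"]      # 5-protein panel reported in the text

def simulate_platform(rng, n_per_group=80, shift=0.8):
    """Simulated log2 panel intensities for HC (label 0) and CRC (label 1) samples."""
    hc = rng.normal(20, 1, (n_per_group, len(panel)))
    crc = rng.normal(20 + shift, 1, (n_per_group, len(panel)))   # panel proteins upregulated in CRC
    X = np.vstack([hc, crc])
    y = np.array([0] * n_per_group + [1] * n_per_group)
    return X, y

rng = np.random.default_rng(3)
X_train, y_train = simulate_platform(rng)          # e.g., data acquired on platform 1
X_test, y_test = simulate_platform(rng)            # e.g., data acquired on platform 2

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"cross-platform AUC: {auc:.2f}")
```

Swapping the training and test platforms, and repeating for each platform pair, gives the kind of cross-platform evaluation summarized above.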
Large-scale cohort studies usually involve multi-center and long-term experiments for which comprehensive QC system is needed to ensure reproducibility and robustness during data generation and integration , . In this study, we developed MSCohort QC system to perform urinary individual DIA experiment and the whole cohort data quality evaluation. MSCohort extracted 70 metrics covering the intra- and inter-experiment, as well as established a DIA scoring system to provide the relationship between metrics and identification/quantification results, assisting users in monitoring the LC-MS workflow performance, detecting potential problems, providing optimizing direction, detecting low-quality experiments, and facilitating the data quality control and experimental standardization with large cohort studies. This system could be applied not only to urinary proteomics analysis but also to large-scale data analysis of other samples (such as cell, tissue, blood, etc.). The unified urinary proteome SOP was developed based on the MSCohort QC system and applied in multiple laboratories. Analysis results from 20 LC-MS platforms demonstrated the necessity of establishing the SOP. Meanwhile, results from 10 platforms without SOP indicated that these metrics showed lower scores, including the identification rate of MS2 scans, the utilization rate of MS2 scans, peak counts of MS2, peak intensities of MS2, and chromatographic invalid acquiring time, etc. in experiments with fewer identification results (Supplementary Fig. ), which also indicated that we should pay attention to above these metrics when conducting urinary proteomics experiments. In particular, the identification rate of MS2 scans and the utilization rate of MS2 scans in the DIA scoring formula play an important role in the evaluation and optimization of individual experiments. We further applied the comprehensive QC system and unified SOP to the analysis of the complex mixture of digests from human urine, yeast, and E. coli , to investigate the ability to detect DEPs with the quantitative accuracy, precision, and robustness of the different platforms. These results demonstrated that most of the DEPs could be recalled accurately in complex urine backgrounds with high quantitative accuracy and precision even in different types of LC-MS platforms, which would broadly increase the confidence in DIA-based urinary proteomics as a reproducible method for large cohort biomarker discovery research. Moreover, the above workflow was applied to clinical large-cohort colorectal cancer (CRC) urinary proteome with more than 500 proteome experiments from three LC-MS platforms. More than 8000 proteins were reported from the three platforms. To the best of our knowledge, this study presented the deepest urinary proteome coverage, representing a promising basis for the discovery of biomarkers. Three different LC-MS platform analyses reported consistent quantitative precision and disease patterns. Interestingly, our data revealed complement systems were significantly activated in CRC patients. When combined with machine learning, the urinary proteome data achieved an AUC > 0.9 to classify CRC and HC. These results validated urinary proteomics as a valuable strategy for biomarker discovery and patient classification in CRC. The demand for precision medicine is driving the need to increase throughput, improve consistency and accuracy, facilitate longitudinal research, and make data obtained across laboratories more comparable. 
Previous studies have demonstrated the reproducibility and quantitative performance of DIA proteomics with harmonized mass spectrometry instrument platforms and standardized data acquisition procedures in benchmark cell and tissue samples , . Our study expanded the technology to different types of mass spectrometers from different vendors, and higher complexity urine samples. The results showed that the highest reproducibility was achieved with the same LC and MS condition, which was consistent with the previous study (Supplementary Fig. ). Our study also found that different LC-MS platforms (Lumos, E480, and TIMS) also achieved high consistency under the same LC conditions and comprehensive QC system. Consistent quantitative accuracy and the ability to discover biomarkers were also validated in complex benchmarking samples and large-scale cohort clinical samples. These results highlighted the robustness of urinary proteomics under the combination of comprehensive QC system and unified SOP to support both basic discovery proteomics research and population-scale clinical sample analyses in a high-throughput manner. This work also increased the confidence that distributed urinary proteomics studies with hundreds to thousands of samples and data integration between labs are becoming feasible. Recent advances in mass spectrometry hardware have provided a boost in the depth of standard analyses and enabled near-complete model proteome quantification in minimal measuring time , . Coupled with the development of data processing software, and the establishment of comprehensive quality control systems, urinary proteomics based on DIA technology is poised to mature further, showing potential for routine analysis of large clinical cohorts with the necessary depth and sample size to support clinical decision-making based on biomarker signatures.
Preparation and distribution of the quality control urine samples First-morning urine (midstream) samples were collected from ten healthy individuals at Peking Union Medical College. Ten urine samples were combined and centrifuged at 3000 × g for 30 minutes at 4 °C to remove cell debris. The supernatant was transferred into the 2 mL EP tube (Corning, USA) and stored at −80 °C for further analysis. The quality control (QC) urine samples were prepared at the Institute of Basic Medical Sciences Chinese Academy of Medical Sciences, School of Basic Medicine Peking Union Medical College, and then the urine peptide QC samples were distributed to 20 LC-MS platforms. Urine peptides were prepared by the 96 DRA-Urine method by following the same steps as in the previous work . Briefly, a total of 400 mL of pooled urine mixture was processed. Urine (2 mL each tube) was reduced with 20 mM dithiothreitol (DTT) for 5 min at 95 °C, and then alkylated with 50 mM iodoacetamide (IAM) at room temperature (RT) in the dark for 45 min, then urine proteins were pelleted using 6-fold volume precooled acetone, and centrifuged at 10,000 × g for 10 min at 4 °C. The protein precipitate was re-dissolved in 200 μL of 20 mM Tris(pH 8.0) and then combined. The concentration of pooled urine proteins was quantified using Pierce™ BCA protein assay kit (Thermo Fisher Scientific, USA) following the manufacturer’s protocol. During protein precipitation and quantification, each well of the 96-well PVDF plate (MSIPS4510, Millipore, Billerica, MA) was prewetted with 150 μL of 70% ethanol and equilibrated with 300 μL of 20 mM Tris. For each well, one hundred micrograms of proteins were transferred to the 96-well PVDF plate. The samples were then washed three times with 200 μL of 20 mM Tris buffer (pH 8.0) and centrifugated. Proteins were digested by adding 30 μL of 20 mM Tris buffer (pH 8.0) with trypsin at a ratio of 50:1 (w:w) on the membrane. The samples were subjected to microwave-assisted protein enzymatic digestion twice in a water bath for 1 min under microwave irradiation and then at 37 °C water bath for 2 h. Subsequently, the resulting peptides were collected by centrifugation at 3000 × g for 5 min. The eluted peptides were combined together, and purified with Sep-Pak C18 Vac Cartridge (Waters). The concentration of pooled peptides was determined by using Pierce™ Quantitative Colorimetric Peptide Assay kit (Thermo Scientific) following the manufacturer’s protocol. Then peptides were aliquoted and lyophilized. Twenty micrograms of urinary peptides were resolved in 0.1 % formic acid (FA) to 1 μg/μL. Subsequently, eleven non-naturally occurring synthetic peptides from the iRT kit (Biognosys) were spiked into the sample at a ratio of 1:30 (v/v) to correct relative retention times between acquisitions. Finally, samples were shipped to the 20 LC-MS platforms. Preparation of Cell, E. coli , and S. cerevisiae samples HEK 293 cells were grown in Dulbecco’s modified eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and penicillin/streptomycin (1:1000) in 37 °C chamber supplied with 5% CO 2 . Cells were lysed in buffer (50 mM Tris-HCl pH 8.0, 2 % sodium dodecyl sulfate (SDS), Protease Inhibitor) and sonicated for 5 minutes with power on 4 seconds and off 3 seconds at 25% energy. Proteins were pelleted by cold acetone, then resolved in 25 mM Tris-HCl buffer (pH 8.0), and concentration was determined by the bicinchoninic acid assay (BCA) kit (Thermo Scientific). 
Escherichia coli DH5a was cultured in Luria Broth (LB) medium at 37 °C to mid-log phase, shaking at 240 rpm/min, in Luria Broth (LB). S. cerevisiae CG1945 were grown at 30 °C to mid-log phase, shaking at 300 rpm/min in the yeast-peptone-dextrose (YPD) medium. Cells of E. coli and S. cerevisiae were harvested by centrifugation at 4000 × g for 5 min and washed twice with ice-cold phosphate buffered saline (PBS). Cell pellets were resuspended in lysis buffer (50 mM Tris-HCl pH 8.0, 2% SDS, Protease Inhibitor). Then sonicated for 10 minutes with power on 4 seconds and off 3 seconds at 25% energy. Proteins were pelleted by cold acetone, then resolved in 25 mM Tris-HCl buffer (pH 8.0) and concentration was determined by the bicinchoninic acid assay (BCA) kit (Thermo Scientific). The HEK 293 cells, Escherichia coli , and S. cerevisiae proteins were delivered to digestion by Filter-aided sample preparation (FASP) method. In brief, 100 μg of cell lysates were reduced with 20 mM DTT at 95 °C for 5 minutes and alkylated with 50 mM IAM for 45 min at room temperature with dark. Protein solutions were loaded into the 10 kD ultracentrifugation tube equivalented with 25 mM NH 4 HCO 3 buffer. Then proteins were digested with trypsin (Promega) with a 50:1 ratio (w/w) overnight at 37 °C in an ultracentrifugation tube. Peptides were desalted by SPE column (Waters), aliquoted, and dried by SpeedVac. Peptides were resuspended at a concentration of 1 µg/µL with HPLC-grade water containing 0.1% (v/v) FA. The pooled sample A was prepared by mixing human urine, S. cerevisiae (yeast), and Escherichia coli (E. coli) peptides at 65%, 15%, and 20% w/w, respectively. The pooled sample B was prepared by mixing human urine, yeast, and E. coli protein digests at 65%, 30%, and 5% w/w, respectively. The pooled sample A-H was prepared by mixing human (HEK 293), yeast ( S. cerevisiae ), and E. coli (Escherichia coli) peptides at 65%, 15%, and 20% w/w, respectively. The pooled sample B-H was prepared by mixing human HEK 293, yeast, and E. coli protein digests at 65%, 30%, and 5% w/w, respectively. The iRT kit (Biognosys) was added to each of the pooled samples at a ratio of 1:30 (v/v). For LC-MS analysis, 1 µg of pooled sample was adopted. Preparation of human colorectal cancer and healthy control samples A total of 80 CRC patients (48 males and 32 females; median age 57 years, min-max: 42–69 years) were recruited from the Cancer Hospital, Chinese Academy of Medical Sciences. All patients were pathologically diagnosed by two senior pathologists, and first-morning midstream urine samples were collected before surgical operations or chemotherapy/radiotherapy. In addition, 80 urine samples from healthy control (HC) (52 males and 28 females; median age 55 years, min-max: 40–68 years) were obtained from the Health Medical Center of the Cancer Hospital. The enrollment criteria for HC subjects were as follows: (1) the absence of benign or malignant tumors; (2) a qualified physical examination finding no dysfunction of vital organs and (3) normal renal function and without albuminuria. Supplementary Data lists the demographic and clinical characteristics of the 80 CRC patients and 80 HCs. CRC and HC samples urine protein preparation and digestion were performed in the same way as the quality control urine samples by the 96 DRA-Urine method. In addition, each 96-well plate contains 3 Quality Assurance (QA) samples (pooled urine samples of equal protein amount from each sample) to monitor the reproducibility of the sample preparation. 
The resulting urine peptides from each sample were equally divided into triplicates for data acquisition on three LC-MS platforms, respectively. Data acquisition of urinary proteome from 20 LC-MS platforms The 20 participant platforms used 3 different types of mass spectrometers, including Orbitrap (ThermoFisher Scientific), timsTOF (Bruker Daltonik), and ZenoTOF (SCIEX). To systematically analyze the variation among different LC-MS platforms and the main influencing factors, we divided 20 platforms into 2 groups. Ten of these platforms used the procedures and parameters they routinely used (number M01-M10), and the other 10 LC-MS platforms employed a unified LC condition (the same type of column and same gradient) and consistent MS parameters for the same type of instrument (number U01-U10). The detailed LC and MS acquisition parameters are provided in Supplementary Data , and the data acquisition was performed according to the SOP for urinary proteomics (Supplementary Note ). All participant LC-MS platforms collected 1 pure iRT (Biognosys) data, 3 DDA data, 3 DIA data, and 1 blank to assess carry-over. The acquisition time was 30 min. The performance of DDA data acquired across multi-platform was investigated and provided in Supplementary Fig. . Data processing of urinary proteome from 20 LC-MS platforms Data-independent acquisition data from the 20 LC-MS platforms dataset were processed with Spectronaut v.18.0 performing the directDIA analysis. All searches were performed against the human SwissProt database (Homo sapiens, 20386 reviewed entries, 2022_06 version), concatenated with iRT peptide.fasta file (downloaded from the Biognosys webpage). Briefly, the specific enzyme used Trypsin/P, peptide length from 7 to 52, max missed cleavages was set 2, toggle N-terminal M turned on, Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification. The extraction of data used dynamic MS1 and MS2 mass tolerances, a dynamic window for extracted ion current extraction window, and a non-linear iRT calibration strategy. The identification was carried out using a kernel density estimator and Qvalue cut-off of 0.01 at precursor and protein levels. The top N (min:1; max:3) precursors per peptide and peptide per protein were used for quantification. Peptide intensity was calculated by the mean precursor intensity. Cross-run normalization was turned off. Additional DDA data collected under the same conditions as DIA data were added to create a hybrid library. The data processing results were exported using customized reports for further data analysis using MSCohort. The customized reports required in MSCohort were provided in Supplementary Data . According to the different DIA methods of different LC-MS platforms, PG.MS2Quantity results were chosen for conventional DIA method, and PG.MS1Quantity results were chosen for HRMS1-DIA methods for subsequent quantitative analysis. Data acquisition of pooled samples A and B Equivalent amounts of pooled sample A, sample B, sample A-H, and sample B-H were shipped to three LC-MS platforms (U02-Lumos, U06-E480, and U08-timsPro 2). Samples were resuspended to final concentrations of 1 μg/µL in 0.1% FA with iRT and analyzed in three technical replicates using the DIA method provided in SOP on each LC-MS platform. Data processing of pooled samples A and B Data-independent acquisition spectra in the 3 participant platforms dataset were analyzed with Spectronaut v.18.0 performing the directDIA analysis. 
All searches were performed against the Uniprot database for human (organism ID 9606, 20386 reviewed entries, 2022_06 version), yeast (organism ID 559292, 6727 entries, 2023_02 version), E. coli (organism ID 83333, 4634 entries, 2023_02 version) taxonomies, concatenated with iRT peptide.fasta file (downloaded from the Biognosys webpage). Default settings were used unless otherwise noted. Cross-run normalization was turned off. Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification. Data acquisition of CRC and HC proteomes Equivalent amounts of urine peptides were shipped to three LC-MS platforms (U02-Lumos, U06-E480, and U08-timsPro 2). Samples were resuspended to final concentrations of 1 μg/ µL in 0.1% FA with iRT and analyzed using the DIA method provided in SOP. QC samples were analyzed in triplicate before CRC and HC samples analyses and a single QC sample analysis was performed midway through the overall analysis. The acquisition of samples was randomized to avoid bias. LC-MS analysis of generating dataset for three different LC-MS platforms was collected by Orbitrap Fusion Lumos coupled with an EASY-nLC 1000 system, an Orbitrap Exploris 480 mass spectrometer coupled with Vanquish Neo UHPLC system (Thermo Fisher Scientific), and a timsTOF Pro 2 mass spectrometer (Bruker) coupled with an UltiMate 3000 UHPLC system. All three LC-MS platforms were operated in DIA mode over a 30-minute total gradient. For three LC systems, peptides ware separated at a constant flow rate of 500 nL/min by the same type of analytical column (50 cm × 50 μm monolithic silica capillary column (Beijing Uritech Biotech)). LC mobile phases A and B were 100% H 2 O with 0.1% FA (v/v) and 80% ACN / 20% H 2 O with 0.1% FA (v/v), respectively. In 30 min experiments, the gradient of mobile phase B increased from 5% to 20% over 22 min and then increased to 30% over 3 min, a further 1 min plateau phase at 90% B, and a 4 min wash phase of 1% B. Data acquisition on Orbitrap Fusion Lumos was performed in DIA mode using 80 variable windows covering a mass range of 350–1200 m/z. The resolution was set to 120,000 for MS1 and 30,000 for MS2. The Normalized AGC Target was 300% for MS1 and 200% for MS2, with a maximum injection time of 50 ms in MS1 and 50 ms in MS2. HCD Normalized Collision Energies was set to 32%. Data acquisition on Orbitrap Exploris 480 was performed in DIA mode using 80 variable windows covering a mass range of 350–1200 m/z. The resolution was set to 120,000 for MS1 and 30,000 for MS2. The Normalized AGC Target was 300% for MS1 and 200% for MS2, with a maximum injection time of 50 ms in MS1 and 50 ms in MS2. HCD Normalized Collision Energies was set to 30%. Data acquisition on timsTOF Pro 2 was performed in diaPASEF mode using 50 windows. The MS spectra were acquired from 100 to 1700 m/z. The ion mobility was scanned from 0.75 to 1.3 Vs/cm 2 . The ramp time was set to 100 ms. The collision energy was ramped linearly as a function of the mobility from 59 eV at 1/K0 = 1.6 Vs/cm 2 to 20 eV at 1/K0 = 0.6Vs/cm 2 . Isolation windows of a 16 m/z width were set to cover the mass range of 350 to 1200 m/z in diaPASEF. Data processing and analysis of CRC and HC proteomes The DIA data from 3 LC-MS platforms were analyzed separately with Spectronaut v.18.0 performing the directDIA analysis. Default settings were used unless otherwise noted. Cross-run normalization was turned off. Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification. 
Data processing and analysis of CRC and HC proteomes

The DIA data from the 3 LC-MS platforms were analyzed separately with Spectronaut v.18.0 using directDIA analysis. Default settings were used unless otherwise noted. Cross-run normalization was turned off. Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification. All searches were performed against the human SwissProt database (Homo sapiens, 20,386 reviewed entries, 2022_06 version), concatenated with the iRT peptide .fasta file. The data processing results were exported using customized reports for further data analysis with MSCohort. Data flagged as low quality by MSCohort were excluded. Log2 transformation and directLFQ normalization were then performed for all samples. Proteins with missing values in < 50% of the samples in each group were retained for further analysis. Missing values were imputed based on the sequential k-nearest neighbor (Seq-KNN) method using NAguideR .

Statistical analyses

Differentially expressed protein analysis was performed using the LIMMA package (version 3.58) in R (version 4.3); proteins with a Benjamini & Hochberg-adjusted p < 0.05 were considered significantly altered between CRC and HC. Pathway analysis of protein alterations was performed using Ingenuity Pathway Analysis (Qiagen). The correlation, t-SNE, and heatmap plots were generated using the Corrplot (version 0.92), Rtsne (version 0.16), and ComplexHeatmap (version 2.16.0) packages in R (version 4.3). Pattern recognition analysis (OPLS-DA) was performed using SIMCA 14.0 (Umetrics, Sweden) software. Six machine learning models (Logistic Regression, K-Nearest Neighbor, Gaussian Naïve Bayes, Support Vector Machines, Random Forest, Gradient Boosting Decision Tree) were built using scikit-learn modules (version 0.23) in Python (version 3.7). The UniProtKB/Swiss-Prot public database was used to map the gene names of DEPs, and enrichment of GO terms and KEGG pathways was performed using clusterProfiler (v.4.8.3) in R (version 4.3). The protein–protein interaction (PPI) plot was generated using STRING (v.11.0) and Cytoscape (v.3.7.2).
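The six-classifier comparison mentioned above can be sketched as follows. The matrix X (samples × proteins, log2-normalized intensities) and the label vector y are placeholders, and the hyper-parameters shown are scikit-learn defaults rather than the authors' exact settings; the sketch only illustrates the overall evaluation pattern.

# Minimal sketch: cross-validated ROC-AUC for the six classifiers named above.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "GaussianNB": GaussianNB(),
    "SVM": SVC(probability=True),
    "RandomForest": RandomForestClassifier(n_estimators=500, random_state=0),
    "GBDT": GradientBoostingClassifier(random_state=0),
}

def benchmark(X, y, models=models):
    """Return the mean 5-fold cross-validated ROC-AUC for each classifier."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return {name: cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
            for name, clf in models.items()}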
MSCohort system

The workflow of MSCohort consists of two modules: intra-experiment analysis and inter-experiment analysis. Intra-experiment analysis enables the systematic evaluation and optimization of an individual experiment. We previously developed the quality control software tool MSRefine to evaluate and optimize the performance of individual DDA experiments . MSCohort integrates the metrics and functions of MSRefine and establishes a quality control system for individual DIA experiments. Here, we mainly focus on introducing the metrics and steps for the evaluation and optimization of an individual DIA experiment in MSCohort. The QC metrics for DIA experiments are divided into six categories: sample, chromatography, DIA windows, ion source, MS1 and MS2 signal, and identification result. We established a metric-score system for individual experiment quality evaluation and data optimization (Supplementary Fig. ). A detailed description of the metrics is provided in Supplementary Data . The workflow for intra-experiment analysis consists of three steps: reading .raw/.d/.wiff files and identification/quantitation results, extracting metrics and scoring, and generating a visual report.

Step 1: Reading the .raw/.d/.wiff files and identification/quantitation results. The .raw/.d/.wiff files can be converted to .ms1/.ms2 files using pXtract, timsTOFExtract, and wiffExtract, respectively. For processing of timsTOF data, timsTOFExtract uses TimsPy to convert the proprietary format (Bruker Tims data format, TDF) to the textual .ms1/.ms2 files. All three in-house tools are embedded in MSCohort. In addition, the identification and quantification results are extracted from Spectronaut , and the customized reports required by MSCohort are provided in Supplementary Data .

Step 2: Extracting metrics and scoring. This module carries out two tasks: (1) extraction of metrics and (2) calculation of the first-level and second-level scores. To represent the experimental conditions of DIA with a mathematical model, we designed a quality scoring system for DIA data based on our previous DDA data quality scoring system . The DDA scoring formula is expressed as:

$$N_{\mathrm{identified\_precursors}} = N_{\mathrm{acquired\_MS2}} \times Q_{\mathrm{MS2}} \times P_{\mathrm{MS2\_per\_precursor}} \quad (2)$$

where $N_{\mathrm{identified\_precursors}}$ is the number of identified peptide precursors, $N_{\mathrm{acquired\_MS2}}$ is the number of acquired MS2 scans, $Q_{\mathrm{MS2}}$ is the identification rate of the MS2 scans (the number of identified MS2 scans / the number of acquired MS2 scans), and $P_{\mathrm{MS2\_per\_precursor}}$ is the utilization rate of the MS2 scans (the number of unique peptide precursors / the number of identified MS2 scans).
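The three factors in formula (2) can all be derived from simple run-level counts, as the short sketch below shows. The function name and signature are illustrative, not part of the MSCohort API; the identity holds by construction, so the value of the function is mainly in reporting the individual terms.

# Minimal sketch of the DDA scoring decomposition in formula (2).
def dda_score_factors(n_acquired_ms2, n_identified_ms2, n_unique_precursors):
    q_ms2 = n_identified_ms2 / n_acquired_ms2                    # MS2 identification rate
    p_ms2_per_precursor = n_unique_precursors / n_identified_ms2 # MS2 utilization rate
    n_identified_precursors = n_acquired_ms2 * q_ms2 * p_ms2_per_precursor
    return {
        "Q_MS2": q_ms2,
        "P_MS2_per_precursor": p_ms2_per_precursor,
        "N_identified_precursors": n_identified_precursors,
    }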
The DIA scoring formula is expressed as:

$$N_{\mathrm{identified\_precursors}} = N_{\mathrm{acquired\_MS2}} \times Q_{\mathrm{MS2}} \times \left( N_{\mathrm{precursor\_per\_MS2}} / R_{\mathrm{precursor}} \right) \quad (3)$$

where $N_{\mathrm{identified\_precursors}}$ is the number of identified peptide precursors, $N_{\mathrm{acquired\_MS2}}$ is the number of acquired MS2 scans, $Q_{\mathrm{MS2}}$ is the identification rate of the MS2 scans (the number of identified MS2 scans / the number of acquired MS2 scans), $N_{\mathrm{precursor\_per\_MS2}}$ is the spectra complexity of the MS2 scans (the number of redundant identified precursors / the number of identified MS2 scans), $R_{\mathrm{precursor}}$ is the precursor duplicate identification rate (the number of redundant identified precursors / the number of identified precursors), and $N_{\mathrm{precursor\_per\_MS2}} / R_{\mathrm{precursor}}$ is the utilization rate of the MS2 scans (the number of unique peptide precursors / the number of identified MS2 scans).

This DIA scoring formula was designed based on the DDA scoring formula; the utilization rate of the MS2 scans is divided into the spectra complexity of the MS2 scans and the precursor duplicate identification rate. Since the DIA method fragments all parent ions in an isolation window to obtain a mixed MS2 spectrum, an MS2 scan can in theory be matched to multiple precursors. Therefore, we established a spectra complexity index to represent the number of precursors that can be identified from an average MS2 spectrum/scan. The spectra complexity depends on the number of DIA windows and the window size. For example, in Supplementary Note , when we set 80 MS2 windows per cycle, the average window size was 6 Da, the spectra complexity (redundant identified precursors / identified MS2 scans) was 2.27, and the precursor duplicate identification rate (redundant identified precursors / identified precursors) was 1.58. Therefore, the utilization rate of the MS2 scans with 80 MS2 windows was 1.44 (2.27/1.58 = 1.44). When the number of MS2 windows was set to 22 per cycle, the average window size was 26 Da and the spectra complexity increased to 4.1; however, the corresponding precursor duplicate identification rate was 4.07, so the utilization rate of the MS2 scans was 1.01 (4.1/4.07 = 1.01). Therefore, balancing the spectra complexity and the precursor duplicate identification rate is the key to improving the utilization rate of the MS2 scans. Formula (3) is a simple formula, but it can be extended and used to characterize different kinds of DIA experiments.
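The worked example above can be reproduced with a few lines of arithmetic. The function below is an illustrative sketch (not MSCohort's implementation); the two printed values use the ratios quoted in the text for the 80-window and 22-window settings.

# The MS2 utilization rate in formula (3) is spectra complexity divided by the
# precursor duplicate identification rate; both are ratios of run-level counts.
def dia_ms2_utilization(n_redundant_precursors, n_identified_ms2, n_identified_precursors):
    spectra_complexity = n_redundant_precursors / n_identified_ms2
    duplicate_rate = n_redundant_precursors / n_identified_precursors
    return spectra_complexity / duplicate_rate  # = unique precursors / identified MS2 scans

# Using the ratios reported in the text directly:
print(round(2.27 / 1.58, 2))  # 80 windows per cycle -> 1.44
print(round(4.10 / 4.07, 2))  # 22 windows per cycle -> 1.01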
Step 2.1 Extraction of metrics. MSCohort reads the .ms1/.ms2 files, extracting and calculating MS1- or MS2-related metrics, including cycle time, ion injection time, peak intensity, peak counts, and the number of MS1 or MS2 scans. At the same time, MSCohort extracts peptide features from the MS1 scans, analyzing how many features are detectable by high-resolution MS and how many of them are identified by the search engine. After data processing, MSCohort obtains further metrics from the Spectronaut report, including the number of identified precursors, peptides, and proteins. In addition, the data processing results include metrics related to missed cleavages of peptides, peak width, FWHM, mass accuracy, precursor intensity, and protein group intensity. MSCohort also calculates the spectra complexity of the MS2 scans and the precursor duplicate identification rate to analyze whether the MS2 scans are fully utilized for peptide precursors. A detailed introduction to all the metrics proposed by MSCohort, including their specific meanings and extraction processes, is provided in Supplementary Data .

Step 2.2 Calculation of the first-level and second-level scores. It is not convenient for users to comprehensively evaluate the data if only the values of each metric are displayed. Therefore, we assigned scores to each metric and established a first-level and second-level scoring system (Supplementary Fig. a, ). MSCohort computes a quality score for each of the QC metrics using a score function (see below), and these QC metric quality scores are defined as second-level scores. The five categories in Formula (3) are defined as first-level scores, namely the number of identified peptide precursors, the number of acquired MS2 scans, the identification rate of the MS2 scans, the spectra complexity of the MS2 scans, and the precursor duplicate identification rate. Each metric is scored on a scale of one to five, with "5 points" being excellent and "1 point" indicating plenty of room for improvement. Take metric M12 (median of MS1 raw mass accuracy, i.e., the median of the delta mass between the monoisotopic theoretical and the measured m/z of precursors) as an example: for Thermo Orbitrap instruments, a median MS1 raw mass accuracy ≤ 1 ppm scores 5 points and ≥ 5 ppm scores 1 point; for values between 1 ppm and 5 ppm, a linear scoring algorithm is applied. Because of the diversity of experimental methods and instruments, scoring standards are often not fixed and uniform in practice. In this study, the scoring standards for each metric were set based on the data of urinary proteomic optimization experiments collected under different parameter conditions and the data of urine QC samples collected on the 20 platforms. Users can adjust the scoring and define their own standards according to the actual situation. After the scoring standards are determined, each of the metrics is assigned an individual score. Detailed instructions for adjusting the scoring standards are provided in the user manual of MSCohort ( https://github.com/BUAA-LiuLab/MSCohort ). As shown in Supplementary Fig. , different scores are represented in different colors, so users can directly determine which metrics are low-performing based on the colors. The QC metrics in the DIA LC-MS workflow affect the first-level scores; for example, a high missed cleavage rate, a long invalid chromatographic acquisition time, and low MS2 peak intensity would result in a low MS2 identification rate. The metrics that affect first-level scoring are measured by second-level scores. MSCohort calculates the second-level scores, averages them to obtain the corresponding first-level scores, and subsequently applies a similar process to calculate the total score. The final scoring results are visually presented in various forms.
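The metric-to-score mapping and the averaging into first-level scores described above can be summarized in a short sketch. The thresholds use the M12 (median MS1 raw mass accuracy, ppm) example given for Orbitrap data; in MSCohort these cut-offs are user-adjustable, so treat them as illustrative defaults, and the function names are not part of the software's API.

# Piecewise-linear 1-5 score: <= best threshold -> 5, >= worst threshold -> 1.
def linear_metric_score(value, best=1.0, worst=5.0):
    if value <= best:
        return 5.0
    if value >= worst:
        return 1.0
    return 5.0 - 4.0 * (value - best) / (worst - best)

# A first-level score is the mean of its constituent second-level scores.
def first_level_score(second_level_scores):
    return sum(second_level_scores) / len(second_level_scores)

print(linear_metric_score(0.8))            # 5.0  (<= 1 ppm)
print(round(linear_metric_score(3.0), 2))  # 3.0  (midway between 1 and 5 ppm)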
Step 3: Generating a visual report. MSCohort generates a report with comprehensive tables and charts. We use suitable graphs or tables, such as graphs showing the accumulated number of MS2 scans or precursors, to give a global view of the performance of the LC-MS workflow. More details are provided by additional graphs, such as a histogram showing the peptide elution width, a bar graph showing the number of precursors identified per MS2 scan, or a scatter plot showing the peak counts of each MS1 or MS2 scan. The outputs are also exported to simple tab-delimited text files, so visualization or analysis can also be performed using external tools or code scripts.

Inter-experiment analysis. This module incorporates an additional 23 inter-experiment metrics for inter-experiment comparisons and the detection of low-quality experiments. The workflow for inter-experiment analysis consists of three steps: reading intra-experiment data and identification/quantitation results, extracting inter-experiment metrics and scoring, and generating a visual report.

Step 1: Reading intra-experiment analysis results. For the submitted cohort data, MSCohort first conducts the intra-experiment analysis on each raw data file to generate the intra-experiment metric values and score results for each file. The scores are collated to create an overview chart (heatmap) that displays the metric scores per raw file for a comprehensive overview of the whole cohort.

Step 2: Extracting inter-experiment metrics and scoring. This module carries out two tasks: extraction of metrics and calculation of quality scores.

Step 2.1 Extraction of metrics. MSCohort inter-experiment metrics can be assigned to five categories (Sample Preparation, Liquid Chromatography, and Precursor-level, Peptide-level, and Protein group-level Quantification Results) according to the experimental workflow and quantification results. A detailed description of all the inter-experiment metrics proposed by MSCohort is provided in Supplementary Data . The key metrics are listed below.

Metrics SP1-SP3: Customizable contaminant search. Pre-analytical variation caused by contamination during sample collection or inconsistent sample processing can have an impact on the results and may cause the reporting of incorrect biomarkers . MSCohort offers configurable lists of custom protein contaminants to help users assess each sample for potential quality issues. For urine sample quality control, we used three urine-specific quality marker panels to assess the degree of contamination of the samples. First, we used two previously reported quality marker panels to determine the degree of contamination with erythrocytes and cellular debris . Contamination with erythrocytes occurs during urine collection due to hematuria or hemolysis caused by kidney function issues or systemic disorders, leading to high sample-to-sample variability compared to regularly secreted urinary proteins , . Insufficient removal of cells and cellular debris from urine leads to increased detection of intracellular proteins with high sample-to-sample variability compared to regularly secreted urinary proteins . In addition, proteinuria occurs due to abnormalities in kidney function or systemic disorders, resulting in the leakage of serum proteins into the urine; this can lead to increased detection of serum proteins with high sample-to-sample variability . We generated a third urine-specific quality marker panel to assess contamination with highly abundant serum proteins . For each sample, MSCohort reports the summed contaminant protein intensity as a proportion of the summed intensity of all proteins. For each metric, we initially defined potentially contaminated samples as those with a value more than two standard deviations above the median.
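The contaminant metrics above reduce to a per-sample intensity fraction plus a simple outlier rule, sketched below. Here `intensity` is assumed to be a pandas DataFrame of protein intensities (proteins × samples) and `panel` a list of contaminant protein accessions; both are placeholders rather than MSCohort internals.

# Minimal sketch of metrics SP1-SP3: contaminant intensity fraction per sample
# and a flag for samples more than two SD above the cohort median.
import pandas as pd

def contaminant_fraction(intensity: pd.DataFrame, panel: list) -> pd.Series:
    in_panel = intensity.index.intersection(panel)
    return intensity.loc[in_panel].sum() / intensity.sum()

def flag_contaminated(fraction: pd.Series) -> pd.Series:
    cutoff = fraction.median() + 2 * fraction.std()
    return fraction > cutoff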
Metrics LC1 and LC2: Retention time deviation. The retention time (RT) of each analyte in MS data usually shifts for multiple reasons, including matrix effects and instrument performance, especially in large cohort studies . MSCohort extracts the retention time (RT) and deltaRT of precursors from the Spectronaut report. MSCohort also calculates the mean-square error (MSE) between any two LC-MS experiments:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-Y_{j}\right)^{2} \quad (4)$$

where n is the number of LC-MS experiments and Y is the array of retention times of the same precursors identified in both experiments. The MSE of precursor RTs is calculated between all pairs of LC-MS experiments (i = 1,…, n; j = 1,…, n).
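A minimal sketch of this pairwise RT comparison follows. It reads the formula's intent as averaging the squared RT differences over the precursors shared by each pair of runs (an interpretation of the averaging term in formula 4); `rt` maps run name → {precursor: retention time} and is a placeholder, not an MSCohort data structure.

# Pairwise retention-time MSE between runs, averaged over shared precursors.
import itertools
import numpy as np

def pairwise_rt_mse(rt: dict) -> dict:
    mse = {}
    for a, b in itertools.combinations(rt, 2):
        shared = rt[a].keys() & rt[b].keys()
        diffs = np.array([rt[a][p] - rt[b][p] for p in shared])
        mse[(a, b)] = float(np.mean(diffs ** 2)) if len(diffs) else np.nan
    return mse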
Metrics MS1-MS18: Precursor-level, peptide-level, and protein group-level quantification results. Six statistical metrics were chosen to describe the distribution of precursor/peptide/protein abundance across all experiments. These metrics include the number of identifications, the median intensity distribution, the interquartile range (IQR), the robust standard deviation , the Pearson correlation, and the normalization factor of the intensity at the precursor, peptide, and protein levels. Currently, MSCohort offers three commonly used normalization algorithms: directLFQ , maxLFQ , and quantile normalization , .

Step 2.2 Calculation of cohort quality scores. For each inter-experiment metric, quality control scores are calculated based on a statistical assessment of the median and standard deviation across all experiments. We exploit the assumption that the majority of the proteome typically does not change between any two conditions, so that the median behavior can be used as a relative standard . Values more than two standard deviations (SD) from the median indicate heterogeneity with respect to the other experiments. We initially defined potentially low-quality data as those with a value more than two standard deviations from the median. Each metric is scored on a scale of one to five, with "5 points" when the parameter is close to or above the overall median and "1 point" when the parameter is more than two standard deviations from the median, indicating plenty of room for improvement. This setup enables automated, non-subjective evaluation of inter-experiment or instrument performance. Taking the median of protein intensity as an example, a median protein intensity close to or above the overall median scores 5 points, and a value below the overall median minus two standard deviations scores 1 point. Because of the diversity of experimental conditions and sample types, scoring standards are often not fixed and uniform in practice. Users can adjust the scoring and define their own standards according to the actual situation.

Step 2.3 Identify outlier LC-MS experiment(s) using the isolation forest algorithm. Previous studies have shown that the LC-MS experimental process is complex, with numerous factors influencing LC-MS data, and these factors are not independent but may affect each other. Therefore, for high-dimensional and complex LC-MS data, supervised classifiers rely heavily on training data; data from different instruments, laboratories, and sample types require re-labeling and re-training, leading to poor generalization. Consequently, unsupervised machine learning algorithms are commonly used for outlier data analysis , . Here, we applied a well-established unsupervised and online outlier detection algorithm, isolation forest (iForest), to distinguish outlier experiments. iForest performs well in most scenarios by taking advantage of the anomalous nature of "few and different" , and it has unique advantages in dealing with large datasets due to its low computational complexity , . As described in the original iForest paper, the algorithm is a two-stage process. The first (training) stage builds isolation trees using sub-samples of the training set. The second (testing) stage passes the test instances through the isolation trees to obtain an anomaly score for each instance. This algorithm does not require a labeled dataset or pre-training of offline models; it can dynamically construct isolation trees online for any batch of data. First, the values of the 23 inter-experiment metrics for each experiment in the cohort are integrated into a two-dimensional matrix. Then, outlier experiment detection is performed using the iForest algorithm in a two-stage process. Training stage: iForest randomly selects subsamples from the cohort; a feature (metric) is then randomly selected, and a separation value is randomly generated within the range of the selected feature values to "isolate" the sample points. iForest then recursively selects different features and values from each child subset to split it into smaller subsamples. iTrees are constructed by recursively partitioning the given training set until instances (samples) are isolated or a specified tree height is reached, which yields a partial model. Many iTrees make up the iForest, from which the average path length across all iTrees can be obtained. Evaluating stage: iForest passes the samples through the isolation trees to obtain an anomaly score for each sample. Outliers are those samples that have short average path lengths on the iTrees and low anomaly scores. The iForest was implemented using the scikit-learn Python library (version 0.23) module sklearn.ensemble.IsolationForest with default parameters.

Step 3: Generating a visual report. MSCohort generates a report with comprehensive tables and charts. The scores are collated to create an overview heatmap that displays the metric scores per experiment for a compressed overview of the whole cohort. The user can subsequently follow up on detailed quality metric plots of interest in the remainder of the report. In summary, the quality control metrics offer a visual guide for users to judge data quality, whereas the scores computed from the underlying data represent a mathematically more rigid way to automatically flag datasets as failed or successful. In addition, the underlying metric values and scores are automatically exported to a text file and can readily be used for manual comparison and annotation of datasets.

Reporting summary. Further information on research design is available in the Reporting Summary linked to this article.
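For reference, a minimal sketch of the outlier-flagging step described in Step 2.3 above: the 23 inter-experiment metrics form an (experiments × metrics) matrix that is passed to scikit-learn's IsolationForest. The `metric_matrix` and `run_names` inputs are placeholders; the fixed random seed is an addition for reproducibility, with the other parameters left at their defaults as stated in the text.

# Flag outlier runs with IsolationForest; -1 = outlier, 1 = inlier.
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_outlier_runs(metric_matrix: np.ndarray, run_names: list) -> list:
    iforest = IsolationForest(random_state=0)    # otherwise default parameters
    labels = iforest.fit_predict(metric_matrix)  # one label per experiment
    return [name for name, label in zip(run_names, labels) if label == -1]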
Urine sample collection and preparation of QC samples

First-morning (midstream) urine samples were collected from ten healthy individuals at Peking Union Medical College. The ten urine samples were combined and centrifuged at 3000 × g for 30 minutes at 4 °C to remove cell debris. The supernatant was transferred into 2 mL EP tubes (Corning, USA) and stored at −80 °C for further analysis. The quality control (QC) urine samples were prepared at the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences, School of Basic Medicine, Peking Union Medical College, and the urine peptide QC samples were then distributed to the 20 LC-MS platforms. Urine peptides were prepared by the 96 DRA-Urine method, following the same steps as in previous work . Briefly, a total of 400 mL of pooled urine mixture was processed. Urine (2 mL per tube) was reduced with 20 mM dithiothreitol (DTT) for 5 min at 95 °C and then alkylated with 50 mM iodoacetamide (IAM) at room temperature (RT) in the dark for 45 min; urine proteins were then pelleted using a 6-fold volume of precooled acetone and centrifuged at 10,000 × g for 10 min at 4 °C. The protein precipitate was re-dissolved in 200 μL of 20 mM Tris (pH 8.0) and then combined. The concentration of the pooled urine proteins was quantified using the Pierce™ BCA protein assay kit (Thermo Fisher Scientific, USA) following the manufacturer's protocol. During protein precipitation and quantification, each well of the 96-well PVDF plate (MSIPS4510, Millipore, Billerica, MA) was prewetted with 150 μL of 70% ethanol and equilibrated with 300 μL of 20 mM Tris. For each well, one hundred micrograms of protein were transferred to the 96-well PVDF plate. The samples were then washed three times with 200 μL of 20 mM Tris buffer (pH 8.0) and centrifuged. Proteins were digested on the membrane by adding 30 μL of 20 mM Tris buffer (pH 8.0) with trypsin at a ratio of 50:1 (w:w). The samples were subjected to microwave-assisted enzymatic digestion twice for 1 min in a water bath under microwave irradiation and then incubated in a 37 °C water bath for 2 h. Subsequently, the resulting peptides were collected by centrifugation at 3000 × g for 5 min. The eluted peptides were combined and purified with a Sep-Pak C18 Vac cartridge (Waters). The concentration of the pooled peptides was determined using the Pierce™ Quantitative Colorimetric Peptide Assay kit (Thermo Scientific) following the manufacturer's protocol. The peptides were then aliquoted and lyophilized. Twenty micrograms of urinary peptides were dissolved in 0.1% formic acid (FA) to 1 μg/μL. Subsequently, eleven non-naturally occurring synthetic peptides from the iRT kit (Biognosys) were spiked into the sample at a ratio of 1:30 (v/v) to correct relative retention times between acquisitions. Finally, the samples were shipped to the 20 LC-MS platforms.
HEK 293, E. coli , and S. cerevisiae samples

HEK 293 cells were grown in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and penicillin/streptomycin (1:1000) in a 37 °C incubator supplied with 5% CO2. Cells were lysed in buffer (50 mM Tris-HCl pH 8.0, 2% sodium dodecyl sulfate (SDS), protease inhibitor) and sonicated for 5 minutes (4 seconds on, 3 seconds off) at 25% energy. Proteins were pelleted with cold acetone and re-dissolved in 25 mM Tris-HCl buffer (pH 8.0), and the concentration was determined with the bicinchoninic acid assay (BCA) kit (Thermo Scientific). Escherichia coli DH5a was cultured in Luria Broth (LB) medium at 37 °C to mid-log phase, shaking at 240 rpm. S. cerevisiae CG1945 was grown in yeast-peptone-dextrose (YPD) medium at 30 °C to mid-log phase, shaking at 300 rpm. Cells of E. coli and S. cerevisiae were harvested by centrifugation at 4000 × g for 5 min and washed twice with ice-cold phosphate-buffered saline (PBS). Cell pellets were resuspended in lysis buffer (50 mM Tris-HCl pH 8.0, 2% SDS, protease inhibitor) and then sonicated for 10 minutes (4 seconds on, 3 seconds off) at 25% energy. Proteins were pelleted with cold acetone and re-dissolved in 25 mM Tris-HCl buffer (pH 8.0), and the concentration was determined with the bicinchoninic acid assay (BCA) kit (Thermo Scientific). The HEK 293, Escherichia coli , and S. cerevisiae proteins were digested by the filter-aided sample preparation (FASP) method. In brief, 100 μg of cell lysate was reduced with 20 mM DTT at 95 °C for 5 minutes and alkylated with 50 mM IAM for 45 min at room temperature in the dark. Protein solutions were loaded into 10 kDa ultracentrifugation tubes equilibrated with 25 mM NH4HCO3 buffer. Proteins were then digested with trypsin (Promega) at a 50:1 ratio (w/w) overnight at 37 °C in the ultracentrifugation tube. Peptides were desalted with an SPE column (Waters), aliquoted, and dried by SpeedVac. Peptides were resuspended at a concentration of 1 µg/µL in HPLC-grade water containing 0.1% (v/v) FA. Pooled sample A was prepared by mixing human urine, S. cerevisiae (yeast), and Escherichia coli ( E. coli ) peptides at 65%, 15%, and 20% w/w, respectively. Pooled sample B was prepared by mixing human urine, yeast, and E. coli protein digests at 65%, 30%, and 5% w/w, respectively. Pooled sample A-H was prepared by mixing human (HEK 293), yeast, and E. coli peptides at 65%, 15%, and 20% w/w, respectively. Pooled sample B-H was prepared by mixing human HEK 293, yeast, and E. coli protein digests at 65%, 30%, and 5% w/w, respectively. The iRT kit (Biognosys) was added to each of the pooled samples at a ratio of 1:30 (v/v). For LC-MS analysis, 1 µg of pooled sample was used.
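The mixing proportions above imply known expected ratios between samples A and B (and between A-H and B-H), which is what makes these mixtures useful for benchmarking quantitative accuracy. A quick check of the theoretical fold changes:

# Expected species-level A/B fold changes from the w/w peptide fractions above.
composition = {
    "A": {"human": 0.65, "yeast": 0.15, "ecoli": 0.20},
    "B": {"human": 0.65, "yeast": 0.30, "ecoli": 0.05},
}
expected_fold_change = {
    species: composition["A"][species] / composition["B"][species]
    for species in composition["A"]
}
print(expected_fold_change)  # {'human': 1.0, 'yeast': 0.5, 'ecoli': 4.0}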
CRC and HC cohorts

A total of 80 CRC patients (48 males and 32 females; median age 57 years, range 42–69 years) were recruited from the Cancer Hospital, Chinese Academy of Medical Sciences. All patients were pathologically diagnosed by two senior pathologists, and first-morning midstream urine samples were collected before surgical operations or chemotherapy/radiotherapy. In addition, 80 urine samples from healthy controls (HC) (52 males and 28 females; median age 55 years, range 40–68 years) were obtained from the Health Medical Center of the Cancer Hospital. The enrollment criteria for HC subjects were as follows: (1) the absence of benign or malignant tumors; (2) a qualified physical examination finding no dysfunction of vital organs; and (3) normal renal function without albuminuria. Supplementary Data lists the demographic and clinical characteristics of the 80 CRC patients and 80 HCs. Urine protein preparation and digestion for the CRC and HC samples were performed in the same way as for the quality control urine samples, using the 96 DRA-Urine method. In addition, each 96-well plate contained 3 quality assurance (QA) samples (pooled urine samples with an equal protein amount from each sample) to monitor the reproducibility of the sample preparation. The resulting urine peptides from each sample were equally divided into triplicates for data acquisition on the three LC-MS platforms.
The 20 participant platforms used 3 different types of mass spectrometers, including Orbitrap (ThermoFisher Scientific), timsTOF (Bruker Daltonik), and ZenoTOF (SCIEX). To systematically analyze the variation among different LC-MS platforms and the main influencing factors, we divided 20 platforms into 2 groups. Ten of these platforms used the procedures and parameters they routinely used (number M01-M10), and the other 10 LC-MS platforms employed a unified LC condition (the same type of column and same gradient) and consistent MS parameters for the same type of instrument (number U01-U10). The detailed LC and MS acquisition parameters are provided in Supplementary Data , and the data acquisition was performed according to the SOP for urinary proteomics (Supplementary Note ). All participant LC-MS platforms collected 1 pure iRT (Biognosys) data, 3 DDA data, 3 DIA data, and 1 blank to assess carry-over. The acquisition time was 30 min. The performance of DDA data acquired across multi-platform was investigated and provided in Supplementary Fig. .
Data-independent acquisition data from the 20 LC-MS platforms dataset were processed with Spectronaut v.18.0 performing the directDIA analysis. All searches were performed against the human SwissProt database (Homo sapiens, 20386 reviewed entries, 2022_06 version), concatenated with iRT peptide.fasta file (downloaded from the Biognosys webpage). Briefly, the specific enzyme used Trypsin/P, peptide length from 7 to 52, max missed cleavages was set 2, toggle N-terminal M turned on, Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification. The extraction of data used dynamic MS1 and MS2 mass tolerances, a dynamic window for extracted ion current extraction window, and a non-linear iRT calibration strategy. The identification was carried out using a kernel density estimator and Qvalue cut-off of 0.01 at precursor and protein levels. The top N (min:1; max:3) precursors per peptide and peptide per protein were used for quantification. Peptide intensity was calculated by the mean precursor intensity. Cross-run normalization was turned off. Additional DDA data collected under the same conditions as DIA data were added to create a hybrid library. The data processing results were exported using customized reports for further data analysis using MSCohort. The customized reports required in MSCohort were provided in Supplementary Data . According to the different DIA methods of different LC-MS platforms, PG.MS2Quantity results were chosen for conventional DIA method, and PG.MS1Quantity results were chosen for HRMS1-DIA methods for subsequent quantitative analysis.
Equivalent amounts of pooled sample A, sample B, sample A-H, and sample B-H were shipped to three LC-MS platforms (U02-Lumos, U06-E480, and U08-timsPro 2). Samples were resuspended to final concentrations of 1 μg/µL in 0.1% FA with iRT and analyzed in three technical replicates using the DIA method provided in SOP on each LC-MS platform.
Data-independent acquisition spectra in the 3 participant platforms dataset were analyzed with Spectronaut v.18.0 performing the directDIA analysis. All searches were performed against the Uniprot database for human (organism ID 9606, 20386 reviewed entries, 2022_06 version), yeast (organism ID 559292, 6727 entries, 2023_02 version), E. coli (organism ID 83333, 4634 entries, 2023_02 version) taxonomies, concatenated with iRT peptide.fasta file (downloaded from the Biognosys webpage). Default settings were used unless otherwise noted. Cross-run normalization was turned off. Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification.
Equivalent amounts of urine peptides were shipped to three LC-MS platforms (U02-Lumos, U06-E480, and U08-timsPro 2). Samples were resuspended to final concentrations of 1 μg/ µL in 0.1% FA with iRT and analyzed using the DIA method provided in SOP. QC samples were analyzed in triplicate before CRC and HC samples analyses and a single QC sample analysis was performed midway through the overall analysis. The acquisition of samples was randomized to avoid bias. LC-MS analysis of generating dataset for three different LC-MS platforms was collected by Orbitrap Fusion Lumos coupled with an EASY-nLC 1000 system, an Orbitrap Exploris 480 mass spectrometer coupled with Vanquish Neo UHPLC system (Thermo Fisher Scientific), and a timsTOF Pro 2 mass spectrometer (Bruker) coupled with an UltiMate 3000 UHPLC system. All three LC-MS platforms were operated in DIA mode over a 30-minute total gradient. For three LC systems, peptides ware separated at a constant flow rate of 500 nL/min by the same type of analytical column (50 cm × 50 μm monolithic silica capillary column (Beijing Uritech Biotech)). LC mobile phases A and B were 100% H 2 O with 0.1% FA (v/v) and 80% ACN / 20% H 2 O with 0.1% FA (v/v), respectively. In 30 min experiments, the gradient of mobile phase B increased from 5% to 20% over 22 min and then increased to 30% over 3 min, a further 1 min plateau phase at 90% B, and a 4 min wash phase of 1% B. Data acquisition on Orbitrap Fusion Lumos was performed in DIA mode using 80 variable windows covering a mass range of 350–1200 m/z. The resolution was set to 120,000 for MS1 and 30,000 for MS2. The Normalized AGC Target was 300% for MS1 and 200% for MS2, with a maximum injection time of 50 ms in MS1 and 50 ms in MS2. HCD Normalized Collision Energies was set to 32%. Data acquisition on Orbitrap Exploris 480 was performed in DIA mode using 80 variable windows covering a mass range of 350–1200 m/z. The resolution was set to 120,000 for MS1 and 30,000 for MS2. The Normalized AGC Target was 300% for MS1 and 200% for MS2, with a maximum injection time of 50 ms in MS1 and 50 ms in MS2. HCD Normalized Collision Energies was set to 30%. Data acquisition on timsTOF Pro 2 was performed in diaPASEF mode using 50 windows. The MS spectra were acquired from 100 to 1700 m/z. The ion mobility was scanned from 0.75 to 1.3 Vs/cm 2 . The ramp time was set to 100 ms. The collision energy was ramped linearly as a function of the mobility from 59 eV at 1/K0 = 1.6 Vs/cm 2 to 20 eV at 1/K0 = 0.6Vs/cm 2 . Isolation windows of a 16 m/z width were set to cover the mass range of 350 to 1200 m/z in diaPASEF.
The DIA data from 3 LC-MS platforms were analyzed separately with Spectronaut v.18.0 performing the directDIA analysis. Default settings were used unless otherwise noted. Cross-run normalization was turned off. Carbamidomethyl on C was set as fixed modification, and Oxidation on M as variable modification. All searches were performed against the human SwissProt database (Homo sapiens, 20,386 reviewed entries, 2022_06 version), concatenated with iRT peptide.fasta file. The data processing results were exported using customized reports for further data analysis using MSCohort. The low-quality data analyzed by MSCohort was excluded. Then log2 transform and directLFQ normalization were performed for all samples. Proteins with missing values < 50% of the samples in each group were retained for further analysis. Missing values were imputed based on the sequential k-nearest neighbor (Seq-KNN) method using NAguideR .
Differentially expressed proteins analysis was performed using the LIMMA package (version 3.58) in R (version 4.3) with the expectation that proteins significantly altered between CRC and HC exhibited Benjamini & Hochberg-adjusted p < 0.05. Pathway analysis of protein alterations was performed using Ingenuity Pathway Analysis (Qiagen). The correlation, t-SNE, and heatmap plots were performed using Corrplot (version 0.92), Rtsne (version 0.16), and ComplexHeatmap (version 2.16.0) packages in R (version 4.3). Pattern recognition analysis (OPLS-DA) was performed using SIMCA 14.0 (Umetrics, Sweden) software. Six machine learning models (Logistic Regression, K-Nearest Neighbor, Gaussian Naïve Bayes, Support Vector Machines, Random Forest, Gradient Boosting Decision Tree) were performed using scikit-learn modules (version 0.23) in Python (version 3.7). The UniProtKB/Swiss-Prot public database was used to map the gene names of DEPs, and enrich GO terms and KEGG pathways was performed using clusterProfiler (v.4.8.3) in R (version 4.3). Protein–protein interaction (PPI) plot was performed using STRING (v.11.0) and Cytoscape (v.3.7.2).
The workflow of MSCohort consists of two modules: Intra-experiment analysis and inter-experiment analysis. Intra-experiment analysis enables the systematic evaluation and optimization of individual experiment. We have developed the quality control software tool MSRefine to evaluate and optimize the performance of individual DDA experiment in previous study . MSCohort integrated the metrics and function of MSRefine, and established a quality control system for individual DIA experiment. Here, we mainly focus on introducing the metrics and steps for evaluation and optimization of individual DIA experiment in MSCohort. The QC metrics for DIA experiments are divided into six categories: sample, chromatography, DIA windows, ion source, MS1 and MS2 signal, and identification result. We established a metric-score system for individual experiment quality evaluation and data optimization (Supplementary Fig. ). A detailed description of the metrics is provided in Supplementary Data . The workflow for intra-experiment analysis consists of three steps: reading.raw/.d/.wiff files and identification/quantitation results, extracting metrics and scoring, and generating a visual report. Step 1: Reading the.raw/.d/.wiff files and identification/quantitation results . The.raw/.d/.wiff files can be converted to.ms1/.ms2 files using pXtract, timsTOFExtract, and wiffExtract, respectively. For processing of timsTOF data, timsTOFExtract used TimsPy to convert the proprietary format (Bruker Tims data format (TDF)) to the textual.ms1/.ms2 files. All three in-house tools were embedded in MSCohort. Besides, the identification and quantification results were extracted from Spectronaut , and the customized reports required in MSCohort were provided in Supplementary Data . Step 2: Extracting metrics and scoring. This module carries out two tasks: (1) extraction of metrics and (2) calculation of the first-level and second-level scores. To represent the experimental conditions of DIA with a mathematical model, we designed a quality scoring system for DIA data based on our previous DDA data quality scoring system . The DDA scoring formula is expressed as: 2 [12pt]{minimal}
$${N}_{{identified}{{}}{precursors}}={N}_{{acquired}{{}}{MS}2} {Q}_{{MS}2} {P}_{{MS}2{{}}{per}{{}}{precursor}}$$ N i d e n t i f i e d _ p r e c u r s o r s = N a c q u i r e d _ M S 2 × Q M S 2 × P M S 2 _ p e r _ p r e c u r s o r Where [12pt]{minimal}
$${N}_{{identified\_precursors}}$$ N i d e n t i f i e d _ p r e c u r s o r s is the number of identified peptide precursors, [12pt]{minimal}
$${N}_{{acquired\_MS}2}$$ N a c q u i r e d _ M S 2 is the number of acquired MS2 scans, [12pt]{minimal}
$${Q}_{{MS}2}$$ Q M S 2 is the identification rate of the MS2 scans (the number of identified MS2 scans/ the number of acquired MS2 scans), and [12pt]{minimal}
$${P}_{{MS}2{\_per\_precursor}}$$ P M S 2 _ p e r _ p r e c u r s o r is the utilization rate of the MS2 scans (the number of unique peptide precursors/ the number of identified MS2 scans). The DIA scoring formula is expressed as: 3 [12pt]{minimal}
$${N}_{{identified}{{\_}}{precursors}}={N}_{{acquired}{{\_}}{MS}2} {Q}_{{MS}2} ({N}_{{precursor}{{\_}}{per}{{\_}}{MS}2}/{R}_{{precursor}})$$ N i d e n t i f i e d _ p r e c u r s o r s = N a c q u i r e d _ M S 2 × Q M S 2 × ( N p r e c u r s o r _ p e r _ M S 2 / R p r e c u r s o r ) where [12pt]{minimal}
$${N}_{{identified\_precursors}}$$ N i d e n t i f i e d _ p r e c u r s o r s is the number of identified peptide precursors, [12pt]{minimal}
$${N}_{{acquired\_MS}2}$$ N a c q u i r e d _ M S 2 is the number of acquired MS2 scans, [12pt]{minimal}
$${Q}_{{MS}2}$$ Q M S 2 is the identification rate of the MS2 scans (the number of identified MS2 scans/ the number of acquired MS2 scans), [12pt]{minimal}
$${N}_{{precursor\_per\_MS}2}$$ N p r e c u r s o r _ p e r _ M S 2 is the spectra complexity of MS2 scans (the number of redundant identified precursors/ the number of identified MS2 scans), [12pt]{minimal}
$${R}_{{precursor}}$$ R p r e c u r s o r is the precursors duplicate identification rate (the number of redundant identified precursors/ the number of identified precursors), and [12pt]{minimal}
$$(.{N}_{{precursor\_per\_MS}2}/{R}_{{precursor}}$$ N p r e c u r s o r _ p e r _ M S 2 / R p r e c u r s o r is the utilization rate of the MS2 scans (the number of unique peptide precursors/ the number of identified MS2 scans). This DIA scoring formula was designed based on the DDA scoring formula, the utilization rate of the MS2 scans was divided into the spectra complexity of MS2 scans and the precursors duplicate identification rate. Since the DIA method was to fragment all the parent ions in the isolation window to obtain a mixture MS2 spectrum, theoretically an MS2 scan can be identified to multiple precursors. Therefore, we established a spectra complexity index to represent the number of precursors that can be identified by an average MS2 spectrum/scan. The spectra complexity depends on the DIA windows number and the window size. For example, in Supplementary Note , when we set 80 MS2 windows per cycle, the average window size was 6 Da, the spectra complexity (Redundant identified precursors/ Identified scan rate) was 2.27, and the precursors duplicate identification rate (Redundant identified precursors/ Identified precursors rate) was 1.58. Therefore, the utilization rate of the MS2 scans of 80 MS2 windows was 1.44 (2.27/1.58 = 1.44); When the number of MS2 windows was set to 22 windows per cycle, the average window size was 26 Da, and the spectra complexity was increases to 4.1. However, the corresponding precursors duplicate identification rate was 4.07, so the utilization rate of MS2 scans was 1.01 (4.1/4.07 = 1.01). Therefore, balancing spectra complexity and precursors duplicate identification rate was the key to improve the utilization rate of MS2 scans. Formula (3) was a naive formula but can be extended and used to characterize different kinds of DIA experiments. Step 2.1 Extraction of metrics . MSCohort read.ms1/.ms2 files, extracting and calculating MS1- or MS2-related metrics, including cycle time, ion injection time, peaks intensity, peak counts, and scans number of MS1 or MS2. At the same time, MSCohort extracted peptide features from MS1 scans, analyzing how many features are detectable by high-resolution MS and how many of them are identified by search engine. After data processing, MSCohort obtained more metrics according to the Spectronaut report, including the number of identified precursors, peptides, and proteins. In addition, the data processing results also included missed cleavages of peptides, peak width, FWHM, mass accuracy, precursors intensity, and protein group intensity-related metrics. MSCohort also calculated the spectra complexity of MS2 scans and the precursors duplicate identification rate to analyze whether the MS2 scans were fully utilized for peptide precursors. We provided a detailed introduction to all the metrics proposed by MSCohort in Supplementary Data , including their specific meanings and extraction processes. Step 2.2 Calculation of the first-level and second-level scores . It is not convenient for users to comprehensively evaluate the data if only display the values of each metric. Therefore, we assigned scores to each metric and established a first-level and second-level scoring system (Supplementary Fig. a, ). MSCohort computes a quality score for each of the QC metrics using a score function (see below), and these QC metrics quality scores were defined as second-level scores. 
These five categories in Formula (3) were defined as first-level scores, including the number of identified peptide precursors, the number of acquired MS2 scans, the identification rate of MS2 scans, the spectra complexity of the MS2 scans, and the precursors duplicate identification rate. Each metric was scored on a scale of one to five, with “5 points” being excellent and “1 point” indicating plenty of room for improvement. Taking the metric M12. Median of MS1 raw mass accuracy (The median of delta mass between the monoisotopic theoretical and the measured m/z of precursors) as an example, for Thermo Orbitrap instrument, the median of MS1 raw mass accuracy ≤1 ppm was 5 points. The median of MS1 raw mass accuracy ≥ 5 ppm was 1 point. For the median of MS1 raw mass accuracy between > 1 ppm and <5 ppm, a linear scoring algorithm is applied. Due to the diversity of experimental methods and instruments, scoring standards are often not fixed and uniform in practice. In this study, the scoring standards for each metric were set based on the data of urinary proteomic optimization experiments collected under different parameter conditions and the data of urine QC samples collected on 20 platforms. Users can adjust scoring define standards according to the actual situation. After determining the scoring standards, each of the metrics is assigned an individual score. We provided the detailed instructions for users to adjust the scoring standards in the user manual of MSCohort ( https://github.com/BUAA-LiuLab/MSCohort ). As shown in Supplementary Fig. , different scores are represented in different colors. Users can directly determine which metric is low-performance based on the colors. The QC metrics in the DIA LC − MS workflow would affect the first-level scores. For example, high missed cleavage, long chromatographic invalid acquiring time, and lower peaks intensity of MS2 would result in low MS2 identification rate. The metrics that affect first-level scoring are measured by second-level scores. MSCohort calculates the second-level scores, then averages them to obtain the corresponding first-level scores, and subsequently applies a similar process to calculate the total score. The final scoring results will be visually presented in various forms. Step 3: Generating a visual report . MSCohort generates a report with comprehensive tables and charts. We use suitable graphs or tables, such as graphs showing the accumulated number of MS2 scans or precursors, to give a global view of the performance of the LC-MS workflow. More details are complemented by various graphs, such as a histogram showing the peptide eluting width, a statistical bar graph showing the number of precursors identified by one MS2 scan, or a scatter graph showing the peak counts of each MS1 or MS2 Scan. The outputs are also exported to simple tab-delimited text files, so visualization or analysis can also be performed using external tools or code scripts. Inter-experiment analysis . This module incorporates an additional 23 inter-experiment metrics for inter-experiment comparisons and low-quality experiments detection. The workflow for inter-experiment analysis consists of three steps: reading intra-experiment data and identification/quantitation results, extracting inter-experiment metrics and scoring, and generating a visual report. Step 1: Reading intra-experiment analysis results . 
For the submitted cohort data, MSCohort first conducted the intra-experiment analysis on each original data to generate the intra-experiment metrics value and score result for each data. The scores are collated to create an overview chart (heatmap) that displays metric scores per Raw file for a comprehensive overview of the whole cohort. Step 2: Extracting inter-experiment metrics and scoring . This module carries out two tasks: extraction of metrics and calculation of quality scores. Step 2.1 Extraction of metrics MSCohort inter-experiment metrics can be assigned to five categories (Sample Preparation, Liquid Chromatography, and Precursor-level, Peptide-level, and Protein group-level Quantification Results) according to the experimental workflow and quantification results. We provided a detailed description to all the inter-experiment metrics proposed by MSCohort in Supplementary Data . The key metrics are listed below.
Pre-analytical variation caused by contaminations during sample collection or inconsistent sample processing can have an impact on the results and may cause the reporting of incorrect biomarkers . MSCohort offers configurable lists of custom protein contaminants to help users assess each sample for potential quality issues. For urine sample quality control, we used three urine-specific quality marker panels to assess the degree of contamination of the samples. Firstly, we used two previously reported quality marker panels to determine the degree of contamination with erythrocytes , and cellular debris . Contamination of erythrocytes occurs during urine collection due to hematuria or hemolysis caused by kidney function issues or systemic disorders, leading to a high sample-to-sample variability compared to regularly secreted urinary proteins , . Insufficient removal of cells and cellular debris from urine will lead to increased detection of intracellular proteins with a high sample-to-sample variability compared to regularly secreted urinary proteins . In addition, proteinuria occurs due to abnormalities in kidney function or systemic disorders, resulting in the leakage of serum proteins into the urine. This can lead to increased detection of serum proteins with high sample-to-sample variability . We generate the third urine-specific quality marker panel to asses of contamination with serum high abundant protein . MSCohort reports the proportion of the summed contaminant protein intensity/ the sum intensity of all proteins for each sample. For each metric, we initially defined potentially contaminated samples as those with a value more than two standard deviations above the median.
The retention time (RT) of each analyte in MS data usually has shifts for multiple reasons, including matrix effects and instrument performances, especially for large cohort studies . MSCohort extracts the retention time (RT) and deltaRT of precursors from the Spectronaut report. MSCohort also calculates the mean-square error (MSE) between any two LC-MS experiments. 4 [12pt]{minimal}
$${MSE}=_{i=1}^{n}{({Y}_{i}-{Y}_{j})}^{2}$$ M S E = 1 n ∑ i = 1 n ( Y i − Y j ) 2 Where n is the number of LC-MS experiments, Y is the RT array of the retention times of the same precursors identified in both two experiments. MSE is calculated for precursors RT between all LC-MS experiments (i = 1,…, n ; j = 1,…, n ).
Six statistical metrics were chosen to describe the distribution of precursor/ peptide/ protein abundance across all experiments. These metrics include the number of identifications, the median intensity distribution, the interquartile range (IQR), robust standard deviation , Pearson correlation, and the normalization factor of the intensity at the precursor, peptide, and protein levels. Currently, MSCohort offers three commonly used normalization algorithms, directLFQ , maxLFQ , and quantile , . Step 2.2 Calculation of cohort quality scores For each inter-experiment metric, quality control scores are calculated based on statistical assessment of the median and standard deviations for all experiments. We exploit the assumption that the majority of the proteome typically does not change between any two conditions so that the median behavior could be used as a relative standard . The values of more than two standard deviations (SD) from its median indicating heterogeneity with other experiments. We initially defined potentially low-quality data as those with a value more than two standard deviations from the median. Each score is scored on a scale of one to five, with “5 points” when the parameter is close to or above the overall median and “1 point” when the parameter is more than two standard deviations (SD) from its median indicating plenty of room for improvement. This setup enables automated non-subjective inter-experiment or instrument performance evaluation. Taking the median of protein intensity as an example, the median of protein intensity close to or above the overall median is 5 points, less than the overall median minus two standard deviations is 1 point. Due to the diversity of experimental conditions and sample type, scoring standards are often not fixed and uniform in practice. Users can adjust scoring and define standards according to the actual situation. Step 2.3 Identify outlier LC-MS experiment(s) using the isolation forest algorithm Previous studies have shown that the LC-MS experimental process is complex, with numerous factors influencing LC-MS data, and these factors are not independent but may affect each other. Therefore, for high-dimensional and complex LC-MS data, supervised classifiers heavily rely on training data. Data from different instruments, laboratories, and sample types require re-labeling and retraining, leading to poor generalization. Consequently, unsupervised machine learning algorithms are commonly used for outlier data analysis , . Here, we applied an excellent unsupervised and online outlier detection algorithm, Isolation forest (iForest), to distinguish outlier experiments. iForest achieves outstanding success in most scenarios by taking advantage of the anomalous nature of “few and different” . It has unique advantages in dealing with large datasets due to its low-computational complexity , . As mentioned in the original iForest paper, the unsupervised and online outlier detection algorithm is a two-stage process. The first (training) stage builds isolation trees using sub-samples of the training set. The second (testing) stage passes the test instances through isolation trees to obtain an anomaly score for each instance. This algorithm does not require a labeled dataset or pre-training of offline models, it can dynamically construct isolation trees online for any batch of data. First, the 23 inter-experiment metrics value for each experiment in the cohort were integrated into a two-dimensional matrix. 
Then, the outlier experiments detection were performed using iForest algorithm with two-stage process: Training stage: iForest randomly selects subsamples from the cohort, then a feature (metric) is randomly selected, and a separation value is randomly generated within the selected feature value range to “isolate” the sample point. Then iForest recursively selects different features and values from the child subset to split the child into smaller subsamples. iTrees are constructed by recursively partitioning the given training set until instances (samples) are isolated or a specific tree height is reached of which results a partial model. Many iTrees will make up the iForest. Thus, we can get the average path length of all iTrees in the iForest. Evaluating stage: iForest passes the samples through isolation trees to obtain an anomaly score for each sample. Outliers are those samples which have short average path lengths on the iTrees and low anomaly score. The iForest was implemented using scikit-learn python library (version 0.23) module sklearn.ensemble.IsolationForest with default parameters. Step 3: Generating a visual report MSCohort generates a report with comprehensive tables and charts. The scores are collated to create an overview heatmap that displays the metrics scores per experiment for a compressed overview of the whole cohort. The user can subsequently follow up on detailed quality metric plots of interest in the remainder of the report. In summary, quality control metrics offer a visual guide to users to judge the data quality, whereas scores computed from the underlying data represent a mathematically more rigid way to automatically flag data sets as failed or successful. In addition, the underlying metrics values and scores are automatically exported to a text file and can be readily used for manual comparison and annotation of data sets.
Further information on research design is available in the Reporting Summary linked to this article.
Supplementary Information Reporting Summary Description of Additional Supplementary files Supplementary Data 1 Supplementary Data 2 Supplementary Data 3 Supplementary Data 4 Supplementary Data 5 Supplementary Data 6 Supplementary Data 7 Supplementary Data 8 Supplementary Data 9 Supplementary Data 10 Transparent Peer Review File
Source data
|
Shenmai injection revives cardiac function in rats with hypertensive heart failure: involvement of microbial-host co-metabolism | b74536a1-ed59-43dc-973e-36e405f5048d | 11761217 | Biochemistry[mh] | Heart failure (HF) represents the most advanced stage of cardiac illnesses . The elderly population is predominantly burdened with chronic illnesses such as heart failure, and this causes substantial economic strain on the patient’s family and society . In up to 85% of cases, hypertension (HTN) is the primary modifiable risk factor for the onset of HF . The current estimated population of HF in the US is 6 million , making hypertension-induced HF a serious public health concern. New studies have demonstrated a link between intestinal microbial illnesses and the onset and progression of HF . It has been hypothesized that HF is caused by an increase in intestinal bacteria, resulting in increased inflammation and an increased number of bacteria in the bloodstream . Cui et al. used metabolomic and metagenomic analyses of feces and blood from HF patients to reveal an imbalance among intestinal microflora . According to Pasini et al., the severity of HF may be correlated with the proliferation of pathogenic intestinal microflora and increased intestinal permeability . The discovery of a heart-gut axis provides new approaches to the therapy of HF . Trimethylamine N-oxide (TMAO) is formed when trimethylamine (TMA), produced by gut microbes acting on dietary compounds, is further oxidized by liver flavin-containing mono-oxygenase (FMO) . A systematic review of 19,256 subjects has previously demonstrated that raised levels of TMAO and its precursors were reported to be related to major detrimental cardiovascular complications, such as HF, and an increased risk of death from any cause . An investigation using mice fed a diet high in choline or TMAO revealed heightened serum levels of TMAO and a more severe HF . These findings support the notion that a better prognosis for HF patients is associated with lower TMAO level . Therefore, methods for treating HF that lower the level of serum TMAO or TMAO-producing microbes are ideal. Numerous studies have provided ample evidence of the efficacy of Traditional Chinese medicine (TCM) in HF therapy . Based on recent research, TCM has been proven to regulate gut microbiota to inhibit the onset of cardiovascular diseases . TCM successfully maintains a healthy intestinal environment, encourages the propagation of beneficial bacteria, and balances the gut microbiota. While also inhibiting the proliferation of harmful bacteria . Shenmai Injection (SMI) is a type of Traditional Chinese Medicine injection (TCMI) prepared from Ginseng Radix et Rhizoma Rubra (Panax ginseng C. A. Mey, Hongshen) and Radix Ophiopogonis ( Ophiopogon japonicus (Linn. f.) Ker-Gawl, Maidong) using modern technology. Pharmacological studies established that ophiopogonin D, ginsenoside Rg1, ginsenoside Rb1, and ginsenoside Re are the compounds of SMI. It has been demonstrated that SMI protects cardiomyocytes through the regulation of the activity of enzymes related to energy metabolism, ATP production, and mitochondrial function . Research has demonstrated that SMI can protect against the cardiotoxicity caused by doxorubicin by maintaining mitochondrial homeostasis and the miR-30a/Beclin pathway . By activating Nrf2/GPX4 signaling-mediated ferroptosis, pretreatment with SMI minimized myocardial I/R damage and presented a therapeutic approach to treating and preventing ischemic heart diseases . 
SMI has been shown in clinical trials to improve energy metabolism in HF patients . Our previous research has demonstrated that SMI regulates the TGF-β 1/Smad signaling pathway, thereby preventing myocardial fibrosis and effectively improving H-HF , but the potential involvement of gut microbiota in these therapeutic effects remains unclear. Studies have shown that ginsenoside Rg1 alleviates acute ulcerative colitis by modulating gut microbiota and microbial tryptophan metabolism , while ginsenoside Rh4 inhibits colorectal cancer through the regulation of gut microbiota-mediated bile acid metabolism . However, there is currently no research exploring the impact of Shenmai Injection on gut microbiota.Thus, we aim to investigate the mechanisms behind the improvement of cardiac activity in chronic heart failure by using SMI. Animals and treatment The Institutional Animal Care and Use Committee (IACUC) at the Hunan University of Chinese Medicine (HUMC) approved the experimental protocol. Salt-sensitive rats ( n = 24), aged six weeks and weighing 200–220 g, were obtained from Beijing Weitong Lihua Animal Co., Ltd., license number: SCXK(Beijing)2016-0011, animal batch number: N1100111911056755.All of the animals were housed in a standard husbandry environment. Following the acclimation period of seven days, eight rats were allocated to three groups at random: Control (CON), H-HF Model (MOD), and H-HF Model with Shenmai injection (SM). The rat model of H-HF was created utilizing the procedures previously reported . For 20 weeks, the Control (CON) group had a regular diet containing 0.3% NaCl(normal diet), while the MOD and SM groups received a diet high in salt (8% NaCl). Animals were supplied with an unlimited supply of food and water. The CON and MOD groups received intraperitoneal injections of sterile water (6.0 mL/kg), whereas the SMI group received Shenmai injections (6.0 mL/kg) for a period of 15 days. Samples The rats were anesthetized after 15 days using urethane (1.0 g/kg, i.p.). Blood was drawn from the abdominal aorta, and euthanasia was via dislocation of the neck. Briefly, the rats were held securely by the body, with one hand gripping the back or base of the tail. The head was quickly pulled downward and backward to separate the cervical vertebrae, causing immediate loss of consciousness. Blood was left for 3 h at room temperature and then centrifuged to separate and obtain serum. Myocardial and colonic tissues preserved with 4% paraformaldehyde underwent histopathological evaluation. After that, sections embedded in paraffin were stained with hematoxylin and eosin (HE). ELISA was conducted using commercial ELISA kits. NT-proBNP ELISA Kit(CUSABIO, CSB-E08752r). CRP ELISA Kit(CUSABIO, CSB-E07922r). IL-1β ELISA Kit(CUSABIO, CSB-E08055r). Zonulin ELISA Kit(mlbio, ml059419. LPS ELISA Kit(CUSABIO, CSB-E14247r). Samples from the colon were obtained by firmly compressing the inner contents into a clean tube, which was then frozen with liquid nitrogen and stored at -80 °C. For 16S rRNA sequencing and microbiome analysis, 6 rats were picked from each group at random. Echocardiography and blood pressure measurement Echocardiography was employed to assess cardiac function with an ultrasound color Doppler diagnostic equipment(S2N, Shenzhen Kaili Technology Co., Ltd., China). 
The dimensions of the left atrium and ventricle, as well as the left ventricular ejection fraction (LVEF), were assessed using M-mode echocardiography in the parasternal long-axis view, following the American Society of Echocardiography’s M-mode technique. Three consecutive cardiac cycles were examined to calculate the mean value. The Teichholtz formula was utilized to calculate the left ventricular fractional shortening (LVFS) and LVEF. The echocardiography parameters are as follows: frame rate = 54, dynamic range/gain = 100/3, gain = 150, frequency = 8.0–12.0 MHz. Blood pressure was assessed using a Volume Pressure Recording (VPR) system (CODA; Kent Scientific). For each animal, systolic blood pressure (SBP) and diastolic blood pressure (DBP) were calculated as the average of three independent measurements. Quality control of Shenmai injection Shenmai injection (lot number,1909288) was manufactured by Chiatai Qingchunbao(CTQ) Pharmaceutical Co. Ltd. (Hanzhou, China) with a China FDA drug ratification number of GuoYaoZhunZi- Z33020019. It is a solution extracted from Ginseng Radix et Rhizoma Rubra (Panax ginseng C. A. Mey, Hongshen) and Radix Ophiopogonis ( Ophiopogon japonicus (Linn. f.) Ker-Gawl, Maidong), as described in Table . Its quality meets the standard of China Food and Drug Administation (approval No: WS3-B-3428-98-2004).According to the CTQ Pharmaceutical Group Co. Ltd, the quality control standards for SMI require that the total concentration of GinsenosideRg1 (C42H72O14), GinsenosideRe (C48H82O18), and GinsenosideRb1(C54H92O23) must not be lower than 100 µg/mL and that the overall concentration of the three agents should be between 300 and 600 µg/mL . To ensure the quality of the Shenmai injection, High-Performance Liquid Chromatography (HPLC) analysis was performed.Following filtration through a 0.22 μm nylon membrane, the components of SMI were analyzed using an HPLC System (U3000, ThermoFisher Scientific). Detailed amplification conditions can be found in the Supplementary Material and Methods file. The HPLC analysis demonstrated the presence of Ginsenoside Rg1, Ginsenoside Re, and GinsenosideRb1 in SMI, which agreed with the results reported previously . The outputs of HPLC are summarized in Fig. . Network pharmacology methodology Following the requirements of oral bioavailability (OB) of ≥ 30% and drug-likeness (DL) of ≥ 0.18, we were able to determine all of the active ingredients in red ginseng (hong shen) using the Traditional Chinese Medicine Database Analysis Platform (TCMSP, https://tcmsp-e.com/ ) . The BATMAN-TCM database ( http://bionet.ncpsb.org.cn/batman-tcm/ ) provided high-confidence proteins for Ophiopogon japonicus (Maidong) . Disease-related targets were identified by searching for the term “heart failure” in the databases OMIM ( https://www.omim.org/ ) and GeneCards ( https://www.genecards.org/ ). We used Cytoscape 3.7.2 to construct the “drug component-target” network by mapping the targets of the drug component to the targets of the disease. Based on the STRING database ( https://string-db.org/ ), a protein-protein interaction network was constructed, with a minimum interaction score of 0.7. The drug-disease intersecting genes were uploaded to the DAVID database ( https://david.ncifcrf.gov/summary.jsp ), with gene identifiers set to OFFICIAL_GENE_SYMBOL and the species set to Homo sapiens. 
DAVID 6.8 was utilized to annotate GO gene functions into three categories: Molecular Function (MF), Cellular Component (CC), and Biological Process (BP) to describe the function of active proteins in Shenmai injection therapy for heart failure. Fecal metabolic profiling Fecal Metabolic Profiling was carried out using the procedures described in our earlier study . A QC sample was generated by combining an equivalent amount of sample supernatant (Fig. A, B). Analysis of the negative and positive modes identified 12,356 and 14,389 peaks, respectively, identifying 344 and 1,058 metabolites. The same software package was used for multivariate analysis, where normalized peak area data was imported into SIMCA16.0.2 . Online databases such as HMDB, ChemSpider, and KEGG were searched to identify metabolites with a VIP greater than 1 and a P-value of less than 0.05 (ascertained by Student's t-test). Metabolomics analyses were conducted by Biotree Biomedical Company (Shanghai, China). 16S rRNA sequencing The 16S rRNA sequencing analysis was carried out using the procedures described in our earlier research . In brief, PCR amplification was carried out, and the purified amplicons were pooled and sequenced using paired-end sequencing. The raw data was subsequently evaluated. Detailed sequencing analysis procedures are provided in the Supplementary Material and Methods. Biotree Biomedical Company (Shanghai, China) was responsible for sequencing and analysis. Quantification of serum TMAO Using UHPLC-MRM-MS/MS, the Agilent 1290 Infinity II series UHPLC System (Agilent Technologies) was employed to analyze the supernatant (80 µL). The Agilent 6460 triple quadrupole mass spectrometer, outfitted with an AJS electrospray ionization interface, was utilized to create an assay. Biotree Biomedical Company performed the analysis while Agilent MassHunter Workstation Software (B.08.00, Agilent Technologies) was utilized for the MRM data processing and capture. Detailed amplification conditions can be found in the Supplementary Material and Methods file. Metabolomics analyses were conducted by Biotree Biomedical Company (Shanghai, China). Correlation network among "compounds-targets-metabolites-microbiota" Correlation coefficients between the different gut microbiotas and metabolites were computed using the Spearman correlation analysis method. To identify the correlation between metabolites and targets, more SMI differential metabolites and targets were integrated into the metaboanalyst platform . A "components-targets-metabolites-microbes" interaction network was created by integrating the aforementioned results to further reveal the regulatory function of SMI against H-HF. OmicShare, an online tool, was used for the visualization process. Statistical analysis The data analysis was carried out with SPSS 22.0 (IBM, USA). Data with equal variances and normal distribution were assessed for significance using one-way ANOVA and Tukey's post hoc test. Otherwise, the Mann-Whitney U test was used. A significance threshold of p < 0.05 was established. Additionally, Metorigin ( http://metorigin.metbioinformatics.cn/ ) was used to analyze the traceability of differential metabolites. Sankey network generation, origin analysis, and function analysis were all carried out utilizing the basic Metorigin analysis mode that is accessible on the official website.
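As an illustration of the screening rule described under "Fecal metabolic profiling" (OPLS-DA VIP > 1 and Student's t-test p < 0.05), the sketch below shows how such a filter could be applied to a table of normalized peak areas. The function and column names are hypothetical, and in the study the VIP scores themselves came from the OPLS-DA model built in SIMCA 16.0.2:

```python
import pandas as pd
from scipy import stats

def screen_metabolites(peaks: pd.DataFrame, vip: pd.Series,
                       group_a: list, group_b: list) -> pd.DataFrame:
    """Keep metabolites with OPLS-DA VIP > 1 and Student's t-test p < 0.05.

    peaks: normalized peak areas (rows = metabolites, columns = samples).
    vip:   VIP score of each metabolite, indexed like `peaks`, taken from the
           OPLS-DA model.
    group_a, group_b: sample column names for the two groups being compared.
    """
    pvals = peaks.apply(
        lambda row: stats.ttest_ind(row[group_a], row[group_b]).pvalue, axis=1)
    table = pd.DataFrame({"VIP": vip, "p_value": pvals})
    return table[(table["VIP"] > 1) & (table["p_value"] < 0.05)]
```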
Pharmacodynamic study of SMI against H-HF rats SMI improved cardiac function We observed that the blood pressure of the MOD and SM groups increased to 190/150 mmHg at 12 weeks (Fig. A-B). The control group was found to have significantly lower SBP and DBP readings than the MOD and SM groups. Following the treatment, no substantial changes in blood pressure were observed between the groups (Fig. C-D). Furthermore, no alterations in the weight of rats were observed in any group after the intervention (Fig. E). To validate the H-HF rat model, we first assessed the serum level of NT-proBNP. It was observed that the MOD group had a higher NT-proBNP serum level than the CON group (Fig. A). Compared with the CON group, the MOD group showed lower levels of LVEF and LVFS (Fig. B and C), and the MOD group's M-mode echocardiogram showed impaired cardiac performance (Fig. F). MOD cardiomyocytes were observed by HE staining to be enlarged, irregularly shaped, and disorderly arranged; the interstitial space between the cells was also filled with fibrous tissue and heavily infiltrated with inflammatory cells (Fig. G). CRP and IL-1β are markers used to identify inflammation. The MOD group displayed elevated serum CRP and IL-1β levels compared to controls (Fig. D and E). These observations corroborated the H-HF model, indicating that the H-HF rat model had been successfully established. Administering SMI to H-HF rats reduced the elevation of NT-proBNP, CRP, and IL-1β levels. The SMI treatment restored the decreased levels of LVEF and LVFS seen in the MOD group (Fig. B and C). Cardiac function was also restored by the administration of SMI, as shown by M-mode echocardiography (Fig. F) and HE staining (Fig. G).
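For readers who want to reproduce the derived indices, the LVEF and LVFS values reported here come from M-mode left-ventricular internal diameters via the Teichholz formula described in the Methods; a minimal sketch is given below, with purely illustrative diameters rather than the study's measurements:

```python
def teichholz_volume(d_cm: float) -> float:
    """Left-ventricular volume (mL) from an M-mode internal diameter (cm), Teichholz formula."""
    return 7.0 / (2.4 + d_cm) * d_cm ** 3

def lvef_lvfs(lvidd_cm: float, lvids_cm: float) -> tuple:
    """Return (LVEF %, LVFS %) from end-diastolic and end-systolic LV internal diameters."""
    edv, esv = teichholz_volume(lvidd_cm), teichholz_volume(lvids_cm)
    lvef = (edv - esv) / edv * 100.0
    lvfs = (lvidd_cm - lvids_cm) / lvidd_cm * 100.0
    return lvef, lvfs

# Purely illustrative diameters (cm); the study's actual group values appear in the figures.
print(lvef_lvfs(0.75, 0.40))
```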
SMI improved intestinal barrier function HE staining of colonic tissue indicated a reduction in mucosal integrity and increased inflammatory cells in the MOD group (Fig. I). This impairment of mucosal functions was reversed with SMI treatment. The presence of Lipopolysaccharide (LPS) is indicative of damage to the intestinal mucosa , and Zonulin is used to assess intestinal permeability . In H-HF rats, the MOD group showed noticeably higher serum concentrations of LPS and Zonulin than the control group, indicating a breakdown of the intestinal mucosal barrier and increased intestinal permeability (Fig. H and J). LPS and Zonulin levels were lower in the SMI group compared to the MOD group, suggesting that SMI effectively improved intestinal permeability and intestinal barrier function. Network pharmacology analysis Three compounds were collected from Red Ginseng (Hong Shen) and ten compounds were obtained from Ophiopogon japonicus (Mai Dong) (Table ). Based on searches of the GeneCards and OMIM disease databases, 4158 heart failure (HF)-related disease targets and 122 overlapping targets were discovered (Fig. A). TNF, IL-6, IL-1β, AKT1, STAT3, NFκΒ, IFNG, IL-10, TP53, and TLR4 were identified as the primary targets by PPI protein interaction analysis (Fig. B). Figure C illustrates the findings of the “drug-component-disease-target” network. As suggested by KEGG analysis, the TNF, IL-17, and Toll-like receptor signaling pathways may be involved in the mechanism through which SMI prevents and treats HF (Fig. D). Apoptotic processes, inflammatory responses, response to external biotic stimuli, negative regulation of cell proliferation, negative regulation of the apoptotic process, cellular response to lipopolysaccharide, G protein-coupled receptor signaling pathway, and negative regulation of gene expression are among the main biological processes predicted by GO analysis and included 474 significantly enriched biological function entries for treating heart failure (Fig. E). There are 52 entries related to cellular components (CC), involving the plasma membrane, membrane, cytoplasm, extracellular space, extracellular region, extracellular exosome, cell surface, mitochondrion, endoplasmic reticulum membrane, and endoplasmic reticulum. Additionally, there are 80 entries related to molecular functions, involving protein binding, identical protein binding, enzyme binding, protein homodimerization activity, DNA binding, zinc ion binding, heme binding, signaling receptor activity, sequence-specific DNA binding, and receptor binding. SMI restored the gut microbiota of H-HF rats Sequencing analysis of gut microbiota The sequencing of 18 fecal samples yielded 1,440,437 raw reads, which were merged and filtered to produce 1,408,229 clean tags. On average, 67,749 clean tags were obtained. To determine whether the sequencing data adequately reflected the diversity of species in the sample, a rarefaction curve was employed. Overall consistency in the results revealed that the sequencing data was adequate (Fig. A). A Venn diagram depicting the OTU distributions was shown in Fig. B. Across the three groups, 607 OTUs were identified, with 518 being shared by all of them. Alpha diversity analysis was carried out to assess the disparities in the structural complexity of the gut microbiota. Chao 1 and Shannon indices did not uncover any significant discrepancies in diversity across the three groups (Fig. C, D). 
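For reference, the two alpha-diversity measures used above can be computed from per-sample OTU counts as sketched below; the counts are hypothetical, and the Chao1 form shown is the bias-corrected estimator:

```python
import numpy as np

def shannon_index(counts: np.ndarray) -> float:
    """Shannon diversity (natural log) from the OTU counts of one sample."""
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts: np.ndarray) -> float:
    """Bias-corrected Chao1 richness: observed OTUs plus a correction from
    singleton (F1) and doubleton (F2) counts."""
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical OTU counts for a single fecal sample
sample = np.array([120, 55, 31, 8, 2, 1, 1, 0, 0])
print(round(shannon_index(sample), 3), round(chao1(sample), 1))
```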
However, a distinct divergence of profiles was found between the CON, MOD, and SM groups according to weighted unifrac PCoA of beta diversity (Fig. C). ANOSIM analysis (ANOSIM: R = 0.732, p = 0.001) demonstrated that the three groups were distinctly segregated. The proximity of the CON and SM populations indicated that their gut bacteria profiles were similar. The points representing the MOD group were further away from the points of the CON and SM groups, implying that the MOD group’s colony structure was markedly different from the other two groups. The PCoA outcomes (model stress = 0.0959 < 0.2) were supported by the NMDS analysis (Fig. D). Composition of gut microbiota and its difference analysis The microbial community composition was evaluated at the phylum and genus level (Fig. E and F). Results revealed that the MOD group had a reduced abundance of Bacteroidetes, Patescibacteria, Spirochaetes, and Elusimicrobia, and an elevated proportion of Firmicutes and Proteobacteria in comparison to the controls (Fig. A). As shown in Fig. A(g), the F/B ratio of the MOD group was substantially higher in comparison to that of the other group. The MOD group was noticed to be deficient in Muribaculaceae and Lachnospiraceae_NK4A136_group in comparison to the controls while having an increased abundance of Romboutsia and Ruminococcaceae at the genus level. These findings suggest that H-HF rats had an alteration in their gut bacterial equilibrium, which SMI treatment partially reversed (Fig. A). The heatmaps in Fig. B further illustrate the variations in the gut microbiota between the three groups. Prediction of the function of the gut microbiota By utilizing PICRUSt, a KEGG pathway analysis technique, we were able to evaluate the functional composition of the bacterial communities in the metagenome. All functional genes were shown at level III (Fig. ). The genes associated with energy and amino acid metabolism were more abundant in the MOD group, indicating that these metabolic pathways were disturbed in H-HF. Furthermore, after receiving SMI treatment, several genes related to energy and amino acid metabolism were altered (Fig. ). SMI improved disordered metabolism in H-HF Regulation of SMI on the differential metabolites Principal component analysis (PCA) and orthogonal partial least square discriminate analysis (OPLS-DA), which scaled and log-translated the data to reduce noise and high variance effects, were successful in differentiating between the three groups. PCA demonstrated a clear distinction between the three groups, with the SM group being closer to the CON group than the MOD group (Fig. A and B). OPLS-DA analysis verified the distinctiveness among the three groups identified by PCA (Fig. ). Metabolic analysis revealed that 17 metabolites had significantly altered levels in positive mode and other 17 metabolites in negative ion mode, while 29 biomarkers were significantly restored after SMI treatment (Table , Fig. C-D). Our results revealed that these metabolites were linked to energy, methylamine, bile acid, and amino acid metabolisms (Table ). Metorgin tracing analysis The analysis of source-based metabolic function and metabolite traceability found 29 differential metabolites linked to SMI: 6 bacterial metabolites, 1 host-specific metabolite, and 24 bacteria-host cometabolites (Fig. A-B). 
According to metabolites pathway enrichment analysis (MPEA), the databases for the host, bacterial, and co-metabolism metabolic pathways were paired with 1, 4, and 28 relevant metabolic pathways, respectively (Fig. C-D). Of these pathways, 1, 1, and 10 revealed a significant ( p < 0.05) association with SMI. The origin-based functional analysis revealed that the microbial community was specific to phenylalanine metabolism and that the host was specific to primary bile acid biosynthesis. Histidine metabolism, tryptophan metabolism, valine, leucine, and isoleucine biosynthesis, beta-Alanine metabolism, butanoate metabolism, inositol phosphate metabolism, ascorbate and aldarate metabolism, nicotinate and nicotinamide metabolism, aminoacyl-tRNA biosynthesis and pentose and glucuronate interconversions were pathways of co-metabolism between microbes and hosts. The primary mechanism linked to SMI was histidine metabolism. A Bio-Sankey network based on MetOrigin analysis further visualized the biological relationships and statistical correlations between microbiota and metabolites to better depict the co-metabolic relationships between microbiota and hosts (Fig. A-B). Quantification of serum TMAO Studies have demonstrated a positive association between TMAO levels and cardiovascular conditions . In comparison to the controls, the serum levels of TMAO and TMA levels in the MOD group were considerably higher (Fig. A–B), which supports earlier findings . The drops in serum levels of TMAO and TMA after SMI treatment were not statistically significant, which may have been due to the small sample size. Correlation analysis Correlations between the gut microbiota and fecal metabolic phenotype The relationship between metabolites and gut genera was evaluated by the Spearman correlation coefficient. A strong correlation is indicated by a value of r greater than 0.7. Figure C displays the network diagram with strong correlations. TMAO was strongly positively correlated with Elusimicrobium , _xylanophilum_group , oxidoreducens_group and negatively correlated with Catabacter , Defluviitaleaceae_UCG-011 , Parvibacter , _f_Atopobiaceae , Peptococcus , Coriobacteriaceae_UCG-002 , Staphylococcus , and Romboutsi . Therefore, methylamine metabolism must be impacted by the gut bacteria mentioned above. Similarly, D-glucuronic acid and D-xylitol correlated positively with the xylanophilum group , whereas creatinine correlated negatively with Coriobacteriaceae_UCG-002 . These findings suggest that Coriobacteriaceae_UCG-002 , Dubosiella , and the xylanophilum_group have an impact on energy metabolism. Bile acid metabolites (including chenodeoxycholic acid, cholic acid, and glycocholic acid) and amino acids (such as norvaline and L-threonine) have also been found to have a strong correlation with gut microbe composition. Our correlation data demonstrated modifications to the gut microbiome, resulting in a significantly altered metabolomic profile. Therefore, our current findings suggest that the mechanism by which SMI can improve heart function in an H-HF rat model may include effects on microbial energy, methylamine, bile acid, and amino acid metabolism in the intestine. Correlations between TMAO and gut microbiota Tables and present the results of Spearman’s correlation analysis used to evaluate the associations between the composition of the gut and the levels of TMAO metabolites. A positive correlation was uncovered between serum TMAO levels and the proportion of Actinobacteria. 
In contrast, a negative correlation was observed with Elusimicrobia at the phylum level (Fig. D). The analysis showed that serum TMAO had a direct relationship with Romboutsia , and an inverse relationship with Ruminococcaceae_UCG_014 and _f_Muribaculaceae . Furthermore, serum TMA levels had a strong negative correlation with Ruminococcaceae_UCG_014 (Fig. E). Based on these results, it is possible that SMI administration could alter TMAO levels by affecting the relevant microflora. Correlations between differential metabolites and targets of SMI Figure shows the connections between the targets of SMI and differential metabolites. Additionally, the regulatory role of SMI in preventing heart failure is highlighted by the "components-targets-metabolites-microbes" interaction network depicted in Fig. . By controlling for 46 proteins, network integration analysis shows that the 11 potentially active components of SMI can affect the differential expression of 8 metabolites and 24 gut microbes.
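The Spearman screening used throughout the correlation analyses above (retaining genus-metabolite pairs with |r| > 0.7) can be sketched as follows; the data frames and function name are hypothetical, and the published networks were visualized with OmicShare:

```python
import pandas as pd
from scipy.stats import spearmanr

def strong_correlations(genera: pd.DataFrame, metabolites: pd.DataFrame,
                        threshold: float = 0.7) -> pd.DataFrame:
    """List genus-metabolite pairs whose Spearman |rho| exceeds the threshold.

    Both tables are indexed by sample (rows = samples); columns hold genus
    relative abundances and metabolite peak areas, respectively.
    """
    edges = []
    for genus in genera.columns:
        for metabolite in metabolites.columns:
            rho, p = spearmanr(genera[genus], metabolites[metabolite])
            if abs(rho) > threshold:
                edges.append({"genus": genus, "metabolite": metabolite,
                              "rho": rho, "p": p})
    return pd.DataFrame(edges)
```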
In recent years, there has been an increasing amount of research on the connection between alterations in the gut microbiota and metabolites and the onset of heart failure . However, the mechanisms by which SMI affects chronic heart failure from this perspective remain largely unknown. This study employed metabolomics, 16S rRNA high-throughput sequencing, and network pharmacology to investigate the influence of Shenmai injection on gut microbiota and metabolites in hypertensive heart failure rats. Moreover, the MetaboAnalyst platform was employed to clarify the connection between metabolites and targets, while the MetOrigin platform was used to examine the origin and function of metabolites. To establish a comprehensive analysis of the systematic relationships between the components, targets, metabolites, and gut microbiota influenced by SMI, a “component-target-metabolite-microbiota” interaction network was constructed. This provided new information about the mechanisms underlying SMI in heart failure therapy. SMI is recognized for its effects of invigorating Qi to prevent collapse, nourishing Yin, and promoting saliva production. Both red ginseng and Radix Ophiopogonis have been shown in basic experiments and clinical studies to possess immune-regulating, blood circulation-improving, antioxidant, anti-inflammatory, and anticancer properties . Radix Ophiopogonis has also been shown to exhibit anti-atherosclerotic effects. Research has shown that SMI has antioxidant properties and can reduce oxidative stress . Based on systematic review and meta-analysis, SMI has demonstrated efficacy in treating anthracycline-induced cardiotoxicity and is consequently a possible course of therapy for this condition . In this study, we first evaluated cardiac function indicators, myocardial tissue HE staining, echocardiographic parameters (LVFS and LVEF), serum NT-proBNP levels, and the inflammatory markers CRP and IL-1β to establish that SMI significantly improves cardiac function. In addition, based on its effects on the intestinal permeability marker zonulin and the gut barrier function indicator LPS, SMI may improve gut barrier function and decrease intestinal permeability. This indicates that regulating the homeostasis of the gut microbiota could be one of the primary ways in which SMI enhances cardiac function. We evaluated alterations in the composition and functionality of the gut microbiota in salt-sensitive hypertensive heart failure rats to learn more about the effect of SMI on the gut microbiota. Our results demonstrated that the gut microbiome profiles of the CON and MOD groups differed significantly, suggesting that H-HF modeling had changed the microbial structure and offering insight into how SMI affected the gut microbiota of the H-HF model. SMI administration successfully restored the structure and functions of the microbiota in H-HF rats.
It has been observed that an increased proportion of Proteobacteria is a potential indicator of epithelial dysfunction and can also be used to diagnose gut dysbiosis and associated health risks . According to our research, the proportions of Proteobacteria in H-HF rats were successfully decreased by SMI treatment, bringing the Firmicutes/Bacteroidetes (F/B) ratio back in line with that of the CON group. This suggests that SMI has a positive effect on reestablishing the equilibrium of the intestinal flora. In addition, a previous study found that the gut microbiome of patients with chronic heart failure included fewer butyrate-producing bacteria . Butyrate and other SCFAs are known to be produced by bacteria in the Lachnospiraceae family . Research has demonstrated a correlation between Muribaculaceae and propionic acid levels, an indicator of SCFA concentration . Our research detected that the proportions of Lachnospiraceae_NK4A136 and _f_Muribaculaceae increased after SMI treatment, suggesting that SMI treatment restores bacteria that generate SCFAs. Assessing co-metabolic relationships between the host and gut microbiota can provide fresh perspectives on the critical function of the gut microbiome in host health . Combining 16S high-throughput sequencing with metabolomics provides a powerful approach to exploring the mechanisms underlying disease development. Compared to omics approaches that employ biofluids, such as urine and serum, the fecal metabolome offers a more comprehensive view as it reflects the combined effects of genetic, environmental, and dietary factors . Combining microbiome sequencing with non-targeted metabolomics analysis of fecal samples can therefore be used to understand the relationships between bacterial populations and the metabolites they produce. In the H-HF rat model, 34 metabolites were significantly altered; SMI treatment restored 29 of these biomarkers. Our study demonstrates that Shenmai injection can significantly improve metabolic disorders in hypertensive heart failure rats. To determine if the host or the microbial community is the source of differential metabolites, we employed MetOrigin. Numerous identified metabolites participate in co-metabolism activities shared between the host and its resident gut microbiota. Energy metabolism, amino acid metabolism, methylamine metabolism, and bile acid metabolism are the key metabolic pathways engaged in these processes. It is well-established that energy metabolism is disturbed in heart failure (HF), and modulating cardiac energy metabolism has been proposed as a therapeutic strategy for HF . Recent studies have indicated that energy metabolism dysfunction plays a critical role in the pathophysiology of HF, with alterations in metabolic pathways contributing to the progression of the disease . Our study further supports this notion, as the metabolites associated with energy metabolism, such as gamma-aminobutyric acid, glutaric acid, D-glucuronic acid, 2-hydroxybutyric acid, and creatinine, were significantly lower in the MOD group. These findings align with previous research showing that impaired cardiac energetics is a major contributor to HF . Moreover, the 16S functional prediction analysis demonstrated that H-HF had a notable association with energy metabolism, consistent with other studies linking metabolic disturbances to HF. In particular, studies have highlighted how altered energy metabolism in HF affects mitochondrial function and cellular ATP production, contributing to cardiac dysfunction .
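As a minimal illustration of how such group differences in metabolite levels can be screened, the generic sketch below compares intensities between two groups using fold change and a Mann–Whitney test. It is not the study's actual workflow (metabolomics pipelines typically also apply multivariate OPLS-DA/VIP criteria), and the input table is hypothetical.

```python
# Generic sketch: screen differential metabolites by fold change and Mann-Whitney U test.
# The input table (one row per sample, a 'group' column plus metabolite columns) is hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

data = pd.read_csv("metabolite_intensities.csv")
con = data[data["group"] == "CON"].drop(columns="group")
mod = data[data["group"] == "MOD"].drop(columns="group")

hits = []
for m in con.columns:
    _stat, p = mannwhitneyu(con[m], mod[m], alternative="two-sided")
    fc = mod[m].mean() / con[m].mean()
    if p < 0.05 and (fc > 1.5 or fc < 1 / 1.5):
        hits.append({"metabolite": m, "fold_change": round(fc, 2), "p_value": p})

print(pd.DataFrame(hits).sort_values("p_value"))
```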
Our correlation analysis further demonstrated that metabolites like creatinine were inversely associated with Coriobacteriaceae_UCG-002 , while D-glucuronic acid and D-xylitol showed significant positive correlations with the [Eubacterium]_xylanophilum_group but were negatively related to Coriobacteriaceae_UCG-002 . These results suggest a close relationship between gut microbiota and energy metabolism in H-HF, which is consistent with emerging evidence on the interplay between gut microbiota and metabolic disturbances in cardiovascular diseases . Notably, the levels of key bacteria such as Coriobacteriaceae_UCG-002 and [Eubacterium]_xylanophilum_group returned to normal following SMI treatment, accompanied by an increase in metabolites linked to energy metabolism. This suggests that the therapeutic mechanism of SMI may involve the regulation of both microbiota and metabolites related to energy metabolism, an idea supported by similar findings in other studies on TCM and its effects on metabolic regulation. Amino acid metabolism is indispensable to the energy supply, as it facilitates the conversion of amino acids into glucose through gluconeogenesis. Studies have shown that the administration of amino acids can be advantageous to people with HF, with improvements seen in various clinical endpoints . Our study revealed a substantial decline in the metabolism of several amino acids (e.g., norvaline, ketoleucine, L-threonine, L-valine, and N-acetylornithine) in the MOD group. Additionally, 16S functional prediction was linked to amino acid metabolism, implying that H-HF is associated with a disruption in amino acid metabolism. The outcome of the correlation analysis indicated a close link between gut flora and metabolites related to amino acid metabolism; for example, norvaline demonstrated a strong and negative correlation with uncultured_bacterium_f_Ruminococcaceae , Adlercreutzia , Streptococcus , Faecalibaculum , [Eubacterium]_brachy_group , Dubosiella , uncultured_bacterium_f_Atopobiaceae , Coriobacteriaceae_UCG-002 , and UBA1819 . This study revealed that SMI treatment could adjust the relevant gut microbiota and amino acid metabolism. Bile acids have been shown in earlier studies to be integral to managing metabolism and energy expenditure . Moreover, a study found that patients with HF had a higher ratio of secondary to primary bile acids in their plasma and lower levels of primary bile acids . According to the correlation analysis conducted here, SMI may modulate the gut microbiota and metabolites linked to bile acid metabolism. The interactions between differential metabolites and gut microbiota are complex and multifactorial, with each influencing the other in a dynamic manner. Our study provides evidence that the gut microbiota plays a pivotal role in modulating the metabolism of key metabolites involved in energy production, amino acid metabolism, and bile acid metabolism. Moreover, the therapeutic effects of SMI seem to be partly mediated by its ability to regulate these microbiota-related metabolites, thus restoring metabolic homeostasis and improving heart function in H-HF. Microbial homeostasis is defined as the maintenance of a balanced composition of gut microbiota in a healthy state . Disruption of this equilibrium, however, can lead to the proliferation of pathogenic microorganisms, which raises serum concentrations of TMA and TMAO and increases the risk of cardiovascular diseases .
TMAO is considered a risk factor for cardiovascular disease as it is found in high concentrations in the blood when the intestinal wall is disrupted . Measuring serum TMAO levels has consequently emerged as a crucial marker of cardiovascular risk . Our study revealed that the MOD group had lower fecal TMAO levels than the control group. Nevertheless, data from targeted metabolomics indicated that the MOD group had significantly higher serum TMAO and TMA concentrations than the other groups, a result that is consistent with numerous earlier studies on H-HF. A potential cause for the disparity could be the use of different sample types, such as fecal samples instead of serum samples. The reason for the decrease in TMAO in feces is thought to be related to the elevated TMAO levels in serum. According to studies by Nagatomo et al., elevated TMAO levels may cause myocardial fibrosis, LVEF reduction, multi-organ fibrosis, and an increase in BNP levels, all of which can contribute to heart failure . Moreover, it has been noted that raised serum TMAO levels are linked to an increased risk of heart failure and its associated mortality . A study found that TMAO combined with NT-proBNP was a useful prognostic indicator for heart failure in patients . This study revealed that the intervention involving SMI had a slight, albeit not statistically significant, lowering effect on serum TMAO and TMA levels. Correlation analysis results indicate that SMI significantly reduces the gut microbiota associated with TMAO and TMA, suggesting that SMI may influence serum TMAO levels by modulating these related microbial communities. Thirteen of SMI’s active ingredients were screened based on the TCMSP and BATMAN-TCM databases. Network integration analysis showed that by targeting 46 proteins, the 11 potentially active components of SMI can affect the differential expression of 8 metabolites and 24 gut microbes. Taken together, these investigations revealed that various SMI components work synergistically to exert their therapeutic function. This study has some limitations. Firstly, while 13 active components of SMI were identified through database screening, their effectiveness was not experimentally validated. Future research could verify these active ingredients through pharmacokinetic experiments. Secondly, while the gut microbiota and some metabolites showed a strong correlation in this study, the associations do not always imply causation. Furthermore, the distinct levels of TMAO observed between fecal and serum samples suggest a potential tissue-specific role of TMAO. Therefore, additional experiments are warranted to investigate the role of TMAO in different tissues or organs in heart failure. Our study offers a thorough investigation into the mechanisms through which SMI generates therapeutic benefits in heart failure. We observed that in hypertensive heart failure rats, SMI markedly improves gut barrier function, cardiac function, and gut microbiota composition. By reestablishing homeostasis in the gut microbiota, SMI modulates vital metabolic pathways such as energy metabolism, amino acid metabolism, and bile acid metabolism, as indicated by metabolomics and 16S rRNA sequencing analyses. Higher serum TMAO levels were found to be a risk factor for H-HF by TMAO-targeted metabolomics analysis, and SMI tended to lower these TMAO-related metabolites.
Network pharmacology analysis identified 13 active components of SMI targeting 46 proteins, resulting in differential expression changes in 8 metabolites and 24 gut microbes. This study highlights the effectiveness of SMI in alleviating H-HF and its potential to modulate microbial-host co-metabolism, underscoring the synergistic actions of multiple SMI components on various biological pathways implicated in heart failure. Future research should focus on validating these observations in clinical settings and elucidating the specific molecular mechanisms underlying SMI’s therapeutic benefits. Below is the link to the electronic supplementary material. Supplementary Material 1: Figure S1. (A) The chromatogram of Ginsenoside Rg1, Re, Rb1 reference solution. (B) The chromatogram of Shenmai injection test solution. Supplementary Material 2: Figure S2. (A) The PCA score of three groups and QC samples in positive-ion mode. (B) The PCA score of three groups and QC samples in negative-ion mode. (C) Chao1 index. (D) Shannon index. Data were represented as the mean ± SD. Supplementary Material 3: Figure S3. (A) Systolic blood pressure (SBP). (B) Diastolic blood pressure (DBP). (C) SBP before and after treatment. (D) DBP before and after treatment. (E) Body weight prior to and after treatment. No remarkable alteration was noticed in the same group prior to and after treatment. Supplementary Material 4: Figure S4. PCA analysis of metabolites profile. CON: turquoise; MOD: red; SM: dark blue. (A) PCA score plots for positive-ion mode (R²X = 0.680). (B) PCA score plots for negative-ion mode (R²X = 0.674). (C) Heat map of the differential metabolites in positive-ion mode. (D) Heat map of the differential metabolites in negative-ion mode. Supplementary Material 5: Figure S5. OPLS-DA analysis of metabolites profile. (A) Comparison plots in positive-ion mode for CON and MOD groups (R²X = 0.606, R²Y = 0.998, Q² = 0.974). (B) Permutation test for comparison between CON and MOD groups in positive-ion mode (n = 200). (C) Comparison between CON and MOD groups in negative-ion mode (R²X = 0.573, R²Y = 0.999, Q² = 0.974). (D) Permutation test for comparison between CON and MOD groups in positive-ion mode (n = 200). (E) Comparison between CON and MOD groups in positive-ion mode (R²X = 0.513, R²Y = 0.995, Q² = 0.954). (F) Permutation test for comparison between CON and MOD groups in positive-ion mode (n = 200). (G) Comparison between MOD and SM group in negative-ion mode (R²X = 0.532, R²Y = 0.997, Q² = 0.956). (H) Permutation test for comparison between MOD and SM groups in positive-ion mode (n = 200). Supplementary Material 6: Table S1: The related dataset of Spearman’s correlation at the phylum level. Supplementary Material 7: Table S2: The related dataset of Spearman’s correlation at the genus level. Supplementary Material 8 Supplementary Material 9
ERJ Advances: interventional bronchoscopy | 3960ecf7-2359-437a-a5a3-164e7f960c4f | 11540446 | Internal Medicine[mh] | The field of interventional bronchoscopy is rapidly growing, with the development of minimally invasive approaches and innovative devices to diagnose and treat a spectrum of respiratory diseases , often as outpatient procedures, and supported by high-quality collaborative research. This short review covers aspects related to COPD, peripheral pulmonary nodules, interstitial lung disease, and airway stenosis and malacia. COPD is a complex inflammatory disorder of the small airways with a variety of manifestations of interest to the interventional bronchoscopist. The therapeutic approach to severe emphysema and hyperinflation continues to build on the proven reduction of excessive residual volume and restoration of normality to lung mechanics . Current interests include the regeneration of a healthy functioning epithelium in patients with chronic bronchitis and the amelioration of frequent exacerbations using targeted lung denervation. Severe emphysema and hyperinflation Approximately 1 in 100 patients with COPD are suitable for lung volume reduction. Evaluation is by a multidisciplinary team to ensure individuals have completed a programme of pulmonary rehabilitation, are taking optimal medical therapy, and are considered for all available approaches, including lung transplant. Impairment of the passive elastic recoil mechanism, which in health maintains the patency of the small airways during expiration, results in an accumulation of trapped gases and a mechanical impediment to the ventilatory pump. Several devices with different modes of action have been developed to reduce hyperinflation. Unidirectional valves induce atelectasis by occluding the segmental bronchi during inspiration, permitting evacuation of air and mucus in expiration. There are two main marketed devices: the Zephyr Endobronchial Valve (EBV) by Pulmonx Inc. (CA, USA), and the Spiration Valve System by Olympus (WA, USA). Absent interlobar collateral ventilation (CV), determined by surrogate quantification of fissure integrity on high resolution computed tomography and/or physiological measurement of lobar flow, is critical to success. The EBV, a duck-bill mechanism, has been the most studied and shown to improve lung function, exercise capacity, quality of life and survival in selected individuals, and is a guideline-approved therapy; moreover, benefits are observed in homogeneous emphysema and in lower lobe predominant disease . Pneumothorax is the most frequent complication, occurring in up to 30% of recipients and mandating a 72-h stay in hospital; management is usually by insertion of a 12-French thoracostomy tube . A persistent air leak may require removal of one or more valves with staged re-implantation; reassuringly, outcomes are not adversely affected, provided complete lobar atelectasis is achieved . Of those subjects screened in the TRANSFORM study, 16.5% were excluded owing to the presence of CV . The CONVERT trial has been designed to broaden eligibility with instillations of Aeriseal sealant (Pulmonx Inc., CA, USA) into leaky fissures prior to EBV insertion: preliminary reports of up to an 83% “conversion” are most encouraging . Several alternative strategies to induce volume reduction independent of interlobar CV are the subject of research. Endobronchial coils by PneumRx (CA, USA) are proposed to work by gathering the surrounding lung parenchyma and re-tensioning the airway network.
They have shown promise, especially in patients with severe hyperinflation (residual volume >225% best predicted) and homogeneous emphysema deemed unsuitable for transplantation ; however, the technology has been withdrawn for financial reasons. A similar lung tensioning device system by Free Flow Medical (CA, USA) is currently under trial evaluation (NCT04520152). Another coil-shaped device from Lifetech Medical (Shenzhen, China) is being evaluated in a randomised controlled trial in China after finishing the first in-human study (NCT03685526). Airway bypass, creating non-anatomical transbronchial fenestrations at the segmental level, supported by self-expanding stents, proved the concept. However, the benefits were short-lived despite paclitaxel elution and patency was not sustained . The Pulmair (CA, USA) “Implantable Artificial Bronchus” instead stents the lobar segmental airways out to the 15th generation and counteracts expiratory airways collapse, facilitating gas emptying (US20180344445A1) , and is being prospectively evaluated in a multicentre trial (NCT05087641). An alternative embodiment by Apreo Health Inc. (CA, USA) employs an innovative geometric implant design to maintain luminal patency (US20220280279A1) and is under evaluation in a multicentre trial (NCT05854550). Bronchoscopic thermal vapour ablation (BTVA) by Uptake Medical (WA, USA) offers a non-mechanical method of segmental volume reduction, preserving less diseased tissue and lessening the risk of pneumothorax, with promising results in individuals with heterogeneous upper-lobe predominant emphysema and hyperinflation . A post-market BTVA registry is in progress (NCT03318406), and another randomised controlled trial was recently approved in Germany (NCT05717192). Morair Medtech (WA, USA) have produced a similar system, endobronchial thermal liquid ablation, which uses heated normal saline (ACTRN12622001327774). Chronic bronchitis Clinically defined by cough and sputum expectoration occurring on most days for at least three months of two consecutive years, chronic bronchitis is associated with frequent exacerbations and hospitalisations, accelerated lung function decline, poor quality of life, and reduced life expectancy . A novel approach to reverse airways metaplasia and chronic mucus hypersecretion is selective cellular ablation, preserving a framework of extracellular structures, which is followed by healthy tissue regeneration – two such bronchoscopic epithelial resurfacing technologies, under trial evaluation, are described below. The RejuvenAir system by CSA Medical (MA, USA) employs radial metered cryospray to flash freeze the epithelial lining at −196°C, inducing intracellular ice crystal formation, disrupting cellular structures but sparing the extracellular matrix. At 12 months, the treatment was shown to be safe, feasible, well-tolerated and associated with clinically meaningful improvements in cough, sputum production, breathlessness and quality of life . A randomised sham-controlled study is under way to confirm the benefits and durability of this treatment in a larger population of patients (NCT03893370). Bronchial rheoplasty by Galvanize Therapeutics Inc. (CA, USA) utilises pulsed electric fields to ablate the mucosal lining. Similarly encouraging benefits in patient-reported outcomes have been published and a large, randomised sham-controlled trial is in progress (NCT04677465). A German–Austrian registry recently completed enrolment and will provide real world data (NCT04182841). 
Frequent exacerbations Pharmacological blockade of the vagal innervation of the lungs results in bronchodilation, improved ciliary function, and reduced mucus secretion and exacerbation frequency . Limitations to inhaled therapy include adherence and short duration of action. Targeted lung denervation, delivering circumferential radiofrequency ablation to the main bronchi in one treatment session, offers an alternative and more durable means of attenuating the overactive parasympathetic tone . Employment of a radio-opaque gastro-oesophageal balloon serves to minimise gastric vagal plexus capture. The randomised double-blind sham-controlled trial, Airflow-2, showed clinically meaningful reductions in severe exacerbation frequency requiring hospitalisation . A pivotal trial, Airflow-3, is designed to evaluate the safety and efficacy of targeted lung denervation in reducing moderate to severe exacerbations over 1 year .
Lung cancer is the leading oncological cause of death worldwide . Presentation is often advanced and prognosis consequently poor: 5-year survival is 25% for non-small cell lung carcinoma and 7% for small cell carcinoma . The National Lung Screening Trial employing low-dose computed tomography (CT) demonstrated a 20% relative reduction in lung cancer mortality in high-risk current or former smokers with at least a 30-pack-year history . Diagnosis Most incidental pulmonary nodules are found in the periphery of the lung and undergo surveillance as guided by predictive models . A minority require interrogation and traditionally this is undertaken using transthoracic needle biopsy. The diagnostic yield is high (93%, 95% CI 90–96%) , offset by the risk of pneumothorax of up to 16%, with 6.6% requiring drainage . Navigation bronchoscopy is an alternative approach embracing a range of techniques: virtual bronchoscopy, electromagnetic navigation, radial endobronchial ultrasound (REBUS), tomosynthesis, cone-beam computed tomography (CBCT), slimline scopes, robotic assistance, and combinations of these . A meta-analysis of 126 studies comprising 16 077 patients with 16 389 lesions reported a pooled diagnostic yield of 69.4% (95% CI 67–71%), with substantial variation among studies (40% to 97%) and significant between-study heterogeneity . There was no difference in yield when comparing technologies; however, larger nodule size and the presence of a bronchus sign were associated with improved outcomes. A pneumothorax rate of 2.1% was quoted. Robotic-assisted bronchoscopy is designed to optimise tool placement within a lesion: bespoke ventilation protocols and real-time imaging feedback mitigate pre-procedural CT-to-body divergence . The first prospective multicentre trial published a diagnostic yield of 74.1% (95% CI 61–84%) and pneumothorax rate of 3.7% . A promising paradigm from the USA combines a shape-sensing robotic platform with REBUS and CBCT and is under trial evaluation in the UK (NCT05867953). With the advent of nationwide lung cancer screening programmes, navigation bronchoscopy is likely to become a widespread complementary technology with multimodal sampling approaches adopted, facilitated by advanced imaging adjuncts and rapid on-site evaluation (human or artificial intelligence driven) , to maximise diagnostic yields and reduce procedural times. Therapy Surgical resection is the preferred treatment modality for peripheral early-stage non-small cell lung carcinoma .
For those individuals who decline surgery or in whom the risk is prohibitive, stereotactic body radiation therapy (SBRT) and percutaneous ablative techniques are established alternatives. An alternative approach under trial evaluation is bronchoscopy-delivered transbronchial ablation employing thermal (microwave: NCT05299606, NCT05281237, NCT05786625; cryotherapy: NCT04049474) and non-thermal (brachytherapy and photodynamic therapy) energy sources. The use of localised ablation to release tumour antigen into the circulation and potentiate the effects of immunotherapy is currently being evaluated (NCT05053802, NCT04793815). Expanding indications may include individuals with unresectable local recurrence after surgery or SBRT and biopsy-proven synchronous lesions. It remains to be seen whether the impact of bronchoscopy ablation on local recurrence rates and survival compares favourably with current standard of care therapies.
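For readers who want to see how confidence intervals like those quoted for diagnostic yield in the nodule section above arise, the short sketch below computes a yield and its 95% Wilson score interval for a single cohort; the counts used are hypothetical and are not taken from the cited trials.

```python
# Illustrative sketch: diagnostic yield with a 95% Wilson score interval.
# The counts below are hypothetical, not data from the trials cited in the text.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

yield_, lo, hi = wilson_ci(successes=40, n=54)  # hypothetical: 40 diagnostic results in 54 procedures
print(f"diagnostic yield {yield_:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```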
The diffuse parenchymal lung diseases encompass more than 200 conditions characterised by inflammation or fibrosis of the alveolar-capillary compartment. There is significant overlap in presentation but differences in treatment response and prognosis make diagnosis challenging; multidisciplinary discussion integrating clinical, radiological, serological and histological data is fundamental to this process . Surgical lung biopsy is regarded as the gold standard modality for tissue acquisition, but is associated with complications including persistent air leak in up to 5% , exacerbation of the underlying disease process in 7%, major bleeding in 2.2% , and 30-day mortality in 2.4% (similar to lobectomy for lung cancer) . Transbronchial lung cryobiopsy offers a minimally invasive means of procuring tissue with a comparable diagnostic accuracy , and substantially lower morbidity and mortality . The architectural preservation of the sample, not subject to the crush artefacts seen with mechanical forceps transbronchial biopsies, permits a more detailed histological characterisation with similar prognostic value to surgical lung biopsy . The technique is now incorporated into society guidelines . Procedural modifications employing advanced imaging adjuncts such as radial endobronchial ultrasound and cone-beam computed tomography and the addition of a genomic classifier have been proposed to improve diagnostic yield. Three-dimensional stent printing The treatment of benign and malignant airway stenoses frequently poses a challenge owing to complex anatomy with individual variation. Commercially available stents not infrequently suffer migration and granulation tissue reaction, leading to luminal occlusion. A number of software platforms exist that can segment anatomical structures including the airway tree to a high resolution. Three-dimensional modelling and printing circumvents the “one-glove-fits-all” paradigm and permits the manufacture of patient-specific devices, with promising preliminary results published . The use of novel biocompatible materials with antimicrobial properties will minimise the risk of endoluminal tissue encroachment and infection. The process is seemingly applicable to any form of bronchial implant, which holds great potential in the pursuit of personalised medicine. Biodegradable stents Problems common to silicone and to metal (nitinol and medical grade stainless steel) stents are an aggressive granulation response and biofilm formation, both degrading their function and necessitating further bronchoscopic procedures to clean or replace them. Stenotic and malacic airway pathologies are especially prevalent after lung transplantation. For this indication, and perhaps for other benign and longer lasting malignant problems, biodegradable ELLA stents (ELLA-CS, Hradec Králové, Czech Republic) have been suggested .
This short review summarises the latest bronchoscopic innovations and advances in response to the demand for minimally invasive management of a broadening spectrum of problems. Evolution of the speciality will depend on randomised sham-controlled double-blind trials and head-to-head comparisons of technologies with capture of important end-points (for example, requirement for repeat procedures, impact on quality of life) to determine the most appropriate modalities and management algorithms for our patients. The future is promising.
Biomaterial-based sponge for efficient and environmentally sound removal of bacteria from water | 03fc8eb9-85bb-43b0-adea-0c3a65e69163 | 11143301 | Microbiology[mh] | For a large part of the world's population, access to safe water is inadequate. Four billion people, two-thirds of the global population, live under conditions of severe water scarcity for at least 1 month of the year . Water scarcity limits access to clean drinking water and basic hygiene. In these conditions, diseases can proliferate rapidly at home, in schools, or even in health-care facilities. Additionally, antimicrobial resistance from bacteria is expected to be an increasing problem in the near future due to the misuse of antibiotics . It is therefore of high interest to develop new cost-effective strategies and materials that can inactivate bacteria without the use of antibiotics or other chemical substances that are released into the environment. For water disinfection, chlorination is the most widely used method to kill bacteria, despite its drawbacks . UV irradiation is a more recent method for disinfection. However, besides the need for an electricity source and the cost of acquisition of the equipment, there are limitations with increased distance and beam angles . Bactericidal additives including metal particles , , can be integrated into carrier materials like cellulose, facilitating passive bacterial adsorption and subsequent inactivation to prevent proliferation and biofilm formation. In the case of active bacterial adsorption via electrostatic interaction with polycationic materials, adsorption and inactivation occur simultaneously as high surface charge materials both bind and kill bacteria . The polycationic surfaces target the bacterial cytoplasmic membrane, which is net-negatively charged. Through this electrostatic interaction, the materials bind the bacterial cells, inhibit proliferation, and can even promote membrane lysis . The advantages of purely polycationic materials over common antibacterial materials or strategies are the absence of leaching chemicals and the independence from an energy source. Typical polycationic polymers known for their antibacterial properties are chitosan , , polyethylene imine , and ε-polylysine . These, along with quaternary ammonium groups, have been employed to functionalize other materials like dendrimers , particles , graphene derivatives – , textiles or hydrogels – to incorporate antibacterial properties. Hydrogels can be prepared as bulk material, films, or cryogels. While films and non-porous hydrogels offer a lower effective contact area, the macroporous structure of cryogels allows bacterial cells to enter the material and adsorb onto their highly increased surface area. Cryogels are formed via a freezing–thawing technique. Generally, a solution of a polymer and a crosslinker is stored at temperatures below the melting point of the solvent (e.g. water, dioxane, or DMSO) which, in its solid state, acts as a pore-forming agent . Once crosslinking is completed, the cryogel is thawed at room temperature and washed with water to remove unreacted residual components, rendering a porous network with pores surrounded by polymer walls – . Among the wide variety of materials that can be used to form cryogels, biopolymers have gained great attention during the last decades. In comparison to their fossil fuel counterparts, biopolymers can offer eco-friendly alternatives that are still cost-effective, processable, and offer post-functionalization.
Obtained from renewable resources and built up of biodegradable structures, these materials can be included in natural recycling systems. Cellulose is the most abundant biopolymer on earth and is used in countless applications. In the context of material science, small amounts of cellulose have been demonstrated to improve the structure of cryogels, providing better mechanical properties and performance , . The addition of 2 wt% of cellulose, for instance, reinforces the mechanical properties of polyimide/CNC hybrid aerogels significantly . The structurally related biopolymer chitosan can be produced by extraction of chitin from shrimp shells and other crustaceans and subsequent treatment with alkaline substances. Its amine functionality is the reason for its widely reported antimicrobial properties – . However, its antimicrobial activity is limited to pH values below 6, when protonation of the amino groups occurs. This can restrict its applicability and bioactivity studies under physiological conditions . To enhance its antimicrobial activity and render it independent of pH, permanent cations can be introduced by reacting chitosan with glycidyltrimethylammonium chloride (GTMAC). The resulting materials bear quaternary ammonium ions with permanent charge, expected to enhance their antimicrobial activity. In this work, we present a polycationic sponge material made from chitosan derivatives and cellulose fibers exhibiting antibacterial efficacy against both gram-negative and gram-positive bacterial cells (Fig. ). The design followed three key principles: (1) implementation of a macroporous structure via cryogelation, resulting in high surface area and sponge-like properties, (2) enhancement of bacterial adsorption and antibacterial activity by introducing cationic quaternary ammonium moieties, (3) incorporation of cellulose fibers to reinforce its mechanical properties. The environmentally sound synthesis, utilizing water and abundant biomaterials, offers a low-cost approach suitable for various water purification applications.
Materials design and synthesis A series of cryogels and a non-porous chitosan hydrogel ( NPCHI ) were synthesized for comparison purposes. The prepared cryogels consist of pure chitosan ( CHI ), chitosan with 45% and 90% quaternary ammonium group functionalization ( QCHI45 and QCHI90 , Scheme ), as well as the same cryogels blended with cellulose fibers ( CHI/Cell , QCHI45/Cell , QCHI90/Cell ). Quaternary ammonium group functionalized chitosan was synthesized via epoxide nucleophilic ring-opening reaction with GTMAC. The degree of quaternization (DQ) in chitosan was modified by heating the reaction at different temperatures and times (55 °C/18 h for lower and 85 °C/6 h for higher functionalization degree) , . The resulting DQ was determined by 1 H NMR to be 45 and 90 mol%, respectively (Fig. S2). The prepared chitosan derivatives were used in the same cryo-gelation procedure as pure chitosan. Glutaraldehyde (GA), known for its efficacy with polyamines, was selected as the crosslinker for cryo-gelation. The amino groups of chitosan react with two of the aldehyde groups of GA, creating net points of a crosslinked polymer network. Subsequently, the formed Schiff-base imine groups of the network are reduced with sodium borohydride to improve the chemical stability of the cryogels. After cryogel formation, the water was removed via lyophilization. The detailed feed compositions of the series are listed in Table , and the detailed synthetic protocols are provided in the supporting information. Physical properties All synthesized cryogels exhibit an immediate water uptake at contact. This can be attributed to the fast diffusion of liquids into the cryogels due to their high surface-to-volume ratio. The water uptake of the cryogels occurred rapidly within 15 s and reached the swelling equilibrium within 30 s (Table and Fig. a). The final degree of swelling, expressed as the ratio of swollen to dry weight, increased with an increasing degree of quaternization from 66 to 105. The hydrophilicity of the cryogels was enhanced by introducing quaternary ammonium moieties. As a result, QCHI90, which has a higher degree of quaternization, exhibits the greatest degree of swelling. Conversely, the cellulose fiber reinforced cryogels exhibit a lower water uptake when compared to the chitosan cryogels, with a swelling degree in the range from 20 to 23. The absorbed water can be released again by squeezing the sponge-like materials without damaging the material. The stiffness of the cryogels was determined from the linear viscoelastic region of the storage modulus. The cryogels that do not contain cellulose in their structure exhibit a lower stiffness, in the range from 1.2 to 3.1 kPa, than those incorporating it (5.3–27.3 kPa). Furthermore, both classes of cryogels demonstrate an elastic compressive stress–strain behaviour, as illustrated in Fig. b. All cryogels can endure significant deformations of up to 90% without damage or permanent deformation. In contrast, the non-porous hydrogel NPCHI experienced mechanical fracture at approximately 24% compressive strain. The tissue-like elasticity of the cryogels, which prevents material damage even at high strains, can be attributed to their interconnected macroporous structure. In addition, the cryogel matrix was strengthened significantly by the incorporation of cellulose fibers, resulting in an increase in Young's modulus from 2.1–5.7 kPa to 10.3–47 kPa. This enhancement makes the cryogel composites more durable and suitable for practical use.
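As a concrete illustration of the two quantities reported in this section, the sketch below computes a gravimetric degree of swelling and estimates a compressive Young's modulus from the initial linear region of a stress–strain curve; the example numbers and the 10% strain window for the fit are assumptions for demonstration, not the measured data.

```python
# Illustrative sketch: degree of swelling and Young's modulus from compression data.
# Example values and the 10 % strain window for the linear fit are assumptions.
import numpy as np

def degree_of_swelling(m_swollen_g: float, m_dry_g: float) -> float:
    """Ratio of swollen to dry mass (dimensionless)."""
    return m_swollen_g / m_dry_g

def youngs_modulus(strain: np.ndarray, stress_kpa: np.ndarray, max_strain: float = 0.10) -> float:
    """Slope of the initial linear region of the compressive stress-strain curve (kPa)."""
    mask = strain <= max_strain
    slope, _intercept = np.polyfit(strain[mask], stress_kpa[mask], 1)
    return slope

print(degree_of_swelling(m_swollen_g=2.10, m_dry_g=0.02))            # ~105, hypothetical masses
strain = np.linspace(0, 0.3, 31)
stress = 12.0 * strain + np.random.normal(0, 0.05, strain.size)      # synthetic curve, ~12 kPa modulus
print(round(youngs_modulus(strain, stress), 1), "kPa")
```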
Cryogel morphology The pore morphology of the prepared cryogels and their composites was visualized by scanning electron microscopy (SEM), and a representative image of the physical appearance of NPCHI and CHI/Cell can be seen in Fig. S4. An interconnected, highly porous structure can be observed for the CHI , QCHI45 , and QCHI90 cryogels, confirming the large surface area of the material (Fig. ). The analysis of pore diameter using ImageJ revealed that the mean pore diameters of the cryogels CHI , QCHI45 , and QCHI90 were 78 ± 41 µm, 75 ± 38 µm, and 74 ± 41 µm, respectively. Additionally, the distribution of their pore diameters is shown in Fig. S5. In comparison, the cryogels that contain cellulose ( CHI/Cell , QCHI45/Cell, and QCHI90/Cell ) exhibit a porous structure with smaller pore sizes, and the interpenetration of cellulose fibers through the chitosan walls can be observed. Bacteria adsorption studies To investigate the influence of the hydrogels' porosity on their antibacterial effect, the materials were first tested against E. coli -contaminated water via optical density measurements at 600 nm (OD 600 ). In multiple parallel measurements in a 96-well plate, 1 mg of the material was incubated in 0.2 mL of medium for 60 min with a starting OD 600 value of 0.8, which roughly translates to 10⁸ CFU/mL. In the case of the non-porous hydrogel NPCHI, this value decreases to 0.6 after 60 min, which roughly translates to a 35% decrease in the E. coli concentration. In the case of macroporous cryogel CHI, the decrease in OD 600 intensity from 0.8 to 0.2 is observed after 15 min, reaching 0.1 after 60 min, which roughly translates to a decrease in E. coli concentration of about two orders of magnitude (Fig. a). This comparison demonstrates the more efficient adsorption and bacterial removal due to the high surface area and porous structure of the chitosan cryogel. The gram-positive bacterium S. aureus was selected to further investigate the spectrum of the bacterial adsorption effect of the chitosan cryogels CHI, QCHI45 and QCHI90. Similarly, the OD 600 values of the residual S. aureus suspension were monitored over time as shown in Fig. b. For cryogel CHI, the OD 600 value was reduced from 0.95 to 0.48 after 15 min, reaching 0.23 after 60 min. For the quaternized cryogels QCHI45 and QCHI90, a decrease in OD 600 from 0.95 to 0.2 and 0.14 respectively was observed after 60 min. The higher efficiency of cryogel QCHI90 for S. aureus can be attributed to its higher degree of quaternization. In comparison, the non-porous NPCHI hydrogel showed a decrease in OD 600 value from 0.95 to 0.66 after 60 min. This further demonstrates that the adsorption efficiency of the cryogels was enhanced due to the presence of the porous structure. The cryogels' ability to also adsorb the gram-positive bacterium S. aureus suggests that their bacterial adsorption effect is not specific to one type of bacteria. To further investigate how the incorporation of cellulose would influence the bacteria adsorption, the E. coli adsorption tests were similarly performed with QCHI45 and QCHI90 cryogels and their cellulose composites QCHI45/Cell and QCHI90/Cell . The OD 600 values of the E. coli suspension decreased upon contact with QCHI45 and QCHI90 cryogels, dropping from 0.50 to 0.08 after 15 min and further decreasing to 0.05 at 30 min (Fig. c).
In contrast, the cellulose composites QCHI45/Cell and QCHI90/Cell exhibit a slower decrease of the OD 600 value, dropping from 0.50 to 0.15 and 0.33 after 15 min, respectively. Nevertheless, they both reach the same final value around 0.05 after 60 min of incubation. The slower bacteria adsorption rate of the cellulose composites could be attributed to their lower degree of swelling, resulting in a slower diffusion of bacterial cells into the samples. However, the capability as well as the capacity of E. coli adsorption is not reduced by the addition of cellulose. Following the E. coli adsorption test, the CHI cryogel was immersed in a standard live/dead kit dye solution containing SYTO 9 and propidium iodide to allow the visualization of bacteria by confocal microscopy (Fig. a). The interconnected macroporous structure can be observed and the pore walls are covered with live (green) and dead (red) stained bacteria. The ratio of live to dead bacteria after 30 min is roughly 1:5, consistent with the OD 600 data (Fig. a). Complementary to the previous experiment, we aimed to visualize the adsorption and antibacterial activity of CHI , QCHI45 , and QCHI90 by incubating them for 24 h with live/dead-stained E. coli cells and imaging them via fluorescence microscopy. Due to the interference of cellulose with the dye-staining solution, the other cryogels could not be used. Figure b shows the 3D images from multiple recorded z-stacks of the cryogels, allowing visualization of the bacterial cells on the cryogel surfaces. While some of the bacterial cells are still intact and visible in green in the image of CHI cryogel, almost all bacterial cells are killed upon contact with QCHI90 cryogel, in agreement with the increase in quaternary amino groups of the materials. Antibacterial activity studies As the integration of cellulose into the cryogel matrix substantially enhanced the mechanical properties of the cryogels while showing similar adsorption capacities, a quantitative determination of the antibacterial activity of CHI/Cell , QCHI45/Cell , and QCHI90/Cell was performed. The cryogels were incubated with gram-negative bacterial cells ( E. coli ) and gram-positive bacterial cells (B. subtilis). Briefly, 30 mg of a cryogel sample was incubated with 1 mL bacterial suspension (10⁵ CFU/mL) at 37 °C for either 1 or 6 h. The experimental procedure is described in detail in the supporting information. The reduction of the number of colony-forming units after incubation is shown in Fig. . After 1 h of incubation with E. coli, the cryogels showed a 97–99% CFU reduction. Notably, the cryogel with the highest degree of quaternization ( QCHI90/Cell) decreased the number of E. coli CFU by 99.5%, corresponding to more than a 2-log reduction. This observation suggests a higher rate of antibacterial activity due to the increased number of quaternary amino groups. Upon increasing the incubation time to 6 h, the CFU reduction further increased to a 3-log reduction for all samples and a 4-log reduction for QCHI90/Cell . The antibacterial effect on the gram-positive B. subtilis was even greater: after 1 h, both CHI/Cell and QCHI45/Cell showed a 3-log reduction, and QCHI90/Cell had no detectable viable bacteria, corresponding to more than a 4.5-log reduction. As in the previous case, incubation for 6 h further increased the reduction, reaching more than 99.99% (4-log) CFU reduction for all cryogels.
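Because the antibacterial results are reported both as percentages and as log reductions, a short worked conversion may be helpful; the CFU counts below are illustrative (starting from 10⁵ CFU/mL) and show, for example, that 99.5% corresponds to roughly a 2.3-log reduction and 99.99% to a 4-log reduction.

```python
# Worked example: converting CFU counts into percent and log10 reductions.
# Counts are illustrative, assuming a starting load of 1e5 CFU/mL.
import math

def reductions(cfu_control: float, cfu_treated: float):
    percent = 100.0 * (1.0 - cfu_treated / cfu_control)
    log10_red = math.log10(cfu_control / cfu_treated)
    return percent, log10_red

for treated in (1_000, 500, 10, 1):
    pct, logr = reductions(1e5, treated)
    print(f"{pct:.2f}% reduction = {logr:.1f}-log reduction")
```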
While both gram-positive and gram-negative bacteria are affected by the polycationic antimicrobial surface of all the materials, the gram-positive B. subtilis appears generally more susceptible, possibly owing to its thicker peptidoglycan layer or to fewer protective structures on its cell surface. Gram-negative bacteria appear more resistant because of their outer membrane, a resistance that can be overcome by increasing the surface charge density of the materials. From a materials-development perspective, the results show that the biomaterial-based QCHI90/Cell sponge can disinfect more than 30 times its mass of bacteria-contaminated water within 1 h. Furthermore, it exhibits a 4-log reduction over 6 h. These results highlight the enhanced efficacy of the contact-killing material compared with previously reported polycation-modified chitosan cryogels, which achieved 65–95% bacterial reduction within 24 h or 95–98% within 12 h against E. coli and S. aureus .
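As a back-of-the-envelope check of the disinfection-capacity figure quoted above, the mass ratio follows directly from the plate-count conditions (30 mg of cryogel per 1 mL of suspension); the density of the aqueous suspension is assumed here to be about 1 g/mL.

```r
# Ratio of treated water mass to cryogel mass under the plate-count conditions
cryogel_mass_g <- 0.030        # 30 mg cryogel sample
water_mass_g   <- 1 * 1.0      # 1 mL suspension, density assumed ~1 g/mL

water_mass_g / cryogel_mass_g  # ~33, i.e. more than 30 times its own mass
```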
A series of antibacterial cryogels is presented. Their straightforward syntheses allow the feasible production of a macroporous biomaterial with a tunable degree of quaternary ammonium functionality and increased structural reinforcement via cellulose fibers. The introduction of cellulose into the cryogel composition reduces the degree of swelling while increasing the strain toughness of the materials. The antibacterial tests of the presented macroporous chitosan cryogels, compared with those of the non-porous version, show the strong influence of the increased surface area on antibacterial performance and indicate that the antibacterial mechanism of the materials is based on surface contact. Interestingly, gram-positive B. subtilis cells, which are protected by a thick peptidoglycan layer against physical and chemical stresses, including exposure to antimicrobial agents, are more affected by the antibacterial mechanism of the cryogels than gram-negative E. coli cells. Instead of penetrating the peptidoglycan layer, the surface-charged cryogels bind to the cell surface and exhibit a bacteriostatic effect. However, the live/dead staining fluorescence images also suggest that the membranes of adsorbed E. coli can be disrupted, as the cells show red spots indicating penetration of the red dye. The presented biomaterial-based cryogels exhibit at least a 3-log reduction within 6 h against gram-positive B. subtilis and gram-negative E. coli. The material with the highest surface charge, QCHI90/Cell, even exhibits a 2–4.5-log reduction within 1 h of incubation. This material can disinfect more than 30 times its mass of bacteria-contaminated water within 1 h. Its antibacterial activity, reinforced matrix, and feasible production render this macroporous biomaterial a promising candidate for water purification systems or medical applications such as wound dressings.
Supplementary Information.
|
Berkowitz’s pediatrics: A primary care approach, Sixth revised edition | 749c929e-8d85-444f-9d07-a6b68cecad6d | 7001062 | Pediatrics[mh] | |
Effects of soil pH on the growth, soil nutrient composition, and rhizosphere microbiome of Ageratina adenophora | 7679d253-5556-4892-93a3-460e4ff9ba9a | 11027909 | Microbiology[mh] | Ageratina adenophora (Spreng.) is a perennial semi-shrubby herb native to Mexico and Costa Rica. It is one of the major invasive weed species in Africa, Oceania, and Asia . It reproduces sexually by producing many seeds and has a strong ability to reproduce asexually through its roots and stems . It has become widely distributed throughout southwest China after invading Yunnan Province from the border with Myanmar in the 1940s. It occurs at altitudes of 165–2,915 m and in tropical, subtropical, central and northern subtropical, warm temperate, and temperate zones. It grows in open areas, forest margins, riverbanks, roadsides, grasslands, crop fields, pastures, woodlands, limestone shrublands, plantations, arid wastelands, and surrounding agricultural areas with different soil types . Its bud-bursting stage typically begins in late November, and the first blossom appears in mid- to late February of the following year. Its vegetative growth stage lasts from May to September, with the fastest growth from July to August, and flower-bud differentiation occurs in November . A. adenophora forms single-dominant communities in invaded areas, thus reducing biodiversity and destroying the balance of ecosystems . This weed has caused annual losses in China of 150 million US dollars from declines in livestock production and 400 million US dollars from losses of grassland ecosystem services . To date, various control methods, including mechanical and chemical control, the introduction of natural enemies, and biological substitution, have been largely unsuccessful . The influence of soil pH on the growth of invasive plants has attracted considerable attention. Soil pH plays an important role in plant growth and development . It affects several important soil biological and physicochemical processes, including the mineralization of soil organic matter, microbial enzyme activities, ammonia volatilization, bacterial nitrification, and denitrification. All these processes are related to the retention and movement of nutrients in the soil, and thus their availability to plants (reviewed by ). Nitrogen (N) is an important plant nutrient and is most readily available to plants where the soil pH is higher than 5.5. In acidic soils, nitrification is inhibited, reducing the availability of nitrate; under these conditions, plants must use ammonium as their source of N, thereby reducing N utilization efficiency . Maximum phosphorus availability occurs when the soil pH is between 6 and 7. In acidic soils, aluminum and iron, which form strong bonds with phosphate, are present, while at higher pH, when calcium is the dominant cation, soil phosphate tends to convert to insoluble calcium phosphate . Available potassium (K) decreases with any increase in soil pH . The ideal soil pH for plant growth is between 6.5 and 7.5. Soils that are too acidic or alkaline can negatively affect the physical properties of the soil and reduce the availability of nutrients to plants . Many studies have demonstrated that the application of lime to acidic soils neutralizes excess hydrogen ions and raises soil pH, resulting in greater crop productivity .
Understanding the mechanism by which soil pH influences plant growth is of theoretical and practical importance for the amelioration of soils with acid–base imbalances. It may also lead to improvements in soil fertility, better crop production, and the prevention and control of invasive plants . The soil microbiome is responsible for the decomposition and transformation of soil nutrients, which in turn affect their uptake and utilization by plants . Changes in soil pH can affect the microbiome's biomass, diversity, and structure . Fungi dominate in low-pH soils, while high-pH soils favour bacteria . The fungi:bacteria ratio in soil decreases with increasing soil pH; at pH 3 this ratio is about 9, but at pH 7 it falls to about 2, and soil microbial activity is inhibited at pH below 4.5 . High-throughput DNA sequencing has revealed that temperature, geographical location, and other factors may affect the composition of the soil microbiome; however, soil pH is the most important parameter . A. adenophora has strong allelopathy and competitiveness and changes the diversity and composition of the microbiome in invaded soil . This invasive weed alters the composition of soil nutrients in ways that support its own growth while inhibiting or reducing the growth and competitiveness of adjacent native plants . However, it is still unclear how soil pH affects the diversity and composition of its rhizosphere microbial community. In this study, pot experiments were performed to examine what effects soil pH might have on the growth of A. adenophora and how it affects the availability of soil nutrients, the antioxidant enzyme activities of its leaves, and the diversity, composition, and interactions of its rhizosphere microbiome. Such data will help in developing effective measures to control A. adenophora growth and its ecological impact.
Preparation of soils with different pH values for pot experiments

Original soil was collected from a scenery orchard located at Kunming University, Kunming, China (24°58′53″N, 102°47′54″E), which had not been invaded by A. adenophora. Soil samples were sieved through a two mm mesh to remove plant roots and debris, thoroughly homogenized, and air dried. To ensure sufficient nutrients for A. adenophora growth, the soil was mixed 1:1 (v/v) with humus. The chemical properties of the humus-mixed soil were as follows: pH 6.5, EC 385.6 (±14.5) µs cm−1, organic matter 16.90 (±1.82) g kg−1, total nitrogen (TN) 1.13 (±0.06) g kg−1, total phosphorus (TP) 0.27 (±0.03) g kg−1, total potassium (TK) 1.08 (±0.05) g kg−1, available nitrogen (AN) 291.50 (±41.9) mg kg−1, available phosphorus (AP) 3.66 (±0.22) mg kg−1, available potassium (AK) 32.76 (±2.41) mg kg−1. The pH of the humus-mixed soil was adjusted according to . Briefly, a soil neutralization curve was generated to determine the amount of hydrated lime powder or ferrous sulfate to be added to the potting soil. The amounts required to raise or lower the soil pH to the desired levels were determined from the regression equations obtained from pH measurements of the incubated soils: for pH 7.2 and 9.0, Y (pH value) = 34.25 X (g lime/g soil) + 7.10; for pH 5.5, Y (pH value) = −17.51 X (g ferrous sulfate/g soil) + 6.26. The pH-adjusted soils were irrigated and incubated in a greenhouse located at Kunming University (altitude 1,890 m; 24°58′N; 102°48′E). The greenhouse had natural light, with an average temperature of 25.4 (±12.1) °C and 75–95% relative humidity during the experiment. Soil moisture content in the pots was maintained at 20% by watering every 2 days. After 2 months, the soil pH values were determined.

Seed collection and seedling preparation

Seeds were collected from A. adenophora in an evergreen mixed forest located at Mao-Mao Qing, Xishan of Kunming, Yunnan Province, China (24°58′33.1″N, 102°37′04.9″E). The sampling area has been dominated by A. adenophora for the last 30 years and has an average altitude of 2,200 m, a mean annual precipitation of 932.7 mm, and a mean annual temperature of 15.6 °C. Seeds were collected from more than 10 individuals that were at least 5 m apart from one another and stored at 5 °C after air-drying at room temperature. They were germinated in seed beds with the humus-mixed soil in December 2021 in the same greenhouse at Kunming University.

Pot experiment design

Four pot planting treatments were designed. Soil samples with their pH adjusted to 5.5, 6.5 (original soil), 7.2, and 9.0 were used in the pot experiments. According to , these soils corresponded to acidic (pH 4.5–5.5), weakly acidic (pH 5.6–6.5), neutral (pH 6.5–7.5), and alkaline (pH 8.5–9.5) soils. In each pH treatment, three healthy, similar-sized seedlings approximately 10 cm tall were transplanted at equal distances from each other into plastic pots (height 20 cm, diameter 19 cm) containing 8 kg of soil. Each pot was watered to 2/9 of the soil's maximum water-holding capacity 24 h prior to transplantation, and then to 1/9 of the maximum water-holding capacity every 48 h until the end of the experiments, which were carried out in the same greenhouse for 180 days (from December 2021 to May 2022). Six replicate pots were used for each pH treatment for the determination of plant growth indices at 10, 90, and 180 days.
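As a worked illustration of the neutralization-curve approach described in the soil-preparation subsection above, the regressions can be inverted to estimate amendment doses for a target pH. The sketch below uses the reported equations directly; the function names are illustrative only.

```r
# Invert the neutralization-curve regressions to estimate amendment doses.
#   lime:            pH = 34.25 * (g lime  / g soil) + 7.10
#   ferrous sulfate: pH = -17.51 * (g FeSO4 / g soil) + 6.26
lime_g_per_kg  <- function(target_pH) 1000 * (target_pH - 7.10) / 34.25
feso4_g_per_kg <- function(target_pH) 1000 * (target_pH - 6.26) / (-17.51)

lime_g_per_kg(c(7.2, 9.0))   # ~2.9 and ~55 g hydrated lime per kg soil
feso4_g_per_kg(5.5)          # ~43 g ferrous sulfate per kg soil
```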
Chemical analysis of soil samples

Chemical analyses of soil samples were carried out according to the protocols described by . Briefly, TN (total nitrogen), TP (total phosphorus), and TK (total potassium) were determined using the Kjeldahl method, the molybdenum blue colorimetric method, and the flame photometric method, respectively. AN (available nitrogen), AP (available phosphorus), and AK (available potassium) were determined with the alkaline hydrolysis diffusion method, the molybdenum blue colorimetric method, and the flame photometric method, respectively. Soil pH (1:2.5 soil-to-water suspension) was measured using a pH meter (Mettler-Toledo International Inc., Columbus, OH, USA). Soil EC (1:5 soil-to-water suspension) was measured according to the 1:5 soil-to-water ratio conductivity method . Soil water-holding capacity was determined with the cutting ring method described by . All parameters were measured in triplicate (see "DNA extraction and PCR amplification of rhizosphere microbiomes" for more detail).

Plant growth indices

Of the 18 plants of A. adenophora in the six pots of each pH treatment, 16 plants (excluding the tallest and shortest ones) were chosen to determine above- and below-ground fresh and dry weights, plant heights, and root lengths at 10, 90, and 180 days after seedling transplanting. These data were analysed for each plant. To determine dry weight, plant components were placed in a hot-air oven at 60 °C until a constant weight was reached.

Antioxidant enzyme activities and redox marker levels in the leaves of A. adenophora

Aliquots of 0.2 g leaf samples were homogenized in 1.8 mL of 0.1 M phosphate buffer (pH 7.0) on ice, followed by centrifugation at 10,000 rpm for 10 min. Triplicate leaf samples from different pots were used for the analysis of each enzyme. The supernatants were collected, and the activities of oxidative enzymes and the levels of redox markers were determined. These were superoxide dismutase (SOD), peroxidase (POD), catalase (CAT), glutathione (GSH), which controls reactive oxygen species and is involved in the detoxification of methylglyoxal, and the lipid peroxidation marker malondialdehyde (MDA); they were analysed using the detection kits (Nanjing Jiangcheng Bioengineering Institute, Nanjing, China) A001-1-2, A084-3-1, A007-1-1, A061-2-1, and A003-1-2, respectively, according to the manufacturer's instructions. Quantification was performed with a microplate reader (Thermo Fisher Scientific, Waltham, MA, USA).

DNA extraction and PCR amplification of rhizosphere microbiomes

Rhizosphere soils of A. adenophora were collected using the shaking-root method . Equal amounts (80 g) from the three plants in a single pot were homogenized and divided into two aliquots. Rhizosphere soils were sampled at each soil pH at 10, 90, and 180 days after transplanting. One aliquot was air-dried and used for the analysis of soil chemical properties. The other was immediately snap-frozen in liquid nitrogen and stored at −80 °C for microbial community analysis (only for soil samples at 90 and 180 days after transplanting). DNA extraction, PCR amplification, and Illumina sequencing were carried out according to the protocols described by . Soil DNA was submitted to Majorbio Bio-Pharm Technology Co., Ltd., Shanghai, China for amplification and Illumina sequencing (NovaSeq PE 250) of the V3–V4 hypervariable region of the 16S rRNA genes and the ITS2 regions of the fungal rRNA genes.
The 16S rRNA and ITS amplicon sequences have been deposited in the NCBI Sequence Read Archive under the submission ID SUB13856309 and BioProject ID PRJNA1034221 .

Phylogenetic analyses of rhizosphere microbiomes

Phylogenetic analyses were carried out according to the methods described by , with modifications. The V3–V4 amplicons of the 16S rRNA genes and the ITS fragments were paired-end assembled and checked using Flash software to ensure that their sequences matched perfectly with the index sequences, had no more than one mismatch error in the forward primer sequences, and that the trimmed sequences were longer than 200 bp. QIIME was then used to analyse the 16S rRNA and ITS amplicons, generate ASV (amplicon sequence variant) clusters, and perform alpha-diversity analyses. The number of sequences per sample was normalized based on the number of sequences obtained from the smallest library for each community before analysis. The V3–V4 and ITS amplicon sequences were grouped into ASVs at the 97% identity threshold (3% dissimilarity level) using the RDP classifier (Release 11.1; https://sourceforge.net/projects/rdp-classifier/ ) and Unite (Release 6.0; https://unite.ut.ee/index.php ), respectively. Any ASV represented by ≤3 sequences was removed. Biodiversity indices, including the Chao1 index, Shannon index, and coverage ratios, were calculated with Mothur following the procedures provided, again applying a 97% identity threshold.

Statistical analyses

The Kruskal–Wallis test was used to assess differences in soil chemical properties, leaf enzyme activities, plant growth indices, microbial community abundances, and diversities between the different pH treatments. Pairwise Wilcoxon tests were used to determine the significance of differences (P < 0.05). Pearson's correlation analysis was performed between microbial abundances and soil pH or growth time of A. adenophora, and between soil pH and soil nutrient concentrations. For Pearson's correlation analyses, the normality of the data was confirmed by a Kolmogorov–Smirnov test. All analyses described above were performed with SPSS 17.0. PCoA (principal coordinates analysis) based on Bray–Curtis distances was used for cluster analyses of the prokaryotic and fungal community composition between the different pH treatments. db-RDA (distance-based redundancy analysis) based on Bray–Curtis distances, examining the correlations between soil pH, growth time, and the composition of the rhizosphere prokaryotic and fungal communities of A. adenophora, was performed with the vegan package in R. PERMANOVA analyses to differentiate the impacts of soil pH and days of planting A. adenophora on the composition of the rhizosphere microbiomes were also carried out in R. Microbial co-occurrence network analysis was conducted using R packages; a Pearson's coefficient greater than 0.6 (or less than −0.6) with a significance level below 0.05 indicated a significant correlation. Topological features, including the number of edges and the betweenness- and degree-centrality of each subnetwork, were calculated for analysis of the distance-decay relationship of the prokaryotic and fungal co-occurrence patterns. Network diagrams were subsequently generated using Gephi software (version 0.9.2) . Heatmaps showing changes in the degree-centrality values of the top 50 prokaryotic and fungal genera of the rhizosphere microbiomes of A. adenophora with soil pH and planting time (90 and 180 days) were generated with the pheatmap package in R.
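The library-size normalization and alpha-diversity indices described above were obtained with QIIME and Mothur; the sketch below shows a conceptually equivalent calculation in R with the vegan package, assuming an ASV count table `asv` with samples in rows (the object name is hypothetical).

```r
library(vegan)

# asv: integer ASV count matrix, samples in rows, ASVs in columns (hypothetical)
set.seed(1)
min_depth <- min(rowSums(asv))
asv_rar   <- rrarefy(asv, sample = min_depth)   # subsample each library to the smallest size

shannon  <- diversity(asv_rar, index = "shannon")          # Shannon index per sample
chao1    <- estimateR(asv_rar)["S.chao1", ]                # Chao1 richness per sample
coverage <- 1 - rowSums(asv_rar == 1) / rowSums(asv_rar)   # Good's coverage per sample
```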
Soil pH and its effects on the N, P, K concentrations of rhizosphere soil of A. adenophora

Soil pH affected the nutrient concentrations in the rhizosphere soils. At day 90, the rhizosphere soil of A. adenophora at pH 9.0 had higher (P < 0.05) available P and total K concentrations than those of the other three pH treatments, a lower (P < 0.05) total N content than those at pH 7.2 and 6.5, and a higher (P < 0.05) total P concentration than that at pH 7.2. At day 180, no significant differences (P > 0.05) were seen in the total and available concentrations of N, P, and K among the rhizosphere soils at the different pH values. We also measured the pH values of the rhizosphere soils of A. adenophora after 90 and 180 days. In all four soils, pH had decreased by day 10; the pH 9.0 soil continued to decrease up to day 90, whereas the soils with initial pH 5.5, 6.5, and 7.2 had risen again by day 90. By day 180, the soils with initial pH 6.5, 7.2, and 9.0 had increased slightly, whereas a further decrease was recorded for the pH 5.5 soil. A Spearman correlation between the rhizosphere soil pH values and the total and available concentrations of N, P, and K of the rhizosphere soils at days 0, 90, and 180 revealed no significant (P > 0.05) correlations.

Effects of soil pH on growth indices of A. adenophora

Soil pH also affected the growth indices of A. adenophora during the early and middle experimental periods, but not at its end. No significant differences in the growth indices of A. adenophora were found among soils of different pH at day 180, apart from a higher (P < 0.05) below-ground height in the pH 7.2 soil than in the pH 5.5 soil at day 10 and a higher (P < 0.05) above-ground height in the pH 7.2 soil than in the pH 9.0 soil at day 90.

Effect of soil pH on enzyme activities in A. adenophora leaves

Soil pH had fluctuating effects on the activities of the A. adenophora leaf antioxidant enzymes. At day 90, significant (P < 0.05) differences were detected in the activities of CAT (pH 9.0 > 5.5 > pH 7.2 > 6.5), GSH (pH 7.2 > pH 5.5), and SOD (pH 9.0 > pH 7.2). At day 180, CAT activities in plants grown at pH 7.2 were greater (P < 0.05) than in those grown at pH 5.5.

Effect of soil pH on the diversity of the A. adenophora rhizosphere microbiome

Soil pH affected the diversity of the prokaryotic (bacterial and archaeal) communities in the rhizosphere soil of A. adenophora. The Shannon indices of the communities in the pH 5.5 and pH 9.0 soils were lower (P < 0.05) than those at pH 7.2 at days 90 and 180, respectively. Soil pH did not affect (P > 0.05) the richness of the prokaryotic communities at days 90 or 180. However, except at pH 9.0, the Shannon indices at soil pH 5.5, 6.5, and 7.2 at day 180 were significantly higher (P < 0.05) than those at day 90. Soil pH also affected the diversity of the rhizosphere fungal communities of A. adenophora, but to a lesser extent. No statistically significant (P > 0.05) differences were apparent in the Shannon indices and richness at the different soil pH values at day 90. However, at day 180, both indices in the pH 9.0 soil were lower (P < 0.05) than those in the pH 7.2 soil, while the fungal indices detected at day 180 were not significantly (P > 0.05) different from those detected at day 90.
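The pH-treatment comparisons of these diversity indices follow the Kruskal–Wallis and pairwise Wilcoxon scheme given in the statistical methods; a minimal R sketch is shown below. The data frame `div` and its column names are hypothetical, and the P-value adjustment method is our assumption, since the original SPSS analysis does not state one.

```r
# div: one row per sample, with columns shannon (numeric) and
# pH_treatment (factor with levels "5.5", "6.5", "7.2", "9.0")
kruskal.test(shannon ~ pH_treatment, data = div)

# Pairwise Wilcoxon comparisons between pH treatments (P < 0.05 as significant)
pairwise.wilcox.test(div$shannon, div$pH_treatment, p.adjust.method = "holm")
```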
The coverage values of both the prokaryotic and fungal rhizosphere communities in the 27 soil samples analyzed at days 90 and 180 were all greater than 0.99, indicating that the sequencing depth covered the diversity of both rhizosphere microbiomes. Each rhizosphere soil sample contained 3,041–5,370 bacterial and archaeal ASVs and 320–818 fungal ASVs, with 141 prokaryotic and 75 fungal ASVs shared among all samples. PCoA of the ASVs of all rhizosphere microbiomes based on Bray–Curtis distances showed that the three replicate soil samples at each soil pH clustered tightly at days 90 and 180, with the day-90 communities clearly separated along axes 1 and 2 from those at day 180.

Effect of soil pH on the composition of the A. adenophora rhizosphere microbiome

Soil pH markedly affected the composition of the rhizosphere microbiome of A. adenophora. The prokaryotic communities in soils at pH 5.5 contained fewer (P < 0.05) Firmicutes (0.89% vs 2.38% at day 90; 1.10% vs 3.38% at day 180) and Thermotogota (0.48% vs 1.84% at day 90) and more (P < 0.05) Verrucomicrobiota (1.57% vs 0.45% at day 90) than those in the pH 7.2 soils. Those at pH 9.0 had more (P < 0.05) Proteobacteria (57.61% vs 51.71% at day 180) and fewer (P < 0.05) Planctomycetes (0.96% vs 2.43% at day 180) than those at soil pH 7.2. The fungal communities in soils at pH 5.5 had fewer (P < 0.05) Rozellomycota (0.25% vs 0.85%) than those at pH 7.2, and those at pH 9.0 contained more (P < 0.05) Chytridiomycota (19.23% vs 1.03% at day 180) and Mortierellomycota (16.94% vs 2.29% at day 180) than those at pH 7.2. We also compared the top 50 most abundant genera across all 27 soil samples and identified genera with significant differences in abundance (SDA) between pH treatments. There were one to five (average 3.3) and three to seven (average 4.3) SDA genera among the prokaryotic communities of the pH 5.5, 6.5, and 7.2 soils at days 90 and 180, respectively, and 12–28 (average 18.7) and eight to 24 (average 14.7) SDA genera between these three soils and the pH 9.0 soil at days 90 and 180, respectively. The largest difference (28 and 24 SDA genera at days 90 and 180, respectively) was between the pH 5.5 and pH 9.0 soils. Similarly, among the top 50 abundant fungal genera in all soil samples, there were one to five (average 2.7) and two to 12 (average 6.3) SDA genera among the fungal communities in soils at pH 5.5, 6.5, and 7.2 at days 90 and 180, respectively, and seven to 19 (average 13.7) and three to nine (average 5.3) SDA genera between these three soils and the pH 9.0 soil at days 90 and 180, respectively. The largest difference (19 and nine SDA genera at days 90 and 180, respectively) was, not surprisingly, between the pH 5.5 and the pH 9.0 soils.

Effects of soil pH on co-occurrence patterns of the rhizosphere microbiome of A. adenophora

To further investigate the effects of soil pH on the composition and population interactions of these rhizosphere microbiomes, we analyzed the correlations among the top 50 abundant prokaryotic and fungal genera in the three replicate soil samples of each pH treatment and calculated their node-level topological features, including total degree (edge) number and degree-, closeness-, and betweenness-centrality. A correlation network diagram was constructed for the abundant prokaryotic and fungal genera identified in all 27 soil samples. We found that soil pH affected the edge number and centrality values in both the prokaryotic and fungal networks.
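A minimal sketch of the correlation-network construction just described (Pearson r > 0.6 or < −0.6 at P < 0.05, followed by node-level topological features) is given below; `genus_abund` is a hypothetical genus-abundance matrix with samples in rows, and the Hmisc/igraph route shown is only one of several equivalent ways to implement the analysis in R.

```r
library(Hmisc)    # rcorr(): pairwise Pearson correlations with P values
library(igraph)

# genus_abund: samples (rows) x top 50 genera (columns) -- hypothetical object
rc  <- rcorr(as.matrix(genus_abund), type = "pearson")
adj <- (abs(rc$r) > 0.6 & rc$P < 0.05) * 1   # keep only strong, significant links
adj[is.na(adj)] <- 0
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
ecount(g)                          # total edge number of the subnetwork
degree(g, normalized = TRUE)       # degree centrality of each genus
betweenness(g, normalized = TRUE)  # betweenness centrality of each genus
```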
At day 90, the total edge numbers of the pH 5.5, 6.5, and 9.0 soils in the prokaryotic networks and of the pH 6.5, 7.2, and 9.0 soils in the fungal networks were all greater than that of the original bulk soil. At day 180, the total edge numbers of the soil pH samples had returned to the level of the control, except for a slightly higher total edge number in the pH 9.0 soil in the prokaryotic networks and a lower total edge number in the pH 7.2 soil in the fungal networks. Substantial differences were also seen in the degree-centrality values of the top 50 dominant genera in the rhizosphere soils among the different pH treatments. For example, the degree-centrality values of the bacterial genus Devosia in the bulk soil and in the pH 6.5, 5.5, 7.2, and 9.0 soils at days 90 and 180 were 0.37, 0.43, 0.21, 0.29, 0.63, 0.35, 0.27, 0.29, and 0.49, respectively.

Correlation effects between soil pH and A. adenophora's growth time on the composition of the rhizosphere microbiome

We analyzed the relationships between soil pH, A. adenophora growth time (90 and 180 days), and the ASV composition of the prokaryotic and fungal communities. The analyses showed that both factors significantly (P = 0.001) affected the composition of the prokaryotic and fungal communities. Moreover, the angles between the pH and planting-time vectors for the prokaryotic and fungal communities were 65.9 and 70.1 degrees, respectively, indicating that their impacts were only weakly positively correlated. Furthermore, we partitioned the impacts of soil pH and days of planting A. adenophora on the composition of the rhizosphere microbiomes using PERMANOVA. Soil pH accounted for 62.2% and 52.8% of the explained variation in the prokaryotic and fungal communities, respectively, and planting time accounted for the remaining 37.8% and 47.2%, respectively.
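The variance-partitioning figures quoted above come from PERMANOVA on Bray–Curtis dissimilarities; a minimal vegan-based sketch of this analysis and of the accompanying db-RDA is shown below, assuming an ASV table `asv` and a metadata frame `meta` with columns `pH` and `days` (the object and column names are hypothetical).

```r
library(vegan)

# asv: ASV table (samples x ASVs); meta: data frame with pH and days columns
bray <- vegdist(asv, method = "bray")

# PERMANOVA: partition community variation between soil pH and planting time
adonis2(bray ~ pH + days, data = meta, permutations = 999, by = "terms")

# db-RDA of the same model, used here for the constrained ordination
ord <- dbrda(bray ~ pH + days, data = meta)
anova(ord, by = "terms", permutations = 999)
```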
Effects of soil pH on growth of A. adenophora Soil pH is known to affect a soil's biological, chemical, and physical properties, which determine the availability of nutrients. Therefore, soil pH is recognized as the most important soil factor for plant growth. Studies on the effects of soil pH on plant growth have focused mainly on crops, while limited information has been obtained on how it affects the growth of invasive alien plants. We studied the effects of acidic (pH 5.5), weakly acidic (pH 6.5), neutral (pH 7.2) and alkaline (pH 9.0) soils on the growth of the weed A. adenophora by artificially changing the soil pH. Our results showed that significant differences in growth indices existed in the early (below-ground height of A. adenophora between the pH 7.2 and pH 5.5 soils at day 10) or middle (above-ground height of A. adenophora between the pH 7.2 and pH 9.0 soils at day 90) stages of the experiment. However, after 180 days, there were no statistically significant differences in above- and below-ground fresh and dry weights, plant heights, and root lengths of A. adenophora between those growing in the acid and alkaline soils and those in the neutral pH soil. These data indicate that A. adenophora has a wide pH tolerance range and is able to grow normally in both acidic and alkaline soils. As discussed previously, the ideal soil pH for plant growth is between 6.5 and 7.5, where it is easier for plants to obtain most of the necessary soil nutrients. For example, one study used similar pot experiments to examine the effects of soil pH (pH 4.5 to 8.0) on the invasive alien species Lygodium microphyllum in Florida, finding that after 60 days the plant biomasses, relative growth rates, photosynthesis, and specific leaf areas of L. microphyllum growing at pH 5.5 and 6.5 were significantly higher than those at the other soil pH values. Furthermore, another study examined the effect of soil pH in pot experiments (pH 5, 6, and 7) on the growth of the European invasive species Ambrosia artemisiifolia (ragweed); its data showed that plants growing at pH 5 and 6 were significantly shorter than those growing at pH 7. Thus, the unusual pH tolerance of A. adenophora reported here may be one crucial reason for its successful invasion, and it suggests that attempts to control its invasion by changing the soil pH (for example, by liming) will fail. Effects of soil pH on leaf enzyme activities and redox marker levels, and nutrient concentrations of rhizosphere soil In the pot experiments, we also measured enzyme activities and redox marker levels in the leaves of A. adenophora. When we compared the activities of four antioxidant enzymes and the levels of two redox markers in the leaves of A. adenophora grown in acid and alkaline soils with those in neutral pH soils for 90 and 180 days, significant differences existed in the activities of CAT and the levels of GSH. However, at day 180 only CAT activities in plants grown at pH 5.5 were lower (P < 0.05) than in those grown at pH 7.2. We also monitored the changes in nutrient concentrations in the rhizosphere soils of A. adenophora grown at different soil pH. Significant differences were found in AP, TK, TP, and TN between the rhizosphere soils of A. adenophora grown at pH 9.0 and at pH 7.2 at day 90. No differences (P > 0.05) were seen at day 180. Diversity, composition and interaction of the rhizosphere microbiota of A.
adenophora grown at different soil pH The rhizosphere is where plants absorb soil nutrients, and its microbiome has an important role in soil nutrient cycling and nutrient availability to plants. Its structure and function are influenced by plant rhizosphere exudates. Estimates suggest that plants secrete 20% of the carbon and 15% of the nitrogen they fix into the rhizosphere, thus providing energy and nitrogen sources for microbial growth there. These exudates include simple molecules, such as sugars and organic acids, as well as plant secondary metabolites and complex polymer secretions such as mucilage. The composition and quantity of rhizosphere exudates are known to vary with the plant species, its developmental stage, and external abiotic factors. A. adenophora can change both the composition and structure of the soil microbial community through rhizosphere exudates and litter degradation during its invasion process. In this study, we analyzed the diversity and composition of the rhizosphere microbiomes of A. adenophora grown at different soil pH for 0, 90 and 180 days by using Illumina high-throughput sequencing. We showed that both rhizosphere microbiome diversity and composition under the different soil pH conditions changed over 180 days. Under acid and alkaline conditions, the rhizosphere microbiomes differed in their diversity, phylum- and genus-level compositions, and population interactions to relieve pH stress. We also revealed that soil pH had a greater impact on the diversity and composition of the prokaryotic rhizosphere communities than on those of the fungal communities. This observation is generally in line with a previous long-term liming experiment, which found that both the relative abundances and diversities of bacteria were positively related to soil pH (pH 4.0−8.3), whereas the relative abundances of fungi were unaffected by pH and fungal diversity was only weakly related to soil pH. This may be due to the greater resistance of fungi to acidic pH compared with bacteria and archaea. Furthermore, judging from the degree-centrality values of key microbial genera, the importance of each prokaryotic or fungal genus in the rhizosphere networks of A. adenophora varied with soil pH and, at the same pH, between the two sampling times (90 and 180 days); degree centrality is proportional to a genus's importance in the formation of microbial networks. The over-time effects of soil pH on the diversity and composition of the rhizosphere microbiomes of A. adenophora are shown in the PCoA diagrams, where the prokaryotic and fungal communities at day 180 are clearly separated from those at day 90 for each soil pH. Our microbial network analyses also showed that A. adenophora growing in the acid and alkaline soils differentially adjusted the interactions between the dominant genera in its rhizosphere prokaryotic and fungal networks. The number of edges between nodes (genera) reflects the network density, which is proportional to the magnitude of the interactions between genera, and the total edge number indicates the stability of the microbiome. Compared with the bulk soil, interactions among the abundant prokaryotic and fungal genera in the rhizosphere microbiomes of A. adenophora growing in soils at all four soil pH values for 90 days were all enhanced, which suggests that the rhizosphere microbes displayed higher levels of interactions than did the bulk soil communities.
However, in the prokaryotic networks, interactions appeared to be further enhanced under the acid and alkaline conditions, with total edge numbers 1.51 and 1.37 times those under the neutral pH condition, respectively. In contrast, in the fungal networks, the interactions under the acid and alkaline conditions were weaker, with total edge numbers 0.74 and 0.78 times those under the neutral pH condition, respectively. These results suggest that the stability of the prokaryotic networks in the rhizosphere microbiomes of A. adenophora growing in both acid and alkaline soils was enhanced, whereas that of the fungal networks was reduced. Interestingly, the interaction levels between the abundant genera in both the prokaryotic and fungal networks at the different soil pH values mostly returned to the level of the bulk soil after 180 days, indicating that the adjustment of the microbial interaction networks of A. adenophora's rhizosphere microbiomes to soil pH was complete. Changes in both the structure and diversity of the rhizosphere microbiome at different soil pH were seen with A. adenophora. Our correlation analyses show that both soil pH and A. adenophora's growth time significantly (P = 0.001) affected the compositions of the prokaryotic and fungal communities in the rhizosphere of A. adenophora. However, the effects of soil pH and planting time were only weakly correlated, and soil pH had a greater impact than A. adenophora's growth time on the composition of the prokaryotic and fungal communities. Acidic (pH 5.5) and alkaline (pH 9.0) soils impacted the early- and middle-stage growth of A. adenophora. However, under these pH conditions A. adenophora changed the composition and structure of its rhizosphere microbiome through its rhizosphere exudates, improved the soil nutrient supply, thus eliminating the negative impact of soil pH, and maintained normal growth by the end of the pot experiment.
We show here that A. adenophora has a strong pH tolerance, being able to grow normally in both acidic (pH 5.5) and alkaline (pH 9.0) soils. When growing in acidic and alkaline soils, A. adenophora altered the composition, diversity, and interactions of its rhizosphere microbiome, especially the prokaryotic community. These changes helped to maintain a balanced nutrient supply to A. adenophora, allowing it to successfully adapt to pH stress in acid and alkaline soils. Thus, the unusual pH tolerance of A. adenophora may be one crucial reason for its successful invasion. These results suggest that attempts to control its invasion by changing the soil pH will fail.
10.7717/peerj.17231/supp-1 Supplemental Information 1: Significance (p values) of differences in relative abundance of the dominant prokaryotic phyla between the rhizosphere microbiomes of Ageratina adenophora grown in soils with different pH for 90 (a) and 180 (b) days ("−" indicates p > 0.05)
10.7717/peerj.17231/supp-2 Supplemental Information 2: Significance (p values) of differences in relative abundance of the dominant fungal phyla between the rhizosphere microbiomes of Ageratina adenophora grown in soils with different pH for 90 (a) and 180 (b) days ("−" indicates p > 0.05)
10.7717/peerj.17231/supp-3 Supplemental Information 3: Significance (p values) of differences in relative abundance of the dominant prokaryotic genera between the rhizosphere microbiomes of Ageratina adenophora grown in soils with different pH for 90 (a) and 180 (b) days ("−" indicates p > 0.05)
10.7717/peerj.17231/supp-4 Supplemental Information 4: Significance (p values) of differences in relative abundance of the dominant fungal genera between the rhizosphere microbiomes of Ageratina adenophora growing in soils with different pH for 90 (a) and 180 (b) days ("−" indicates p > 0.05)
10.7717/peerj.17231/supp-5 Supplemental Information 5: Heatmaps of the degree-centrality values of the top 50 dominant prokaryotic (A) and fungal (B) genera in the rhizosphere microbiomes of Ageratina adenophora grown at different pH for different times
10.7717/peerj.17231/supp-6 Supplemental Information 6: Raw data of plant growth indices, leaf enzymes, and soil physiochemical indices
Precision Oncology in Melanoma: Changing Practices
Cross-sectional imaging for staging is recommended in patients with stage IIB disease or higher . Although a multitude of imaging techniques is available, robust data clearly delineating which modality should be used in specific clinical contexts are lacking, and clinical practice is heterogeneous . Despite a lack of randomized clinical trial data, however, 18 F-FDG PET/CT has outperformed conventional imaging with contrast-enhanced CT for both initial staging and detection of recurrent disease in systematic reviews and metaanalyses . Comparisons to whole-body MRI have been mostly equivocal, with the current high costs and limited availability of whole-body MRI generally making it a less viable alternative . The exception is in cerebral metastasis, for which, because of the background high uptake of glucose, PET/CT is insensitive. For this reason, MRI of the brain is recommended for patients with stage III and IV disease and in all patients with symptoms concerning for intracranial metastases . After a histologic diagnosis of melanoma and subsequent clinical staging, surgical resection via wide local excision remains the mainstay of treatment for early-stage melanoma . Wide excision is performed down to, but not including, the muscle fascia, and the recommended width of resected margins is dependent on the Breslow thickness . Surgical staging of regional LNs is a critical component of prognosis and systemic treatment planning for patients with localized melanoma, although it is not required in all patients. Current guidelines recommend lymphatic mapping and SLNB for all patients with a Breslow thickness greater than 1 mm . SLNB can also be considered for patients with a Breslow thickness between 0.8 and 1.0 mm or less than 0.8 mm when other high-risk features are present . For patients undergoing SLNB, preoperative lymphatic mapping via lymphoscintigraphy allows for accurate identification of sentinel LNs for dissection and sampling. Initially pioneered by Morton et al., lymphoscintigraphy uses intradermal injections of dye and radiotracer at the primary tumor site, later allowing for intraoperative identification of sentinel LNs visually and by γ-probes . Historically, melanoma patients with a positive SLNB underwent complete LN dissection, which was associated with increased risk of lymphedema and decreased quality of life . Complete LN dissection versus observation was subsequently examined in 2 major multicenter clinical trials. The MSLT-NRASII and DeCOG-SLT studies showed that immediate complete LN dissection did not improve distant metastasis-free survival, relapse-free survival (RFS), or melanoma-specific survival compared with surveillance with delayed complete LN dissection at recurrence . Surveillance included ultrasonographic examination of the sentinel LN basin every 4 mo during the first 2 y and every 6 mo during years 3 through 5. The role of adjuvant therapy for high-risk patients with positive SLNB and other clinical features is discussed later in this article. Molecular evaluation for the presence of driver mutations is the standard of care for most patients receiving systemic therapy in any setting. The most commonly observed mutation occurs in the BRAF oncogene; since its discovery in 2002, multiple targeted therapies for the treatment of melanoma have been approved . 
BRAF is a serine-threonine kinase within the RAS-RAF-MEK-ERK pathway; approximately 50% of melanomas harbor BRAF V600 mutations resulting in constitutive activation of MEK and ERK, with BRAF V600E being the most commonly observed mutation . Other genomic mutations that have been identified include alterations in NRAS (∼28% of cases), NF1 (∼14% of cases), and KIT (∼15%–20% of cases of acral or mucosal melanoma) . Molecular testing can be performed on tissue obtained from the primary tumor sample, tumor-involved regional LNs, or distant metastases. Before the Food and Drug Administration (FDA) approval of the BRAF inhibitor vemurafenib in 2011, limited treatment options were available for metastatic melanoma, and mortality rates were high. The efficacy of oral BRAF inhibition was first shown by Flaherty et al. when multiple patients with BRAF V600E -mutated metastatic melanoma had complete or partial tumor regression with a BRAF inhibitor . The role of MEK inhibition in BRAF -mutant melanoma was then established when Flaherty et al. showed that trametinib, an oral MEK inhibitor, significantly improved progression-free survival (PFS) and OS compared with chemotherapy . Combination BRAF and MEK inhibition with dabrafenib plus trametinib was later shown to improve PFS and OS with a reduced toxicity profile and compared with dabrafenib alone . Multiple regimens using combination BRAF and MEK inhibition have been approved in the interim, and they remain important options for BRAF -mutant metastatic melanoma. After the observed success of BRAF and MEK inhibition in the metastatic setting, the role of targeted therapy was examined in the adjuvant setting. In the COMBI-AD phase III trial, 870 patients with completely resected, stage III BRAF -mutated melanoma were randomized to receive oral dabrafenib plus trametinib or placebo for 12 mo after surgery. The 5-y analysis showed that RFS and distant metastasis-free survival were both longer in patients receiving dabrafenib plus trametinib, leading to FDA approval in 2018 . Patients with BRAF -mutated melanoma are also eligible for adjuvant immunotherapy; the optimal adjuvant therapy has not yet been established, as combination BRAF and MEK inhibition has not been directly compared with immunotherapy. The use of neoadjuvant BRAF plus MEK inhibition remains an ongoing area of investigation. Therapies targeting mutations other than BRAF have been shown to be efficacious for metastatic melanoma in the second-line setting . Approximately 15%–20% of cases of acral and mucosal melanoma will harbor an activating mutation in KIT . The tyrosine kinase inhibitor imatinib has been shown to increase PFS, with an overall disease control rate of approximately 55% when studied in small clinical trials and remains an option for patients who progress while on or are ineligible for immunotherapy and harbor a KIT mutation . Rarely, patients with cutaneous melanoma may harbor TRK- gene fusions and can receive subsequent-line therapy with TRK -gene fusion inhibitors such as larotrectinib and entrectinib . Mutations in NRAS, an oncogene found in the MAP kinase pathway, are found in approximately 15%–20% of BRAF wild-type cutaneous melanoma. Patients harboring NRAS -mutant tumors should receive front-line immunotherapy; however, if they have evidence of disease progression, off-label use of the MEK inhibitor binimetinib can be considered in addition to clinical trial enrollment . 
Advances in cancer immunotherapy have revolutionized the management of multiple malignancies, including melanoma. Studies of various forms of immunotherapy have been occurring for decades; however, it is only the recent progress in knowledge of cancer immunology and the tumor immune microenvironment that has propelled paradigm-changing treatments forward . Immune checkpoint inhibitors (ICIs) targeting the cytotoxic T-lymphocyte antigen-4 (CTLA-4), programmed death-1 (PD-1), and lymphocyte activation gene-3 (LAG-3) molecules are among the most prominent examples of such headway. CTLA-4 and PD-1 are 2 key receptors expressed on T cells involved in regulating T-cell activity. CTLA-4 affects T cells during the initial stages of activation via its interaction with antigen-presenting cells. CTLA-4 competes with the T-cell costimulatory receptor cluster differentiation 28 for the binding of cluster differentiation 80 and 86, in turn decreasing T-cell activation . Shortly after their elucidation of the role of CTLA-4 in 1995, Allison et al. showed that blockade of the CTLA-4 molecule in mouse models resulted in enhanced antitumor immunity, spurring on multiple ensuing clinical trials in the years that would follow . PD-1 is an inhibitory receptor upregulated on activated T cells after long-term antigen exposure as part of homeostatic immune cell regulation . The binding of the programmed death ligand-1 (PD-L1) and programmed death ligand-2 molecules to the PD-1 receptor, which occurs primarily in chronically inflamed tissues, acts as a checkpoint for the adaptive immune system . However, tumor cells also exploit the PD-1/PD-L1 axis as a means of immune evasion. Recently, LAG-3 has emerged as a third promising target for ICIs. LAG-3 is expressed predominantly on exhausted T cells and negatively regulates T-cell activation and function. A multitude of randomized, placebo-controlled clinical trials examining the use of ICIs for the treatment of melanoma in the metastatic, adjuvant, and neoadjuvant settings has been performed after the discoveries of CTLA-4 and PD-1 by Allison and Nishimura . The FDA approved the anti–CTLA-4 monoclonal antibody ipilimumab for the treatment of metastatic melanoma in 2011 after significant improvements in OS in multiple landmark trials . The role of combination immunotherapy in the metastatic setting was then examined in the CheckMate 067 trial when Wolchok et al. showed that the combination of ipilimumab plus nivolumab, an anti–PD-1 monoclonal antibody, was both safe and effective . Data later published showed that OS was significantly increased in patients receiving ipilimumab plus nivolumab compared with ipilimumab alone . The role of LAG-3 inhibition as a component of combination immunotherapy for metastatic melanoma was recently established in the RELATIVITY-047 trial . In patients with unresectable, stage III or IV melanoma, participants were randomly assigned to receive either relatlimab plus nivolumab or nivolumab monotherapy, both administered every 4 wk. The median PFS was 10.1 mo in the relatlimab-plus-nivolumab group compared with 4.6 mo in the nivolumab monotherapy group. OS data remain immature at this time. The role for ICIs in the adjuvant setting was first assessed by Eggermont et al. in the EORTC 18071 trial . 
In this phase III trial of patients with completely resected, stage III disease at high risk for recurrence, administration of ipilimumab every 3 wk for 4 doses and then every 3 mo for up to 3 y after surgery resulted in significantly longer RFS, with a median RFS of 26.1 mo in the ipilimumab group versus 17.1 mo in the placebo group (hazard ratio of 0.75). Results from this trial resulted in FDA approval of ipilimumab in the adjuvant setting in 2015. The role of PD-1 blockade in the adjuvant setting was examined in the EORTC 1325/KEYNOTE-054 trial . In patients with completely resected stage III melanoma, pembrolizumab, an anti-PD-1 monoclonal antibody, administered every 3 wk for a total of 18 doses, was compared with placebo administered in a similar fashion. The 18-mo RFS was 71.4% in the pembrolizumab group versus 53.2% in the placebo group; in subgroup analysis, the risk of recurrence or death was shown to be 46% lower in patients with PD-L1–positive tumors treated with pembrolizumab than in those treated with placebo. Data from updated analyses published in 2021 confirmed improvement in RFS and distant metastasis-free survival with pembrolizumab, though OS data have not yet been reported . The CheckMate 238 trial compared nivolumab with ipilimumab in a head-to-head fashion; 906 patients with completely resected stage III or IV melanoma were randomized to receive either adjuvant nivolumab or ipilimumab . Therapy was administered for up to 1 y or until recurrence of disease or unacceptable treatment-related toxicity. Initially, results favored nivolumab, as administration resulted in significantly longer RFS and distant metastasis-free survival than for ipilimumab. However, in an updated analysis published 4 y later, there was no significant difference in OS between the 2 cohorts, although RFS remained higher in the nivolumab arm (51.7% vs. 42.5%, hazard ratio of 0.71) . After significant improvements in PFS and OS with use of combination immunotherapy in the metastatic setting , the role of ipilimumab plus nivolumab in the adjuvant space was studied in the CheckMate 915 trial. Patients with completely resected, stage III or IV disease were randomized to receive either ipilimumab plus nivolumab or nivolumab alone for up to 1 y . Ipilimumab plus nivolumab did not result in an improvement in RFS compared with nivolumab monotherapy (64.6% vs. 63.2%, hazard ratio of 0.92). Furthermore, rates of grade 3 or 4 treatment-related adverse events were higher in the combination arm than in the nivolumab monotherapy arm (32.6% vs. 12.8%). The role of combination immunotherapy incorporating a LAG-3 inhibitor is currently being studied in the RELATIVITY-098 trial, in which the anti-LAG-3 agent relatlimab plus nivolumab is being compared with nivolumab monotherapy (NCT05002569). The role of personalized medicine in adjuvant immunotherapy for melanoma is expanding. In KEYNOTE-942, Weber et al. recently showed that creation and administration of an individualized messenger RNA vaccine plus pembrolizumab improved RFS compared with pembrolizumab alone . The messenger RNA-4157 vaccine, generated by harvesting patient tumor samples, performing whole-exome sequencing, and identifying tumor-specific mutations, encodes up to 34 neoantigens in a lipid nanoparticle formulation. 
Similarly, administration of autologous tumor-infiltrating lymphocytes, initially extracted from a patient's melanoma and then expanded ex vivo, was recently FDA-approved for metastatic melanoma previously treated with at least one line of systemic therapy. Recently, practice-changing clinical trials exploring the role of neoadjuvant immunotherapy have demonstrated the benefit of ICIs in this setting. In the landmark phase II SWOG S1801 trial, 313 patients with resectable stage IIIB–IV melanoma with clinically detected or radiographically evident LN involvement were randomized to receive either pembrolizumab administered every 3 wk for a total of 3 doses before surgery, followed by 15 doses of adjuvant pembrolizumab, or upfront surgical resection followed by adjuvant pembrolizumab every 3 wk for a total of 18 doses. At the median follow-up of 15 mo, patients who received neoadjuvant pembrolizumab had improved event-free survival (EFS) compared with those who received adjuvant therapy alone, with 2-y EFS of 72% versus 49%, respectively (hazard ratio of 0.58). The benefit of neoadjuvant pembrolizumab was seen across all subgroups, including patients with BRAF-mutated and wild-type disease, and treatment-related toxicities resulting in an inability to undergo surgery occurred in less than 10% of patients. OS data from this trial remain immature at this time. Given the historical lack of effective treatments for melanoma, there has been a paucity of randomized clinical trials conducted on imaging surveillance. As such, little evidence currently exists regarding the optimal modality or timeline. As a result, the role of imaging surveillance in melanoma lacks expert agreement, reflected in national and international guideline heterogeneity. Consistent among guidelines, however, is the recommendation against surveillance imaging in early-stage disease, for which imaging detection rates have been shown to be lower than false-positive rates. Most guidelines do recommend surveillance imaging for stage IIB or IIC disease, for which rates of distant recurrence are significantly increased. Regarding the preferred imaging modality, evidence for the improved accuracy of PET/CT compared with conventional contrast-enhanced CT in melanoma surveillance is strong. A 2011 systematic review demonstrated that the sensitivity and specificity of PET/CT were 86% and 91%, respectively, compared with 63% and 78% for CT of the chest, abdomen, and pelvis. The continued recommendation for CT of the chest, abdomen, and pelvis by some guidelines has been explained as a reflection of the limited availability of PET/CT in certain regions and demographics. Contrast-enhanced CT, and potentially whole-body MRI, can be considered in this context. In patients treated with ICIs, 18F-FDG PET appears to be particularly useful in the monitoring and prediction of response. Building on RECIST, the most extensively validated CT criteria in use for solid tumors, multiple 18F-FDG PET criteria have been adapted for the interpretation of immunotherapy response, most recently the PET Response Evaluation Criteria for Immunotherapy and immune PERCIST. Though validation is ongoing, all of these criteria ultimately indicate that metabolic response on 18F-FDG PET is strongly associated with survival outcomes.
The recently published joint guidelines from the European Association of Nuclear Medicine, the Society of Nuclear Medicine and Molecular Imaging, and the Australian and New Zealand Society of Nuclear Medicine therefore endorse 18F-FDG PET/CT as the imaging modality of choice for baseline disease assessment before the start of immunotherapy. 18F-FDG PET/CT is also recommended after approximately 3–4 cycles of treatment in the case of clinical deterioration or suspected progression on other imaging, before restarting treatment in cases of temporary interruption, and before treatment discontinuation. Given the novel mechanisms of action of ICIs, atypical treatment responses not seen with use of traditional chemotherapy or other targeted therapies have been observed; these include pseudoprogression, hyperprogression, and immune-related adverse events. 18F-FDG PET/CT is again uniquely advantageous in this context given its ability to characterize metabolically active tissue. Pseudoprogression, defined as an initial transient increase in tumor metabolic activity and morphologic size followed by a true response, occurs in approximately 10% of patients treated with ICIs. It is seen most frequently within the first 4–6 wk after initiation of treatment but can occur up to several months thereafter. Differentiating pseudoprogression from true progression requires follow-up imaging 4–8 wk later. Conversely, hyperprogression, an atypical, true acceleration of tumor growth on imaging after initiation of immunotherapy, represents rapid treatment failure and has been observed in melanoma and other malignancies. Efforts to define and understand the mechanism of this phenomenon continue; however, it is thought to occur in approximately 4%–26% of cases when defined by a doubling of tumor volume or growth rate on imaging. Finally, 18F-FDG PET has also been shown to be uniquely effective in identifying immune-related adverse events, including detection before the onset of symptoms. Recently, the use of high-intensity focused ultrasound (HIFU) as a noninvasive form of tumor ablation has been proposed as an adjunct therapy for multiple solid tumors, including melanoma. An ultrasound transducer positioned outside the body or within a body cavity is used to focus high-intensity ultrasound beams on a small region of tumor, resulting in extreme intratumoral temperature elevations in a matter of seconds and ultimately causing necrosis of tumor cells. In addition to a reduction in tumor burden, HIFU results in the release of tumor antigens and damage-associated molecular patterns, triggering both innate and adaptive immune responses. Studies evaluating optimization of HIFU protocols and the incorporation of immunotherapy alongside HIFU are ongoing (NCT04116320). Current investigations into the role of artificial intelligence and machine learning in 18F-FDG PET/CT are promising. One area in which machine learning has proven particularly useful is the identification of novel metabolic markers on 18F-FDG PET to guide treatment strategies. Multiple studies have shown that one such parameter, total metabolic tumor volume, calculated by both manual and automated lesion segmentation, correlates significantly with poor treatment response to pembrolizumab.
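Total metabolic tumor volume of the kind referred to above is typically derived by thresholding the SUV image and summing the segmented voxel volumes, often together with total lesion glycolysis. The sketch below shows only that arithmetic on a synthetic volume; the fixed SUV of 2.5 used as a threshold and the voxel size are assumptions for illustration (relative thresholds such as a percentage of SUVmax are also widely used), and real pipelines restrict the calculation to lesions rather than the whole field of view.

```python
# Sketch: total metabolic tumor volume (MTV) and total lesion glycolysis (TLG)
# from a 3D SUV array, using a simple fixed-threshold segmentation.
import numpy as np

def mtv_tlg(suv: np.ndarray, voxel_volume_ml: float, suv_threshold: float = 2.5):
    mask = suv >= suv_threshold              # voxels counted as metabolically active tumor
    mtv_ml = mask.sum() * voxel_volume_ml    # metabolic tumor volume in mL
    suv_mean = suv[mask].mean() if mask.any() else 0.0
    return mtv_ml, mtv_ml * suv_mean         # TLG = MTV x SUVmean of the segmented voxels

# Synthetic example: a 64^3 SUV volume with 4 x 4 x 4 mm voxels (0.064 mL each)
suv = np.random.default_rng(3).gamma(shape=1.5, scale=1.0, size=(64, 64, 64))
print(mtv_tlg(suv, voxel_volume_ml=0.064))
```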
Another marker with apparently significant prognostic value is high metabolic activity of hematopoietic tissues, such as bone marrow and spleen, which has been correlated with poor response to ICIs and an immunosuppressive environment in multiple studies. The development of novel radiolabeled tracers to detect malignancy, guide therapy, and identify cellular microenvironments is ongoing. Numerous tracers are being studied in various preclinical and clinical stages of development. Melanin imaging, although melanin has been suggested in recent studies to be a more complex target than initially thought, is highly specific to melanoma. Labeling benzamide derivatives such as 5-FPN (18F-5-fluoro-N-[2-(diethylamino)ethyl]picolinamide) and MEL050 (18F-6-fluoro-N-[2-(diethylamino)ethyl]pyridine-3-carboxamide) with 18F has demonstrated in vivo imaging performance superior to that of 18F-FDG. Melanin-targeted therapy has also demonstrated early promise, with 131I- and 188Re-labeled molecules demonstrating antitumor efficacy with limited toxicities. Multiple radionuclides, including 68Ga, 89Zr, and 18F, have also been attached to monoclonal antibodies against PD-1 and PD-L1, allowing for a noninvasive whole-body map of immune checkpoint proteins. These radiolabeled tracers have the potential to be used to stratify treatment candidates, monitor therapy, and create a pathway for targeted therapy.
Reliability and validity of pelvic floor muscle strength assessment using the MizCure perineometer
An important role of the pelvic floor muscles (PFM) is to maintain urinary continence and support the pelvic organs. Voluntary PFM contraction is evaluated by assessing pelvic floor elevation, muscle strength, endurance, and coordination. In clinical practice, digital vaginal palpation is the technique most often used to assess PFM function. In addition, the perineometer and ultrasound imaging are used as diagnostic tools. However, the sensitivity of vaginal palpation for quantifying sustained contractions and discriminating variations in force is lower than that of other techniques, and it has been shown to have limited reliability even when performed by experienced examiners. Vaginal pressure measurement is a commonly used quantitative evaluation of PFM strength, and vaginal pressure is significantly lower in women with stress incontinence than in healthy women. It is also used as a teaching tool and as motivation for conducting training exercises. Thus, vaginal pressure measurement has high clinical importance. It is necessary to perform an objective evaluation of the PFM to be able to properly treat, give feedback, and document changes in PFM function during rehabilitation. Additionally, PFM evaluation is recommended by the International Continence Society and considered essential to assess a post-therapeutic intervention effect. The Peritron (Laborie, Mississauga, ON, Canada) perineometer is commonly available for clinical practice and research. However, in order to import the Peritron into some countries, complicated purchasing procedures are required because this device is considered medical equipment. Therefore, there are some barriers to its use for evaluation and research. The MizCure (OWOMED, Seoul, Korea) is sold as a PFM training and biofeedback device. It can be easily purchased online by private individuals. The MizCure is used for training in some urology, gynecology, and urogynecology clinics. The MizCure uses a different unit of measurement, which complicates any comparison of measurements obtained with the MizCure and Peritron perineometers. Whether the measurements obtained using the two perineometers, even with probes of similar diameters, are correlated is not known. In a previous study using the Peritron (Cardio-Design, Oakleigh, VIC, Australia), it was found to have good inter-rater reliability, intra-rater reliability, and validity. However, the reliability and validity of the MizCure have not been verified. The purpose of this study was to clarify the reliability and validity of PFM strength assessment using the MizCure perineometer in healthy women. Subjects A convenience sample of 20 healthy women was recruited for this study. A sample size calculation showed that 17 subjects were needed for a correlation greater than 0.7, an alpha level of 5%, a power of 90%, and an effect size of 0.6. In this study, the sample size was set at 20, taking into account 3 dropouts. This study of intra- and inter-rater reliabilities and agreement was performed at our institute from September 2018 to December 2019. The patients included in this study were nulliparous, non-pregnant women, aged 20–45 years, with a body mass index < 25 kg/m2, no verified gynecological complaints or disease, and the ability to correctly contract the PFM.
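The reported requirement of 17 subjects is consistent with the standard Fisher z approximation for detecting a correlation of 0.7 against zero with a two-sided 5% alpha and 90% power. The sketch below reproduces that number; the exact formula the authors used is not stated, so this is a plausible reconstruction rather than their calculation.

```python
# Sketch: sample size for detecting a correlation r against 0 (Fisher z approximation).
import math
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.90) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)        # two-sided test
    z_beta = norm.ppf(power)
    c = math.atanh(r)                        # Fisher z transform of the target correlation
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

print(n_for_correlation(0.7))                # -> 17, matching the text
```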
Women with pelvic organ prolapse or who had undergone pelvic reconstructive surgery, those who had symptoms of vaginal infection, intolerance to condoms, or allergy to the gel used in the procedure, as well as those involved in PFM training, were excluded. The present study was approved by the Scientific Ethics Committee of our institute (#018-0056), and all patients provided their informed consent. Assessment tools and procedures Assessment tools Manometry 1 The Peritron 9300 perineometer (Laborie), shown in Fig. , was used in this study. The Peritron perineometer has a conical vaginal probe, 26 mm (pressurized: 33 mm) in diameter and 110 mm in length, with a measurable length of 55 mm. The vaginal probe is connected to the perineometer's main body with an 80-cm plastic tube. When the probe is compressed by vaginal pressure, a pressure sensor measures the vaginal pressure. The probe consists of an air-filled silicone rubber sensor, and the unit measures pressure in cmH2O. The occlusive pressure readings obtained by the perineometer provide surrogate measures of PFM strength. Manometry 2 The MizCure perineometer (OWOMED, Korea) is a conical vaginal insert, 21 mm (pressurized: 27 mm) in diameter and 79 mm in length, with a measurable length of 50 mm (Fig. ). The probe was connected to the perineometer's main body via a 75-cm silicone tube. When the silicone tube is connected to the perineometer's main body and the power is turned on, air enters the probe and the probe expands. When pressure is applied from inside the vagina to the inflated probe sensor, the pressure sensor measures the vaginal pressure. The inflation pressure can be set to 140 or 150 mmHg; in the present study, it was set to 140 mmHg. The unit measures pressure in mmHg. Procedure Test 1 First, before starting the tests, two-dimensional (2D) transperineal ultrasound was used to confirm that each woman was able to contract the PFM correctly, based on measurements including the anteroposterior diameter of the urogenital hiatus (measured at rest and during PFM contraction) (Fig. ). In our group, we demonstrated that 2D transperineal ultrasound is useful to assess PFM function in patients with pelvic organ prolapse. Yang et al. found a close correlation between reduced urogenital hiatus diameter in the sagittal plane and the modified Oxford grading scale. The modified Oxford grading scale is defined as follows: 0 = no contraction; 1 = flicker; 2 = weak; 3 = moderate; 4 = good; and 5 = strong. Second, vaginal pressure was measured with the Peritron and MizCure perineometers. The order of use of the two vaginal manometers and the order of the two test positions were each randomized. Testing was conducted with the women in two positions: the supine position, with flexed and slightly abducted legs, and the standing position, with straight and slightly abducted legs. Before data were acquired by the perineometers, the participant inserted the probe, which was covered with a condom and lubricated with hypoallergenic gel, into her vaginal cavity. The participants were instructed to place the probe inside the vagina at a location where 0.5–1.0 cm of the probe was visible from the outside of the introitus. PFM strength was then evaluated by a maximum voluntary contraction, as measured by squeeze pressure. The instruction used for each contraction was 'squeeze and lift the PFM as much as you can'.
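Because the Peritron reports pressure in cmH2O and the MizCure in mmHg, readings must first be expressed on a common scale before their values can even be compared numerically. The conversion below is only the physical unit conversion (1 mmHg corresponds to about 1.36 cmH2O); whether the two devices then yield interchangeable measurements is precisely what the validity analysis in this study examines, and the example reading is hypothetical.

```python
# Unit conversion between the two perineometers' pressure scales.
CMH2O_PER_MMHG = 1.35951          # 1 mmHg corresponds to about 1.36 cmH2O

def mmhg_to_cmh2o(p_mmhg: float) -> float:
    return p_mmhg * CMH2O_PER_MMHG

def cmh2o_to_mmhg(p_cmh2o: float) -> float:
    return p_cmh2o / CMH2O_PER_MMHG

# Example: a hypothetical MizCure squeeze pressure of 30 mmHg expressed in Peritron units
print(round(mmhg_to_cmh2o(30.0), 1))   # ~40.8 cmH2O
```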
Vaginal pressure testing was performed with three repetitions of maximum voluntary contractions that each lasted for 3 s, with a 3-s rest between contractions. A 2-min rest break was then taken. Visible co-contraction of the transversus abdominis muscle was permitted, as long as there was no pelvic tilting. Examiner 1 was a physiotherapist with 11 years of clinical experience. Examiner 2 was a physiotherapist with 5 years of clinical experience and 4 years of educational experience in a university institution. Test 2 All women were evaluated twice. After the testing protocol was completed in the first session (Test 1), all subjects repeated the protocol 2–6 weeks later (Test 2). In Test 2, vaginal pressure measurements were performed using only the MizCure perineometer. The order of the two test positions (supine, standing) and the order of the two examiners were each assigned randomly. Statistical analysis Within- and between-session intra-rater reliabilities of the vaginal pressure values (maximum voluntary contraction) were analyzed using intraclass correlation coefficients, ICC (1,1), and inter-rater reliability was analyzed using ICC (2,1). Validity was assessed by correlating the vaginal pressure values of the Peritron and the MizCure, using Pearson's product-moment correlation coefficient when the data were normally distributed and Spearman's rank correlation coefficient when they were not. Statistical analyses were performed using the free statistical analysis software R, version 2.12.0 ( https://personal.hs.hirosaki-u.ac.jp/pteiki/research/stat/S/ ), with the level of significance set at 5%.
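The ICC (1,1) and ICC (2,1) forms cited above correspond to the Shrout and Fleiss one-way and two-way random-effects models for single measurements. The sketch below implements those textbook formulas directly on a subjects-by-raters matrix; it is a generic illustration with synthetic data, not the authors' R code.

```python
# Sketch: Shrout-Fleiss ICC(1,1) and ICC(2,1) for an n-subjects x k-raters matrix.
import numpy as np

def icc_1_1_and_2_1(x: np.ndarray):
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ss_total = ((x - grand) ** 2).sum()
    ss_subjects = k * ((row_means - grand) ** 2).sum()
    ss_raters = n * ((col_means - grand) ** 2).sum()
    ss_within = ss_total - ss_subjects                 # one-way within-subjects SS
    ss_error = ss_total - ss_subjects - ss_raters      # two-way residual SS
    bms = ss_subjects / (n - 1)                        # between-subjects mean square
    wms = ss_within / (n * (k - 1))
    rms = ss_raters / (k - 1)                          # between-raters mean square
    ems = ss_error / ((n - 1) * (k - 1))
    icc11 = (bms - wms) / (bms + (k - 1) * wms)
    icc21 = (bms - ems) / (bms + (k - 1) * ems + k * (rms - ems) / n)
    return icc11, icc21

# Synthetic example: 20 subjects measured twice (two raters or two sessions)
rng = np.random.default_rng(4)
truth = rng.normal(40, 10, size=(20, 1))               # hypothetical "true" squeeze pressures
ratings = truth + rng.normal(0, 5, size=(20, 2))       # two noisy measurements each
print(icc_1_1_and_2_1(ratings))
```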
Manometry 2 The MizCure perineometer (OWOMED, Korea) is a conical vaginal insert, 21 mm (pressurized: 27 mm) in diameter and 79 mm in length, with a measurable length of 50 mm (Fig. ). The probe was connected to the perineometer’s main body via a 75-cm silicone tube. When a silicone tube is connected to the perineometer’s main body and the power is turned on, air enters the probe and the probe expands. When pressure is applied to the inside of the vagina to the inflated probe sensor, the pressure sensor measures the vaginal pressure. The inflation pressure can be set to 140 or 150 mmHg. In the present study, the inflation pressure was set to 140 mmHg. The unit measured pressure in mmHg. Procedure Test 1 First, before starting the tests, a transperineal technique using two-dimensional (2D) ultrasound confirmed that each woman was able to contract the PFM correctly based on 2D transperineal ultrasound measurements, including the anteroposterior diameter of the urogenital hiatus (measured at rest and during PFM contraction) (Fig. ). In our group, we demonstrated that 2D transperineal ultrasound is useful to assess PFM function in patients with pelvic organ prolapse . Yang et al. found a close correlation between reduced urogenital hiatus diameter in the sagittal plane and the modified Oxford grading scale . The modified Oxford grading scale is defined as follows: 0 = no contraction; 1 = flicker; 2 = weak; 3 = moderate; 4 = good; and 5 = strong . Second, vaginal pressure was measured with the Peritron and MizCure perineometers. The order of use of the two vaginal manometers and the two test positions were each performed randomly. Testing was conducted with the women in two positions: the supine position, with flexed and slightly abducted legs, and the standing position, with straight and slightly abducted legs. Before data were acquired by the perineometers, the participant inserted the probe, which was covered with a condom and lubricated with hypoallergenic gel, into her vaginal cavity. The participants were instructed to place the probe inside the vagina to a location where 0.5–1.0 cm of the probe was visible from the outside of the introitus. PFM strength was then evaluated by a maximum voluntary contraction, as measured by squeeze pressure. The instruction used for each contraction was ‘squeeze and lift the PFM as much as you can’. Vaginal pressure testing was performed with three repetitions of maximum voluntary contractions that each lasted for 3 s, with a 3-s rest between contractions. A 2-min rest break was then taken. Visible co-contraction of the transversus abdominis muscle was permitted, as long as there was no pelvic tilting . Examiner 1 was a physiotherapist with 11 years of clinical experience. Examiner 2 was a physiotherapist with 5 years of clinical experience and 4 years of educational experience in a university institution. Test 2 All women were evaluated twice. After the testing protocol was completed in the first session (Test 1), all subjects repeated the protocol 2–6 weeks later (Test 2). In Test 2, vaginal pressure measurements were performed using only the MizCure perineometer. The order of the two test positions (supine, standing) and the two examiners were each assigned randomly. Manometry 1 The Peritron 9300 perineometer (Laborie), shown in Fig. , was used in this study. The Peritron perineometer has a conical vaginal probe, 26 mm (pressurized: 33 mm) in diameter and 110 mm in length, with a measurable length of 55 mm. 
The vaginal probe is connected to the perineometer’s main body with an 80-cm plastic tube. When the probe is compressed by vaginal pressure, a pressure sensor measures the vaginal pressure. The probe consisted of an air-filled silicone rubber sensor and measured pressure in cmH 2 O. The occlusive pressure readings obtained by the perineometer provide surrogate measures of PFM strength. Manometry 2 The MizCure perineometer (OWOMED, Korea) is a conical vaginal insert, 21 mm (pressurized: 27 mm) in diameter and 79 mm in length, with a measurable length of 50 mm (Fig. ). The probe was connected to the perineometer’s main body via a 75-cm silicone tube. When a silicone tube is connected to the perineometer’s main body and the power is turned on, air enters the probe and the probe expands. When pressure is applied to the inside of the vagina to the inflated probe sensor, the pressure sensor measures the vaginal pressure. The inflation pressure can be set to 140 or 150 mmHg. In the present study, the inflation pressure was set to 140 mmHg. The unit measured pressure in mmHg. The Peritron 9300 perineometer (Laborie), shown in Fig. , was used in this study. The Peritron perineometer has a conical vaginal probe, 26 mm (pressurized: 33 mm) in diameter and 110 mm in length, with a measurable length of 55 mm. The vaginal probe is connected to the perineometer’s main body with an 80-cm plastic tube. When the probe is compressed by vaginal pressure, a pressure sensor measures the vaginal pressure. The probe consisted of an air-filled silicone rubber sensor and measured pressure in cmH 2 O. The occlusive pressure readings obtained by the perineometer provide surrogate measures of PFM strength. The MizCure perineometer (OWOMED, Korea) is a conical vaginal insert, 21 mm (pressurized: 27 mm) in diameter and 79 mm in length, with a measurable length of 50 mm (Fig. ). The probe was connected to the perineometer’s main body via a 75-cm silicone tube. When a silicone tube is connected to the perineometer’s main body and the power is turned on, air enters the probe and the probe expands. When pressure is applied to the inside of the vagina to the inflated probe sensor, the pressure sensor measures the vaginal pressure. The inflation pressure can be set to 140 or 150 mmHg. In the present study, the inflation pressure was set to 140 mmHg. The unit measured pressure in mmHg. Test 1 First, before starting the tests, a transperineal technique using two-dimensional (2D) ultrasound confirmed that each woman was able to contract the PFM correctly based on 2D transperineal ultrasound measurements, including the anteroposterior diameter of the urogenital hiatus (measured at rest and during PFM contraction) (Fig. ). In our group, we demonstrated that 2D transperineal ultrasound is useful to assess PFM function in patients with pelvic organ prolapse . Yang et al. found a close correlation between reduced urogenital hiatus diameter in the sagittal plane and the modified Oxford grading scale . The modified Oxford grading scale is defined as follows: 0 = no contraction; 1 = flicker; 2 = weak; 3 = moderate; 4 = good; and 5 = strong . Second, vaginal pressure was measured with the Peritron and MizCure perineometers. The order of use of the two vaginal manometers and the two test positions were each performed randomly. Testing was conducted with the women in two positions: the supine position, with flexed and slightly abducted legs, and the standing position, with straight and slightly abducted legs. 
Before data were acquired by the perineometers, the participant inserted the probe, which was covered with a condom and lubricated with hypoallergenic gel, into her vaginal cavity. The participants were instructed to place the probe inside the vagina to a location where 0.5–1.0 cm of the probe was visible from the outside of the introitus. PFM strength was then evaluated by a maximum voluntary contraction, as measured by squeeze pressure. The instruction used for each contraction was ‘squeeze and lift the PFM as much as you can’. Vaginal pressure testing was performed with three repetitions of maximum voluntary contractions that each lasted for 3 s, with a 3-s rest between contractions. A 2-min rest break was then taken. Visible co-contraction of the transversus abdominis muscle was permitted, as long as there was no pelvic tilting . Examiner 1 was a physiotherapist with 11 years of clinical experience. Examiner 2 was a physiotherapist with 5 years of clinical experience and 4 years of educational experience in a university institution. Test 2 All women were evaluated twice. After the testing protocol was completed in the first session (Test 1), all subjects repeated the protocol 2–6 weeks later (Test 2). In Test 2, vaginal pressure measurements were performed using only the MizCure perineometer. The order of the two test positions (supine, standing) and the two examiners were each assigned randomly. First, before starting the tests, a transperineal technique using two-dimensional (2D) ultrasound confirmed that each woman was able to contract the PFM correctly based on 2D transperineal ultrasound measurements, including the anteroposterior diameter of the urogenital hiatus (measured at rest and during PFM contraction) (Fig. ). In our group, we demonstrated that 2D transperineal ultrasound is useful to assess PFM function in patients with pelvic organ prolapse . Yang et al. found a close correlation between reduced urogenital hiatus diameter in the sagittal plane and the modified Oxford grading scale . The modified Oxford grading scale is defined as follows: 0 = no contraction; 1 = flicker; 2 = weak; 3 = moderate; 4 = good; and 5 = strong . Second, vaginal pressure was measured with the Peritron and MizCure perineometers. The order of use of the two vaginal manometers and the two test positions were each performed randomly. Testing was conducted with the women in two positions: the supine position, with flexed and slightly abducted legs, and the standing position, with straight and slightly abducted legs. Before data were acquired by the perineometers, the participant inserted the probe, which was covered with a condom and lubricated with hypoallergenic gel, into her vaginal cavity. The participants were instructed to place the probe inside the vagina to a location where 0.5–1.0 cm of the probe was visible from the outside of the introitus. PFM strength was then evaluated by a maximum voluntary contraction, as measured by squeeze pressure. The instruction used for each contraction was ‘squeeze and lift the PFM as much as you can’. Vaginal pressure testing was performed with three repetitions of maximum voluntary contractions that each lasted for 3 s, with a 3-s rest between contractions. A 2-min rest break was then taken. Visible co-contraction of the transversus abdominis muscle was permitted, as long as there was no pelvic tilting . Examiner 1 was a physiotherapist with 11 years of clinical experience. 
The median age of the 20 healthy female participants was 26.5 years (range 23–45 years), and their median body mass index was 19.4 (range 17.5–23.4) kg/m 2 . None of the subjects included in the analysis had done PFM training before participating in the research project or between the two evaluation points. Table summarizes the within- and between-session vaginal pressure values obtained by examiners 1 and 2 (MizCure and Peritron perineometers). All raw data are provided in Additional file . Within-session intra-rater reliability Table shows the within-session intra-rater reliability using three repetitions of each maximum voluntary contraction by the MizCure and Peritron perineometers for both Tests 1 and 2. For both examiners 1 and 2, all vaginal pressures in Tests 1 and 2 had ICC (1, 1) values of 0.90–0.96. Between-session intra-rater reliability Between-session intra-rater reliability values for the MizCure perineometer for examiners 1 and 2 are shown in Table . The between-session intra-rater reliability of examiner 1 was ICC (1, 1) = 0.72 for the supine position and 0.79 for the standing position. The between-session intra-rater reliability of examiner 2 was ICC (1, 1) = 0.63 for the supine position and 0.80 for the standing position. Within- and between-session inter-rater reliabilities Table shows the inter-rater reliability analysis for vaginal pressure values for Tests 1 and 2. The inter-rater reliability for Test 1 was ICC (2, 1) = 0.96 for both the supine and standing positions for the Peritron. The ICC (2, 1) for the MizCure was 0.91 for the supine position and 0.87 for the standing position. The inter-rater reliability of the MizCure in Test 2 was ICC (2, 1) = 0.69 for the supine position and 0.95 for the standing position. Validity Significant correlations between the Peritron and MizCure perineometers in the measurements of vaginal pressure were found in the supine position (Pearson's correlation coefficient of 0.68, P < 0.001) and in the standing position (Spearman's correlation coefficient of 0.82, P < 0.001). More details about these results are presented in Table .
PFM training has been shown to improve stress urinary incontinence and pelvic organ prolapse and is recommended by the International Continence Society as Grade A . However, about 30% of women fail to contract the PFM correctly , and an incorrect PFM contraction is not expected to produce a training effect. Therefore, it is recommended that proper PFM training should always include an objective assessment of correct contraction. In the present study, the reliability and validity of PFM strength assessment using the MizCure perineometer were examined in healthy women. With transvaginal devices, the measurement of PFM strength is known to depend on the size and placement of the probe, the subject's cooperation, and the examiner's experience and skill . If a small perineometer probe is used, its intravaginal placement can cause reliability problems, because the probe may not be positioned completely against the pressure zone . Thus, in the current study, whether the MizCure can properly measure PFM strength was verified against the widely reported Peritron. In a previous study on the intra-rater reliability of the Peritron, the ICC (1, 1) was over 0.9 for both the supine and standing position measurements . The results of the present study therefore suggest that the within-session intra-rater reliability of the MizCure is as good as that of the Peritron. Rahmani et al. and Ferreira et al. reported the inter-rater reliability of vaginal pressure measurements using the Peritron. Rahmani et al. reported that the between-session ICC (1, 1) was 0.88 , and Ferreira et al. reported that the mean vaginal squeeze pressure had good inter-rater reliability, with no significant difference between examiners . In the current study, the between-session intra-rater reliability of the MizCure was slightly lower than that reported by Rahmani et al. . This is most likely because of the size and placement of the probes: in the current study, the insertion and placement of the probe were performed by the subject herself.
The MizCure has a smaller probe than the Peritron, and it is difficult to standardize the position of the probe between subjects. A previous study showed that probe placement can affect measurement results . Because the intravaginal squeeze pressure changes along the length of the vagina and the vaginal pressure profile is not fully understood, probe placement may have affected reliability. The MizCure probe used in the current study was 79 mm in length. The vaginal high-pressure zone at rest and during PFM contraction is 2–4 cm from the vaginal introitus , and this high-pressure zone has been shown to be related to PFM contraction . Therefore, the MizCure probe is considered to cover the high-pressure zone of the vagina. Another factor that may have affected the reliability of the vaginal pressure measurements is the skill of the examiner. In this study, the two examiners were skilled physical therapists with urologic education. Each examiner confirmed correct PFM contraction with 2D ultrasound before measurement and always checked for compensatory movements during PFM contraction (e.g., pelvic tilt or excessive abdominal muscle contraction). Therefore, the effect of the two examiners on the measured values was considered to be small. ICC reliability criteria are defined as substantial for 0.61–0.80 and almost perfect for 0.81–1.0 . From this perspective, the present results indicate that the intra-rater reliability of the MizCure within and between days was substantial to almost perfect, and the inter-rater reliability was similar. These results suggest that using the MizCure perineometer to evaluate PFM strength can minimize examiner effects and provide good intra-rater reliability. The validity of the MizCure was evaluated using Pearson's product-moment correlation coefficient and Spearman's rank correlation coefficient; the correlation between MizCure and Peritron measurements has not been previously reported. The MizCure measurements showed a significant correlation with the Peritron measurements: r = 0.68 for the supine measurements and r s = 0.82 for the standing measurements. For the correlation coefficient, 0.4–0.69 is considered moderate and 0.7–0.89 high . Therefore, the correlation coefficients for the MizCure were moderate to high, suggesting that the MizCure is a suitable tool for measuring PFM strength. There have been previous reports examining the validity of the Peritron and other perineometers . Barbosa et al. reported that the acquisition of higher or lower measurement values depends not only on the diameter of the probes used, but also on other variables, such as individual vaginal diameter, probes of different materials, and differences in the sensitivity of each piece of equipment to vaginal pressure . Therefore, a detailed examination of the characteristics of the MizCure perineometer and of the individual's vagina will be necessary in the future. A recent paper on the statistical analysis of reliability suggests different cut-off values for the ICC: based on the 95% confidence interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 indicate poor, moderate, good, and excellent reliability, respectively .
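Because these categories are applied to the 95% confidence interval rather than to the point estimate alone, interval width matters. As a rough illustration only (the exact ICC confidence intervals reported in such studies are based on F distributions), the Fisher z approximation below shows how the interval around a coefficient of about 0.69 narrows as the number of subjects grows; the sample sizes are arbitrary examples.

```python
# Rough illustration only: approximate 95% confidence intervals for a correlation-type
# coefficient of 0.69 at a few arbitrary sample sizes, using Fisher's z transformation.
# Exact ICC confidence intervals are based on F distributions, so these numbers merely
# indicate how interval width shrinks as the number of subjects grows.
import math

def fisher_ci(r, n, z_crit=1.96):
    z = math.atanh(r)                 # Fisher z transform of the coefficient
    se = 1.0 / math.sqrt(n - 3)       # approximate standard error on the z scale
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

for n in (20, 50, 100):
    lo, hi = fisher_ci(0.69, n)
    print(f"n = {n:3d}: approximate 95% CI ({lo:.2f}, {hi:.2f})")
```

With about 20 subjects this approximation gives an interval of roughly 0.36–0.87, of the same order as the intervals reported below, and the interval tightens substantially by n = 100.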
Applying these criteria to the results of this study, the 95% confidence intervals for the between-session intra-rater reliability of the supine position measurement were 0.43–0.88 for examiner 1 and 0.28–0.83 for examiner 2, as shown in Table . In addition, as shown in Table , the 95% confidence interval for inter-rater reliability in the supine measurement was 0.38–0.86. The lower limits of these 95% confidence intervals are low and the intervals are wide; because of the small number of subjects in this study, the confidence intervals vary widely. In the future, it will be necessary to examine the results in a larger population. The advantage of the MizCure perineometer is that it is easy to purchase, portable, and simple to use. These results suggest that the MizCure is a tool that can quantitatively reflect PFM function, since the reliability and validity of the measured vaginal pressure values are good. This may help in selecting the measurement position for evaluation and treatment purposes and in planning treatment. Based on the current study, the MizCure might be a device that can be used by physical therapists, nurses, and physicians involved in pelvic floor rehabilitation to assess PFM function. However, the results of the present study are limited to healthy nulliparous women with normal BMI and without PFM dysfunction, such as stress urinary incontinence or pelvic organ prolapse. In addition, subjects were not asked about their sexual activity prior to vaginal pressure measurement, which may have affected the pressure values. A further limitation is the small number of subjects: a small sample size reduces the power of the study and increases the margin of error. Future studies with larger samples are needed to confirm these findings. The present findings suggest that the MizCure perineometer is a valid and reliable tool to measure PFM strength in both the supine and standing positions in healthy nulliparous women. Additional file 1. All raw data. |
Guidelines for Qualifications of Neurodiagnostic Personnel: A Joint Position Statement of the American Clinical Neurophysiology Society, the American Association of Neuromuscular & Electrodiagnostic Medicine, the American Society of Neurophysiological Monitoring, and ASET—The Neurodiagnostic Society | ea6361dc-72b0-4b13-8094-bef4846130a8 | 10150627 | Physiology[mh] | , 1. Neurodiagnostic Assistant (NDA) ○Job responsibilities (Tables and ) Continuously monitors patients undergoing cEEG recording for safety, either in room or via remote video monitoring; possesses knowledge of monitoring system and camera controls; alerts nursing and/or technologist staff when clinical seizures or other paroxysmal events occur; may communicate with patients and bedside staff to obtain information about events; and may document observations. The NDA is not qualified to analyze EEG data. May assist ND technologists as needed (restocking supplies, electrode removal, disinfection, etc). Completes hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification High school diploma or equivalent. Successful completion of the ASET online course, LTM 100, titled Introduction to LTM for EMU Personnel. ○Experience No previous experience; no less than 20 hours of observation in an EMU or ND laboratory under the direction of a credentialed ND technologist (R. EEG T., CLTM, or NA-CLTM) (The registries for EEG and EP, and the certifications for IONM, LTM, ANS testing, and MEG are registered by ABRET—Neurodiagnostic Credentialing and Accreditation as follows: R. EEG T., R. EP T., CNIM, CLTM, CAP, and CMEG). Competency assessments including, but not limited to, recognition of clinical seizures and other clinical paroxysmal events, ictal testing procedures, measures to reduce risk of fall, and seizure first aid. ○Supervision (Table ) General technical supervision by an ND Technologist III or above. ○Ongoing education/maintenance of competency Should attend relevant educational offerings and be required to demonstrate ongoing competence. 2. Neurodiagnostic Technologist I (Grandfather clause*: Any ND technologist practicing in the ND field before December 31, 2021, shall be considered grandfathered in ND education, and therefore shall be deemed that the existing ND education requirement as outlined in Section 3 has been met (Table 2).) ○Job responsibilities This is a transitional position, and a new hire is expected to obtain credentials within 5 years. Performs routine testing under supervision; writes a descriptive technical analysis for QA purposes only. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification Associate degree or higher is preferred or enrollment in a CAAHEP-accredited ND program. , For NCS Technologist I, may have 6 months of personal supervision training under a Technologist III or higher with direct supervision of EDX physician. ○Experience No specific previous experience required; must meet hospital standards for all patient care workers. Competencies should at minimum include those specified by ASET's National Competency Skill Standards, AANEM's skill standards for NCS, and/or ABEM's eligibility and application requirements. 
○Supervision (Table ) Direct technical supervision by a ND Technologist III or above is required. May be permitted to perform routine testing under indirect technical supervision after successful completion of all required competencies as established by a Technologist III or higher and the laboratory medical and technical supervisors. Regular quality assessments of technical skills must be performed and documented at least yearly. For EEG, EPs, and ANS testing, works under indirect supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under personal physician supervision. ○Ongoing education/maintenance of competency Should attend relevant in-house educational offerings and be required to demonstrate ongoing competence through an in-house developed program. Should obtain a minimum of 15 hours of education in ND each year covering all modalities performed by the technologist. 3. Neurodiagnostic Technologist II ○Job responsibilities This is a transitional position and new hires should obtain credential within 3 years of hire. Performs routine testing under general supervision; writes a technical descriptive analysis for QA purposes only. For NCS Technologist II, 12 months of full-time (or equivalent) practical training in performing NCS under direct supervision of EDX physician. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification Meets eligibility requirements set by credentialing bodies, i.e., ABRET, – AAET, and/or ABEM, to take a credentialing examination. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program. , ○Experience Twelve or more months of experience working in a patient care environment with supervised experience in performing primary testing modality. Competencies should at minimum include those specified by ASET's National Competency Skill Standards and/or AANEM's skill standards for NCS, as appropriate. ○Supervision (Table ) General technical supervision. Reports to ND Technologist III or above. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 15 credits should be obtained every 3 years, covering all modalities performed by the technologist. 4. Neurodiagnostic Technologist III ○Job responsibilities Performs routine, as well as more advanced testing (per program guidelines); recognizes clinically significant events and patterns; follows policy and procedures regarding critical test results; communicates with team members; writes a technical descriptive analysis. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. R. EEG T. —Performs clinical EEG in the adult, pediatric, and neonatal populations. Also performs studies in ICUs. R. EP T. —Demonstrates proficiency in the acquisition and recognition of basic EP waveforms relevant to EP modality being tested. Includes VEP, BAEP, and SSEP. R.NCS.T. 
or CNCT , —Performs NCS; recognizes clinically significant events and follows facility policy and procedures regarding critical test results. CAP —Performs basic and advanced ANS testing procedures independently with a high degree of technical proficiency; recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them; and describes normal and abnormal clinical manifestations observed during the testing. ○Education/certification ABRET, AAET, or ABEM credential required. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program. , ○Experience Meets qualifications and requirements of Technologist II, is credentialed, and meets all education requirements set forth by ABRET, AAET, or ABEM. ○Supervision (Table ) Works under general technical supervision as specified in departmental policy and procedure manual. Regular quality assessments of technical skills must be performed and documented at least yearly. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years covering all modalities performed by the technologist. This is a minimum requirement and is superseded by individual credential requirements as set forth by ABRET, AAET, and ABEM. 5. IONM Neurodiagnostic Technologist I ○Job responsibilities This is a trainee-level position and is considered transitional. It is expected that new hires will obtain CNIM certification within 5 years. Helps set up monitoring equipment while assuring patient safety. Communicates effectively with team members. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification R. EEG T. or R. EP T. or a bachelor's degree ○Experience Six or more months of experience working in a patient care environment. For individuals entering the field with a bachelor's degree, patient experience requirements will be determined by their employer. ○Supervision (Table ) Requires direct technical supervision. Works under supervision of interpreting provider who can be immediately present either electronically or in person. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements. 6. Neurodiagnostic Technology Specialist I ○Job responsibilities Includes all of those required for a Neurodiagnostic Technologist III but exhibits additional critical thinking skills. Able to recognize critical values in critically ill patients of all ages and report the values to the appropriate medical personnel. ○Education/certification Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. Current credentials required from ABRET, AAET, or ABEM. ○Experience Meets all requirements of experience and qualifications as specified in Tech level III in the ND field that includes an additional 1 year of experience in one of the advanced modalities listed below in Sections 6a–6e. ○Supervision (Table ) Works under general technical supervision. 
Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and will be superseded by individual credential requirements and/or maintenance of certification requirements. 6a. Neurodiagnostic Technology Specialist I LTME (R. EEG T.) Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in LTM for epilepsy. Specific job responsibilities Recognizes and reports critical values to the appropriate medical personnel, significant clinical events, and EEG patterns. Prepares, organizes, and summarizes data for physician review. 6b. Neurodiagnostic Technology Specialist I ICU cEEG (R. EEG T.) Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in ICU/cEEG monitoring. Specific job responsibilities Recognizes significant clinical events and EEG patterns; provides alerts as detailed in departmental policy and procedure manual. Prepares, organizes, and summarizes data for physician review. 6c. Neurodiagnostic Technology Specialist I IONM (CNIM) Specific experience CNIM © Minimum of 1 year of experience in an IONM setting. Specific job responsibilities Able to apply electrodes and obtain high-quality waveforms independently. Able to recognize changes and communicate such with team as specified in the departmental policy and procedure manual. Able to troubleshoot common problems in IONM recordings. 6d. Neurodiagnostic Technology Specialist I NCS (R.NCS.T. or CNCT) Specific experience Bachelor's degree preferred. CNCT or R.NCS.T. required, plus training in performing advanced NCS. A minimum of 4 years as CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 5 years of experience in performing NCS and may have experience in the ICU. Technologists may perform pediatric studies. Specific job responsibilities Able to perform basic and advanced NCS procedures independently, including pediatric NCS, repetitive nerve stimulation, and autonomic studies with a high degree of technical proficiency; can perform studies in routine and ICU settings; with additional training may perform neuromuscular ultrasound. Recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them. Describes normal and abnormal clinical manifestations observed during the testing. Uses critical thinking and clinical expertise to determine the need for further NCS testing as needed to assist with interpretation. 6e. Neurodiagnostic Technology Specialist I MEG (CMEG-eligible) Specific experience Meets CMEG examination requirements set forth by ABRET, including completion of MEG certificate program. Three or more years of experience in the field of ND, which includes at least 6 months of supervised clinical and hands-on experience in an active MEG center. Experience of 75 MEGs for epilepsy; know the 10 to 20 International System of Electrode Placement. Twenty-five MEG evoked potentials including three or more of the five EP scans: auditory, language evoked, motor evoked, sensory evoked, and visually evoked. Experience to trouble shoot the system, including filling liquid helium MEG system. 
Specific job responsibilities Recognizes significant clinical events and EEG patterns; demonstrates competency in operational routines, including helium filling (if applicable), tuning procedures (as applicable), standard testing procedures, troubleshooting, artifact prevention and elimination, and data storage, and sufficient understanding of source localization to preprocess routine clinical data for the analysis by a physician magnetoencephalographer. 7. Neurodiagnostic Technology Specialist II ○Job responsibilities Generally similar to Neurodiagnostic Technology Specialist I descriptions but provides more detailed preliminary reports and more detailed data review (as specified in departmental policy and procedures) to the interpreting provider. Able to provide higher level of teaching and training for other technologists. ○Education/certification Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. ○Experience Minimum of 5 years of experience, of which 3 years are postcredential. NCS specialist II requires a minimum of 5 years as a CNCT or R.NCS.T., with 6 years of experience, including ICU experience. Advanced modality requirements for experience and qualifications are listed below in Sections 7a–7e. ○Supervision (Table ) Works under general technical supervision as specified in departmental policy and procedure manual. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and may be superseded by the requirements of credentialing boards. 7a. Neurodiagnostic Technology Specialist II LTME (Long-Term Video EEG Monitoring) (CLTM) Specific education/certification CLTM © Specific job responsibilities Assists in development of and monitoring of adherence to policies and procedures for LTME; assists other ND technologists in LTME. 7b. Neurodiagnostic Technology Specialist II ICU/cEEG (Continuous EEG Monitoring in the Intensive Care Unit) (CLTM) Specific education/certification CLTM © Specific job responsibilities Assists in development of and monitoring of adherence to policies and procedures for ICU/cEEG; assists other ND technologists in ICU/cEEG. 7c. Neurodiagnostic Technology Specialist II IONM (CNIM) Specific education/certification CNIM © Specific job responsibilities Assists in development of and monitoring of adherence to policies and procedures for IONM; assists other ND technologists in IONM. 7d. Neurodiagnostic Technology Specialist II NCS (R.NCS.T. or CNCT) Specific education/certification Meets all qualifications of NCS Specialist I. Bachelor's degree required. A minimum of 5 years as a CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 6 years of experience in performing NCS (Grandfather clause: Technologists who do not hold a bachelor’s degree or higher and who meet all the requirements of an NCS Specialist I may be considered for NCS Specialist II if they have a minimum of 10 years of continuous experience in performing NCS, a minimum of 8 years as a CNCT or R.NCS.T., a minimum of three faculty engagements in the NCS field, and at least two reference letters from ABEM physicians (Table 2)) and may have ICU experience. Specific job responsibilities Assists in development of and monitoring of adherence to policies and procedures for NCS. 
Demonstrated ability to train others in the principles and practice of NCS, including technologists, residents, and fellows. 7e. Neurodiagnostic Technology Specialist II MEG (CMEG) Specific education/certification CMEG. Three or more years of experience in the ND field, specifically EEG, and 2 years of experience in MEG. Specific job responsibilities Performs digitization for co-registration to MRI, performs initial MEG spontaneous recording with concurrent EEG recording, understands placement and recording of evoked field trials (SEF, VEF, MEF, AEF, and LEF), implements nontraditional activation procedures as required (or ordered by attending physician), performs initial filtering and review of MEG/EEG data, performs preprocessing and localization of interictal activity, review of initial localization with physician, localization of evoked field data (for review by physician), and archiving and retrieval of MEG data. 8. NeuroAnalyst (Formerly Advanced Long-Term EEG Monitoring Analyst) (CLTM with NA-CLTM Preferred) ○Job responsibilities Monitors (on-site or remotely), evaluates, annotates, and classifies ictal, interictal, and paroxysmal events from EEG/video data. Recognizes physiologic and nonphysiologic artifacts. Writes detailed description of EEG patterns, seizure semiology, ictal and interictal abnormalities, and selection of representative EEG samples. Acts as a physician extender in collaboration with the supervising physician and other health care staff. If the NeuroAnalyst is working in an EMU, they must be able to perform the following duties: All duties and responsibilities for typical and special consideration for routine and advanced EEG/ECoG. Extensive knowledge in neuroanesthesia and its application to neuromonitoring. All aspects of invasive implants preoperatively, intraoperatively, and postoperatively, including, but not limited to, electrode setup, montage creation/verification, troubleshooting, hook-up and discontinuation, and stimulation for cortical mapping. ○Education/certification Holds credentials in EEG (R. EEG T.) and LTM (CLTM) with the NeuroAnalyst (NA-CLTM) credential preferred. Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. ○Experience Minimum of 5 years of experience in LTM in the ambulatory setting, EMU, and/or critical care postcertification in LTM. ○Supervision (Table ) Works under general supervision of the neurodiagnostic technical lab supervisor or the neurodiagnostic lab director and the interpreting physician. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements. 9. Neurodiagnostic Technical Lab Supervisor ○Overview Each laboratory requires technical supervision. These qualifications refer only to the issues specifically related to supervision of technical activities. The laboratory supervisor may take on additional responsibilities as dictated by hospital administrative policies and organization. ○Job responsibilities Provides direct supervision and education to other technologist levels; oversees day-to-day operations; responsible for maintaining policies and procedures; and QA program development and implementation in conjunction with the medical and technical laboratory directors. ○Education/certification Must have a minimum of one credential in ND technology, two or more preferred, in the area supervised. 
Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. ○Experience Minimum of 5 years of experience in ND. ○Supervision (Table ) Works under the neurodiagnostic technical lab director and with the medical director. For clinical studies, works under supervision of interpreting provider who can be immediately present either electronically or in person. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements. 10. Neurodiagnostic Education Specialist ○Overview Functions in the role of educator, facilitator, change agent, consultant, and leader for professional development. ○Job responsibilities Designs and implements competency and educational activities for ND personnel, including annual competency programs, orientation, continuing education, and professional development within a collaborative practice framework. Develops new employees to meet job requirements. Assists staff who are not yet credentialed in preparing for the board examination. Coordinates continuing education and competency activities for staff. ○Education/certification Graduate of an accredited Baccalaureate program, preferably in ND , or higher education. Must have a minimum of one ND-related credential, two or more preferred. Credential should be specific to the modality for which education is being provided. ○Experience Minimum of 5 years of experience in ND with previous teaching experience preferred. ○Supervision (Table ) Works under the neurodiagnostic technical lab director. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements. 11. Neurodiagnostic Technical Lab Director ○Overview This position can be held either by an ND professional with additional management training or experience, or by a non-ND manager, typically with experience in other diagnostic services. There are situations in which the administrative leadership of the CNP department may not, for the purposes of timekeeping, recordkeeping, and basic personnel management, have specific ND technology training. In that case, there must be a technologist at the level of Neurodiagnostic Technologist III or above who can provide technical supervision. ○Job responsibilities Works with hospital administration and the laboratory Medical Director to make personnel and budgetary decisions. Involved with marketing efforts. Serves as a liaison across departments when necessary. May also assume responsibility for productivity and financial viability, patient safety, and accreditation of the laboratory – among other high-level functions that contribute to the success of the department in support of the employer's mission. ○Education/certification A minimum of a bachelor's degree in health sciences; if job description includes performing ND studies, must have at least one ND credential. ○Experience Minimum of 5 years of experience; 3 years of previous supervisory experience is recommended. ○Supervision (Table ) Works with hospital administration and Medical Director. If job description includes performing clinical ND studies, works under general supervision of interpreting provider who can be immediately present either electronically or in person. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years.
This is a minimum requirement and is superseded by other individual credential requirements.
○Ongoing education/maintenance of competency Should attend relevant in-house educational offerings and be required to demonstrate ongoing competence through an in-house developed program. Should obtain a minimum of 15 hours of education in ND each year covering all modalities performed by the technologist. ○Job responsibilities This is a transitional position and new hires should obtain credential within 3 years of hire. Performs routine testing under general supervision; writes a technical descriptive analysis for QA purposes only. For NCS Technologist II, 12 months of full-time (or equivalent) practical training in performing NCS under direct supervision of EDX physician. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification Meets eligibility requirements set by credentialing bodies, i.e., ABRET, – AAET, and/or ABEM, to take a credentialing examination. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program. , ○Experience Twelve or more months of experience working in a patient care environment with supervised experience in performing primary testing modality. Competencies should at minimum include those specified by ASET's National Competency Skill Standards and/or AANEM's skill standards for NCS, as appropriate. ○Supervision (Table ) General technical supervision. Reports to ND Technologist III or above. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 15 credits should be obtained every 3 years, covering all modalities performed by the technologist. ○Job responsibilities Performs routine, as well as more advanced testing (per program guidelines); recognizes clinically significant events and patterns; follows policy and procedures regarding critical test results; communicates with team members; writes a technical descriptive analysis. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. R. EEG T. —Performs clinical EEG in the adult, pediatric, and neonatal populations. Also performs studies in ICUs. R. EP T. —Demonstrates proficiency in the acquisition and recognition of basic EP waveforms relevant to EP modality being tested. Includes VEP, BAEP, and SSEP. R.NCS.T. or CNCT , —Performs NCS; recognizes clinically significant events and follows facility policy and procedures regarding critical test results. CAP —Performs basic and advanced ANS testing procedures independently with a high degree of technical proficiency; recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them; and describes normal and abnormal clinical manifestations observed during the testing. ○Education/certification ABRET, AAET, or ABEM credential required. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program. , ○Experience Meets qualifications and requirements of Technologist II, is credentialed, and meets all education requirements set forth by ABRET, AAET, or ABEM. 
○Supervision (Table ) Works under general technical supervision as specified in departmental policy and procedure manual. Regular quality assessments of technical skills must be performed and documented at least yearly. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years covering all modalities performed by the technologist. This is a minimum requirement and is superseded by individual credential requirements as set forth by ABRET, AAET, and ABEM. ○Job responsibilities This is a trainee-level position and is considered transitional. It is expected that new hires will obtain CNIM certification within 5 years. Helps set up monitoring equipment while assuring patient safety. Communicates effectively with team members. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification R. EEG T. or R. EP T. or a bachelor's degree ○Experience Six or more months of experience working in a patient care environment. For individuals entering the field with a bachelor's degree, patient experience requirements will be determined by their employer. ○Supervision (Table ) Requires direct technical supervision. Works under supervision of interpreting provider who can be immediately present either electronically or in person. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements. ○Job responsibilities Includes all of those required for a Neurodiagnostic Technologist III but exhibits additional critical thinking skills. Able to recognize critical values in critically ill patients of all ages and report the values to the appropriate medical personnel. ○Education/certification Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. Current credentials required from ABRET, AAET, or ABEM. ○Experience Meets all requirements of experience and qualifications as specified in Tech level III in the ND field that includes an additional 1 year of experience in one of the advanced modalities listed below in Sections 6a–6e. ○Supervision (Table ) Works under general technical supervision. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and will be superseded by individual credential requirements and/or maintenance of certification requirements. 6a. Neurodiagnostic Technology Specialist I LTME (R. EEG T.) Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in LTM for epilepsy. Specific job responsibilities Recognizes and reports critical values to the appropriate medical personnel, significant clinical events, and EEG patterns. Prepares, organizes, and summarizes data for physician review. 
6b. Neurodiagnostic Technology Specialist I ICU cEEG (R. EEG T.) Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in ICU/cEEG monitoring. Specific job responsibilities Recognizes significant clinical events and EEG patterns; provides alerts as detailed in departmental policy and procedure manual. Prepares, organizes, and summarizes data for physician review. 6c. Neurodiagnostic Technology Specialist I IONM (CNIM) Specific experience CNIM © Minimum of 1 year of experience in an IONM setting. Specific job responsibilities Able to apply electrodes and obtain high-quality waveforms independently. Able to recognize changes and communicate such with team as specified in the departmental policy and procedure manual. Able to troubleshoot common problems in IONM recordings. 6d. Neurodiagnostic Technology Specialist I NCS (R.NCS.T. or CNCT) Specific experience Bachelor's degree preferred. CNCT or R.NCS.T. required, plus training in performing advanced NCS. A minimum of 4 years as CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 5 years of experience in performing NCS and may have experience in the ICU. Technologists may perform pediatric studies. Specific job responsibilities Able to perform basic and advanced NCS procedures independently, including pediatric NCS, repetitive nerve stimulation, and autonomic studies with a high degree of technical proficiency; can perform studies in routine and ICU settings; with additional training may perform neuromuscular ultrasound. Recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them. Describes normal and abnormal clinical manifestations observed during the testing. Uses critical thinking and clinical expertise to determine the need for further NCS testing as needed to assist with interpretation. 6e. Neurodiagnostic Technology Specialist I MEG (CMEG-eligible) Specific experience Meets CMEG examination requirements set forth by ABRET, including completion of MEG certificate program. Three or more years of experience in the field of ND, which includes at least 6 months of supervised clinical and hands-on experience in an active MEG center. Experience of 75 MEGs for epilepsy; know the 10 to 20 International System of Electrode Placement. Twenty-five MEG evoked potentials including three or more of the five EP scans: auditory, language evoked, motor evoked, sensory evoked, and visually evoked. Experience to trouble shoot the system, including filling liquid helium MEG system. Specific job responsibilities Recognizes significant clinical events and EEG patterns; demonstrates competency in operational routines, including helium filling (if applicable), tuning procedures (as applicable), standard testing procedures, troubleshooting, artifact prevention and elimination, and data storage, and sufficient understanding of source localization to preprocess routine clinical data for the analysis by a physician magnetoencephalographer. Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in LTM for epilepsy. Specific job responsibilities Recognizes and reports critical values to the appropriate medical personnel, significant clinical events, and EEG patterns. Prepares, organizes, and summarizes data for physician review. Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in ICU/cEEG monitoring. 
○Job responsibilities
Generally similar to the Neurodiagnostic Technology Specialist I descriptions but provides more detailed preliminary reports and more detailed data review (as specified in departmental policy and procedures) to the interpreting provider. Able to provide a higher level of teaching and training for other technologists.
○Education/certification
Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred.
○Experience
Minimum of 5 years of experience, of which 3 years are postcredential. NCS Specialist II requires a minimum of 5 years as a CNCT or R.NCS.T., with 6 years of experience, including ICU experience. Advanced modality requirements for experience and qualifications are listed below in Sections 7a–7e.
○Supervision (Table )
Works under general technical supervision as specified in departmental policy and procedure manual.
Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision.
○Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and may be superseded by the requirements of credentialing boards.
7a. Neurodiagnostic Technology Specialist II LTME (Long-Term Video EEG Monitoring) (CLTM)
Specific education/certification
CLTM©
Specific job responsibilities
Assists in development of and monitoring of adherence to policies and procedures for LTME; assists other ND technologists in LTME.
7b. Neurodiagnostic Technology Specialist II ICU/cEEG (Continuous EEG Monitoring in the Intensive Care Unit) (CLTM)
Specific education/certification
CLTM©
Specific job responsibilities
Assists in development of and monitoring of adherence to policies and procedures for ICU/cEEG; assists other ND technologists in ICU/cEEG.
7c. Neurodiagnostic Technology Specialist II IONM (CNIM)
Specific education/certification
CNIM©
Specific job responsibilities
Assists in development of and monitoring of adherence to policies and procedures for IONM; assists other ND technologists in IONM.
7d. Neurodiagnostic Technology Specialist II NCS (R.NCS.T. or CNCT)
Specific education/certification
Meets all qualifications of NCS Specialist I. Bachelor's degree required. A minimum of 5 years as a CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 6 years of experience in performing NCS, and may have ICU experience. (Grandfather clause: Technologists who do not hold a bachelor's degree or higher and who meet all the requirements of an NCS Specialist I may be considered for NCS Specialist II if they have a minimum of 10 years of continuous experience in performing NCS, a minimum of 8 years as a CNCT or R.NCS.T., a minimum of three faculty engagements in the NCS field, and at least two reference letters from ABEM physicians (Table 2).)
Specific job responsibilities
Assists in development of and monitoring of adherence to policies and procedures for NCS. Demonstrated ability to train others in the principles and practice of NCS, including technologists, residents, and fellows.
7e. Neurodiagnostic Technology Specialist II MEG (CMEG)
Specific education/certification
CMEG. Three or more years of experience in the ND field, specifically EEG, and 2 years of experience in MEG.
Specific job responsibilities
Performs digitization for co-registration to MRI, performs initial MEG spontaneous recording with concurrent EEG recording, understands placement and recording of evoked field trials (SEF, VEF, MEF, AEF, and LEF), implements nontraditional activation procedures as required (or as ordered by the attending physician), performs initial filtering and review of MEG/EEG data, performs preprocessing and localization of interictal activity, reviews initial localization with the physician, localizes evoked field data (for review by the physician), and archives and retrieves MEG data.
○Job responsibilities
Monitors (on-site or remotely), evaluates, annotates, and classifies ictal, interictal, and paroxysmal events from EEG/video data. Recognizes physiologic and nonphysiologic artifacts. Writes detailed descriptions of EEG patterns, seizure semiology, and ictal and interictal abnormalities, and selects representative EEG samples. Acts as a physician extender in collaboration with the supervising physician and other health care staff. If the NeuroAnalyst is working in an EMU, they must be able to perform the following duties:
All duties and responsibilities for typical and special considerations for routine and advanced EEG/ECoG.
Extensive knowledge in neuroanesthesia and its application to neuromonitoring.
All aspects of invasive implants preoperatively, intraoperatively, and postoperatively, including, but not limited to, electrode setup, montage creation/verification, troubleshooting, hook-up and discontinuation, and stimulation for cortical mapping.
○Education/certification
Holds credentials in EEG (R. EEG T.) and LTM (CLTM), with the NeuroAnalyst (NA-CLTM) credential preferred. Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred.
○Experience
Minimum of 5 years of experience in LTM in the ambulatory setting, EMU, and/or critical care, postcertification in LTM.
○Supervision (Table )
Works under general supervision of the neurodiagnostic technical lab supervisor or the neurodiagnostic lab director and the interpreting physician.
○Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years.
This is a minimum requirement and is superseded by individual credential requirements.
○Overview
Each laboratory requires technical supervision. These qualifications refer only to the issues specifically related to supervision of technical activities. The laboratory supervisor may take on additional responsibilities as dictated by hospital administrative policies and organization.
○Job responsibilities
Provides direct supervision and education to other technologist levels; oversees day-to-day operations; responsible for maintaining policies and procedures; and for QA program development and implementation in conjunction with the medical and technical laboratory directors.
○Education/certification
Must have a minimum of one credential in ND technology, two or more preferred, in the area supervised. Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred.
○Experience
Minimum of 5 years of experience in ND.
○Supervision (Table )
Works under the neurodiagnostic technical lab director and with the medical director. For clinical studies, works under supervision of interpreting provider who can be immediately present either electronically or in person.
○Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.
○Overview
Functions in the role of educator, facilitator, change agent, consultant, and leader for professional development.
○Job responsibilities
Designs and implements competency and educational activities for ND personnel, including annual competency programs, orientation, continuing education, and professional development within a collaborative practice framework. Develops new employees to meet job requirements. Assists those who are not credentialed in preparing for board examination. Coordinates continuing education and competency activities for staff.
○Education/certification
Graduate of an accredited baccalaureate program, preferably in ND, or higher education. Must have a minimum of one ND-related credential, two or more preferred. Credential should be specific to the modality for which education is being provided.
○Experience
Minimum of 5 years of experience in ND, with previous teaching experience preferred.
○Supervision (Table )
Works under the neurodiagnostic technical lab director.
○Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.
○Overview
This position can be held either by an ND professional with additional management training or experience, or by a non-ND manager, typically with experience in other diagnostic services. There are situations in which the administrative leadership of the CNP department may not, for the purposes of timekeeping, recordkeeping, and basic personnel management, have specific ND technology training. In that case, there must be a technologist at the level of Neurodiagnostic Technologist III or above who can provide technical supervision.
○Job responsibilities
Works with hospital administration and the laboratory Medical Director to make personnel and budgetary decisions. Involved with marketing efforts. Serves as a liaison across departments when necessary.
May also assume responsibility for productivity and financial viability, patient safety, and accreditation of the laboratory, among other high-level functions that contribute to the success of the department in support of the employer's mission.
○Education/certification
A minimum of a bachelor's degree in health sciences; if the job description includes performing ND studies, must have at least one ND credential.
○Experience
Minimum of 5 years of experience; 3 years of previous supervisory experience is recommended.
○Supervision (Table )
Works with hospital administration and the Medical Director. If the job description includes performing clinical ND studies, works under general supervision of interpreting provider who can be immediately present either electronically or in person.
○Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by other individual credential requirements.
NOTE: As may pertain to all higher levels of practitioners, please note that there are individuals who perform, and in some cases interpret, testing under the supervision of a licensed and qualified physician. These individuals do not have a medical or osteopathic doctorate and are referred to as "Advanced Practitioners" or other "Qualified Health Care Providers." This document recommends privilege-based licensure, as well as skills, knowledge, and abilities gained through training, experience, and accredited programs. These are demonstrated by passing board examinations and maintained through continuing education. This document does not supersede applicable state law. These practitioners work within their state's regulatory and/or statutory scope of practice guidelines and within institutional credentialing. The scope of practice may differ across states, institutions, and insurance carriers.
12. Audiologist (Lab)
○Job responsibilities
Audiological and vestibular testing and BAEPs, including both the technical and the interpretative components related to the assessment of the function of the eighth cranial nerve and peripheral hearing apparatus.
○Education/certification
All audiologists must hold an AuD or current board certification.
○Experience
Has performed and interpreted the number of studies required by federal, state, institutional, and/or certifying organization regulations. The minimum number should be sufficient for the practitioner to have gained mastery of all aspects of testing.
○Supervision (Table )
May work independently or under supervision as specified by federal, state, and hospital regulations. To supervise technologists performing audiological testing within the ND laboratory, the audiologist must have a minimum of 3 years of experience in clinical practice in addition to the AuD.
○Ongoing education/maintenance of competency
Minimum of 50 CEUs spanning 5 or more years, as required for maintenance of certification. This is a minimum requirement and is superseded by other individual credential requirements.
13. Nonphysician (PhD, AuD, FMG) Neurophysiologist Performing IONM
○Job responsibilities
This may include:
Management of personnel and instrumentation that support IONM.
Technical performance of IONM.
IONM planning.
Real-time interpretation of IONM under the supervision of a licensed physician who is immediately available, either in person or online, if needed, e.g., for rendering of medical opinions, decisions, and recommendations during surgery.
This physician must be a clinical neurophysiologist trained, qualified, and experienced in IONM as referenced under Section 18.
Providing recommendations for obtaining optimal neurophysiological data.
Postoperative IONM report.
○Education/certification
Possess a minimum of an earned doctoral degree in a physical science, life science, or clinical allied health profession from an accredited educational institution. Education must include successful completion of graduate-level training in neurophysiology and anatomy. Must have medical staff privileges for the performance of IONM in all hospitals where practicing. The DABNM is required (Grandfather clause: PhD neurophysiologists with a minimum of 20 years of experience in IONM are not required to hold the DABNM).
○Experience
Evidence of continuous experience in IONM, including case logs that document a minimum of 300 cases monitored with primary responsibility for the clinical tasks in which the provider will participate.
○Supervision (Table )
The nonphysician neurophysiologist functions under the supervision of a licensed physician who is immediately available, either in person or online, if needed, e.g., for rendering of medical decisions and recommendations during surgery. This physician must be a clinical neurophysiologist trained, qualified, and experienced in IONM as referenced in Section 18.
○Ongoing education/maintenance of competency
Maintenance of all credentials required for medical staff privileges in IONM. A minimum of 100 cases per year averaged over 3 years. Forty-five CEUs in IONM per year averaged over 5 years.
14. Senior Nonphysician (PhD, AuD, FMG) Neurophysiologist Performing IONM
○Job responsibilities
May perform any of the job responsibilities described for the nonphysician neurophysiologist (Section 13) above. Available for teaching less experienced providers. The specific responsibilities assigned to each practitioner should be documented by the employer.
○Education/certification
All requirements are the same as for the nonphysician neurophysiologist performing IONM except: The DABNM credential is required.
○Experience
All requirements are the same as for the nonphysician neurophysiologist except that: At least 7 years of clinical activity in IONM is required.
○Supervision (Table )
The requirements are the same as for the nonphysician neurophysiologist (Section 13).
○Ongoing education/maintenance of competency
The requirements are the same as for the nonphysician neurophysiologist (Section 13).
15. Physicians (MD, DO, or Foreign Equivalent) Who are Neither Neurologists, Physiatrists, nor Clinical Neurophysiologists
○Job responsibilities
Interprets CNP studies under supervision as discussed below.
○Education/certification
Valid state license to practice medicine in the state in which the study is performed. Completion of an ACGME-accredited residency.
If practicing in a hospital setting, must satisfy the hospital's requirements for medical staff privileges in their specialty area. If the hospital has separate criteria for performing and interpreting neurophysiologic tests, the practitioner must meet those requirements for the particular test performed. A minimum of 6 months of full-time supervised training in the area(s) of neurophysiology practiced. If training is not full time, there should be the equivalent of 6 months of supervised training when totaled. The training should be under the supervision of a board-certified clinical neurophysiologist with expertise in the field of training. It is preferable if this training occurred as part of a program accredited by the institutional graduate medical education committee or by the ACGME. EDX physicians should refer to the AANEM position statement, "Who Is Qualified to Practice Electrodiagnostic Medicine?"
Acceptable board certification for the supervising neurophysiologists includes any of the following:
ABPN-CN (American Board of Psychiatry and Neurology, Clinical Neurophysiology)
ABCN (American Board of Clinical Neurophysiology)
ABEM (American Board of Electrodiagnostic Medicine)
ABNM (American Board of Neurophysiologic Monitoring), for IONM only
○Experience
Before practicing independently, this physician should have completed, under supervision, the number of studies outlined below for which privileges are being requested.
EEG—500 studies
Long-term video EEG monitoring—100 studies
EMG/NCS—200 complete EDX evaluations
IONM—100 patients
Diagnostic evoked potentials—50 studies; at least 15 in each modality the practitioner will interpret.
○Supervision (Table )
EEG—a clinical neurophysiologist or neurologist credentialed to interpret EEG studies should be available to review records or help with any questions or complex patients.
Long-term video EEG monitoring—should work with a clinical neurophysiologist or neurologist credentialed to interpret these studies who provides ongoing review of each study.
EMG/NCS—a neurologist, physiatrist, or clinical neurophysiologist credentialed to interpret these studies should be available to review records or help with questions or complex patients.
IONM—should work with a clinical neurophysiologist, neurologist, or physiatrist credentialed to interpret these studies and who provides ongoing review of each study.
Diagnostic evoked potentials—should work with a clinical neurophysiologist or neurologist who provides ongoing review of each study.
○Ongoing education/maintenance of competency
Must maintain certification in primary specialty. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities. A board-certified clinical neurophysiologist should be involved in these activities.
16. Neurologist (Without Board Certification in Any Area of CNP) or Physiatrist Certified by Their Respective Boards
○Job responsibilities
Interprets routine studies of the specified type.
○Education/certification
Valid state license to practice medicine. For IONM, EEG, and EPs, a minimum of 6 months of full-time, supervised training in these areas. If training is not full time, there should be the equivalent of 6 months of supervised training when totaled. The training should be under the supervision of a board-certified clinical neurophysiologist or neurologist with expertise in the field of training.
It is preferable if this training occurred as part of a program accredited by the institutional graduate medical education committee or by the ACGME. Completion of an ACGME-accredited residency in neurology or physical medicine and rehabilitation is applicable for EDX physicians. If practicing in a hospital setting, should satisfy the hospital's requirement for medical staff privileges in neurology or PMR. If the hospital has separate criteria for performing and interpreting neurophysiologic tests, the practitioner should meet those requirements for the particular test performed. Meets hospital requirements to have medical staff privileges as a neurologist or, for IONM and EDX testing, as a PMR physician.
○Experience
Before practicing independently, the physician should have completed under supervision, in an ACGME- or RCPSC-accredited neurology or PMR residency program, the number of studies outlined below for which privileges are requested.
EEG—500 studies
Long-term video EEG monitoring—100 studies
EMG/NCS—200 complete EDX evaluations
IONM—100 patients
Diagnostic evoked potentials—50 studies; at least 15 in each modality the practitioner will interpret.
○Supervision (Table )
Supervised by a clinical neurophysiologist who participates in quality assessment and quality improvement activities, including peer review, and is available for consultation regarding complex or difficult cases.
○Ongoing education/maintenance of competency
Must maintain medical staff privileges in neurology; privileges in physical medicine and rehabilitation are acceptable for EDX physicians. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities. A board-certified clinical neurophysiologist should be involved in these activities.
17. Clinical Neurophysiologist (MD, DO)
○Job responsibilities
Supervises and interprets general CNP studies in the area of their expertise. Available for consultation with other staff on complex or difficult cases. Participates in QA and quality improvement activities. Involved in ongoing training and education of physicians and technologists.
○Education/certification
Valid state license to practice medicine in the state in which the study is performed. Completion of an ACGME-accredited fellowship in CNP, or equivalent training before the establishment of accredited training programs as recognized by board certification as specified below. Board eligibility or certification by ABPN-CN, ABCN, or ABEM.
○Experience
Should have performed or interpreted under supervision at least the number of studies specified in Section 16. At least 3 years in clinical practice of CNP.
○Supervision (Table )
Supervises studies performed by other providers with less experience or training.
○Ongoing education/maintenance of competency
Must maintain medical staff privileges in CNP as applicable. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities.
18. Subspecialty Neurologist or Physiatrist (MD, DO)
○Job responsibilities
Supervises and interprets general and complex CNP studies in the areas of expertise. Involved in planning QA and quality improvement activities in the ND department. Available for consultation with other staff on complex or difficult cases.
Involved in ongoing training and education of physicians and technologists.
○Education/certification
Valid state license to practice medicine in the state in which the study is performed. Completion of an ACGME-accredited fellowship in CNP, or equivalent training before the establishment of accredited training programs as recognized by board certification as specified below. Board certification by ABPN-CN, ABCN, or ABEM. Completion of an ACGME-accredited residency in physical medicine and rehabilitation or neurology. A minimum of 6 months of full-time supervised training in the area of neurophysiology in which they will practice. If training is not full time, there should be the equivalent of 6 months of supervised training when totaled. The training should be under the supervision of a board-certified clinical neurophysiologist with expertise in the field of training.
○Experience
Along with the additional years of experience, the subspecialist should have performed or interpreted at least twice the number of studies specified for the neurologist or physiatrist (Section 16). Should have at least 5 years of clinical practice in neurophysiology.
○Supervision (Table )
Supervises studies performed by other providers with less experience or training. Available for teaching and supervision of less experienced practitioners.
○Ongoing education/maintenance of competency
Must maintain medical staff privileges/subspecialty privileges in CNP as applicable. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities.
19. Neurodiagnostic Medical Director (MD, DO)
○Job responsibilities
Development and implementation of policies and procedures for the ND laboratory. Supervision and assessment of competency of ND laboratory staff at all levels. Ensures that there are ongoing teaching and educational activities within the department. Supervises quality improvement activities. Works with the technical director/manager in planning for the laboratory, staff, equipment, and budget.
○Education/certification
Valid medical license to practice in the state where supervising studies. Case experience equal to or greater than that required for the subspecialty neurologist or physiatrist (Section 18). Board certified by ABPN or ABPMR. Board certified in at least one area of CNP (ABPN-CN, ABCN, or ABEM). For an AANEM medical director for EDX laboratories or EDX laboratory accreditation, the qualifications of a medical laboratory director shall meet AANEM medical lab director qualifications and AANEM CME requirements:
1. Completed ACGME or RCPSC neurology or PMR residency.
2. Completed primary board certification in ACGME or RCPSC neurology or PMR.
3. Completed 3 months of training in EDX medicine during neurology or PMR ACGME or RCPSC residency or fellowship.
○Experience
At least 5 years of professional practice in neurophysiology.
○Supervision (Table )
Department Chair/Vice Chair, Chief Medical Officer, or Section Chief, as governed by the department or medical facility.
○Ongoing education/maintenance of competency
Must maintain medical staff privileges in neurology or PMR, and in CNP. Should have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Should be involved in managing ongoing QA and quality improvement activities.
The field of clinical neurophysiology is large, diverse, and in constant evolution, and this document is not a review of the clinical indications or use of neurodiagnostic procedures.
For more information, additional resources are cited in the references.
Advanced Paediatric Life Support: A 25‐year journey in Australia

APLS courses in Australia
The ALSG APLS course was brought to Australia by a multi‐disciplinary team of clinicians. Simon Young, an emergency physician, had completed the APLS course in the UK. He was supported by founding ALSG APLS course developers, including Kevin Mackway‐Jones (lead author of the first edition of the APLS manual) and Sue Wieteska (Chief Executive Officer 1994–2022). Simon recruited faculty from other Australian clinicians who had also attended the UK programme. The first Australian candidates were invited from the range of clinicians who were managing paediatric care. Many of these candidates then completed the ALSG's GIC and became the first Australian faculty. Seed funding was provided by the Rural Doctors Association of Australia, which influenced the number of rural generalists supporting the organisation's focus on rural and remote courses. These inaugural instructors worked tirelessly and passionately to deliver the APLS programme; they also mentored the development of more instructors to sustain this teaching nationally and in New Zealand across a range of clinical settings. They were '… a core group of pioneers…' instrumental in setting up an inter‐specialist programme, across traditional networks, independent of any one College or governing professional body. This group focused on educational outcomes, fostering values that… (invited)… cooperation for knowledge sharing. In reflecting on the APLS journey, these early instructors were enacting key principles of quality culture.

How have APLS courses in Australia evolved?
The initial Australian and New Zealand APLS courses were replicas of the UK course, but over time they have been revised for the Australian and New Zealand clinical community. In 2015, 8–10 h of pre‐course, interactive 'APLS Online Learning' was introduced. Online modules support self‐paced learning of key concepts and practical skills. This resource has enabled the refining of 17 face‐to‐face lectures into three plenaries with small group activities. The current plenary sessions target knowledge recall in preparation for interactive skill stations, discussions, workshops and scenario‐based teaching. These revisions were also applied to the 1‐day Paediatric Life Support programme. In 2020, a 2‐day APLS Refresher programme was introduced. This programme further promotes shared learning between experienced clinicians. While continuing to emphasise a structured and safe approach to acute care, the programme acknowledges the limitations of protocol‐driven management and supports the concept that there is often not a single 'right' approach. There has also been a focus on faculty development. While the GIC has evolved, its core purpose remains unchanged: to teach a structured, learner‐centred approach to teaching and provide a cultural orientation to the APLS CoP. In 2017, a team of experienced instructors, with support from Kate Denning (ALSG Lead Educator), ran the first Educational Skills Development Course (ESDC) for existing APLS instructors. This programme draws on the learner‐centred effectiveness of peer‐to‐peer learning, with faculty running simulated teaching sessions for each other and then facilitating authentic learning conversations with their peers. Additional instructor development is provided on some APLS courses with a Course Coach.
The coaching role focuses on supporting the ongoing development of teaching skills and behaviours. This continued refinement of the ESDC and Course Coaching role is driven by instructors, in recognition of their need for support in transitioning from 'instructors' to 'facilitators' of learning.

How is the curriculum and teaching kept relevant and up to date?
The APLS manual is reviewed and updated approximately every 5 years; a standalone Australian/New Zealand edition reflects resuscitation guidelines from the Australian and New Zealand Committee on Resuscitation (ANZCOR). A number of APLS instructors are contributors to ANZCOR, the International Liaison Committee on Resuscitation (ILCOR), the Australian Resuscitation Council (ARC) and the Paediatric Improvement Collaborative Clinical Practice Guidelines (CPGs). The contribution from these groups helps establish alignment of APLS content with external recommendations. Challenges of maintaining relevance and consistency are managed by the governance of experienced faculty members who form the APLS committees. These committees have representation from a variety of disciplines, specialist areas and clinical settings. APLS Australia's management algorithms are based on current available evidence and consensus. Curriculum revisions occur in response to feedback from both faculty and candidates, who are active clinicians in a range of acute care settings. Examples of revisions to the curriculum include changes to the volume of fluid resuscitation used in the initial management of shock, and a team‐ and cognitive aid‐based approach to safe airway management. The teaching of major trauma now includes an algorithmic approach to massive haemorrhage. Other revisions of content in the most recent edition of the course manual include the theory of US‐guided vascular access, and the theory of multi‐victim trauma and blast injuries. In 2015, scenario realism was improved by the inclusion of iSimulate haemodynamic monitoring simulation units. In conjunction with this change, the number of serious illness scenarios was increased and the post‐scenario debriefing time was extended. These changes were aimed at increasing learner engagement with the scenario teaching method and enabling reflection and learning on key curriculum topics. The most recent change to the face‐to‐face course has been the replacement of summative assessment of BLS, airway skills and defibrillation with repeated skills practice. There is an emphasis on skill mastery by demonstration of safe practice, and a coaching approach is used for formative assessment.

How have educational principles informed course development?
The development of the course was pragmatic; however, approaches to course modifications have been aligned with the development of educational theory and practice. Early courses recognised the impact of educational theories, including Maslow's hierarchy: where meeting physical and psychological needs supports new learning, and Lewin's theory of change: that change requires an openness to 'unfreezing' and that this can be confronting. Other pertinent theories include Bandura's theory of social learning: recognising that learning occurs through observation, imitation and role modelling, and Kolb's experiential learning cycle, with intentional practice being followed by thoughtful reflection and opportunity to practice again.
In 2010, the faculty identified the need to move from the Pendleton‐structured post scenario debrief to the use of advocacy with inquiry in learning conversations. These learning conversations aim to understand learner motives and perspective in learner‐centred small group discussions. The more recent development and support for a course coach has been to offer faculty development in the context of supporting groups with varied learning needs. Course and faculty development are now also influenced by the literature and thought leaders supporting best practice in medical simulation, reflective practice, skills teaching and assessment for learning.
Who are the APLS instructors and what do they bring to the course?
There are over 860 instructors; clinicians from mixed and paediatric emergency, paediatrics, anaesthetics, retrieval medicine, paediatric intensive care, neonatal intensive care, surgery, rural general practice and paramedicine. Eighty percent of faculty were active in the 3 years to October 2022; 25% of instructors have been active for between 10 and 20 years, with 8% having volunteered for over 20 years! Rewards for the instructors include development in knowledge, clinical and teaching skills, enjoyment in teaching and connection with like‐minded clinicians. Experienced instructors are involved in APLS committees and contribute to decision making regarding course revision and delivery. The intrinsic motivators of this CoP underpin the processes that support the ongoing development of the organisation and its programmes.

How are quality and consistency maintained across APLS courses?
The principles of developing a quality culture include having managerial and structural supports. At APLS, key structural supports have been established for recognition and input from both learners and APLS instructors in the continuous improvement cycle. Feedback from these key stakeholders draws on their knowledge and skills from varied clinical contexts. Course directors formally gather instructor views on curriculum and teaching at each course. Learner feedback and collated reports from course directors are reviewed by governing committees which meet regularly. These committees formulate revisions, which are piloted on courses before changes are implemented across all programmes. Evaluation and oversight of the feasibility and sustainability of APLS courses is supported by administration and financial management teams. The organisation prioritises investment in a team of skilled course coordinators who work closely with faculty, supporting consistency between courses and the efficient running of course programmes. Additionally, a medical and educational consultant team supports the committees' development of faculty and curriculum. These organisational structural supports have evolved with the demand for courses and the expansion of the volunteer instructor body.

What sustains the APLS community?
From Stalmeijer et al., we identify that it is authentic respect for and between APLS faculty that is critical for sustaining a quality culture. Variations in clinical care that instructors may experience in their workplaces are melded into a 'safe‐way' of clinical management for teaching on APLS courses.
Debate is valued and helps drive a culture of accountability, shared beliefs and ownership of course content and delivery. APLS course directors and the chairs of APLS subcommittees leverage leadership qualities acquired from their clinical roles to stimulate critical reflection and dialogue. These influences foster peer‐driven cooperation, shared goals, decision making and two‐way communication across networks. APLS on paediatric emergency care in Australia? There are many influences on the provision of safe paediatric emergency care. The impact of APLS courses is evaluated by inference, observations and support from clinicians who are actively working and undertaking research in emergency medicine. Key examples include that the online APLS algorithms ( www.apls.org.au/algorithms ) are accessed more than 26 000 times per year. The 3‐day APLS course is recognised by professional bodies as a mandatory component for trainees in both the Australasian College of Emergency Medicine and the Paediatrics and Child Health Division of the Royal Australasian College of Physicians. Additionally the APLS Course Development Committee has an active relationship with the Paediatric Research in Emergency Departments International Collaborative (PREDICT) group. Current research thus informs updates in APLS teaching as a form of translation of research into clinical practice. In 2022, 2262 clinicians participated in APLS programmes across Australia. These nursing, medical and paramedic clinicians cover a range of career stages. Experienced nurses come from paediatric or mixed EDs, or from paediatric inpatient wards. Medical clinicians come from paediatric or adult intensive care, anaesthetics or rural general practice settings. There are waitlists to attend APLS programmes and there is also a waitlist for clinicians wishing to embark on the pathway to become an APLS Instructor. Finally, like‐minded clinicians come together at bi‐annual Instructor Days and Paediatric Acute Care conferences. APLS Australia? Even though the core values of the organisation have not been formally evaluated they provide a lens to consider future challenges, as does an examination of the challenges in health education. While there is interdisciplinary diversity among faculty, most instructors and candidates are from a medical background. Thus, there is scope to develop strategies to support nursing clinicians and move the organisation towards greater interprofessional diversity. Innovation is another challenge yet also a strength for APLS courses. Health education must remain contemporary. Clinical and educational practices are continually evolving. There is a need for both to be integrated in a timely yet thoughtful and holistic fashion. In particular the role of artificial intelligence, both in course design and delivery and the impact it will have on learners' active involvement in their own learning needs, is yet to be evaluated. There are a number of educational challenges in the teaching of life support and resuscitation. Specific challenges for APLS courses include what constitutes a minimum standard in skills assessment, how to optimise the relatively brief scenarios and learning conversations for learning and how best to use cognitive aids (which are often part of clinical practice) in scenario teaching. For translation into the clinical context there is the challenge of how to optimise interprofessional and interdisciplinary education for learners who are at different career stages and work in different clinical settings. 
APLS values interact with quality culture through recognising these challenges and seeking sustainable solutions, with respect for the clinical disciplines and contexts where paediatric resuscitation occurs. The APLS organisation has grown from running a single course type to now include not just the flagship 3‐day APLS course, but also the refresher, GIC, PLS and ESDC courses. What has enabled sustainability of APLS in Australia? We propose that the key factor and driving force has been, and continues to be, an ethos of enhancing a quality culture underpinned by a community of practice. This culture has existed from the earliest days, even though it was neither named nor defined. The early pioneers and ongoing leaders established the structures and processes needed to sustain the community that forms APLS in the present day. There has been a focus on continued enhancement at both systems and learner levels. This quality culture is strengthened by APLS values, which are enhanced by adaptability with a sense of ownership and commitment. The goal remains the provision of quality education to empower clinicians to apply their APLS‐acquired knowledge and skills in their own clinical context. Future directions for APLS Australia will be guided by both core values and educational theories supporting this framework of quality culture. Competing interests SSST is an APLS Director and a current member of the Paediatric Life Support Committee. JS is an APLS Educator. This is an unsolicited paper, which represents the views of the authors and not those of the APLS Board.
How early should be “Early Integrated Palliative Care”? | bf8344f4-a2b2-4e3f-ac3b-b79c2705c020 | 10728221 | Internal Medicine[mh] | Palliative care practitioners have suggested a change of name from “palliative care” to “supportive care” to overcome the stigma which some think is associated with the former. Attempts to define supportive care and palliative care as being synonymous spurred debate and blurred the distinctions between the two, with the risk that palliative care would lose its identity. Jean Klastersky traced the history of supportive care from initial chemotherapy for acute myeloid leukemia, where supportive care predominantly involved blood products and management of febrile neutropenia, through to the advent of cisplatin for solid tumors in which control of nausea and vomiting became a priority , and ultimately the control of ocular, dermatological or endocrine toxicities. If a name creates "discomfort" for patients and physicians, how then shall we call cancer, pain, end of life, death, and dying?
Some palliative care practitioners have suggested starting palliative care earlier in the cancer trajectory, which raises the question of what precisely is “early” about such care . The rationale is that an earlier start would improve quality of life for patients and their families. The rebranding of palliative care could also be associated with earlier referral, possibly because it would make the term more acceptable by no longer linking it directly to end of life and dying. Early palliative care entails empathetic communication with patients about their prognosis, symptom assessment and management, and advance care planning. Although some randomized controlled trials (RCTs) involving advanced cancer patients have reported higher quality-of-life scores and suggested a positive effect on survival for patients referred early to palliative care versus standard care, a Cochrane meta-analysis confirmed these results only with a low or very low level of evidence . In their recent meta-analysis and systematic review comparing the effects of early palliative care versus standard cancer care or on-demand palliative care in patients with incurable cancer, Huo et al. reported that only 16 of 1376 studies could be included. The pooled data suggested better quality of life, fewer symptoms, better mood, longer survival, and a higher probability of dying at home for early palliative care patients than for the control group. The evidence level was low, however, because of the high heterogeneity of quality-of-life measures and the small number of studies for the other outcomes . But what does “early palliative care” mean in practice? Is it reasonable to ask for a palliative care consult at the diagnosis of cancer? Should “early” care be started at a particular “early” stage of disease, or because patients’ needs have been assessed “earlier” during the disease trajectory?
The embedded model foresees interdisciplinary collaboration between oncologists and palliative care practitioners working as a team . This would allow space for sharing clinical information about individual patients and for integrating specialist care. For example, in their retrospective pre-/post-intervention study involving patients with thoracic malignancies, Agne et al. reported that after implementation of an embedded palliative care clinic, the number of referrals for palliative care rose, whereas the median waiting time between referral request and first visit, and the time between the first oncologic visit and completion of referral, decreased. Such integration may foster collaboration between oncologists and palliative care practitioners, with the added benefit for patients of a shorter waiting time to the first encounter with the palliative care team . However, how viable is an embedded model that connects two disciplines that differ in objectives and training? Are there enough palliative care practitioners to join with other health care professionals in providing early, integrated services for patients with cancer? Furthermore, how do we want to make palliative care accessible to all who need it? How do we want to reduce health inequality and mitigate unnecessary suffering? We believe that embedding alone is not the solution.
Our proposal is to focus on basic education in palliative care for all students during their medical or nursing education, and on continuing professional education for those involved in the routine care of patients with life-limiting disease or progressive chronic conditions . The first step is to disseminate screening for palliative care needs, followed by regular monitoring of a patient’s physical symptoms, emotional, social, and spiritual needs, and financial distress. This can be done using simple, validated tools self-reported by the patient. Patient-reported outcomes can then help to refine and adjust interventions to the patterns of suffering, including changes in therapeutic prescriptions or the provision of spiritual, social, emotional, or financial support. Second, empathic communication is as important as pharmacological and non-pharmacological interventions implemented according to evidence-based guidelines for the tumor and its related symptoms. For this reason, health care providers, whatever their field of interest, should learn early to communicate with their patients. While constraints on time and resources are often cited as barriers to engaging in an empathetic approach, communication and assessment of suffering are an integral part of care, if not the care itself, to which health-care professionals are deontologically committed. Set within a broader medical education program, the basics of early palliative care can be learned and then extended to all patients, or to those with chronic or incurable disease, starting from the initial encounter if necessary. When needed, referral for consultation with a palliative care specialist and team may identify and address a patient’s physical and non-physical needs. Teaching early recognition of palliative care needs through validated screening tools, together with empathetic communication with patients and families, may help to alleviate emotional and physical burdens, provided that all needs are recognized in a timely manner and that patients with refractory or severe suffering are properly referred to specialists. In conclusion, we believe palliative care can be integrated with oncology or other disciplines by centering medicine around the needs of the person. This can be done through appropriate assessment and communication. How to screen for and assess suffering early in the course of disease, and the importance of empathetic communication, should be taught during medical training, so as to spread these concepts to the broadest possible audience. This may also help trainees and future doctors and nurses to better understand when it is the right time to call for a specialist referral, while themselves providing some primary palliative care. We think that this educational proposal may work better than other strategies to implement early palliative care. Future research is necessary to evaluate its efficacy. This would require a commitment from all doctors to approach the field of palliative care and an empathetic approach to the patient, which will add value to their specialist clinical skills.
|
Effect of Er: YAG laser and different surface treatment methods on the push-out bond strength of glass fiber post to self adhesive resin cement | aa832b5e-70e3-4d0f-9b8c-2233ad545466 | 11805825 | Dentistry[mh] | Restoring teeth that have received endodontic treatment is difficult due to the loss of structural integrity caused by caries and fractures . To achieve sufficient retention of the core foundation, it may be necessary to place a post into the root canal to facilitate the successful restoration of these teeth . Fiber posts were developed in response to the problems associated with metal posts . The fiber post is composed of unidirectional fibers that are incorporated into a resin matrix . Glass fiber posts contain e-glass fibers (electric glass) that consist of SiO2, CaO, B2O3, Al2O3, and a few other oxides of alkali metals in the amorphous phase . The use of these fiber posts facilitates the creation of a mechanically homogeneous monoblock, thereby diminishing the risk of fracture, as the modulus of elasticity of fiber posts is comparable to that of dentin . The retention of a fiber post in the root canal depends on the bond strength between the different parts of the post–cement–dentin assembly . The main cause of failure in fiber posts is the interface between the fiber post and the resin cement . The organic component of fiber posts comprises epoxy resin, characterized by a high degree of conversion and strong cross-linking . As a result, the resin cement cannot completely infiltrate the surface of prefabricated fiber posts, thereby obstructing the interdiffusion process between the resin cement and the resin matrix. Consequently, this polymer matrix cannot interact with the monomers of the resin cement . An important factor in the long-term success of restorations and teeth that have undergone endodontic treatment is the quality of the bonding, as good bonding improves the distribution of stresses induced by occlusal forces . Thus, in order to optimize the adhesion of resin to fiber posts, many surface pre-treatment techniques, such as mechanical or chemical treatments of the post surface, have been studied . Surface conditioning with a silane-based resin, like 3-methacryloxypropyltrimethoxysilane, is the standard procedure . Chemical bonding occurs between both the inorganic fibers and the organic components of resin-matrix cements . Before this, the surface roughness can be enhanced through etching with reactive agents such as hydrofluoric acid, hydrogen peroxide, hydrochloric acid, and potassium permanganate . Another option is to use abrasive alumina or silica particles to sandblast the surface, which increases the roughness and accordingly improves the wettability and mechanical interlocking of the resin-matrix cement . Surface treatment using novel laser-assisted methods has been investigated with respect to the laser type (e.g., Er:YAG, Nd:YAG, Er,Cr:YSGG laser, diode laser) and irradiation parameters (radiation levels, exposure duration, operational mode) . Depending on the precise laser parameters, laser texturing can modify surfaces by thermal mechanical ablation. The morphological characteristics and adherence of GFP surfaces to resin-matrix cement can be improved by combining laser texturing with conventional physicochemical techniques . Nonetheless, the actual effects of laser treatment and its precise parameters remain a subject of debate and require further investigation.
This research was conducted to evaluate the effect of Er:YAG laser irradiation and various conventional surface treatment techniques on the push-out bond strength of GFP luted with self-adhesive resin cement at various root levels. The null hypothesis of the study was that there would be no differences in the push-out bond strength between GFP and resin cement after Er:YAG laser irradiation surface treatment, as compared to other conventional surface treatment methods (Fig. ). Sample size calculation A Cohen's d effect size of 2 was determined using the values for push-out bond strength in the cervical third for the Er:YAG and control groups, which were derived from the results of Gomes et al. The minimum number of samples required to identify a significant difference in push-out bond strength between any two study groups was established at 24 samples (6 samples per group), assuming a type I error of 0.05 and a study power of 0.8. G*Power, version 3.1.9.7, was used to compute the sample size. Teeth selection Twenty-four recently extracted human permanent mandibular premolars, exhibiting predominantly uniform root lengths and normal morphology, were selected from several government hospitals and health centers with periodontal and maxillofacial surgery departments. Mandibular premolars were selected based on periapical radiography and visual inspection. The criteria for sample selection included single straight-rooted mandibular premolars devoid of root caries or restorations, no previous endodontic treatment, and completely developed roots with mature apices. Sample preparation Disinfection was achieved by immersing the specimens in a 0.5% chloramine-T solution for 48 h. To achieve a root length of 15 mm, the teeth were decoronated using a diamond saw parallel to the cement-enamel junction under water spray. This procedure resulted in a flat surface oriented perpendicular to the root's longitudinal axis. Root canal treatment The root canal treatment was carried out following the recommendations specified in a previous study. The pulp tissues were removed using a barbed broach (Medin Barbed Broach, Vlachovice, Czech Republic). To prepare the root canals, the ProTaper system (Dentsply-Maillefer, Tulsa, OK, USA) and an E-Connect S electric endo motor (Eighteen Medical; Tds, China) were used at 300 rpm and 1 N cm torque, according to the manufacturer's instructions. A crown-down technique was used, which included the application of a 3% sodium hypochlorite (NaOCl) irrigant (Egyptian Company for Household Detergents, Clorox, Egypt) followed by 3 ml of 17% ethylenediaminetetraacetic acid (EDTA) solution (AmritChem and Min. Ag, Mohali, India); the canals were enlarged to the size of an F4 file, maintaining a working length 1 mm shorter than the apex. Finally, 5 ml of distilled water (Fipco, Egypt) was used, followed by drying of the canals with size #35/0.04 paper points. After that, all canals were filled using the single-cone obturation technique. The master cone 35/0.04 (Dentsply Maillefer, Ballaigues, Switzerland) was checked for tug-back action in all samples. Finally, the specimens were attached to acrylic resin blocks by means of a parallelogram, which helped in mounting the samples in the universal testing machine during the push-out test.
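As a rough cross-check of the a priori sample size calculation above (not part of the original paper or the authors' G*Power session), the same numbers can be approximated with any standard power routine; the sketch below uses Python's statsmodels, with the effect size, alpha and power taken from the text and all other settings assumed.

```python
# Minimal sketch: a priori sample size for detecting Cohen's d = 2 between any
# two groups with alpha = 0.05 and power = 0.80. One- vs two-sided testing and
# the t-approximation are assumptions and may shift the result slightly
# relative to the G*Power value reported in the text.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=2.0, alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
n_per_group = math.ceil(n_per_group)            # round up to whole specimens
print(f"specimens per group: {n_per_group}")    # ~6 with these settings
print(f"total for four groups: {4 * n_per_group}")
```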
Intra-radicular post-space preparation Post space preparation was carried out following the manufacturer's guidelines. The gutta percha was removed using Gates-Glidden drills (Nordin, stainless steel, Switzerland). With a contra-angle handpiece (NSK GmbH, Eschborn, Germany) at 5000 rpm, the post space was created with Glassix glass fiber special drills (Harald Nordin SA, Chailly/Montreux, Switzerland) to a depth of 10 mm. The post space was then irrigated with a 3% NaOCl and EDTA solution to ensure complete removal of gutta percha and debris. Samples grouping All samples were numbered in ascending order from 1 and were then divided using the website ( www.random.org ) into 4 equal groups according to the post-surface treatment, as follows: Group 1 (n = 6), control group (silane only): a homogeneous coat of silane coupling agent (Porcelain Primer, Bisco Inc., Schaumburg, IL, USA) was spread over the post surface with a brush and air-dried after 60 s. Group 2 (n = 6), 30% hydrogen peroxide for 5 min + silane: the posts were immersed in a glass tube filled with 30% hydrogen peroxide for 5 min at room temperature. After etching with H2O2 (El Nasr Pharmaceutical Chemicals Co., Egypt), the posts were rinsed with distilled water for 2 min and air-dried for 10 s. Finally, a silane coupling agent was applied for 60 s. Group 3 (n = 6), sandblasting + silane: sandblasting equipment (Cobra, Renfert GmbH, Hilzingen, Germany) was used to sandblast the posts with 50 µm aluminum oxide particles (Korox 50, Bego, Bremen, Germany). The sandblasting was applied perpendicular to the post surface at a distance of 10 mm for 20 s at a pressure of 2 bars. Finally, a silane coupling agent was applied. Group 4 (n = 6), Er:YAG laser + silane: the Er:YAG laser machine (Doctor Smile Erbium and Diode laser, Lambda Scientifica S.r.l., Vicenza, Italy), a pulsed laser system emitting at a wavelength of 2940 nm, was set to 150 mJ, 10 Hz, and 1.5 W. Laser irradiation was performed for 60 s with a 100-µs pulse duration. The optical tip, which had a diameter of 400 μm, was used at an incidence angle of 45° under water cooling (50% water – 50% air), 1 mm from the post surface, moving from bottom to top. Finally, a silane coupling agent was applied. Scanning electron microscopy To assess the impact of surface treatment on the post surface, a single specimen was randomly chosen from each group after surface treatment and before post cementation. The four samples were mounted on copper stubs with double-sided adhesive tape and were observed under SEM at 500x magnification, as shown in Fig. . Fiber post cementation GFP with diameters of 1.5 mm (Harald Nordin SA, Chailly/Montreux, Switzerland) were cemented using self-adhesive resin cement (SDI Limited, Victoria, Australia). Following a 24-hour storage period in distilled water at 37 °C, all specimens underwent thermocycling (2000 cycles between 5 °C and 55 °C, with a dwell time of 25 s in each water bath and a lag time of 10 s). Push-out test procedure Each specimen was cut transversely, perpendicular to the root's long axis, to obtain 1.5 ± 0.3 mm thick slices from the coronal, middle, and apical thirds. Every segment was recorded and photographed from the apical and coronal surfaces using a stereomicroscope (SZ-PT; Olympus, Tokyo, Japan) at an initial magnification of 65x, as shown in Fig. a. In this study, the image analysis tool (ImageJ; NIH, Bethesda, MD) was used to build a ruler of given length, and the Set Scale function was then used to compare the two rulers.
After measuring the post's diameter, the radius could be estimated. The next step was to subject the slices to compressive loads at a 1 mm/min crosshead speed on a universal testing machine (Model 3345; Instron Industrial Products, Norwood, MA, USA), as shown in Fig. b. Analysis of the data was conducted in several stages. First, descriptive statistics for each group were obtained. A two-way analysis of variance (ANOVA) was conducted to identify the impact of each variable (surface treatment group and region). One-way analysis of variance (ANOVA) was conducted, followed by pair-wise Tukey's post-hoc testing, to identify significant differences between the main groups and between the radicular regions. Data visualizations were generated using Microsoft Excel. The statistical analysis was conducted using Assistat 7.6 statistics software for Windows (Campina Grande, Paraiba State, Brazil).
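The push-out bond strength itself is conventionally obtained by dividing the maximum debonding force by the bonded interface area; the sketch below combines the commonly used cylindrical approximation (area = π × post diameter × slice thickness) with the two-way ANOVA and Tukey comparisons described above. The area formula, the column names and the example values are illustrative assumptions, not the authors' actual workflow.

```python
# Illustrative sketch (not the authors' statistical workflow): convert push-out
# failure loads to bond strength and run the two-way ANOVA with Tukey
# post-hoc comparisons described in the text.
import math
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def push_out_strength(force_n, post_diameter_mm, slice_thickness_mm):
    """Bond strength in MPa, assuming a cylindrical bonded interface."""
    bonded_area_mm2 = math.pi * post_diameter_mm * slice_thickness_mm
    return force_n / bonded_area_mm2  # N/mm^2 == MPa

# Hypothetical long-format data: one row per root slice.
df = pd.DataFrame({
    "treatment": ["control", "laser", "H2O2", "sandblast"] * 3,
    "region":    ["coronal"] * 4 + ["middle"] * 4 + ["apical"] * 4,
    "force_N":   [28, 40, 31, 24, 33, 42, 30, 26, 32, 39, 29, 25],
})
df["strength_MPa"] = push_out_strength(df["force_N"], 1.5, 1.5)

# Two-way ANOVA: surface treatment and radicular region as factors.
model = ols("strength_MPa ~ C(treatment) + C(region)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pair-wise Tukey comparison between treatment groups.
print(pairwise_tukeyhsd(df["strength_MPa"], df["treatment"]))
```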
The findings of the push-out bond strength tests for the various surface treatments are summarized in Table and Fig. . Regardless of radicular region, the laser-treated group recorded the highest mean ± SD push-out bond strength (5.668042 ± 1.16 MPa), followed by the H2O2-treated group (4.400203 ± 0.87 MPa) and then the untreated control group (4.379534 ± 1.64 MPa), while the lowest mean ± SD value was recorded for the sandblast-treated group (3.466738 ± 0.98 MPa). The difference between groups was statistically significant, as indicated by two-way ANOVA (F = 7.6, P = 0.0013 < 0.05). Pair-wise Tukey's post-hoc testing showed no significant differences (p > 0.05) between control and laser, control and H2O2, control and sandblast, or sandblast and H2O2. Regardless of surface treatment group, the middle region had the greatest push-out bond strength, with a mean ± SD of 4.746851 ± 0.73 MPa, followed by the apical region (4.720187 ± 0.49 MPa), while the lowest mean ± SD value was recorded in the coronal region (3.968848 ± 1.25 MPa). As shown in Table and Fig. , the two-way ANOVA (F = 2.6, P = 0.0879 > 0.05) indicated no statistically significant difference between the radicular region subgroups. Despite several comparative studies demonstrating the benefits of different surface pretreatment approaches for fiber posts, the literature lacks consensus regarding the most effective surface pretreatment approach for achieving optimal bonding. The results of our study showed that the groups treated with laser and H2O2 and the control group had the strongest push-out bond strength, in that order, whereas the group treated with sandblasting exhibited the weakest push-out bond strength. The irrigation protocol was used to enhance the bond strength of GFP to the dentin walls since, according to a previous study, EDTA combined with laser-activated irrigation significantly reduces debris and smear layers on the root dentin wall. Silane coupling agents have been suggested by several studies as a way to improve the fiber post's adherence to resin cement. Enhancing the formation of covalent bonds between the silane coupling agent, the resin cement, and the exposed glass fibers of the fiber post improves the surface wettability.
Research indicates that silanization enhances the retention of glass fiber posts only when the post undergoes suitable surface pretreatment prior to silane application. The integrity of the bond between silane and the GFP is diminished when the glass fibers are encased in a strongly cross-linked, non-reactive epoxy resin matrix. The current investigation employed 30% H2O2 for 5 min using the immersion technique, in accordance with recommendations from a prior study. The hypothesis was that etching the fiber post with H2O2 prior to applying silane would improve the adhesion between the resin cement and the glass fiber posts, leading to a higher push-out bond strength compared to the control group. This phenomenon occurs because H2O2 selectively dissolves the epoxy matrix through substrate oxidation while leaving the glass fibers intact and exposed for silanization. SEM images of the H2O2 group (Fig. b) showed that dissolution of epoxy resin from the GFP surface reveals the fibers and creates additional gaps for micro-mechanical retention of resin cements. Recently, lasers have been employed for surface cleaning of materials, optimization of wettability, and enhancement of the adhesion and stability of adhesive surfaces. Various laser types, such as erbium, chromium-doped yttrium scandium gallium garnet (Er,Cr:YSGG), erbium-doped yttrium aluminum garnet (Er:YAG), and neodymium-doped yttrium aluminum garnet (Nd:YAG), are employed in dentistry with improved results. In the irradiated area, water molecules and OH− groups absorb the Er:YAG laser energy, resulting in a rapid temperature increase. This heating stimulates evaporation of water molecules, increasing pressure within the tissue and triggering micro-explosions. This process is termed ablation and results in morphological alterations of the hard tissue. For optimal outcomes with the Er:YAG laser and to enhance ablation, the presence of water molecules in the treatment area is crucial. The power level of 1.5 W used here for the Er:YAG laser was suggested by a previous study, which showed that all the surface glass fibers of the posts were safely exposed without any damage at this power; according to another previous study, laser irradiation at high power density causes changes in the structural characteristics of GFP and decreases the flexural strength and flexural modulus values. The application of Er:YAG laser post-surface treatment significantly improved the push-out bond strength relative to the other groups. This behavior can be attributed to the action of the Er:YAG laser, which ablates the resin matrix on the GFP surface, thus revealing the glass fibers. As a result, this technique creates a rough and irregular surface featuring microretentive areas on the GFP surface. The SEM results support this hypothesis (Fig. d). The ablation selectively eliminates a thin layer of the epoxy resin matrix while preserving the glass fibers intact. The surface displayed no residual debris inside the fibers, enhancing the micromechanical interlocking of the resin cement to the post surface. This is in accordance with studies by Dikec et al., Abohajar et al., and Bitter et al., who concluded that fiber post-surface treatment with the Er:YAG laser increases its bond strength to resin cement and reduces adhesive failures, cement–dentin gap formation, and nanoleakage. Mekky et al., however,
achieved a superior outcome with the Er,Cr:YSGG laser. This contradicts Kurt et al., who stated that the Er:YAG group exhibits lower bond strength than the sandblasted group, and Križnar et al., who found that the Er:YAG group exhibits reduced bond strength relative to the untreated group, while Akin et al. found that sandblasting and Er:YAG laser irradiation of the quartz fiber post surface before cementation are recommended for increasing retention. The differences in outcomes may be ascribed to the type of fiber post (quartz fiber post versus glass fiber post) and the specific setup parameters employed. Sandblasting of the GFP surface increases its surface area, facilitating interaction between the glass fibers and the silane coupling agent. The sandblasting process causes morphological and dimensional alterations that depend on pressure, duration, and particle size. This study involved the application of 50-micron Al2O3 powder at a pressure of 2 bar for 20 s with a nozzle distance of 10 mm. This particular particle size was chosen for its capacity to induce surface alterations in the post without causing deformation. Although sandblasting effectively roughens the surface of fiber posts to improve adhesion, it may also damage the glass fibers and resin matrix, as seen in Fig. c, by disrupting the interface between the fibers. This may explain the low push-out bond strength results, as the prolonged blasting procedure might have contributed by removing an excessive amount of the resinous matrix instead of just the outermost layer. As a result, it produced minor, irregular surface roughness, which could hinder optimal wetting by the silane coupling agent and lead to the formation of voids between the resin matrix of the post and the silane interface. This result is consistent with research carried out by Soares et al. and Subramani et al., who found that sandblasting pretreatment before silane application results in reduced bond strength compared to silane application alone. In contrast, previous research conducted by Sahafi et al. and Tuncdemir et al. found no significant difference between the sandblasted group and the untreated control group. This finding contradicts the results of Kelsey et al. and Albashaireh, who reported that sandblasting produced greater bond strength than other surface roughening techniques. Fiber posts are commonly cemented in root canals using resin-based adhesive luting cements. The bonding efficiency can be compromised by the time-consuming and technique-sensitive multistep bonding process. Therefore, this research used self-adhesive resin cement, as it demineralizes and infiltrates the tooth substrate, resulting in micromechanical retention. This study selected the push-out test due to its ease of execution, reduced incidence of cohesive failure, and lower standard deviation. Push-out testing has been shown by finite element analysis to give a more even distribution of stresses; it has also been reported to exert shear stress parallel to the GFP–resin cement interface, analogous to clinical conditions. Our investigation revealed that, irrespective of surface treatment group, the middle region exhibited the maximum push-out bond strength, followed by the apical region, whereas the coronal region had the lowest value.
Although the bond strength variations among the radicular regions were not statistically significant, the higher values in the middle and apical regions may be attributed to the enhanced adaptation of the post to the root canal walls in these areas, as well as to the reduced diameter of the post in the apical third, given that the morphology of the root canal closely resembled the shape, diameter, and taper of the posts. Moreover, according to Goracci et al., root canal morphology leads to an increase in cement thickness in the cervical region, which might have a detrimental impact on the retentive bond strength of cemented fiber posts. Elnaghy et al. performed further research and came to conflicting conclusions, with self-adhesive resin cements showing a decrease in bond strength values towards the apical third. According to the results of our study, the null hypothesis was partially rejected: compared with the silanized control group, the sandblasting group produced a lower value for the bonding capacity of the fiber post to resin cement, whereas only the Er:YAG and hydrogen peroxide groups showed higher values. Regarding the limitations of this study, although thermocycling was used to simulate aging, thermomechanical cycling procedures could mimic the clinical situation more precisely; in addition, our study did not examine the failure pattern of the bonding surface by SEM. The effect of various laser surface treatment parameters on the adhesion between fiber post and resin cement should be investigated further. Within the limitations of the current study, the following can be concluded: utilizing the Er:YAG laser achieves remarkable surface enhancement of glass fiber posts, successfully improving their ability to adhere to resin cement; sandblasting decreases fiber post retention to resin cement; and the hydrogen peroxide and control groups give similar bond strengths.
Small-Molecule Anti-HIV-1 Agents Based on HIV-1 Capsid Proteins | add94392-ce31-4280-ba28-034f94ea27bf | 7913237 | Pharmacology[mh] | As a retrovirus, human immunodeficiency virus type 1 (HIV-1) can infect CD4-positive T-cells or macrophages, eventually causing acquired immunodeficiency syndrome (AIDS). To date, many anti-HIV-1 drugs, such as inhibitors of reverse transcriptase, protease and integrase, have been developed for the therapeutic treatment of HIV-1-infected individuals and AIDS patients. The use of these drugs in combination antiretroviral therapy (cART) has brought remarkable success to the chemotherapy of HIV infection. There are, however, serious drawbacks that have not been overcome. These include the appearance of mutant viral strains with multi-drug resistance, the emergence of severe side effects, and the cost of the dosed drugs. In an effort to solve these problems and enhance the repertoire of anti-HIV-1 drugs, we have sought drugs with different mechanisms of action, such as coreceptor CXCR4 antagonists, CD4 mimics, fusion inhibitors, integrase inhibitors and inhibitors of viral uncoating and viral assembly. HIV-1 capsid (CA) proteins, which are generated from the Gag precursor protein Pr55Gag and are composed of N- and C-terminal domains (NTD/CTD), are highly conserved among many HIV strains. These proteins assemble through oligomerization into hexamers and pentamers to form a CA core with a conical structure, which encapsulates the HIV-1 RNA genome, the integrase and the reverse transcriptase. Matrix (MA) proteins, which also result from Pr55Gag, are located inside the viral membrane and contribute to the assembly of the virion shell. Both the MA and CA proteins are considered excellent targets for inhibition of viral replication, and some MA- and CA-derived peptides with anti-HIV activity have been reported to date by our group and others. Since viral uncoating, which is based on MA/CA degradation, and viral assembly, which is a consequence of MA/CA protein oligomerization, take place inside host cells, inhibitors must have cell membrane permeability to be able to suppress viral uncoating and assembly. Consequently, an octa-arginyl group was incorporated into the above peptide inhibitors to confer cell membrane permeability. However, small compounds found to have inhibitory activity against viral uncoating and assembly might have intrinsic cell membrane permeability. To date, several small compounds have been discovered but, except for GS-6207, none has progressed to clinical trials.
2.1. General Information All reactions utilizing air- or moisture-sensitive reagents were performed in dried glassware under an atmosphere of nitrogen, using commercially supplied solvents and reagents unless otherwise noted. CH2Cl2 (DCM) was distilled from CaH2 and stored over 4A molecular sieves. Thin-layer chromatography (TLC) was performed on Merck 60F254 precoated silica gel plates (Merck, Darmstadt, Germany) and was visualized by fluorescence quenching under UV light and by staining with phosphomolybdic acid, p-anisaldehyde, or ninhydrin. A solvent system consisting of 0.1% TFA in H2O (v/v, solvent A) and 0.1% TFA in MeCN (v/v, solvent B) was used for HPLC elution. For analytical HPLC, a Cosmosil 5C18-ARII column (4.6 × 250 mm, Nacalai Tesque, Inc., Kyoto, Japan) was employed with a linear gradient of B at a flow rate of 1 cm3 min−1 on a JASCO PU-2089 plus (JASCO Corporation, Ltd., Tokyo, Japan), and eluting products were detected by UV at 220 nm. Preparative HPLC was performed using a Cosmosil 5C18-AR II column (20 × 250 mm, Nacalai Tesque, Inc.) on a JASCO PU-2086 plus (JASCO Corporation, Ltd.) in a suitable gradient mode of B at a flow rate of 10 cm3 min−1. Optical rotations were measured on a JASCO P-2200 polarimeter (JASCO Corporation, Ltd.) operating at the sodium D line with a 100 mm path length cell at 25 °C, and were reported as follows: [α]D (concentration (g/100 mL), solvent). Infrared (IR) spectra were measured on a JASCO FT/IR 4100 (JASCO Corporation, Ltd.) and recorded as wavenumbers (cm−1). 1H- and 13C-NMR spectra were recorded using a Bruker AVANCE III 400 spectrometer or an AVANCE 500 spectrometer (Bruker, Billerica, MA, USA). Chemical shifts are reported in δ (ppm) relative to Me4Si (in CDCl3 or MeOH-d4) as an internal standard. High-resolution mass spectra were recorded on a Bruker Daltonics micrOTOF focus (ESI) mass spectrometer (Bruker) in the positive detection mode. For flash chromatography, silica gel 60 N (Kanto Chemical Co., Inc., Tokyo, Japan) was employed. The details of the synthesis and characterization data of the compounds are available in the Supplementary Materials. 2.2. In Silico Screening of Antiviral Candidates To perform the in silico screening, we first obtained the structure of the dimer of CA proteins (PDB ID: 3J34) from the Protein Data Bank ( https://www.rcsb.org/ ). The structure of the dimer of CA proteins was thermodynamically optimized by energy minimization using MOE and the Amber10:EHT force field. One monomer of the dimer of CA proteins was fixed as the receptor, whereas the residues of the other monomer were removed except for the Trp184 and Met185 residues, whose side chains play a key role in dimer formation. Using the complex composed of the CA monomer as the receptor and the Trp184-Met185 dipeptide as the ligand, we searched for compounds having a higher affinity than that of the dimer formation. To do this, replacement of the main-chain backbone of the dipeptide on the CA protein was performed with the Scaffold Replacement application in MOE, using the linker database of MOE and the Amber10:EHT force field, while the side chains of the dipeptide were fixed. From the result, we selected the compounds having the best scores for binding affinity for the receptor (London dG), ligand efficiency, and topological polar surface area (TPSA). 2.3.
Evaluation of Anti-HIV-1 Activity and Cytotoxicity For virus preparation, 293T/17 cells (Invitrogen), which are maintained in Dulbecco’s modified Eagle medium (DMEM) containing 10% FBS, in a T-75 flask were transfected with 10 μg of the pNL4-3 construct by the calcium phosphate method. The supernatant was collected 48 h after transfection, passed through a 0.45 μm filter, and stored at −80 °C as a stock virus. Inhibitory activities of test compounds against X4-HIV-1 (NL4-3 strain)-induced cytopathogenicity in MT-4 cells , which are maintained in RPMI-1640 containing 10% FBS, were assessed by the MTT assay. Various concentrations of test compound solutions were added to HIV-1-infected MT-4 cells at multiplicity of infection (MOI) of 0.001, and placed in wells of a 96-well microplate. The test compounds and the reference compounds such as AZT and AMD3100 were diluted by two-fold and five-fold, respectively. After 5 days’ incubation at 37 °C in a CO 2 incubator, the number of viable cells was determined by the MTT assay. Cytotoxicities of the test compounds were determined based on reduction of the viability of MT-4 cells by the MTT assay. The p24 antigen content in the culture supernatant was measured using HIV-1 p24 antigen enzyme-linked immunosorbent assay (ELISA) kit according to manufacturer’s instructions (Zeptometrix, Buffalo, NY, USA). A reverse transcriptase inhibitor, AZT, a CXCR4 antagonist, AMD3100 (Sigma Aldrich, St. Louis, MO, USA) were employed as positive control compounds with anti-HIV activity.
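The selection step described in Section 2.2 amounts to ranking scaffold-replacement hits by a handful of scores, with London dG below roughly −6 kcal/mol (i.e., binding tighter than the native CA–CA contact) plus ligand efficiency and TPSA. The snippet below is only a schematic of that kind of post-processing on an exported score table; the file name, column names and all thresholds other than −6 are assumptions, and it does not call MOE itself.

```python
# Schematic triage of an exported MOE score table (hypothetical file
# "scaffold_hits.csv", one row per candidate). This is not the MOE Scaffold
# Replacement run itself, only the kind of filtering described in Section 2.2.
import pandas as pd

hits = pd.read_csv("scaffold_hits.csv")   # columns assumed below

selected = hits[
    (hits["london_dG"] < -6.0)            # binds tighter than the CA-CA dimer contact
    & (hits["ligand_efficiency"] > 0.25)  # assumed cut-off
    & (hits["tpsa"] < 90.0)               # assumed cut-off (A^2), favouring permeability
].sort_values("london_dG")                # most negative (best) first

print(selected.head(10)[["compound_id", "london_dG", "ligand_efficiency", "tpsa"]])
```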
3.1. In Silico Screening to Find Drug Leads Targeting CA Proteins 3.1.1. Structural Analysis of CA Proteins Structural analysis of CA proteins revealed a hydrophobic interaction between two CA molecules in Helix 9 of the CTD, involving Trp184 of one molecule and Met185 of the other molecule. This interaction is important for stabilization of the multimeric structure forming the CA core . In addition, viral mutants with Trp184Ala and Met185Ala mutations have no infectivity because the dimeric interaction between two CA molecules of the mutants is weakened, thereby causing abnormal morphology of the viral particles . Furthermore, a CA-derived fragment peptide covering Helix 9, which includes Trp184 and Met185, was previously found to have significant anti-HIV activity , and the Trp184-Met185 dipeptide is known to be highly conserved among natural HIV/simian immunodeficiency virus (SIV) strains . In this study, based on the above information, the site of the hydrophobic interaction between two CA molecules via Trp184 of one molecule and Met185 of the other molecule was considered a valid drug target for inducing CA dysfunction. A novel small molecule that might bind to the above site was designed by in silico screening. This drug candidate and several of its derivatives were synthesized, and their anti-HIV activity and cytotoxicity were evaluated. 3.1.2. Results of the In Silico Screening Initially, a series of dipeptide mimics of Trp184 and Met185 were designed using the Molecular Operating Environment (MOE) (Chemical Computing Group Inc., Montreal, QC, Canada). Briefly, using the structure of the CA protein dimer (PDB ID: 3J34), the structure of one monomer molecule was fixed as the receptor side, and the main chain of Trp184 and Met185 of the other monomer molecule was removed. The side chains of these two residues were fixed in place, and backbone structures crosslinking the two side-chain functional groups were screened against the linker database provided by MOE for binding to the receptor side . Antiviral candidates among the dipeptide mimics were selected using the Scaffold Replacement application in MOE, based on the following scores: binding affinity for the receptor (London dG), ligand efficiency, topological polar surface area, molecular weight, the log of the octanol/water partition coefficient (SlogP) and an estimate of synthetic feasibility . London dG values represent the binding affinities of candidate compounds for target proteins, and smaller (more negative) values indicate higher binding affinity. The London dG value of the dimer of CA proteins is approximately −6, and compounds with London dG values of less than −6 have higher binding affinity for a CA molecule when compared to the interaction between two CA molecules. This screening served to identify some candidates with useful binding affinity, including MKN-1 ( 1 ), whose London dG value, ligand efficiency, topological polar surface area (TPSA) and SlogP value are −9.134 kcal/mol, 0.2559, 69.73 Å2 and 4.022, respectively. The structure of MKN-1 ( 1 ) is completely different from those of known small compounds that were previously developed . 3.2. Synthesis of Novel CA-Targeting Anti-HIV Agents 3.2.1. Synthesis of MKN-1 ( 1 ) A possible synthetic route to MKN-1 ( 1 ) was outlined. For its construction, the structure of 1 , which has two chiral centers, was divided into three segments . Initially, Segment I was prepared as a racemic mixture.
3.2. Synthesis of Novel CA-Targeting Anti-HIV Agents
3.2.1. Synthesis of MKN-1 ( 1 )
A possible synthetic route to MKN-1 ( 1 ) was outlined. For its construction, the structure of 1 , which contains two chiral centers, was divided into three segments . Initially, Segment I was prepared in racemic form. Treatment of 1,3-butanediol ( 2 ) with p -toluenesulfonyl (tosyl) chloride (TsCl) in the presence of a catalyst, 4-dimethylaminopyridine (DMAP), gave the tosylated alcohol ( 3 ) , and subsequent treatment with sodium methanethiolate led to a sulfide ( 4 ) that corresponds to Segment I, in 96% yield over two steps ( a). The synthesis of Segment II is shown in b. Treatment of the indole ( 5 ) with iodine in the presence of potassium hydroxide yielded the 3-iodinated indole ( 6 ), and subsequent N -Boc-protection gave a Boc-protected indole ( 7 ) in 60% yield over two steps. Treatment of 7 with n -butyllithium and isopropoxyboronic acid pinacol ester produced a pinacol ester ( 8 ) corresponding to Segment II, in 50% yield ( b) . Segment III was stereoselectively synthesized using a Strecker reaction . Treatment of o -bromobenzaldehyde ( 9 ) with ( S )-1-(4-methoxyphenyl)ethylamine ( 10 ) and sodium cyanide gave the ( S , S )-α-aminonitrile ( 11 ) in a highly diastereoselective reaction, in 56% yield. Acid hydrolysis of 11 led to an enantiopure ( S )-α-arylglycine ( 12 ), and the subsequent methyl esterification with thionyl chloride and N -Boc-protection of the α-amino group in ( 13 ) produced compound 14 , which corresponds to Segment III, in 69% yield over three steps ( c). Using these three segments, MKN-1 ( 1 ) and its diastereomer were synthesized as shown in d. Compounds 8 (Segment II) and 14 (Segment III) were condensed by a Suzuki-Miyaura cross coupling reaction using tetrakis(triphenylphosphine)palladium(0) to obtain compound 15 in 98% yield. Saponification of compound 15 with lithium hydroxide yielded an acid ( 16 ), and the subsequent condensation with compound 4 (Segment I) using 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl) in the presence of a catalyst (DMAP) gave an ester ( 17 ). Deprotection of the two N -Boc groups of 17 with HCl/dioxane gave MKN-1 ( 1 ) and its diastereoisomer, which were separated and purified by preparative HPLC to yield the first-eluting MKN-1A ( 1A ) and the second-eluting MKN-1B ( 1B ) in yields of 3% and 2%, respectively, over three steps ( d).
3.2.2. Stereoselective Synthesis of MKN-1 ( 1 )
Next, the stereoselective synthesis of MKN-1 ( 1 ) was performed. In the synthesis of Segment I, ( S )-1,3-butanediol ( 18 ) was used as the starting material, and a chiral alcohol ( 20 ) was obtained in 63% yield over two steps in a manner similar to that shown in a. Saponification of the ester ( 15 ), subsequent condensation with the alcohol ( 20 ), and N -Boc deprotection followed by HPLC purification gave the target compound MKN-1 ( 1 ) diastereoselectively in 12% yield over three steps, in a pathway similar to that shown in d . In HPLC analysis, MKN-1 ( 1 ) corresponds to MKN-1A ( 1A ) .
3.2.3. Synthesis of MKN-1 Derivatives
Synthesis of MKN-1 Derivatives with Aryl Ring Substitution ( 22 , 24 , 27 , and 28 )
MKN-1 ( 1 ) has two pharmacophoric functional groups: the indolyl and sulfidyl groups. Initially, MKN-1 derivatives in which the indolyl moiety has been replaced were designed. In general, naphthyl, benzofuranyl and benzothiophenyl groups are used as useful analogues of an indolyl group. Thus, MKN-1 derivatives with naphthyl, benzofuranyl and benzothiophenyl groups in the place of the indolyl group were synthesized.
In the synthesis of the MKN-1 derivatives with a 1-naphthyl or 2-naphthyl group, the Suzuki-Miyaura cross coupling of the phenylglycine derivative ( 14 ) with 1-naphthylboronic acid or 2-naphthylboronic acid led to compound 21 or 23 , both in 42% yield ( a,b). Subsequent saponification, condensation with compound 20 , deprotection of the N -Boc group and HPLC purification gave compound 22 in 12% yield or 24 in 10% yield, over a three-step route similar to and ( a,b). In the synthesis of the MKN-1 derivatives with a benzofuranyl or benzothiophenyl group, the Suzuki-Miyaura cross coupling of the phenylglycine derivative ( 14 ) with benzofuran-3-boronic acid or benzo[ b ]thiophene-3-boronic acid, followed by the treatment described above, gave compound 27 in 5% yield or 28 in 2% yield, in four steps ( c).
Synthesis of MKN-1 Derivatives with Sulfide Substitution ( 30 , 34 – 36 , and 38 )
Next, MKN-1 derivatives in which the sulfidyl moiety was replaced were designed. MKN-1 derivatives with methoxy, tert -butyl sulfidyl, iso -propyl sulfidyl, benzenesulfidyl and methanesulfonyl groups in the place of the methanesulfidyl group were synthesized. In the synthesis of the MKN-1 derivative with a methoxy group, the alcohol bearing a methoxy group ( 29 ) was prepared by methylation of compound 18 in 17% yield. Saponification of compound 15 , subsequent condensation with alcohol 29 , deprotection of the N -Boc group and HPLC purification then gave compound 30 in 6% yield over a three-step pathway similar to that shown in and . In the synthesis of the MKN-1 derivatives with other sulfidyl groups, the alcohols bearing tert -butyl sulfidyl, iso -propyl sulfidyl and benzenesulfidyl groups ( 31 – 33 ) were prepared in 77%, 70% and 88% yields, respectively, by treatment of compound 19 with sodium tert -butylthiolate, sodium 2-propanethiolate or sodium thiophenolate. Then, condensation of the hydrolysate of compound 15 with alcohols 31 – 33 , followed by deprotection of the N -Boc group and HPLC purification, gave compounds 34 – 36 in 12%, 19% and 9% yields, respectively, in three steps similar to those described above in . In the synthesis of the MKN-1 derivative with a methanesulfonyl group, the condensation of the hydrolysate of compound 15 with the alcohol ( 20 ) led to the ester ( 37 ) in 35% yield over two steps, and subsequent oxidation of the sulfidyl group of 37 with m -chloroperoxybenzoic acid ( m CPBA), deprotection of the N -Boc group and HPLC purification gave compound 38 in 8% yield over two steps .
3.3. Evaluation of Anti-HIV Activity and Cytotoxicity of the Synthesized Compounds
The anti-HIV activity of the synthesized compounds was assessed based on protection against HIV-1 (NL4-3 strain)-induced cytopathogenicity in MT-4 cells by the MTT assay . The cytotoxicity of these compounds was determined based on the reduction of MT-4 cell viability, also measured by the MTT assay. These results are shown in . MKN-1A ( 1A ), which is identical to MKN-1 ( 1 ), showed significant anti-HIV activity, and its diastereoisomer MKN-1B ( 1B ) showed moderate anti-HIV activity, suggesting that chiral recognition by a target molecule, possibly a CA protein, might be important. These compounds were also evaluated by a different method, an enzyme-linked immunosorbent assay (ELISA), based on their inhibitory effect on viral p24 antigen expression in NL4-3 strain-infected MT-4 cells. The results agreed with the MTT assay in that MKN-1A ( 1A ) showed higher anti-HIV activity than MKN-1B ( 1B ).
MKN-1 ( 1 ), which was stereoselectively synthesized and corresponds to MKN-1A ( 1A ), showed high anti-HIV activity. The cytotoxicities of MKN-1 ( 1 ) (MKN-1A ( 1A )) and MKN-1B ( 1B ) were at essentially the same, moderate-to-weak level. MKN-1 derivatives with 1-naphthyl, 2-naphthyl, benzofuranyl and benzothiophenyl groups ( 22 , 24 , 27 , and 28 ) showed weak anti-HIV activity, indicating that the indolyl group is critical and cannot be modified. These MKN-1 derivatives ( 22 , 24 , 27 , and 28 ) showed relatively weak cytotoxicity, similar to that of MKN-1 ( 1 ), MKN-1A ( 1A ) and MKN-1B ( 1B ). The MKN-1 derivative with a methoxy group in place of the methanesulfidyl group ( 30 ) showed weak anti-HIV activity, suggesting that a sulfur atom is important for significant anti-HIV activity, although an oxygen atom in this position retains minor activity. Compound 30 , however, did not exhibit any cytotoxicity below 50 μM, indicating that an oxygen atom is more suitable than a sulfur atom in terms of low cytotoxicity. MKN-1 derivatives with tert -butyl sulfidyl, iso -propyl sulfidyl and benzenesulfidyl groups ( 34 – 36 ) showed higher anti-HIV activity than the 1-naphthyl, 2-naphthyl, benzofuranyl and benzothiophenyl group-substituted derivatives ( 22 , 24 , 27 , and 28 ) but lower activity than the parent compound MKN-1 ( 1 ) (MKN-1A ( 1A )). These derivatives ( 34 – 36 ) exhibited relatively high cytotoxicity compared with the other derivatives. This suggests that sulfidyl groups are critical for significant anti-HIV activity, but that bulky sulfidyl groups are unfavorable in terms of cytotoxicity. The sulfone-substituted derivative ( 38 ) showed weak anti-HIV activity and no cytotoxicity below 50 μM, indicating that a sulfidyl, but not a sulfonyl, group is required for significant anti-HIV activity.
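A common way to summarize the balance between anti-HIV activity and cytotoxicity discussed above is the selectivity index (SI = CC50/EC50). The short sketch below shows the calculation and ranking; the EC50/CC50 values used are purely hypothetical placeholders, not the measured values reported in the results table.

```python
# Hypothetical EC50/CC50 values (µM) used only to illustrate the calculation.
results = {
    "MKN-1A (1A)": {"ec50": 1.0, "cc50": 40.0},
    "MKN-1B (1B)": {"ec50": 5.0, "cc50": 45.0},
    "derivative 34": {"ec50": 3.0, "cc50": 15.0},
}

def selectivity_index(ec50, cc50):
    """SI = CC50 / EC50; larger values mean a wider margin between the
    antiviral and the cytotoxic concentration ranges."""
    return cc50 / ec50

for name, v in sorted(results.items(),
                      key=lambda kv: selectivity_index(kv[1]["ec50"], kv[1]["cc50"]),
                      reverse=True):
    print(f"{name}: SI = {selectivity_index(v['ec50'], v['cc50']):.1f}")
```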
Taken together, the two pharmacophoric functional groups (the indolyl and sulfidyl groups) of MKN-1 ( 1 ), which was originally designed as a dipeptide mimic of Trp184 and Met185, are both important for high anti-HIV activity and should not be modified. The indolyl moiety cannot be changed into a 1-naphthyl, 2-naphthyl, benzofuranyl or benzothiophenyl group. The methanesulfidyl moiety can be converted into other sulfidyl groups, such as tert -butyl sulfidyl, iso -propyl sulfidyl and benzenesulfidyl, with a slight decrease of anti-HIV activity, and into a methoxy group with a significant decrease of anti-HIV activity, but the change into a methanesulfonyl group leads to a total loss of activity. The Trp184 and Met185 residues are extremely conserved among the proteins of the various HIV-1 subtypes circulating in nature, with Shannon entropy scores of 0.0012 and 0.0014 for W184 and M185, respectively, which are even lower than those of the highly conserved active sites of HIV-1 integrase . These data indicate exceedingly strong selective constraints against changes in the Trp184-Met185 dipeptide and suggest a potential therapeutic benefit in targeting it to reduce the risk of emergence of drug-resistance variants. As described in , viral mutants with Trp184Ala and Met185Ala mutations have no infectivity, showing abnormal morphology of the viral particles . Therefore, the dipeptide mimic might have advantages when used together with lead compounds targeting capsid proteins, such as GS-6207, that have different sites of action . Such combined use of compounds could increase antiviral effects through synergistic disturbance of capsid assembly/disassembly during HIV-1 replication in cells and would reduce the risk of emergence of drug-resistance variants more significantly than single-compound use would. Furthermore, to obtain more potent lead compounds based on the present results, pharmacophore models of the target site, i.e., the CA-CA interface, can be constructed via molecular dynamics studies. This approach may be beneficial for performing alternative screening, de novo design and optimization of lead compounds in order to obtain compounds with high antiviral activity. In addition, de novo design of candidates can be performed by using structural information from a few amino acid residues flanking the W184-M185 residues of the CA protein dimer. This approach may improve the binding affinity.
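For readers unfamiliar with the conservation measure quoted above, the sketch below shows how a Shannon entropy score is computed for a single alignment position; the residue counts are hypothetical, and the logarithm base (natural log here) depends on the convention used.

```python
import math
from collections import Counter

def shannon_entropy(column):
    """H = -sum(p_i * ln(p_i)) over the residue frequencies observed at one
    alignment position; values near 0 indicate near-total conservation."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Hypothetical column for position 184: 9,999 Trp residues and one substitution
column_184 = ["W"] * 9999 + ["R"]
print(f"H(184) = {shannon_entropy(column_184):.4f}")  # ~0.001, i.e., highly conserved
```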
In conclusion, this study presents a new class of small molecules, designed by in silico screening as dipeptide mimics of Trp184 and Met185 at the hydrophobic interaction site between two CA molecules, a site that has been reported to be important for stabilization of the multimeric structure of CA. The designed compound MKN-1 ( 1 ) has significant anti-HIV-1 activity, and its diastereoisomer MKN-1B ( 1B ) has lower activity. Structure-activity relationship (SAR) studies of MKN-1 derivatives reveal the importance of two pharmacophoric groups: the indolyl and sulfidyl groups. Although this study did not confirm whether MKN-1 ( 1 ) actually binds to the CA protein, chiral recognition by its target molecule was supported by the difference in potency between MKN-1 ( 1 ) and its diastereoisomer. The present results should be useful in the future design of a novel class of anti-HIV agents.
Addressing Faculty Emotional Responses during the Coronavirus 2019 Pandemic | 1b7fa886-f19b-4795-9013-59404389b779 | 7204729 | Pediatrics[mh] | Unlike a natural disaster, COVID-19 has imposed a continued infectious risk to physicians while they are called upon to provide care to others. In addition, they may also be worried about passing infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) to colleagues, family, and other patients. When a physician's focus is directed toward the self, it becomes challenging to redirect attention to the patient. As a result, medical errors may occur. Risk and uncertainty can also create a chronic state of hyper-alertness, which can lead to poor decision making and even unprofessional behavior. , In addition to uncertainty, many physicians may experience feelings of guilt (eg, “I'm not doing enough”) related to an exaggerated sense of responsibility. These personality traits can lead to conflicts between their own apprehensions and core values of service, which are amplified in times of crisis. Likewise, faculty may respond to the stress by trying to “power through” without acknowledging the emotional impact of the crisis. Physicians often struggle with asking for help for themselves under typical circumstances; however, the intensity of the COVID-19 crisis heightens this issue. Faculty may be accustomed to working in teams and experiencing a sense of community. Redeployment may mean working with new teams with no time to establish connection. Physicians who have been furloughed or isolated at home may feel even more separated. Social distancing may lead to further disengagement and depersonalization, which are risk factors for burnout. Finally, redeployment represents a loss of control and autonomy, a major driver of burnout. In a short time period, physicians have lost much of their ability to control how to spend their time, where to direct their attention, and how to best use scarce resources.
Based on the concerns described above, we initiated a new program of optional 1-hour group support video calls to help our faculty address their challenges, listen to how they are coping, and describe lessons learned. These calls are voluntary, informal, and facilitated by the Vice Chair for Faculty Development, who is a board-certified executive coach. The calls are advertised as part of daily faculty e-mail updates. Participants can call in from home or work. During the calls, participants can describe their experiences about how the pandemic has affected their professional and personal lives. Facilitation is focused on the validation of individual feelings and concerns. Participant dialogue is encouraged and supported. Appropriate resources (eg, COVID-19 practice guidelines, mental health resources) are shared. Participants are asked to reflect on their individual strengths, and how they have used these strengths to help them manage. The call concludes with a brainstorming session around lessons learned, as well as questions to develop additional strategies to maintain physician well-being.
Over the last 2 weeks, we have conducted 6 one-hour virtual support calls. Of the 226 Department faculty, 48 (21%) faculty participated in at least one of the calls. Mean attendance has been 8 faculty members. The faculty participants represented all academic ranks and the majority were female. This reflected the general composition of the Department. Most participants called in from their homes. Five common themes have emerged from these calls: fear of personal/family health and safety, concerns about deployment to COVID-19 sites, including competency to care for patients with COVID-19, personal protective equipment availability, dilemmas with end-of-life discussions, and expressions of isolation and loneliness. We have used these calls also to highlight important positive experiences, which may enhance physician well-being. We found that many faculty members have been excited to learn new clinical issues and to engage in new collaborations. Pediatricians are working with and learning from adult hospitalists and subspecialists. Many faculty members are learning how to implement new technology for communication or patient care. During the pandemic, we noticed that several strategies have been helpful in promoting well-being. The support calls, themselves, have been valuable in signaling the Department's attention and commitment to faculty well-being. Furthermore, during periods of disruption and faculty dispersal, it has been helpful to create or re-establish community. One physician on a call commented, “it's just good to be able to talk to others who are having similar experiences.” Shared experiences build community and may be protective against secondary trauma and post-traumatic stress disorder. When engaging with faculty, we have also noticed that several strategies can help individual faculty members. Some physicians respond to chaos with excitement and eagerness, but they may still harbor worries. Other physicians may be terrified about their own health but are driven to help others. Acknowledging anxiety and fear of uncertainty will help bridge trust and connectedness, which will strengthen well-being. It is important to encourage physicians to accept help from others. In addition to ongoing group support calls, access to individual mental health services is vital and accessible. Although physicians will say that they are “okay” right now, they may not realize what they need to feel better. Consider asking the following: “What would feeling good look like?” Being able to admit vulnerabilities will enable physicians to move forward. Faculty need to understand that they do not need to be “perfect.” We found that it is helpful to openly acknowledge that a pandemic situation may create nonideal circumstances for providing medical care. In accepting that the perfect is the enemy of the good, we have observed that physicians experience tremendous relief. When faculty dwell on obstacles and problems, it can be helpful to have faculty refocus and acknowledge what is under their control. Instead of asking, “What's not working?” ask, “What can be fixed?” or “What can be done to promote self-care?” Empowering physicians to control what they can control will enhance their well-being. It is important to remind physicians that they are valued. As physicians combat moral injury, remind them that they did not create this crisis. Ask them what matters most to them and what inspires them to care for others. 
When physicians feel like they have a purpose and their work has meaning, they are less likely to experience burnout and more likely to thrive. Although academic departments are focused on patient care, it is equally important to monitor the effect of the pandemic on faculty well-being.
A Proof of Concept of a Mobile Health Application to Support Professionals in a Portuguese Nursing Home | 9fc08df6-9989-4110-bcf8-7c196b11d295 | 6767027 | Health Communication[mh] | Over the past few years, the world has been witnessing a huge demographic change: the population is aging at an alarming rate. In fact, the statistics regarding the aging population are concerning since, compared to the growth of the whole population, it is estimated that the elderly population is growing twice as quickly . Consequently, this problem has been a matter of concern for many countries since it is posing several challenges to healthcare systems worldwide . Thus, as a consequence of the rapidly aging population, the costs of elderly care and the number of elders in nursing homes have been increasing . The harsh reality is that many countries are experiencing a growth in the proportion of elders and, consequently, an increase in the services required for them that, at the moment, they are not able to meet. Thus, due to the high demand for more and better medical services for the elderly, there is a need to evaluate the state of these services and assess the need for improvements. Portugal is not an exception to this concern. In fact, Portugal is, at the moment, one of the countries with the largest aging population in the world and, similar to other countries, this situation has been negatively affecting several aspects of elderly care. In this sense, one of the major challenges resulting from this situation is the increasing number of elders in nursing homes. Over the past few years, nursing home vacancies have been filling up at a quick rate, making the search for a place in one of these facilities a massive challenge for many elders and families in Portugal . Additionally, health professionals working in nursing homes are, more often than not, overloaded with work since they are often few in number compared to the high number of elderly people . In addition to the aging population, one of the main factors causing this situation is the lack of investment and resources in these facilities. In this context, nursing homes generally use unsophisticated and rudimentary methods, namely paper, to record information and to clinically manage residents . Naturally, the paper-based management of data is more error-prone and time-consuming since the risk of misplacing or losing information is much higher. Moreover, health professionals constantly need to return to the nursing stations to retrieve and record information, leading to a higher risk of forgetting information or writing it in the wrong place. Thus, there is a clear need to address the lack of resources and access to technology in nursing homes in order to solve some of the problems these facilities face and, ultimately, improve the nursing care delivered. In fact, nursing homes could greatly benefit from the introduction of technological advancements, such as health information and communication technologies (HICT). The use of HICT, which refers to any form of electronic solution that allows manipulating, managing, exchanging, retrieving, and storing digital information in healthcare settings, has dramatically and positively changed the medical practice and is certainly here to stay .
Technologies encompassed in HICT have rapidly become a natural and indispensable part of healthcare settings due to their many advantages, namely to enhance the management, access, and sharing of information; to improve the quality, safety, and efficiency of healthcare delivery and its outcomes; to reduce the occurrence of errors and adverse events; to support the decision-making process; to decrease time-waste; and to improve productivity in healthcare systems . In fact, the use of HICT in medical contexts enables turning traditional healthcare towards smart healthcare, which consists of using technology to improve healthcare delivery and the quality of services. Nevertheless, despite the well-known benefits of HICTs, nursing homes have been lagging behind in adopting them due to the lack of investment and effort by these facilities to adapt to technological improvements . Therefore, considering all of the above, this manuscript aims to describe and evaluate a proof of concept of an mHealth application developed for health professionals, more specifically the doctors and nurses working in a Portuguese nursing home. The solution was developed to introduce technological improvements in the facility and to support the health professionals in their daily tasks and at the point-of-care, namely to manipulate and have access to information as well as to schedule, perform, and record their job-related tasks. Moreover, clinical and performance business intelligence (BI) indicators were also defined to help health professionals to make more informed and evidence-based decisions. It is important to mention that a mobile solution was chosen since a single hand-held device, which can be used anywhere and at any time, allows information to be accessed and manipulated at the point-of-care. In this sense, the novelty of this project resides in the need to solve some of the challenges faced by a nursing home suffering from the consequences of the aging population and the absence of HICT. Additionally, the lack of literature and an integrated body of knowledge on the use of HICT in nursing homes shows that there is still much work that needs to be done in this area. Regarding the structure of this document, corresponds to the state of the art in which the body of knowledge related to this project is described. Then, in , the research methodologies that were selected to successfully conduct this project are presented. Afterwards, in , the development tools that were chosen to develop the mobile application, namely the database, web services, and interfaces, and to create examples of the BI indicators are identified as well as their advantages. gives a brief description of the Portuguese nursing home, i.e., of the case study, for which the solution was developed in order to have a better understanding of the main challenges faced by the institution. Then, the results achieved regarding the database, web services, interfaces, and BI indicators developed are presented in . A brief discussion of the results obtained is presented in . Finally, in , the main conclusions and contributions achieved are identified and future work is presented.
In this section, the general background related to the research area of this project is presented in order to offer a deeper understanding about the novelty and relevance of this project, namely about how mHealth and BI can positively impact and be beneficial for healthcare facilities, more specifically, for the nursing home used as a case study in this study. Furthermore, the ethical issues associated with the use of HICT in healthcare contexts are described since they were taken into account during all stages of the development of this project. Finally, works related to the project carried out in this study are also addressed. 2.1. The Impact of Mobile Health in the Healthcare Industry In recent years, the rapid expansion of mobile technology, i.e., of technology that can be used “on-the-move”, has been affecting several industries, and the healthcare industry is not an exception . In fact, the ubiquitous presence of mobile devices, such as smartphones and tablets, and the rise in their adoption have led to the growth in the number of mobile applications. In this sense, there is currently a wide range of mobile applications that offer a variety of features and, more recently, mobile health applications have been expanding due to their potential to improve healthcare delivery . In this context, the use of mHealth, i.e., mobile devices and applications to support the medical practice, has been transforming several aspects of the healthcare industry and proving to be quite promising and beneficial for health professionals, namely to help them execute their daily tasks, to manage and monitor patients, to access and manage clinical data, and to enhance the decision-making process, among others . However, mHealth has not only been advantageous for healthcare providers but also for the consumers, allowing them to strengthen their communication with healthcare organizations . Therefore, the main benefits of mHealth are as follows : - Convenient and faster accessibility to information since all data are gathered in a single source, which can be used “on-the-move”; - Reduction of time-waste since health professionals can manipulate information at the point-of-care, not having to interrupt their workflow and go to another location to do so; - Faster and better decision-making process, since health professionals can have access to up-to-date information at the point-of-care, leading to more informed and based decisions; - Faster and improved communication since mHealth helps connect all the professionals distributed across the healthcare organization; - Help healthcare organizations to strengthen their communication with healthcare consumers by providing information to them at any given moment through appointment reminders, test result notifications, diagnostics, and disease control, among others; - Decrease errors and adverse events; and - Improve quality of healthcare delivery and services. It is important to mention that the use of mobile applications in healthcare settings is not intended to replace desktop applications, which can be more powerful and less restrictive than mobile applications, but to complement them and, especially, to enhance outcomes at the point-of-care . In fact, in situations where rapid information exchange is needed, where information should be entered at the point-of-care, and where health professionals are constantly on the move and have, therefore, less time to spend on computers, mobile technology is highly beneficial compared to desktop applications . 
For instance, health professionals working in nursing homes could greatly benefit from mobile technology since they are constantly in motion and have little time to spend on computers, which are often located in nursing stations far away from the residents. Therefore, the undeniable benefits of mHealth show that a higher investment should be made in its adoption, as it can improve the quality of healthcare delivery. However, mHealth applications should only be developed after truly understanding the needs of the intended users in order to develop high-quality and accurate applications and avoid their underutilization . 2.2. Business Intelligence Transforms Clinical Information into Valuable Information Business intelligence corresponds to a set of methodologies, applications, processes, technologies, and analytical tools that enables organizations to gather, store, manipulate, process, and analyze data in order to gain new and relevant information used to make informed and evidence-based decisions . In the healthcare industry, BI tools are essential to analyze the clinical data constantly generated in order to obtain new knowledge used as evidence to support the decision-making process . Thereby, BI has emerged as a solution to make use of the complex and huge amounts of information gathered daily in organizations, offering analytical tools able to turn these data into meaningful, useful, and valuable information and, thus, to support faster, informed, and evidence-based decisions . Furthermore, through the knowledge obtained, organizations are able to gain a deeper understanding of and insight into their performance and to highlight problem areas and opportunities, enabling them to plan and perform improvements if necessary . Regarding the healthcare industry, applying BI technology to electronic health records (EHRs) helps improve healthcare delivery and its outcomes; reduce the occurrence of errors, adverse events, and costs; and give economic value to the large amounts of clinical data generated daily, which would otherwise be a burden to healthcare organizations . The general architecture of the business intelligence process is illustrated in . As shown in , the components encompassed in the BI process include the following : - Extract, transform, and load (ETL) process: enables extracting data from multiple sources, cleaning and normalizing these heterogeneous data to make them consistent and unambiguous, and loading the transformed data into a data warehouse (DW); - Data warehousing process: enables building adequate DWs able to structure data and facilitate their analysis; and - Visualization, analysis, and interpretation of the data loaded into the DW: enables obtaining new knowledge previously unknown to an organization. Thus, for this purpose, various analytical tools and applications can be used, namely data mining tools and applications able to create charts, reports, spreadsheets, and dashboards, among others. Despite the opportunities and positive effects BI brings to organizations, this technology has not yet attained its full potential and maturity in the healthcare industry . However, the benefits of BI tools in healthcare settings are indisputable and have, thus, continuously been explored through the years.
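As a minimal, illustrative sketch of the ETL-to-indicator flow described above (and not the actual implementation or schema of the application developed in this work), the following Python/pandas snippet extracts a hypothetical export of medication-administration records, normalizes it, and derives a simple performance indicator; the file and column names are assumptions.

```python
import pandas as pd

# Extract: hypothetical operational export (file and column names are assumed)
records = pd.read_csv("medication_administrations.csv")

# Transform: normalize professional names, parse timestamps, drop incomplete rows
records["professional"] = records["professional"].str.strip().str.title()
records["administered_at"] = pd.to_datetime(records["administered_at"], errors="coerce")
records = records.dropna(subset=["professional", "administered_at"])

# Load/analyze: a simple BI indicator - administrations per professional per day
indicator = (records
             .assign(day=records["administered_at"].dt.date)
             .groupby(["professional", "day"])
             .size()
             .rename("administrations")
             .reset_index())
print(indicator.head())
```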
2.3. Ethical Issues in Medicine Without any doubt, the use of HICT, mHealth, and BI in the healthcare industry has been greatly beneficial and advantageous for healthcare organizations, since these technologies have the potential to enhance the quality of the care delivered. However, despite the many benefits and opportunities offered by these technologies, they are not without flaws. In fact, challenges, more specifically ethical issues, may arise from the implementation and use of solutions based on them. Nowadays, healthcare organizations produce vast amounts of EHRs and other types of data related to both the patients and the organization on a daily basis. However, since these data are stored in health information systems, patients are fearful that their confidentiality and privacy may be compromised, since, compared to the traditional paper-based management of data, technological advancements have made accessing data and violating privacy easier . Additionally, the EHRs of patients can be consulted by various health professionals across the organization, which can be problematic for patients who do not want their sensitive information shared and viewed by other professionals . In this sense, privacy issues and patient confidentiality should always be taken into account and safeguarded while developing technological solutions. In fact, if the privacy and confidentiality of the users are not protected and ensured, some of them may not want to use HICT solutions . Furthermore, legal issues may arise if sensitive information of the users is disclosed without their consent and if their privacy is lost. Therefore, it is important to define data access policies in order to give information access only to authorized users . Nonetheless, implementing security protections remains a difficult task, but it should always be taken into account and viewed as a priority when developing HICT solutions . On the other hand, regarding the introduction of mHealth solutions in healthcare settings, some health professionals remain hesitant about their use despite the many advantages and benefits provided by them. The main cause of this situation is the fact that many mHealth applications are currently being used without a complete understanding of their effectiveness, accuracy, quality, and associated risks, which can, in extreme cases, impair healthcare delivery . In this sense, best-practice standards should be followed to ensure the quality, accuracy, and safety of mHealth solutions during their design, development, and implementation . Additionally, these applications should go through a rigorous set of validation and evaluation methods to guarantee their quality, accuracy, and safety in healthcare settings .
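To make the idea of a data-access policy concrete, the sketch below shows a minimal role-based access check; the roles, permissions, and record fields are illustrative assumptions and do not describe the actual policy of the application discussed in this work.

```python
# Illustrative role-based access policy (roles and permissions are assumptions).
POLICY = {
    "doctor": {"read_clinical", "write_clinical", "read_demographics"},
    "nurse": {"read_clinical", "write_vitals", "read_demographics"},
    "administrative": {"read_demographics"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only when the role explicitly includes the permission."""
    return permission in POLICY.get(role, set())

def read_clinical_record(role: str, record: dict) -> dict:
    if not is_authorized(role, "read_clinical"):
        raise PermissionError("This role is not allowed to read clinical data.")
    return record

print(is_authorized("nurse", "read_clinical"))           # True
print(is_authorized("administrative", "read_clinical"))  # False
```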
In this context, it is possible to highlight several examples, such as the mHealth monitoring system named iCare , which uses smartphones and wireless sensors to monitor elderly people in the comfort of their homes. This system is of particular interest since it enables remotely monitoring the elderly anywhere and at any time, providing different services according to the health conditions of each individual. Moreover, this system also acts as an assistant, offering reminders, alarms, and medical guidance to the elderly. On the other hand, home-based telerehabilitation for people with multiple sclerosis was also addressed by Thirumalai et al. through the development of a therapeutic exercise application named TEAMS, which provides different exercises and programs according to the multiple sclerosis level of the individual. In the work of Parmanto et al. , an mHealth system called iMHere, which enables individuals with chronic conditions to perform preventive self-care tasks at home and to remotely communicate with clinicians without having to go to health facilities, is proposed. Finally, Bastos et al. developed the SmartWalk project, which promotes healthy aging by enabling elderly people to have a more active lifestyle while being remotely monitored by health professionals. This project involved the development of a mobile application connected to sensors that collect data while the elderly user walks on a predefined route provided by the application. The health professionals are then able to analyze these data to suggest modifications to the route and, thus, improve the health of the elderly user. However, despite the predominance of patient-centered mHealth solutions in the market, applications are also available for the management of health facilities and healthcare information and to assist health professionals. In this context, Doukas, Pliakas, and Maglogiannis proposed a mobile healthcare information management system named @HealthCloud that enables medical experts as well as patients to manage healthcare information. Thus, by using this system, users are able to retrieve, upload, and modify medical content, such as health records and medical images. Moreover, the authors affirmed that the system enables managing healthcare data in a pervasive and ubiquitous way, leading to the reduction of medical errors since medical experts can effectively communicate with each other and have access to patient-related information during decision-making. Similarly, Landman et al. developed a mobile application called CliniCam that enables clinicians to securely capture clinical images, annotate them, and finally store them in the EHR. Thus, this application makes the images available to all credentialed clinicians across the hospital in a secure way. To this end, various security features were adopted, such as user authentication, data encryption, and secure wireless transmission. Despite the existence of a large number of patient-centered mHealth applications, the implementation of mobile technology for the management of health facilities, namely of nursing homes, and to assist health professionals and medical experts in their daily tasks remains to be properly addressed, and further research is therefore needed. In this context, this project was undertaken in response to the lack of mobile solutions in nursing homes that focus primarily on assisting health professionals in their job-related tasks and on the management of the facility.
Thus, given the lack of applications similar to the one described in this manuscript, the health professionals working in the nursing home used as a case study were consulted continuously in order to develop a solution that meets their needs. Furthermore, information gathered from the literature, namely from Landman et al. , was also essential to inform the security features of the solution.
This project was guided by a set of well-defined steps with the intention of ensuring its success and providing an organized path to follow. In this context, the design science research (DSR) methodology was used since it is suitable for HICT research projects. Additionally, this methodology was used since the developed solution meets the needs of the health professionals working in the nursing home and is able to solve the problems they face. In fact, by introducing the solution into the nursing home, it is possible to replace the paper-based management of information, support the decision-making process, reduce time-waste and the occurrence of errors and adverse events, and, consequently, lessen the work overload experienced by health professionals as well as improve the nursing care delivered. The main purpose of the DSR methodology is to create and evaluate objects known as artifacts, or more specifically, solutions, developed in order to solve and address organizational problems . In other words, the DSR methodology corresponds to a rigorous science research method that encompasses a set of techniques, principles, and procedures followed to design and develop solutions that are successful, useful, and effective in addressing the problems faced by an organization . In this sense, the DSR methodology can be divided into six distinct steps, as illustrated in . Therefore, since the DSR methodology was used for the development of this project, the problems and challenges faced by the health professionals working in the nursing home used as a case study had to be identified in order to motivate the development of the solution. Thus, focus groups, semi-structured interviews, and questionnaires were conducted with the professionals working for the nursing home as well as for the hospital that manages the facility in order to gather valuable information capable of identifying and understanding the main challenges encountered by the health professionals. It is important to mention that the focus groups, semi-structured interviews, and questionnaires were performed with a group of ten participants, including nurses working in the nursing home as well as information and communication technology (ICT) professionals and other professionals working for both the nursing home and the hospital that manages the facility. The participants were selected based on their availability and because they were the most suitable to provide information concerning the challenges faced by the nursing home and the use of HICT in the facility. Furthermore, an observation of the case study was also performed to gain a better understanding of the conditions of the nursing home. Consequently, the objectives of the solution were defined according to the problems identified and, afterwards, the features and architecture of the solution were designed and developed. Once the solution was developed, it had to be demonstrated and evaluated through the execution of a proof of concept, which included a strengths, weaknesses, opportunities, and threats (SWOT) analysis and the technology acceptance model 3 (TAM3), in order to assess its usefulness, feasibility, and potential and whether improvements and changes were needed. Additionally, this study also involved the communication of the problem and the solution to an audience, namely through the presentation of the solution to the health professionals and the writing of scientific papers.
A proof of concept was performed in order to carry out a thorough evaluation of the solution and to demonstrate its usefulness and potential. A proof of concept makes it possible to demonstrate in practice the concepts, methodologies, and technologies encompassed in the development of a solution. Additionally, it allows the developed solution to be validated with the target audience and ensures that it provides all of the requirements initially proposed. On the other hand, besides being able to assess the usefulness, potential, and benefits of a solution, a proof of concept is also capable of identifying potential issues and threats associated with the solution. Thus, the demonstration of the potential and feasibility of the mobile application involved the execution of a SWOT analysis to identify its strengths, weaknesses, opportunities, and threats. To this end, the TAM3 model was followed to elaborate and design a questionnaire, which was administered to the health professionals working in the nursing home, i.e., the users of the solution, to assess their acceptance of it; the results obtained were then used as a basis for the SWOT analysis. Briefly, the TAM3 corresponds to a tool capable of predicting the acceptance of an information technology (IT) solution by users in an organization as well as the likelihood of this technology being adopted by them. To this end, the model considers that the acceptance and use of technology are affected by the internal beliefs, attitudes, and intentions of users and that their satisfaction with IT results from the combination of feelings and attitudes regarding a set of factors linked to the adoption of the technology . Therefore, the attitudes and acceptance of users towards an IT solution influence and affect its successful implementation and use in an organization . Thus, analyzing the acceptance of users towards a new IT solution is essential since the more accepting they are, the more willing they are to make changes and spend their time and effort to use the solution . Organizations can then identify the factors that affect users' acceptance of a new IT solution and act on these factors to promote its successful use.
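As an illustration of how such questionnaire results can be summarized before feeding them into a SWOT analysis, the following TypeScript sketch averages Likert-scale answers per TAM-style construct; the construct names and the item grouping are generic assumptions and do not reproduce the actual questionnaire used in this study.

// Hypothetical scoring of TAM3-style questionnaire answers: each item is rated
// on a 7-point Likert scale and items are grouped by construct. The grouping
// below is illustrative only.

type Construct = "perceivedUsefulness" | "perceivedEaseOfUse" | "behavioralIntention";

interface Answer {
  construct: Construct;
  score: number; // 1 (strongly disagree) .. 7 (strongly agree)
}

function averageByConstruct(answers: Answer[]): Partial<Record<Construct, number>> {
  const sums = new Map<Construct, { total: number; count: number }>();
  for (const a of answers) {
    const entry = sums.get(a.construct) ?? { total: 0, count: 0 };
    entry.total += a.score;
    entry.count += 1;
    sums.set(a.construct, entry);
  }
  const averages: Partial<Record<Construct, number>> = {};
  sums.forEach(({ total, count }, construct) => {
    averages[construct] = total / count;
  });
  return averages;
}

// One respondent's fictitious answers.
const answers: Answer[] = [
  { construct: "perceivedUsefulness", score: 6 },
  { construct: "perceivedUsefulness", score: 7 },
  { construct: "perceivedEaseOfUse", score: 5 },
  { construct: "behavioralIntention", score: 6 },
];

console.log(averageByConstruct(answers)); // e.g. { perceivedUsefulness: 6.5, ... }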
In this section, the development tools and technologies used to build the solution are described, as well as the reasons behind their selection and their main advantages.
4.1. MySQL Relational Database Management System
Naturally, the development of any mobile application should include the definition and creation of a database, if one does not already exist, to store and manipulate data. In this sense, the database designed and developed for this project was created with MySQL. MySQL is a relational database management system (RDBMS), meaning that it uses the relational model, in which several tables are logically related to each other through the relations existing between them, as its database model . Additionally, since it is a database management system (DBMS), MySQL enables defining, modifying, and creating a database as well as inserting, updating, deleting, and retrieving data from the database . In addition, a DBMS offers controlled access to the database, namely a security system that blocks unauthorized users when they try to access the database, an integrity system that maintains the consistency of data, a concurrency control system that allows shared access to data, a recovery control system that restores the database to its previous state in the case of a failure, and a catalog accessed by users to consult the descriptions of the data stored in the database . For the development of this project, MySQL was chosen to define and create the database since it is an RDBMS as well as an open-source, fast, secure, reliable, and easy-to-use DBMS . Additionally, the server on which the database had to be deployed and implemented, which belongs to the hospital that manages the nursing home, was already configured for this type of database, thus making MySQL the most appropriate choice.
4.2. PHP RESTful Web Services
The communication and interaction between the mobile application and the MySQL database was made possible through the creation of RESTful web services, which were written in PHP. RESTful web services are based on the representational state transfer (REST) architecture, which is a client–server architecture that depends on the hypertext transfer protocol (HTTP) to convey messages . Thus, the REST architecture offers a set of principles on how data should be transferred over a network. RESTful web services are identified by uniform resource identifiers, which enable the interaction and exchange of messages with the web services over a network . Moreover, by taking advantage of the specific features of HTTP, RESTful web services are able to GET, PUT, DELETE, and POST data. Thus, the web services were created to enable the mobile application to send requests to the database (via queries) and to send the responses back to the application in the JavaScript object notation (JSON) format. The web services created make it possible to select data from the database as well as to update and insert data. Consequently, to allow the communication between the mobile application and the web services, an Apache server was used, which is an HTTP server capable of receiving and sending HTTP messages. PHP was chosen to develop the web services since it is an open-source, fast, and easy-to-use language. On the other hand, the server on which the web services had to be implemented was already configured for this programming language, since other applications had been developed for the hospital that manages the nursing home using PHP. Thus, taking into account the reasons mentioned above and to avoid maintenance and integration issues in the future, PHP proved to be the most appropriate choice.
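The web services themselves are written in PHP, but their use can be illustrated from the point of view of the mobile application. The following TypeScript sketch shows the kind of calls the application might issue to such services, one GET to select data and one POST to insert data, exchanging JSON; the endpoint paths, field names, and server address are assumptions made for illustration only.

// Client-side sketch of consuming the PHP RESTful web services. Endpoint
// paths, field names, and the server address are hypothetical.

const API_BASE = "http://intranet-server/nursinghome/api"; // assumed address of the Apache server

interface Resident {
  processNumber: string;
  fullName: string;
  bedroomNumber: number;
  bedNumber: number;
}

// GET: select data from the database through a web service that returns JSON.
async function fetchResidents(): Promise<Resident[]> {
  const response = await fetch(`${API_BASE}/residents.php`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as Resident[];
}

// POST: insert a new periodic evaluation through another web service.
async function addPeriodicEvaluation(processNumber: string, heartRate: number): Promise<void> {
  const response = await fetch(`${API_BASE}/evaluations.php`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ processNumber, heartRate, recordedAt: new Date().toISOString() }),
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
}

// Example usage.
fetchResidents()
  .then((residents) => console.log(`Loaded ${residents.length} residents`))
  .catch((error) => console.error(error));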
4.3. React Native JavaScript Framework
The interfaces of the mobile application were created using React Native, which is a JavaScript framework developed by Facebook for building native mobile applications, i.e., applications built for specific mobile platforms . React Native was released in 2015 and is based on React, which is a JavaScript library used to build user interfaces and which targets the web. React Native, in contrast, targets mobile platforms and enables developers to simultaneously develop and maintain one application that can be deployed to both iOS and Android . Thus, developers do not need to develop distinct applications in order to target these two platforms. It is important to note that, although the mobile application built in this study was developed only for Android devices, choosing a cross-platform framework was still essential to allow its quick and easy development for iOS devices in the future. In recent years, React Native has proven to have great potential as a cross-platform framework, enabling developers to build native applications with high performance. On the other hand, React Native provides many other benefits, such as :
- It is an open-source and free platform, making the development of mobile applications a lot easier since all documentation is available for free and it is community driven;
- A wide variety of third-party plugins and libraries that help and facilitate mobile development;
- A hot reload feature that allows developers to see updates without recompiling the application and without losing its state;
- A live reload feature that allows developers to instantly reload the application without recompiling it;
- It is straightforward and easy to use since it has a modular and intuitive architecture; and
- It offers great performance on mobile devices since it makes use of the graphics processing unit.
Thereby, all of the reasons mentioned above made React Native the most suitable choice to develop the interfaces of the mobile application. Furthermore, at the time of the development of this project, other applications were being developed for the hospital that manages the nursing home using React and React Native. Thus, React Native proved to be the obvious choice to avoid maintenance and integration issues in the future.
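For illustration, the following sketch shows a minimal React Native screen written in TypeScript, in the spirit of the interfaces described above; the component, its props, and the field names are assumptions and do not reproduce the actual screens of the application.

// Minimal React Native screen: a list of residents with the number of pending
// tasks for the day. All names are hypothetical.

import React, { useEffect, useState } from "react";
import { FlatList, Text, View } from "react-native";

interface ResidentSummary {
  processNumber: string;
  fullName: string;
  pendingTasks: number;
}

interface Props {
  // Function that loads the data, e.g. by calling one of the RESTful web services.
  loadResidents: () => Promise<ResidentSummary[]>;
}

export function ResidentListScreen({ loadResidents }: Props) {
  const [residents, setResidents] = useState<ResidentSummary[]>([]);

  useEffect(() => {
    // Load the data once when the screen is mounted.
    loadResidents().then(setResidents).catch(console.error);
  }, [loadResidents]);

  return (
    <View>
      <Text>Residents</Text>
      <FlatList
        data={residents}
        keyExtractor={(item) => item.processNumber}
        renderItem={({ item }) => (
          <Text>
            {item.fullName}: {item.pendingTasks} pending task(s)
          </Text>
        )}
      />
    </View>
  );
}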
4.4. Power BI Business Analytics Platform
One of the objectives of this project was to identify and define clinical and performance indicators in order to make the decision-making process more evidence-based and accurate. However, it is important to mention that these indicators have not yet been created since the database does not contain real data yet. Furthermore, in the future, the plan is to introduce them in a web application. Thus, to this end, Power BI was used to create examples of the defined clinical and performance indicators with fictitious data. Power BI is a business analytics platform released in 2013 by Microsoft Corporation that provides users with BI tools to collect, analyze, visualize, and share data . Thus, by aggregating data from various data sources, such as Excel, MySQL databases, and CSV files, among others, Power BI is capable of creating charts, reports, and graphs to obtain visuals and better insight into the data . On the other hand, Power BI is available as a desktop application, which only runs on Windows, and as a cloud service . Whereas the desktop application is used to model data and create reports, graphs, and charts, the cloud service is used to share and visualize them as well as to create them. Therefore, when users need to perform data modeling, the desktop application is the best choice, whereas to share dashboards, users need to use the cloud service. Thus, the Power BI desktop application was used to create visual examples of the defined clinical and performance indicators. This BI platform was chosen because it is a free, easy-to-use, and intuitive tool that makes it possible to quickly create charts and graphs without too much effort and to visualize them in a simple and explicit way.
As already stated, this study consisted of designing and developing a mobile application for health professionals working in a Portuguese nursing home in order to assist them at the point-of-care, e.g., to schedule, perform, and record tasks and to access, record, consult, and manipulate information, and to help them clinically manage the residents. It is important to mention that the nursing home used as a case study for this project is managed by a Portuguese hospital. Therefore, the professionals working for both the nursing home and the hospital were consulted throughout this project. To have a better understanding of the relevance and motivation of this project, it was essential to identify the main issues and challenges faced by the health professionals and the nursing home. Therefore, focus groups, semi-structured interviews, and questionnaires were conducted with the professionals working for both the nursing home and the hospital in order to obtain valuable information that could shed light on the main challenges faced by the nursing home. On the other hand, the case study was also subjected to observation so as to gain a better understanding of its conditions. Thus, the following challenges were identified:
- HICT, or any other form of technological progress, is not used in the nursing home. Although there is a computer in the nursing station, it is not used to record the clinical information of the residents or even to schedule tasks. Therefore, there are no EHRs, and health professionals use handwritten charts and medical records. Thus, since the information is stored on paper, the management of information is a lot more time-consuming, especially at the point-of-care, as the professionals have to constantly go back to the nursing station to manipulate information. Additionally, this situation can lead to a higher risk of losing, misplacing, or forgetting information, as well as of documenting information in the wrong place.
- The job-related tasks of the health professionals are scheduled and documented in handwritten charts or boards. This situation is particularly problematic since it is more error-prone, confusing, and less organized.
- The nursing home does not have access to a wireless Internet connection. The health professionals can only have access to an Internet connection in the nursing station where the computer is located. This situation is especially challenging since it complicates the implementation of any kind of mHealth solution.
- The number of health professionals is low compared to the high number of elderly residents. Consequently, at times, the health professionals are overloaded with work.
- There was a failed attempt to implement a web application. The web application aimed to shift from the paper-based to the computer-based management of data, allowing the health professionals to schedule tasks, document them, and record clinical information. However, the application was abandoned as it was time-consuming and not user-friendly.
In addition to the challenges mentioned above, this project was also motivated by the fact that the health professionals expressed their need for a solution that would allow them to perform their daily tasks anywhere in the nursing home and in a more organized and faster way. Consequently, the need to design and develop a solution that could assist the health professionals at the point-of-care by allowing them to manipulate information anywhere in the facility became clear.
In this sense, a proof of concept was conducted of a mobile application designed and developed to enhance the care delivered and the elders' quality of life, reduce the occurrence of errors and time-waste, and ease some of the workload experienced by the health professionals.
As mentioned above, the interfaces of the mobile application were developed using React Native, which is a JavaScript framework that enables building native mobile applications. It is important to state that, although React Native allows using the same code to deploy to both iOS and Android devices, the mobile application was only deployed for Android, since Android devices are more affordable and common and are, therefore, more likely to be provided by the nursing home when the application is used in the future. However, if needed and after small modifications, the application can be quickly and easily deployed to iOS devices. On the other hand, the MySQL RDBMS was used to define and create the database. In this sense, SQL was the language used to manipulate and access the data stored in the database. Furthermore, to enable the communication and transfer of data between the mobile application and the database, RESTful web services were created using PHP. Therefore, the solution is divided into three distinct elements, each with a different purpose. The architecture and the different interactions existing between the various elements of the solution are illustrated in . At this point in time, the mobile application is fully developed, and the web services and the database are deployed on the server of the hospital that manages the nursing home. However, the solution is still being evaluated and tested by the health professionals. Moreover, the mobile application is not yet in use since the requirements, such as mobile devices and a reliable wireless Internet connection, have not yet been provided to the nursing home. Nevertheless, until these requirements are available, the solution is expected to be continuously improved based on the opinions and knowledge provided by the professionals. Finally, it must be mentioned that, during all stages of the design and development of this project, ethical issues were taken into account in order to prevent confidentiality problems and to safeguard the quality, accuracy, and safety of the solution. In this sense, the health professionals were constantly consulted throughout the design and development of the solution in order to develop an accurate and high-quality mobile application. Furthermore, data privacy and confidentiality were promoted through the implementation of a login through which only authorized users, namely the nurses and doctors, with encrypted login credentials, can have access to the information contained in the solution. On the other hand, the solution can only be accessed through an Intranet connection, i.e., the private network of the institution.
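The sign-in restriction and the Intranet-only access just described can be sketched from the application side as follows; the endpoint names, the use of a session token, and the server address are assumptions made for illustration, since the paper does not detail the exact mechanism.

// Sketch of the client-side sign-in exchange and of an authorized request
// helper, consistent with the three-element architecture described above.
// Endpoint names, token handling, and the server address are hypothetical.

const INTRANET_API = "http://intranet-server/nursinghome/api"; // reachable only on the private network

let sessionToken: string | null = null;

async function signIn(institutionId: string, password: string): Promise<boolean> {
  const response = await fetch(`${INTRANET_API}/login.php`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ institutionId, password }),
  });
  if (!response.ok) return false;
  const body = (await response.json()) as { token?: string };
  sessionToken = body.token ?? null;
  return sessionToken !== null;
}

// Subsequent requests attach the token so that only authorized users
// (nurses and doctors) can reach the stored information.
async function authorizedGet<T>(path: string): Promise<T> {
  if (!sessionToken) throw new Error("User is not signed in");
  const response = await fetch(`${INTRANET_API}/${path}`, {
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
  if (!response.ok) throw new Error(`Request failed with status ${response.status}`);
  return (await response.json()) as T;
}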
6.1. Database and RESTful Web Services Definition and Implementation
As mentioned above, the nursing home uses handwritten medical records and resorts to paper to manipulate information. Consequently, the facility did not have any database implemented prior to the development of this project. Therefore, before designing the interfaces of the mHealth application, a database had to be defined in order to allow the application to access and store data. Thus, a MySQL relational database was defined and created taking into account the data that needed to be stored. Then, the database was deployed and implemented on the server of the hospital that manages the nursing home. However, it must be mentioned that the database has yet to be populated with data related to the residents and the health professionals. In this sense, a database composed of 49 tables was designed and created, allowing the storage of:
- Data related to the users of the mobile application: personal information of the health professionals (their full name, email, profile picture, telephone and mobile phone numbers, date of birth, institution identification number, and gender, among others) is stored as well as their login credentials.
- Personal data related to the residents (their full name, institution process number, bed and bedroom numbers, admission date, date of birth, profile picture, telephone and mobile phone numbers, and national health service number, among others) is stored.
- Personal data related to the informal caregivers and personal contacts of the residents (their full name, telephone and mobile phone numbers, relationship with the resident, and observations, among others) is stored.
- Clinical notes written by the doctors: The content of the note, the institution identification number of the professional who wrote the note, the resident's institution process number, and the date and time of the creation of the note are stored.
- Nursing notes written by the nurses: Similar to the clinical notes of the doctors, the content of the note, the institution identification number of the professional who created the note, the resident's institution process number, and the date and time of the creation of the note are stored.
- Clinical information related to the residents, namely their general evaluation (e.g., alcohol and tobacco consumption), usual medication, clinical history (e.g., existence of diabetes, diseases, allergies, and past surgeries and fractures), physical assessment (e.g., weight, height, blood pressure, heart rate, skin integrity, turgidity, and color, vision, and hearing), nutritional and eating patterns (e.g., type of diet, dentition, and use of a nasogastric tube), bowel and bladder elimination patterns (e.g., use of adult diapers or of a urinary catheter), physical activity patterns (e.g., strength of the limbs), sleeping patterns (e.g., insomnia problems and number of hours of sleep during the day and night), and the general assessment made by the health professionals (e.g., emotional state or autonomy level) is stored.
- Data related to the wounds of the residents, namely the type of wound, pictures of the wound, and its location, treatments, and start and finish dates, are stored. The evolution of the wounds is also documented through photos and observations provided by the health professionals. Additionally, the various treatments used throughout the evolution of the wound are stored.
- Periodic evaluations recorded by the health professionals (blood pressure, weight, heart rate, and axillary temperature) are stored. In this context, the date and time of the evaluation, the institution identification number of the professional who made the evaluation, and the resident's institution process number are stored.
- Periodic evaluations of the capillary blood glucose of residents with diabetes are stored. Again, the date and time of the evaluation, the institution identification number of the professional who made the evaluation, and the resident's institution process number are stored.
- The history of the medical and inpatient reports of the residents: The date, type, and a brief description of the report, among others, are stored.
- The nursing interventions scheduled by the health professionals: the type of nursing intervention, the scheduled and realization dates of the intervention, the resident's institution process number, the institution identification numbers of the professionals who scheduled and performed the intervention, and the state of the intervention, i.e., whether it was performed or not, are stored.
- Data related to the nursing home, namely the name of the institution and the bedroom and bed numbers existing in the nursing home, are stored.
- Technical data on the types and sizes of urinary catheters and nasogastric tubes available and on the types of wounds, injectable medications, nursing interventions, wound locations, and medical and inpatient reports, among others, are stored.
Afterwards, RESTful web services written in PHP with SQL queries were developed to allow the sharing of data between the frontend (the mobile application) and the backend (the database). In this sense, numerous web services were created to allow users to manipulate the data in the database, namely to insert, update, and select data. Finally, similar to the database, the web services were deployed on the server of the hospital.
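To give an idea of how a few of these stored entities can be represented on the application side, the following TypeScript interfaces sketch three of the record types listed above; the field names are assumptions that merely mirror the information described in the text, and the real schema consists of 49 relational tables.

// Illustrative TypeScript interfaces mirroring three of the record types
// described above. Field names are hypothetical.

interface ResidentRecord {
  processNumber: string;             // institution process number
  fullName: string;
  dateOfBirth: string;               // ISO date
  admissionDate: string;
  bedroomNumber: number;
  bedNumber: number;
  nationalHealthServiceNumber: string;
  hasDiabetes: boolean;
}

interface NursingNote {
  residentProcessNumber: string;     // links the note to a resident
  professionalId: string;            // institution identification number of the nurse
  content: string;
  createdAt: string;                 // date and time of creation
}

interface WoundRecord {
  residentProcessNumber: string;
  woundType: string;
  location: string;
  startDate: string;
  finishDate?: string;               // absent while the wound is still open
  photos: string[];                  // references to the stored pictures
  currentTreatment: string;
}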
6.2. Mobile Application Features
After designing and developing the database and the web services, the interfaces and the features of the mobile application had to be designed and developed. For this purpose, React Native was chosen, as stated above. At first, when the user, i.e., the health professional, launches the mobile application, he needs to sign up for an account if he does not have one. During sign-up, the user is requested to provide his login credentials and personal data and to specify whether he is a nurse or a doctor, since these two user types have access to different features once signed in to the application. Then, once the user has provided his login credentials and his personal data, the data are stored in the database. Alternatively, if the user already has an account, he can directly sign in to the mobile application with his login credentials. Finally, if his login credentials match the ones stored in the database, the user is successfully signed in to the application, having access to the following features:
- Daily tasks: The user can consult the nursing interventions/tasks planned for the day and confirm or cancel their execution. Furthermore, the user is also able to consult the tasks that were already executed or cancelled. This feature is only available for nurses since, through interviews performed with the health professionals, it was concluded that doctors do not schedule tasks when present in the nursing home.
- Scheduled tasks: The user is able to consult the pending tasks, the cancelled tasks, and the finished tasks scheduled in the future, i.e., after the current date. Additionally, he can also cancel or confirm the execution of a task. For the same reasons mentioned above, this feature is only available for nurses.
- Plan of the nursing home: Both user types can consult the list of bedrooms existing in the nursing home. Then, by choosing one of the bedrooms, the user has access to the following information: the number of beds available and the names of the residents living in the bedroom. For each resident, the bed number is specified as well as the number of pending tasks associated with the resident for the day.
- Management of the residents: If the user is a nurse, he is able to manage the residents living in the nursing home. He can view and edit their personal data as well as add new residents or disable a given resident if needed. Additionally, the user can view and edit the informal caregivers and personal contacts of each resident as well as add and remove contacts. However, if the user is a doctor, he is only able to view the personal data of the residents and the informal caregivers of each resident. Thus, doctors cannot insert new residents and informal caregivers, disable them, or edit their personal data.
- Clinical notes: If the user is a doctor, he is able to create new clinical notes and consult the clinical notes' history of each resident. However, nurses are only able to view the clinical notes' history of each resident since clinical notes can only be written by doctors.
- Nursing notes: If the user is a nurse, he is able to create new nursing notes and consult the nursing notes' history of each resident. However, doctors are only able to consult the nursing notes' history of each resident since nursing notes can only be written by nurses.
- Management of the clinical information of the residents: If the user is a nurse, he can manage, i.e., edit and view, the clinical information of the residents. However, doctors can only view the clinical information of the residents.
- Management of wounds: If the user is a nurse, he can manage the wounds of the residents and consult the wound history of each resident. More specifically, the user can insert new wounds for each resident as well as consult and record their evolution through photos and observations. Additionally, it is also possible to consult the history of the treatments used throughout the evolution of a wound and modify the current treatment if needed. Moreover, the user can also download a PDF file of the evolution of a given wound. However, doctors can only consult the wounds' history of each resident and the evolution of each wound and of the treatments used, and download the PDF file of the evolution of the wound.
- Periodic evaluations: This feature is available to both users and allows them to add new periodic evaluations and consult the periodic evaluations' history of each resident.
- Periodic evaluations of the capillary blood glucose: This feature is available to both users, enabling them to add new periodic evaluations of the capillary blood glucose for residents with diabetes. It is also possible to consult the history of the periodic evaluations of the capillary blood glucose of each resident with diabetes.
- Inpatient reports: This feature is available to both users and allows them to add new inpatient reports and consult the inpatient reports' history of each resident.
- Medical reports: This feature is available to both users, allowing them to add new medical reports and consult the medical reports' history of each resident.
- Planning of nursing interventions: This feature is only available for nurses, enabling them to schedule nursing interventions for each resident.
- Profile: This feature is available to both users, allowing them to have access to and edit their personal data.
- Sign out: This feature is available to both users and allows them to sign out of their accounts.
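The feature list above consistently distinguishes between what nurses and what doctors can do. A simple way to express such role-based access in the application is sketched below in TypeScript; the feature identifiers are illustrative, and the actual enforcement mechanism of the solution is not detailed in this manuscript.

// Sketch of role-based feature gating for the two user types described above.
// Feature identifiers are hypothetical.

type UserRole = "nurse" | "doctor";

type Feature =
  | "dailyTasks"
  | "scheduledTasks"
  | "nursingHomePlan"
  | "writeClinicalNote"
  | "writeNursingNote"
  | "manageResidents"
  | "planNursingInterventions"
  | "periodicEvaluations";

const featuresByRole: Record<UserRole, Feature[]> = {
  nurse: [
    "dailyTasks",
    "scheduledTasks",
    "nursingHomePlan",
    "writeNursingNote",
    "manageResidents",
    "planNursingInterventions",
    "periodicEvaluations",
  ],
  doctor: ["nursingHomePlan", "writeClinicalNote", "periodicEvaluations"],
};

function canUse(role: UserRole, feature: Feature): boolean {
  return featuresByRole[role].includes(feature);
}

// Example: only nurses can schedule nursing interventions.
console.log(canUse("doctor", "planNursingInterventions")); // false
console.log(canUse("nurse", "planNursingInterventions"));  // true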
6.3. Clinical and Performance Business Intelligence Indicators
To analyze and gain a deeper understanding of the overall performance of the nursing home and its health professionals, as well as to improve the nursing care delivered and its outcomes, clinical and performance indicators were defined. However, at the moment, these indicators have not yet been created since the database does not contain real data. Moreover, to create meaningful and valuable indicators, data should be gathered over a relatively long period of time, which is not the case at the moment. Furthermore, to allow better visualization of and control over the indicators, the plan is to implement them in a web application rather than in the mobile solution. Thus, in the future, when enough data have been gathered, the intention is to create at least the following clinical and performance indicators:
- Percentage of nursing interventions realized per nurse: Pie chart indicator of the percentage of nursing interventions realized per nurse over a time horizon, for instance, per month and year. This indicator would make it possible to highlight whether the nursing interventions are performed proportionately among the nurses working in the nursing home and whether a certain health professional has a higher workload compared to others. Consequently, with the information obtained through this indicator, improvements and measures could be implemented to achieve a better distribution of the nursing interventions among the nurses.
- Total of realized and unrealized nursing interventions per month: Stacked column chart indicator of the total of realized and unrealized (neither realized nor cancelled) nursing interventions. This indicator would help identify abnormalities in the number of unrealized nursing interventions as well as the months in which more tasks are performed or left unrealized. Regarding the former, if too many nursing interventions are unrealized, it may suggest that the nurses are not performing their job as well as they should. For instance, it may shed light on whether the nurses are overloaded with work and do not have enough time to perform all of their tasks. On the other hand, regarding the latter, if some specific months are busier than others, more nurses could be present for each shift in order for the nursing interventions to be realized as scheduled.
- Variation of the capillary blood glucose of a given resident over time: Line chart indicator of the variation of the capillary blood glucose of a given resident over time. Thus, the health professionals would be able to have a better visualization of the variation of the capillary blood glucose and, thus, more rapidly detect abnormalities and act on them. Additionally, this indicator could also be extended to other types of evaluations, namely to analyze the variation of the weight, blood pressure, heart rate, oxygen saturation, and axillary temperature of a given resident over time.
- Percentage of wounds per resident: Bar chart indicator of the percentage of wounds per resident over a time horizon, for instance, per month or year. Consequently, with this clinical indicator, the health professionals would be able to identify the residents with an abnormal number of wounds and, thus, supervise them more closely so as to avoid and reduce the occurrence of wounds for these residents.
- Percentage of wounds per wound type: Donut chart indicator of the percentage of wounds per wound type over a time horizon, for instance, per month or year. Through this clinical indicator, the health professionals would be able to identify whether certain wound types occur more frequently than others. Consequently, according to the results obtained, further research and improvements could be carried out so as to identify and reduce wound-causing factors.
Thus, through this clinical indicator, the health professionals would be able to identify if certain wound types occur more frequently than others. Consequently, according to the results obtained, further research and improvements could be realized so as to identify and reduce wound-causing factors. - Percentage of nursing interventions realized annually per type of nursing intervention: Bar chart indicator of the percentage of nursing interventions realized annually per type of nursing intervention. Therefore, through this indicator, the health professionals would be able to identify and be aware of the nursing interventions that are not realized with the expected frequency. Hence, with this knowledge, the health professionals could perform these nursing interventions more frequently. , and illustrate examples of some of the indicators mentioned above. Power BI was used with fictitious data.
As mentioned above, the nursing home uses handwritten medical records and resorts to paper to manipulate information. Consequently, the facility did not have any database implemented prior to the development of this project. Therefore, before designing the interfaces of the mHealth application, a database had to be defined in order to allow the application to have access and store data. Thus, a MySQL relational database was defined and created taking into account the data that needed to be stored. Then, the database was deployed and implemented in the server of the hospital that manages the nursing home. However, it must be mentioned that the database remains to be populated with data related to the residents and the health professionals. In this sense, a database composed of 49 tables was designed and created, allowing the storage of: - Data related to the users of the mobile application: personal information of the health professionals (their full name, email, profile picture, telephone and mobile phone numbers, date of birth, institution identification number, and gender, among others) is stored as well as their login credentials. - Personal data related to the residents (their full name, institution process number, bed and bedroom numbers, admission date, date of birth, profile picture, telephone and mobile phone numbers, and national health service number, among others) is stored. - Personal data related to the informal caregivers and personal contacts of the residents (their full name, telephone and mobile phone numbers, relationship with the resident, and observations, among others) is stored. - Clinical notes written by the doctors: The content of the note, the institution identification number of the professional who wrote the note, the resident’s institution process number, and the date and time of the creation of the note are stored. - Nursing notes written by the nurses: Similar to the clinical notes of the doctors, the content of the note, the institution identification number of the professional who created the note, the resident’s institution process number, and the date and time of the creation of the note are stored. - Clinical information related to the residents, namely their general evaluation (e.g., alcohol and tobacco consumption), usual medication, clinical history (e.g., existence of diabetes, diseases, allergies, and past surgeries and fractures), physical assessment (e.g., weight, height, blood pressure, heart rate, skin integrity, turgidity, and color, vision, and hearing), nutritional and eating patterns (e.g., type of diet, dentition, and use of a nasogastric tube), bowel and bladder elimination patterns (e.g., use of adult diapers or of a urinary catheter), physical activity patterns (e.g., strength of the limbs), sleeping patterns (e.g., insomnia problems and number of hours of sleep during the day and night), and general assessment made by the health professionals (e.g., emotional state or autonomy level) is stored. - Data related to the wounds of the residents, namely the type of wound, pictures of the wound, and its location, treatments, and start and finish dates are stored. The evolution of the wounds is also documented through photos and observations provided by the health professionals. Additionally, the various treatments used throughout the evolution of the wound are stored. - Periodic evaluations recorded by the health professionals (blood pressure, weight, heart rate, and axillary temperature) are stored. 
In this context, the date and time of the evaluation, the institution identification number of the professional who made the evaluation, and the resident’s institution process number are stored. - Periodic evaluations of the capillary blood glucose of residents with diabetes are stored. Again, the date and time of the evaluation, the institution identification number of the professional who made the evaluation, and the resident’s institution process number are stored. - The history of the medical and inpatient reports of the residents: The date, type, and a brief description of the report, among others, are stored. - The nursing interventions scheduled by the health professionals through the identification of the type of nursing intervention, the scheduled and realization dates of the intervention, the resident’s institution process number, the institution identification numbers of the professionals who scheduled and performed the nursing intervention, and the state of the intervention, i.e., if the intervention was performed or not, are stored. - Data related to the nursing home, namely the name of the institution and the bedroom and bed numbers existing in the nursing home, are stored. - Technical data on the types and sizes of urinary catheters and nasogastric tubes available and types of wounds, injectable medications, nursing interventions, wounds location, and medical and inpatient reports, among others, are stored. Afterwards, RESTful web services written in PHP with SQL queries were developed to allow the sharing of data between the frontend (the mobile application) and the backend (the database). In this sense, numerous web services were created to allow users to manipulate data from the database, namely to insert, update, and select data. Finally, similar to the database, the web services were deployed in the server of the hospital.
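To make the relational structure and the read-style services more concrete, the sketch below creates two of the tables described above and a query that mirrors a "consult the nursing notes' history" service. It is only an illustration: the real backend is a MySQL database with 49 tables accessed through PHP web services, and the table and column names used here are hypothetical.

```python
import sqlite3

# Illustrative schema sketch: two of the many tables described above,
# with hypothetical column names (the real database holds 49 tables).
DDL = """
CREATE TABLE residents (
    process_number   INTEGER PRIMARY KEY,  -- institution process number
    full_name        TEXT NOT NULL,
    bedroom_number   TEXT,
    bed_number       TEXT,
    admission_date   TEXT
);
CREATE TABLE nursing_notes (
    note_id          INTEGER PRIMARY KEY AUTOINCREMENT,
    process_number   INTEGER NOT NULL REFERENCES residents(process_number),
    professional_id  TEXT NOT NULL,        -- institution identification number
    content          TEXT NOT NULL,
    created_at       TEXT NOT NULL         -- date and time of creation
);
"""

def notes_history(conn: sqlite3.Connection, process_number: int) -> list[tuple]:
    """Mimics a read-only web service: return the nursing notes' history of a resident."""
    cur = conn.execute(
        "SELECT created_at, professional_id, content "
        "FROM nursing_notes WHERE process_number = ? ORDER BY created_at",
        (process_number,),
    )
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(DDL)
    conn.execute("INSERT INTO residents VALUES (101, 'Resident A', '3', '2', '2021-05-04')")
    conn.execute(
        "INSERT INTO nursing_notes (process_number, professional_id, content, created_at) "
        "VALUES (101, 'N-07', 'Dressing changed on left leg wound.', '2021-05-05 09:30')"
    )
    print(notes_history(conn, 101))
```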
After designing and developing the database and the web services, the interfaces and the features of the mobile application had to be designed and developed. For this purpose, React Native was chosen, as stated above. At first, when the user, i.e., the health professional, launches the mobile application, he needs to sign up for an account if he does not have one. In this context, the user is requested to provide his login credentials and personal data and to specify whether he is a nurse or a doctor, since these two user types have access to different features once signed in to the application. Then, once the user has provided his login credentials and his personal data, the data are stored in the database. Alternatively, if the user already has an account, he can directly sign in to the mobile application with his login credentials. Finally, if his login credentials match the ones stored in the database, the user is successfully signed in to the application, having access to the following features (the resulting role-based access rules are summarized in a short sketch after this list): - Daily tasks: the user can consult the nursing interventions/tasks planned for the day and confirm or cancel their execution. Furthermore, the user is also able to consult the tasks that were already executed or cancelled. This feature is only available for nurses since, through interviews performed with the health professionals, it was concluded that doctors do not schedule tasks when present in the nursing home. - Scheduled tasks: The user is able to consult the pending tasks, the cancelled tasks, and the finished tasks scheduled in the future, i.e., after the current date. Additionally, he can also cancel or confirm the execution of a task. For the same reasons mentioned above, this feature is only available for nurses. - Plan of the nursing home: Both user types can consult the list of bedrooms existing in the nursing home. Then, by choosing one of the bedrooms, the user has access to the following information: the number of beds available and the name of the residents living in the bedroom. For each resident, the bed number is specified as well as the number of pending tasks associated with the resident for the day. - Management of the residents: If the user is a nurse, he is able to manage the residents living in the nursing home. He can also view and edit their personal data as well as add new residents or disable a given resident if needed. Additionally, the user can view and edit the informal caregivers and personal contacts of each resident as well as add and remove contacts. However, if the user is a doctor, he is only able to view the personal data of the residents and the informal caregivers of each resident. Thus, doctors cannot insert new residents and informal caregivers, disable them, and edit their personal data. - Clinical notes: If the user is a doctor, he is able to create new clinical notes and consult the clinical notes’ history of each resident. However, nurses are only able to view the clinical notes’ history of each resident since clinical notes can only be written by doctors. - Nursing notes: If the user is a nurse, he is able to create new nursing notes and consult the nursing notes’ history of each resident. However, doctors are only able to consult the nursing notes’ history of each resident since nursing notes can only be written by nurses. - Management of the clinical information of the residents: If the user is a nurse, he can manage, i.e., edit and view, the clinical information of the residents.
However, doctors can only view the clinical information of the residents. - Management of wounds: If the user is a nurse, he can manage the wounds of the residents and consult the wound history of each resident. More specifically, the user can insert new wounds for each resident as well as consult and record their evolution through photos and observations. Additionally, it is also possible to consult the history of the treatments used throughout the evolution of a wound and modify the current treatment if needed. Moreover, the user can also download a PDF file of the evolution of a given wound. However, doctors can only consult the wounds’ history of each resident, the evolution of each wound and of the treatments used, and download the PDF file of the evolution of the wound. - Periodic evaluations: This feature is available to both users and allows them to add new periodic evaluations and consult the periodic evaluations’ history of each resident. - Periodic evaluations of the capillary blood glucose: This feature is available to both users, enabling them to add new periodic evaluations of the capillary blood glucose for residents with diabetes. It is also possible to consult the history of the periodic evaluations of the capillary blood glucose of each resident with diabetes. - Inpatient reports: This feature is available to both users and allows them to add new inpatient reports and consult the inpatient reports’ history of each resident. - Medical reports: This feature is available to both users, allowing them to add new medical reports and consult the medical reports’ history of each resident. - Planning of nursing interventions: This feature is only available for nurses, enabling them to schedule nursing interventions for each resident. - Profile: This feature is available to both users, allowing them to have access and edit their personal data. - Sign out: This feature is available to both users and allows them to sign out of their accounts.
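The role-based access rules spelled out in the feature list above can be condensed into a small permission matrix. The sketch below is a hypothetical Python summary of those rules; the feature and action names are illustrative and do not correspond to identifiers in the actual application.

```python
# Hypothetical summary of the role-based access rules described above.
PERMISSIONS = {
    "nurse": {
        "daily_tasks": {"view", "confirm", "cancel"},
        "residents": {"view", "add", "edit", "disable"},
        "clinical_notes": {"view"},
        "nursing_notes": {"view", "create"},
        "clinical_information": {"view", "edit"},
        "wounds": {"view", "add", "record_evolution", "change_treatment", "download_pdf"},
        "periodic_evaluations": {"view", "add"},
        "intervention_planning": {"schedule"},
    },
    "doctor": {
        "residents": {"view"},
        "clinical_notes": {"view", "create"},
        "nursing_notes": {"view"},
        "clinical_information": {"view"},
        "wounds": {"view", "download_pdf"},
        "periodic_evaluations": {"view", "add"},
    },
}

def is_allowed(role: str, feature: str, action: str) -> bool:
    """Return True if the given role may perform the action on the feature."""
    return action in PERMISSIONS.get(role, {}).get(feature, set())

assert is_allowed("nurse", "nursing_notes", "create")
assert not is_allowed("doctor", "nursing_notes", "create")  # nursing notes are written by nurses only
assert not is_allowed("doctor", "residents", "edit")        # doctors can only view resident data
```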
To analyze and gain a deeper understanding of the overall performance of the nursing home and its health professionals as well as to improve the nursing care delivered and its outcomes, clinical and performance indicators were defined. However, at the moment, these indicators have not yet been created since the database does not have real data. Moreover, to create meaningful and valuable indicators, data should be gathered over a relatively long period of time, which is not the case at the moment. Furthermore, to have a better visualization and control over the indicators, it is envisioned to implement them in a web application and not in the mobile solution. Thereby, in the future, when enough data are gathered, it is envisioned to create, at least, the following clinical and performance indicators (a brief computational sketch follows this list): - Percentage of nursing interventions realized per nurse: Pie chart indicator of the percentage of nursing interventions realized per nurse over a time horizon, for instance, per month and year. Thus, this indicator would enable highlighting if the nursing interventions are performed proportionately among the nurses working in the nursing home and if a certain health professional has a higher workload compared to others. Consequently, with the information obtained through this indicator, improvements and measures could be realized to have a better distribution of the nursing interventions among the nurses. - Total of realized and unrealized nursing interventions per month: Stacked column chart indicator of the total of realized and unrealized (neither realized nor cancelled) nursing interventions. This indicator would help identify abnormalities in the number of unrealized nursing interventions as well as the months in which more tasks are performed or unrealized. Consequently, regarding the former, if too many nursing interventions are unrealized, it may suggest that the nurses are not performing their job as well as they should. For instance, it may shed light on whether the nurses are overloaded with work, not having enough time to perform all of their tasks. On the other hand, regarding the latter, if some specific months are busier than others, more nurses could be present for each shift in order for the nursing interventions to be realized as scheduled. - Variation of the capillary blood glucose of a given resident over time: Line chart indicator of the variation of the capillary blood glucose of a given resident over time. Thus, the health professionals would be able to have a better visualization of the variation of the capillary blood glucose and, thus, more rapidly detect abnormalities and act on them. Additionally, this indicator could also be extended to other types of evaluations, namely to analyze the variation of the weight, blood pressure, heart rate, oxygen saturation, and axillary temperature of a given resident over time. - Percentage of wounds per resident: Bar chart indicator of the percentage of wounds per resident over a time horizon, for instance, per month or year. Consequently, with this clinical indicator, the health professionals would be able to identify the residents with an abnormal number of wounds and, thus, supervise them more closely so as to avoid and reduce the occurrence of wounds for these residents. - Percentage of wounds per wound type: Donut chart indicator of the percentage of wounds per wound type over a time horizon, for instance, per month or year.
Thus, through this clinical indicator, the health professionals would be able to identify if certain wound types occur more frequently than others. Consequently, according to the results obtained, further research and improvements could be realized so as to identify and reduce wound-causing factors. - Percentage of nursing interventions realized annually per type of nursing intervention: Bar chart indicator of the percentage of nursing interventions realized annually per type of nursing intervention. Therefore, through this indicator, the health professionals would be able to identify and be aware of the nursing interventions that are not realized with the expected frequency. Hence, with this knowledge, the health professionals could perform these nursing interventions more frequently. Examples of some of the indicators mentioned above are illustrated using Power BI with fictitious data.
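As an indication of how two of these indicators could be computed once real data are gathered, the following sketch uses pandas on a small fictitious set of intervention records; the column names and values are made up for illustration and do not come from the nursing home's database.

```python
import pandas as pd

# Fictitious intervention records of the kind the indicators would be built from.
df = pd.DataFrame({
    "nurse":  ["Ana", "Ana", "Rui", "Rui", "Rui", "Eva"],
    "month":  ["2021-03", "2021-03", "2021-03", "2021-04", "2021-04", "2021-04"],
    "status": ["realized", "realized", "realized", "realized", "unrealized", "realized"],
})

# Percentage of realized nursing interventions per nurse, per month (pie chart data).
realized = df[df["status"] == "realized"]
share_per_nurse = (
    realized.groupby(["month", "nurse"]).size()
    .div(realized.groupby("month").size(), level="month")
    .mul(100)
)
print(share_per_nurse)

# Totals of realized and unrealized interventions per month (stacked column chart data).
print(pd.crosstab(df["month"], df["status"]))
```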
After the development of the mobile application, a proof of concept was performed to validate the usability, feasibility, and usefulness of the solution towards the target audience and to ensure that the solution provides all of the requirements initially proposed. Therefore, a SWOT analysis was elaborated to identify the strengths, weaknesses, opportunities, and threats related to the solution. To this end, a questionnaire based on the TAM3 was conducted with the health professionals working in the nursing home in order to assess their acceptability, i.e., how they accept and receive the mobile application, and its results were used as a basis in the SWOT analysis. Furthermore, this analysis was also based on personal opinion as well as valuable information obtained through semi-structured interviews and focus groups realized with the professionals working for both the nursing home and hospital. It must be mentioned that the survey questionnaire was conducted with few health professionals. Thus, not enough results were obtained to be presented. However, in the future, it is intended to evaluate the mobile application with more health professionals and, thus, have a more complete evaluation. The SWOT analysis performed is presented hereafter. The following strengths were identified: - Decrease of time-waste and, consequently, an increase in productivity since the health professionals can have access and record information at the point-of-care, i.e., they do not need to constantly return to the nursing station; - Decrease of the occurrence of errors since the solution reduces the risk of misplacing, losing, or forgetting information; - Enhancement of the nursing care delivered and elders’ quality of life due to the decrease of errors and time-waste; - Easier access and manipulation of information; - Timely sharing and centralization of information; - Optimization of the various processes occurring in the nursing home; - Answer to the needs of the health professionals; - Scheduling of tasks less confusing and more organized compared to hand-written boards; - Reduction of the amount of paper generated daily with hand-written charts due to the shift from the paper-based to the computer-based management of data; - Evidence-based and more accurate decision-making process since the health professionals can have access to information at the point-of-care; - High usability since the mobile application has a simple, user-friendly, and intuitive design with well-defined paths and organized information; - High adaptability since the solution can easily be implemented in other nursing homes; and - High scalability since new features can easily be added and the mobile application can easily be maintained. The following weaknesses can be pointed out: - Need of a wireless Internet connection, which is not currently available in the nursing home; - Need of mobile devices, namely mobile phones and tablets, in order to use the solution; - Need to populate the database with real data, namely information of the residents and health professionals, which will require time resources; - Need to train the health professionals before using the solution; and - Need to wait a relatively long period of time before creating the clinical and performance indicators. 
The opportunities of the solution are as follows: - Introduction and implementation of the mobile application in other nursing homes; - Enhancement of other processes due to the technological improvement of the nursing home; and - Creation of clinical and performance indicators due to the elimination of the paper-based management of data and the storage of information in a database. Finally, the following threats can be highlighted: - Issues may emerge if reliable wireless Internet connectivity is not available; and - New systems and competition may arise due to the novelty of the solution, which addresses recent problems. In light of the above, it is possible to affirm how beneficial and influential mHealth and BI are in healthcare organizations, namely to enhance the various processes occurring in them and, consequently, to improve the care delivered and patients’ quality of life. In fact, through the use of mobile applications, such as the solution described in this manuscript, the medical practice can be completely transformed as they allow rapid and convenient access to and manipulation of information at the point-of-care. Thus, for professionals constantly on the move, which is the case with the health professionals working in the nursing home used as a case study, an mHealth solution such as the one developed allows reducing time-waste since they do not need to interrupt their workflow, decreasing the occurrence of errors since the likelihood of forgetting or misplacing information is lower, and making faster and better-informed decisions since they can have access to up-to-date information at the point-of-care. Furthermore, through BI tools, it is possible to use and analyze the huge amounts of data gathered daily in organizations in order to turn these data into valuable knowledge. In fact, the clinical and performance indicators defined in this research project enable highlighting problem areas and opportunities existing in the nursing home and shed light on the overall performance of the facility and its professionals. Finally, regarding the ethical issues associated with the implementation of HICT in healthcare contexts, they were safeguarded through the inclusion and consultation of the health professionals during all stages of the design and development of the solution in order to develop an accurate mHealth application of quality that actually meets the needs of its users. Additionally, privacy and confidentiality issues were also taken into account since only authorized users, i.e., the nurses and doctors working in the nursing home, can have access to the information displayed in the solution. Moreover, data regarding login credentials were encrypted and the solution would only be available through an Intranet connection, i.e., a private network. However, since implementing data security protections is a difficult task to achieve, there is still some work that remains to be done in order to respond completely to the privacy requirements that are constantly emerging. In this context, it is planned to continuously improve the solution over time, through the encryption of all the data stored in the database.
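As an illustration of the kind of credential protection mentioned above, the snippet below shows salted password hashing with Python's standard library. This is a common alternative to encrypting stored credentials and is shown only as a sketch; it is not necessarily the mechanism used in the solution, and the names and parameters are illustrative.

```python
import hashlib
import hmac
import os

# Salted password hashing with the standard library; parameters are illustrative.
def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("nurse-secret")
assert verify_password("nurse-secret", salt, digest)
assert not verify_password("wrong-password", salt, digest)
```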
The project described in this manuscript aimed to introduce HICT in a Portuguese nursing home suffering from the consequences of the aging population and the usage of rudimentary methods and, subsequently, take advantage of the benefits provided by HICT in order to improve elders’ quality of life and the nursing care delivered. Therefore, considering the issues and challenges faced by the nursing home used as a case study, a mobile application was designed and developed for the health professionals working in the facility in order to help them manage the residents and assist them at the point-of-care. In the long-term, the research team foresees that the mobile application will allow easier and faster access and manipulation of the information by the health professionals compared to the paper-based management of data, since, after some time, a paper-based process is composed of several pages. Additionally, it will help reduce time-waste and errors and, hence, improve elders’ quality of life and the nursing care delivered as well as reduce some of the work overload experienced by health professionals. Furthermore, it will help improve the overall performance of the nursing home and health professionals as well as optimize some of the processes occurring in the facility. Regarding future work, it is planned to provide the necessary resources to the nursing home since, without them, the health professionals are not able to use the solution. Thus, it is intended to provide mobile devices, such as tablets and mobile phones, and a reliable wireless Internet connection, namely wireless Intranet, in order for the mobile application to be used. Afterwards, it is intended to populate the database with real data related to the health professionals and the residents. It is important to mention that the database already contains technical data (e.g., the sizes and types of urinary catheters and nasogastric tubes available and types of wounds, among others) since this information was gathered through the help of the health professionals. On the other hand, the research team envisions designing and developing a web application to assist the mobile application and, hence, integrate some of its features. In this sense, the web application will integrate most of the features of the mobile application, allowing the health professionals to manage the residents from a computer if they prefer to do so. Additionally, it is intended to integrate into the web application a module to manage the users of the applications and another containing the clinical and performance indicators mentioned previously. However, these indicators will only be available when enough data are gathered, since, otherwise, the knowledge acquired would not be meaningful and valuable. Furthermore, it is intended to continue the expansion of the mobile application through the addition of new and relevant features. Therefore, considering the above, the research team envisions encouraging the continuous maintenance, growth, and expansion of the solution.
|
Global burden of emergency and operative conditions: an analysis of Global Burden of Disease data, 2011–2019 | 8575a5a0-965e-496e-ba49-4336123653f6 | 11865850 | Surgical Procedures, Operative[mh] | In passing resolution 76.2 Integrated emergency, critical and operative care for universal health coverage and protection from health emergencies – the so-called ECO resolution – at the World Health Assembly in 2023, World Health Organization (WHO) Member States pledged to strengthen their health systems to provide high-quality, integrated, emergency, critical and operative care. Historically, health systems have often adopted a vertical approach that focused health services on specific population groups or conditions, such as maternal morbidity and mortality, particular infectious diseases or trauma. The ECO resolution emphasizes the importance of a horizontal alignment and the integration of health-care services along the patient pathway at all levels, from primary care to tertiary specialist care. Further, the coronavirus disease 2019 (COVID-19) pandemic highlighted the need for more resilient health systems that can better respond and adapt to external shocks and health emergencies. Stronger health systems are also necessary for attaining the United Nations sustainable development goals for 2030, particularly goal 3: to ensure healthy lives and promote well-being for all at all ages. This alignment and integration is especially important today because global progress towards achieving universal health coverage by 2030 has fallen behind, particularly for health service coverage. The World Health Assembly’s ECO resolution calls for greater political commitment to strengthening the planning and provision of integrated emergency, critical and operative care services, in anticipation of better population health. Policy-makers require sound evidence to develop national policies for the expansion of needs-based, integrated emergency, critical and operative care and to establish priorities for local settings. The objective of our study was to quantify the global, regional and national burden of conditions that may require emergency or operative care in terms of deaths and disability-adjusted life years (DALYs).
Definitions There is no global consensus on definitions of emergency, critical or operative care or on the conditions that would require these types of care. Based on two previous studies using Global Burden of Disease data, , we defined an emergency condition as, “a condition that, if not diagnosed and treated within hours to days of onset, often leads to serious physical or mental disability or death.” Although definitions and classifications of critical care have been proposed in recent studies, , it is difficult to estimate the global critical care burden because the critical illness syndromes, such as sepsis and multiorgan dysfunction, associated with critical conditions are neither widely reported nor included in Global Burden of Disease data. Consequently, we excluded critical care from our study. However, critical care is provided for all conditions categorized as emergency conditions and for the majority of conditions categorized as operative conditions. We defined an operative condition as, “any condition that may require the expertise of a surgically trained provider,” and operative care as, “any measure that reduces the rates of physical disability or premature death associated with a surgical condition.” Although the Disease Control Priorities project applied a narrower definition and included only invasive procedures in its data analysis, it was acknowledged that surgical conditions can be managed using either a surgical procedure or a conservative approach. For example, an abscess can be incised and drained or it can be treated with antibiotics, and a splenic injury can be managed by emergency spleen removal or by monitoring, as is common in children. Data source and categories Our study involved annual data from the publicly accessible Global Burden of Disease database for the years 2011 to 2019, before COVID-19 had a confounding influence on health-care delivery. We expressed the global, regional and national burden of emergency and operative conditions in terms of deaths and DALYs per 100 000 population using official country population estimates. A checklist for the Guidelines for Accurate and Transparent Health Estimates Reporting is available in the online repository. , Global Burden of Disease data are classified using four levels. As an example: (i) level 1: noncommunicable diseases; (ii) level 2: cardiovascular diseases; (iii) level 3: stroke; and (iv) level 4: ischaemic stroke. We used the most detailed classification available for each condition listed. Chang et al. employed a Delphi consensus process to classify conditions according to their need for emergency care: (i) conditions that, if not addressed within hours to days of onset, commonly lead to serious disability or death; (ii) conditions commonly associated with acute decompensation that lead to serious disability or death; and (iii) non-emergency conditions. Others later adapted these categories and, to reduce the risk of overestimating emergency conditions, included only conditions in Chang et al.’s first category in their analyses. We used the adapted classification to identify emergency conditions. Remaining conditions were classified as non-emergency conditions (e.g. osteoarthritis and dementia). For operative conditions, we performed a modified Delphi consensus process to classify all conditions listed in the Global Burden of Disease database as either operative or non-operative. The consensus process adhered to best practices, involving 12 participants across two rounds. 
We chose this number based on evidence of diminishing returns beyond 12. Participants, identified through networks such as the Royal College of Surgeons of England and WHO, were selected for their clinical expertise and geographic diversity. All had backgrounds in surgical specialties, anaesthesia, intensive care, emergency medicine or dentistry. They were based in seven low- and middle-income countries (Belize, Ethiopia, India, Malawi, Somalia, Sudan, and the United Republic of Tanzania) and four high-income countries (Sweden, Switzerland, the United Kingdom of Great Britain and Northern Ireland, and the United States of America). Participation was confidential, with identities kept anonymous during and after the Delphi process. Participants provided informed consent, understanding that involvement was voluntary and could be discontinued at any time. Additional information is provided in the online repository. We applied a consensus threshold of 67%. We did not categorize conditions listed in the Global Burden of Disease database that were too nonspecific to classify (e.g. other neonatal disorders and other malignant neoplasms) as either operative or non-operative. For nine conditions, no consensus could be reached: these included animal contact, sexual violence, rheumatic and nonrheumatic valvular heart disease, liver cancer and periodontal disease. We performed a literature search for each of these nine conditions and classified them as operative conditions if the treatment options described fell under our definition of operative care. A condition that was identified as both an emergency condition and an operative condition was classified as an emergency-and-operative condition. This third category differed from our broader group of emergency and/or operative conditions combined, which included all conditions that were either emergency conditions only, operative conditions only or emergency-and-operative conditions. Data analysis In our baseline analysis, we present summary statistics on deaths and DALYs associated with emergency, operative and emergency-and-operative conditions in 2019. Temporal trends in all-age deaths and DALYs across 9 years (i.e. 2011 to 2019) and for 193 countries were assessed using panel data models. The analytical models included dummy variables for the Global Burden of Disease geographical regions and country income categories derived from 2022 World Bank classifications. Country-level fixed effects were used to account for unobserved variations in country characteristics that were assumed to be constant throughout the study period. As it was possible that the correlation between errors in observations in an individual country were greater than the correlation in errors between countries, our model used standard errors clustered at the country level. We repeated our analysis for deaths and DALYs linked to emergency conditions, operative conditions and emergency-and-operative conditions, respectively. A P -value less than 0.05 was considered significant. All analyses were conducted using Stata v. 16 SE (StataCorp LLC, College Station, United States of America).
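The published analysis was run in Stata; the snippet below is a rough Python analogue of a fixed-effects trend model with country-clustered standard errors, fitted to simulated data. The variable names and simulated values are hypothetical, and the region and income-group dummies described above are omitted for brevity, so this is a simplified sketch rather than a reproduction of the study's models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the country-year panel (the real analysis used GBD estimates in Stata).
rng = np.random.default_rng(0)
countries = [f"C{i:03d}" for i in range(30)]
panel = pd.DataFrame([(c, y) for c in countries for y in range(2011, 2020)],
                     columns=["country", "year"])
country_effect = {c: rng.normal(0.0, 2000.0) for c in countries}
panel["dalys_per_100k"] = (
    15000.0
    - 120.0 * (panel["year"] - 2011)            # built-in downward time trend
    + panel["country"].map(country_effect)      # time-invariant country level
    + rng.normal(0.0, 500.0, len(panel))        # noise
)

# Country fixed effects absorb unobserved, time-invariant country characteristics;
# standard errors are clustered at the country level.
fit = smf.ols("dalys_per_100k ~ year + C(country)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]}
)
print(fit.params["year"], fit.bse["year"])      # estimated annual change and its standard error
```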
We identified 193 countries for which information on deaths and DALYs linked to 272 conditions was available from the Global Burden of Disease database. Of the 272 conditions, we categorized 61 as emergency conditions and 211 as non-emergency conditions. In addition, 88 were categorized as operative conditions and 184 as non-operative conditions. Finally, 31 were categorized as emergency-and-operative conditions, and 118 were categorized as emergency and/or operative conditions. An overview of all classifications is presented in an online repository. In 2019, emergency and/or operative conditions accounted for 37 850 181 deaths (514.09 deaths per 100 000 population) and 1 331 300 000 DALYs (18 113.00 DALYs per 100 000 population) worldwide; and emergency-and-operative conditions accounted for 6 966 425 deaths (86.92 deaths per 100 000 population) and 303 344 808 DALYs (4070.95 DALYs per 100 000 population; ; and ). Low-income countries reported the highest burden of DALYs associated with emergency-and-operative conditions (4894.41 DALYs per 100 000 population) and this burden decreased with the rise in income classification, such that the burden for high-income countries was 3316.45 DALYS per 100 000 population ( and ). In contrast, the largest burden of deaths associated with emergency-and-operative conditions was reported for upper-middle-income countries (99.20 deaths per 100 000 population; and ). Regionally, the East Asia and the Pacific region reported the highest DALY burden associated with emergency-and-operative conditions (4626.38 DALYs per 100 000 population); followed by sub-Saharan Africa (4268.36 DALYs per 100 000 population); and Latin America and the Caribbean (4260.94 DALYs per 100 000 population; and ). The East Asia and the Pacific region also report the largest burden of deaths associated with emergency-and-operative conditions (107.14 deaths per 100 000 population; and ). Emergency conditions were responsible for a substantial share of deaths and DALYs globally; namely, 27 167 926 deaths (367.18 deaths per 100 000 population) and 1 015 000 000 DALYs (13 872 DALYs per 100 000 population; ; and ). Moreover, the per capita burden of emergency conditions was greatest in low-income countries (424.67 deaths and 26 002.56 DALYs per 100 000 population) and the burden generally decreased with the rise in income classification ( and ). Considerable regional variations in DALYs due to emergency conditions were observed. For example, in sub-Saharan Africa, the reported figure was 22 562.76 DALYs per 100 000 population, which was more than three times that reported for the North America region (7321.92 DALYs per 100 000 population; ). In comparison, operative conditions accounted for 17 648 680 deaths (233.83 deaths per 100 000 population) and 619 600 000 DALYs (8311.47 DALYs per 100 000 population) globally ( ; and ). The burden of deaths linked to operative conditions was highest in high-income countries (306.18 deaths per 100 000 population) and lowest in low-income countries (156.90 deaths per 100 000 population; and ). The burden of DALYs linked to operative conditions was similar across all country income groups, with the highest burden being recorded for upper-middle-income countries (8855.13 DALYs per 100 000 population; and ). Regionally, the highest burden of deaths linked to operative conditions was reported for the Europe and Central Asia region (i.e. 355.43 deaths per 100 000 population) and the North America region (i.e. 307.13 deaths per 100 000 population; ). 
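The rates quoted here and below are crude rates, i.e., absolute counts divided by the population and scaled to 100 000. A worked example with made-up numbers (not taken from the study) is shown below.

```python
# Crude rate per 100 000 population from an absolute count (made-up numbers for illustration).
deaths = 250_000
population = 40_000_000

rate_per_100k = deaths / population * 100_000
print(rate_per_100k)  # 625.0 deaths per 100 000 population
```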
Figures for the burden of deaths and DALYs in individual countries are presented in the online repository. Time trends in deaths and DALYs associated with emergency, operative and emergency-and-operative conditions derived using our panel data models are presented in and , respectively, along with comparisons between country income groups and regional groupings. Overall, we found that the burden of deaths and DALYs decreased globally over time for emergency-and-operative conditions. There were substantial decreases in deaths and DALYs for emergency conditions, but a small increase in deaths and a decrease in DALYs for operative conditions. The decreases seen for emergency-and-operative conditions were driven by changes in deaths and DALYs for emergency conditions. Between 2011 and 2019, the per capita burden of deaths and DALYs linked to emergency-and-operative conditions was least in high-income countries ( and ). The same was true for deaths and DALYs linked to emergency conditions: high-income countries reported the lowest per capita burden, and most associated deaths and DALYs occurred in low-income countries. Compared with high-income countries, lower-middle-income countries reported a significantly lower burden of deaths linked to operative conditions throughout the time period, whereas low-income countries reported a small but significantly higher burden of DALYs.
We estimated the burden of emergency and/or operative conditions globally to be 37 850 181 deaths and 1 331 300 000 DALYs in 2019 alone. This high level underscores the critical importance of strengthening and scaling up integrated emergency, critical and operative care, as emphasized in the 2023 World Health Assembly’s ECO resolution. Previous research using regional data for 1990 to 2015 from the Global Burden of Disease database found that an estimated 51% of deaths and 42% of DALYs globally were due to emergency conditions, and that the number of deaths and DALYs was inversely correlated with the World Bank’s country income classification. Injury, ischaemic heart disease, lower respiratory tract infection and haemorrhagic stroke accounted for the majority of emergency conditions in high- and upper-middle-income countries: 84% and 79% of all emergency conditions in these country groups, respectively. However, these conditions were also responsible for a substantial burden in lower-middle- and low-income countries: 39% and 49% of years of life lost in these country groups, respectively. Another study, which used 2010 Global Burden of Disease data and applied a broader definition of emergency conditions, estimated that emergency conditions accounted for 90% of deaths and 84% of DALYs globally. The highest burden was observed in low-income countries, where reported emergency care utilization rates were consistently lower than in other countries. , We found that, although the estimated burden (both deaths and DALYs) of emergency conditions decreased across the study period globally, these conditions remained a substantial cause of death and disability. We also found that deaths linked to operative conditions increased slightly during the study period, whereas DALYs due to operative conditions decreased significantly from 2017 onwards. These trends may have been driven by strengthened prevention and early detection mechanisms, improved emergency care provision or epidemiological changes (e.g. an increase in the burden of noncommunicable diseases). However, because of differences in the way emergency and operative conditions were defined, it may not be valid to compare trends in their burdens directly. The balance between the surgical and conservative management of operative conditions will likely vary according to the availability of resources, clinical presentation and surgical subspecialty. Previous research in the United States found that surgical procedures were performed for conditions in every 2010 Global Burden of Disease subcategory, with the highest surgical frequencies for musculoskeletal conditions (84.0%) and neoplasms (61.4%). In low- and middle-income countries, over 60% of surgical procedures were performed for emergency conditions in 2015. The third edition of Essential surgery: disease control priorities reported in 2015 that operative conditions were associated with 4.7 million deaths and 340 million DALYs each year, but acknowledged that these figures do not capture common operative conditions such as bowel obstruction or gallbladder disease. Our findings confirm the high burden of operative conditions, both generally and in high-income settings. Given the disruption in operative service delivery that occurred during the COVID-19 pandemic after our study period, it is likely that the burden of operative conditions would have increased further, with important implications for population health. 
Our classification of emergency-and-operative conditions highlighted conditions for which there was a particular need for rapid, coordinated and multidisciplinary care. One such condition is maternal haemorrhage, a major cause of maternal mortality, which accounted for 46 429 deaths and 3 085 190 DALYs globally in 2019. Both immediate resuscitation measures and potential surgical interventions, such as uterine artery ligation or hysterectomy, are required to save lives. Similarly, appendicitis is an emergency-and-operative condition. This condition resulted in 33 341 deaths and 1 498 796 DALYs globally in 2019. Treatment often requires early recognition and prompt surgical attention to prevent complications, such as perforation and peritonitis. Emergency, critical and operative conditions create an immense economic burden. Between 2015 and 2030, operative conditions alone were estimated to result in 12.3 trillion United States dollars (US$) in lost economic productivity. Considerable public and private investment is required to strengthen the planning and provision of emergency, critical and operative care services needed to meet the health needs of the population, improve health system resilience and ensure a secure public health system. For example, in 2023 the cost of scaling up operative care in low- and middle-income countries was estimated to be US$ 300 billion. However, no country that has developed a national surgical obstetrics and anaesthesia plan has committed the necessary funding. Moreover, although health expenditure has been growing faster than the economy across many low- and middle-income countries, , there remain major barriers to accessing emergency, critical and operative care, such as the need for high out-of-pocket payments. , These barriers probably account for some of our findings, particularly in low-income settings. The World Health Assembly’s ECO resolution calls for the standardization and disaggregation of data collection to: (i) accurately characterize and report disease burdens and, thereby, identify high-yielding mechanisms for improving the coordination, safety and quality of delivery of emergency, critical and operative care; and (ii) demonstrate how integrated care can contribute to meeting national targets, achieving health programme goals and attaining the sustainable development goals. Towards that end, there is a need for a comprehensive global measurement framework for, and indicators of, disease burden to improve the data available and support the research needed to guide evidence-based policy development and priority setting at the local level. , Our study provides a transparent approach to classifying emergency and operative conditions that was embedded within a global consensus exercise. This approach enabled us to obtain detailed country-level estimates of the burden of these conditions that can be used to inform policy development and investment decisions at national and international levels. , Our study has limitations. First, there is no global consensus on the definition of emergency, critical or operative conditions. Based on a previous study, we used a narrow classification of emergency conditions. As a result, we excluded some urgent medical conditions, such as diabetes mellitus and human immunodeficiency virus/acquired immunodeficiency syndrome, that can lead to acute decompensation requiring emergency care if left untreated and that can result in serious morbidity or death.
To categorize operative conditions, we conducted an international Delphi consensus exercise involving participants with a range of clinical backgrounds from low-, middle- and high-income countries. Although we aimed to include individuals covering a diverse range of backgrounds and settings, we acknowledge that their responses may have been influenced by the local availability of resources, local treatment guidelines and their personal practices. For a small number of operative conditions, no consensus could be reached and the literature was consulted after discussion with participants instead of undertaking further rounds of the Delphi process. Second, the different ways in which we defined emergency and operative conditions means that, although both definitions might be useful for obtaining broad estimates of the burden associated with a particular type of care, their validity for comparing the burdens of emergency and operative conditions directly was limited. More broadly, our study was affected by limitations in Global Burden of Disease data themselves and by the extent to which specific conditions can be equated to specific types of care. Global Burden of Disease data may be limited by variations in data sources and quality, especially in data from countries where hospital records and death registration are relatively incomplete. , Such variations could reduce the validity of comparisons across regions and country income groups. Further, although the type of condition can broadly be used to indicate the type of care needed, the extent to which a specific type of care is required for a specific condition may vary greatly. For example, a cyclist road injury is categorized as an operative condition but not all cyclist road injuries will require surgery. Nonetheless, our study builds on previous research and uses a uniform method and terminology, thereby enabling valid comparisons to be made across time, regions and country income groups. Future studies could include estimates of the likelihood that emergency or operative care would be needed for each condition, which would further refine mortality and disability estimates. In addition, although our study reported the burden of emergency and/or operative conditions, we were not able to estimate how much of the burden could be avoided by strengthening emergency, critical and operative care. Other actions, such as investing in prevention, could also affect the avoidable and unavoidable burden of emergency and operative care and should be considered as part of a more holistic approach to improving population health. Finally, our study used Global Burden of Disease data from before 2020, when the COVID-19 pandemic began. The impact of the major shock to the provision of emergency, critical and operative care caused by the pandemic and the subsequent recovery of health-care systems will need to be assessed by future research. In conclusion, the high global burden of emergency and operative conditions we found in our study underscores the importance of strengthening and scaling up integrated emergency, critical and operative care, as emphasized in the 2023 World Health Assembly’s ECO resolution. A substantial proportion of the world's leading causes of death and morbidity could be addressed through the provision of emergency and operative care. 
Consequently, a global commitment to improving the planning and provision of integrated emergency, critical and operative care has the potential to meet the health needs of the population, improve health system resilience and ensure a secure public health system. Towards that end, it is vital to: (i) create a shared vision for emergency, critical and operative care by developing a global strategy and action plan; (ii) support leadership on emergency, critical and operative care within national health ministries; (iii) enhance WHO’s emergency, critical and operative capacity at all levels; and (iv) monitor implementation of the ECO resolution.
|
Recognizing Early Regulation Disorders in Pediatric Care: The For Healthy Offspring Project | e9c19321-c598-4e86-baa5-0cada0c5f6a5 | 8130504 | Pediatrics[mh] | Emotional and behavior regulation disorders in infancy and toddlerhood are quite frequent, with an occurrence of 5–20% in a normal population . Primary or classic regulation disorders (“the classic triad”), such as excessive and persistent crying and sleep and feeding disorders, are already seen in early infancy and can also be recognized and diagnosed in primary care . According to the framework of developmental psychopathology , in most cases, the cumulative combination of somatic, interactional, and psychosocial environmental risk factors and a lack of significant protective factors leads to problematic behaviors associated with several types of early mental health problems . Similar to other early childhood mental health problems, the background of regulation disorders is also assumed to be influenced by complex mechanisms, whereby the individual physical and psychological characteristics of the parents and the children, their common early history (pregnancy, birth, early care), the actual parent-child interactions, and the developing relationships serve as key proximal mediators of more complex distal factors, such as the sociodemographic situation, family structure, stressful life events, and social support. Naturally, the manifestation of certain problems can depend on other moderating factors, such as the age, sex, and temperament of the child . This clinical area is specifically located at the juncture of medicine, psychology, and education, and it therefore requires an interdisciplinary approach and handling. The risk factors that compromise childhood development, parent-child interactions, and family functioning and the protective factors that support resilient development are important issues to be borne in mind in both research and clinical practice . Since most of the relevant literature comes from small-sample studies, continued research is needed with larger samples to further explore the clinical significance of regulation disorders . In the meantime, the findings as a whole highlight the paucity of evidence about this group of infants and the need to prioritize them for research and clinical work . The study, screening, and treatment of early childhood mental health problems have a decades-long history in international practice, but the investigation of such disorders, as well as the related prevention and intervention activities, remains an understudied area in Hungary. To date, only a few national private and public initiatives have expressed interest in developing this area. The For Healthy Offspring Project, initiated by the Heim Pál National Pediatric Institute in Budapest, was the first Hungarian research to establish an effective hospital model for screening and to examine the prevalence of emotional and behavior regulation disorders in early childhood (0–3 years) and the significance of different risks and protective factors behind them. We hypothesized that the prevalence of regulation disorders in Hungary is similar to that in other countries. We also hypothesized that the association between excessive crying and sleep disorders is strong. We developed a complex model to screen for regulatory problems in early childhood. 
In this article, our aims were to (1) introduce the model of our screening program and our large-sample hospital research, (2) report the occurrence of major regulation disorders (excessive crying, sleep and feeding problems) in our sample, and (3) report associations between regulation disorders and other examined medical diseases. We hypothesized that (1) the prevalence of regulation disorders in our Hungarian sample is similar to that of other countries, (2) the associations between excessive crying, sleep and feeding disorders are significant, and (3) these regulation problems may also be moderately associated with other diseases.
Families of 0- to 3-year-old children with eating or sleeping problems or extreme crying from 3 departments of the Heim Pál National Pediatric Institute in Budapest and neighboring areas were included in this study. Data were collected from July 2010 to June 2011 using a cross-sectional design. We obtained information about early childhood regulation disorders from 4 sources, including questionnaires, medical examinations, and individual and small group consultations. The model of the screening process, the data collection, and the administration of the For Healthy Offspring project are shown in . During the research period, we recruited from among all families with children under 3 years of age (n=1855) within 3 departments of the hospital, and 580 families volunteered to participate. This represents a response rate of 31.4%. During the same period, we also collected data (n=584) with the help of health visitor nurses in neighboring areas. The nurses mainly administered the questionnaires to the families they visited in the course of their routine work or who came to them during consulting hours. Thus, both subsamples were specific and selective in terms of willingness to participate and motivation to share concerns. We hypothesized that hospital rates underrepresent the real incidence of regulation disorders, while area rates overrepresent them. In summary, in our sample, the inclusion criterion was age under 3 years among those children who visited the 3 hospital departments or lived in neighboring areas. There were no exclusion criteria. Data collection could be biased by some methodological issues such as willingness to participate and motivation to share concerns. Although our sample was not representative, it was nevertheless adequately heterogeneous in all relevant sociodemographic characteristics . Questionnaires were given to parents (n=1164) by doctors and nurses working in 3 departments (Pediatric, Sleep, and Neurology) of the Heim Pál National Pediatric Institute (n=580) and also by health nurses and general practitioners in local areas (n=584). Mothers responded in 1133 cases. Medical examinations and/or diagnostic evaluations were performed in 619 cases. When completing the questionnaire, the parents were offered a complex screening program (a longer medical consultation) in our hospital if any of the most common behavior regulation disorders were present in the child. A total of 183 families took part in this complex diagnostic evaluation. Afterwards, 35 parent-infant dyads also took part in small-group consultations. For some families, individual consultations and psychotherapy were recommended with the support of volunteer hospital or other institutional professionals. The process and professional content of the Screening Program are described below.
The Screening Program: Diagnostic Evaluation
Medical Consultation
Medical consultation (performed by NS and EP) consisted of a focused, detailed history taking that was followed by a physical examination. During the physical examination, healthy somatic, motor, and psychosocial developmental signs were carefully considered. In most cases, both parents were present when the child was examined; however, in a few cases, only the mother was present. The consultation normally lasted 1 h. History taking started with the discussion of the symptom(s) that the parents had concerns about, and it was followed by general pediatric questions.
To obtain broader knowledge of circumstances, certain aspects of the child’s psychiatric history (birth circumstances, pregnancy, perinatal period, early sensory-motor and mental development, family sociodemographic characteristics) were also included. In all cases, the primary consideration was to determine the organic causes of the symptoms. If no physical abnormalities were found, we assumed the causative effect of psychosocial factors.
Diagnostic Evaluation of Organic Causes, Differential Diagnosis
The results of the first consultation determined the subsequent diagnostic steps. Diagnostic evaluation of the organic causes was performed depending on the presence of abnormal findings. Examinations were performed at the Pediatric Department as an outpatient service and included laboratory tests and radiological imaging tests. In some cases, specific examinations were performed by a gastroenterologist, neurologist, otorhinolaryngologist, ophthalmologist, or cardiologist. An obstructive sleep apnea symptom assessment was carried out and, in cases of indeterminate infant symptoms or the presence of apnea, polysomnographic monitoring combined with esophageal pH monitoring was performed (by PB) at the Sleep Ambulance using a Somnoscreen Plus device (SOMNOmedics GmbH, Randersacker, Germany). The following symptoms of unspecified/unexplained origin, usually with no underlying medical condition, were found: periodic breath-holding spells or strange breathing sounds, change of skin tone, loss of consciousness, loss of postural tone, and seizure. A complex motor evaluation of the infant was completed by a physiotherapist. In cases of medical illness, our work was based on general pediatric diagnostic steps. We had more difficulty diagnosing early behavior regulation disorders. The International Classification of Diseases, 10th Revision (ICD-10) , which is used in Hungary, does not include clear directions about early childhood regulation disorders. In the differential diagnostics of regulation disorders, we relied on the principles of the German system, as described by Hédervári-Heller in Hungarian and German, although it is not widely used in Hungary. Moreover, we greatly profited from reviewing the classification criteria of the Diagnostic Classification of Mental Health and Developmental Disorders of Infancy and Early Childhood – Revised (DC: 0–3R) , which was not used in Hungary at that time.
Small Group Consultation
If no organic abnormality was found, part of the sample was offered a small-group consultation. These consultations took place in the presence of the parents (or only the mothers) and the infants on 2 occasions, each lasting 90 min. The consultations were conducted by a pediatric psychotherapist (TN) with the participation of psychologists and psychiatrists from the hospital. During the small-group consultations, the following aspects were spontaneously observed: (a) infant’s developmental level and the quality of his or her playing activity; (b) quality of the parent-infant interactions; (c) emotional reactions between the parents and their moods while together; (d) parent-infant attachment patterns based on the balance between exploration and attachment behaviors; (e) infant’s attitude toward the parents and the professionals in attendance; and (f) ambience of the consultation meeting. At the end of the second meeting, the experiences were discussed with the families in a private meeting, and individual therapy for the parents or parent-infant consultations were offered if necessary.
As professionals worked together in a team, they decided by consensus if regulation problems were present and whether additional care was needed due to psychosocial, relationship, or interaction difficulties. In many cases, the 2 small-group consultations were sufficient to resolve mild regulation difficulties.
Measuring Instruments
Questionnaires
A questionnaire package was specifically designed for this study. In the basic questionnaire we developed, the parents were asked in detail about their family background, housing and job circumstances, their financial status, their health, health-related and psychological characteristics of the pregnancy and the birth of the examined infant, the newborn period, breastfeeding and early care, the infant’s physical and mental condition, and his or her behavioral characteristics. We then focused on the 3 main areas of regulation disorders. Detailed questions were asked about each topic to determine if the infant was affected by intense crying and restlessness, feeding and weight gain difficulties, or sleep disorders. Most of the questions were closed-ended, discrete questions with 2 or more possible answers, or Likert-type scale items, usually with 5 levels. Only the “other” questions were open ended. A pilot study was conducted to ensure that the questions were well understood by the respondents and could be answered easily and reliably. We were not able to measure internal consistency because these questions were single items and did not form scales. In this article, we used only the socio-demographic information and some Likert-type questions on early regulation (crying, sleep, and feeding).
Medical Diagnoses
The diagnoses determined in the course of the medical examinations and during individual and small-group consultations (see the detailed description of the Screening Program above) were included in our database.
Statistical Analysis
Quantitative analyses were performed using the IBM SPSS Statistics 20.0 software package (IBM, Armonk, NY, USA). For the descriptive statistics in this article, we calculated prevalence distributions, and to examine associations between regulation problems and other health conditions, we ran crosstabs (χ² tests) for diagnoses (yes/no; 1/0 categorical variables) and Mann-Whitney tests for questionnaire data (Likert-type items, see ).
Ethical Approval
The Institutional Ethics Committee of Heim Pál Children’s Hospital approved the study (authorization number: 11/04.2010).
Informed Consent
Informed consent was obtained from the parents of all individual participants included in the study.
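To make the analysis pipeline concrete, the snippet below is a minimal sketch of the two test types described above, written with scipy rather than the authors' SPSS workflow; all counts and Likert scores in it are invented for illustration.

```python
# Minimal sketch of the analyses described above (hypothetical data, not the study's output).
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-square test of independence on a 2x2 crosstab of two yes/no diagnoses
# (rows: diagnosis A present/absent; columns: diagnosis B present/absent).
crosstab = [[47, 46],
            [93, 433]]
chi2, p, dof, _ = chi2_contingency(crosstab, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")

# Mann-Whitney U test comparing a 5-point Likert item (e.g., parental distress)
# between children with and without a given diagnosis.
with_diagnosis = [4, 5, 3, 4, 5, 4, 3, 5]
without_diagnosis = [2, 1, 3, 2, 2, 1, 3, 2]
u, p = mannwhitneyu(with_diagnosis, without_diagnosis, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4g}")
```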
Regulation Disorders in Medical Examinations and Screening Program
A total of 1133 mothers answered the questionnaires. In a subsample, 619 children also had medical examinations. The sex distribution was approximately equal (male and female) in both groups. The children’s ages and other sociodemographic factors were almost the same or very similar in each subgroup. In the whole sample, the average age of the children was 15.3±10.8 months. The average number of children in the families was 1.7±0.9. The average age of the mothers was 32±5.1 years, and more than 90% were married. In addition to medical diagnoses, the main early childhood behavior regulation disorders are also categorized in . In the subsample in which medical examinations were conducted, excessive crying and/or restlessness was present in 15.0%, sleep disorders in 15.2%, breastfeeding problems in 10.3%, and collective feeding disorders in 14.8%. The prevalence of constipation was 4.0%, while abdominal colic was present in 12.3%. Medical examinations were followed by a deeper screening program for regulation disorders in 183 cases, while in 436 cases this was not necessary. presents the prevalence of different disease and disorder categories in each subgroup, showing a higher prevalence of regulation disorders in the screening subgroup.
Comorbidity Between Different Early Childhood Regulation Disorders
In cases in which excessive crying was present, the comorbidity with sleep disorders was 50% (χ²(1)=106.20; P<0.001); with breastfeeding disorders 22.6% (χ²(1)=17.69; P<0.001); and with loss of appetite 11.8% (χ²(1)=10.70; P<0.001). In those cases in which sleep disorders were present, the comorbidity with breastfeeding disorders was 39.1% (χ²(1)=31.60; P<0.001). Other data are displayed in .
Comorbidity Between Early Childhood Regulation Disorders and Other Health Conditions
In those cases in which excessive crying was present, the comorbidity with concentration difficulties was 10.8% (χ²(1)=10.71; P<0.001) and with abdominal colic it was 26.9% (χ²(1)=21.67; P=0.001). When sleep disorders, breastfeeding difficulties, or organic feeding difficulties were diagnosed, abdominal colic was also present in 21.3% (χ²(1)=8.33; P<0.001), 23.4% (χ²(1)=8.25; P=0.004), and 34.1% (χ²(1)=19.50; P<0.001) of the cases, respectively. Interestingly, pulmonological conditions or recurrent upper airway infections were more frequent in the subgroups in which regulation problems were not a concern. The other relationships are shown in .
Regulation Disorders in the Parental Report
According to the questionnaire answers, 14.7% of the mothers reported low self-confidence when interpreting their infant’s signs; this figure was 22.1% in the subgroup that was referred to the screening program. A total of 15.6% of mothers characterized their children as strong criers (24.4% in the subgroup that was referred to the screening program). A total of 16% of the children had some type of feeding or weight gain disorder (32.6% in the screening subgroup), and 10% awoke 4 or more times during the night (21% in the screening subgroup). The frequencies of crying, feeding, and sleeping symptoms according to parental report are shown in .
Diagnoses of Regulation Disorders in Medical Examinations and Differences Among Maternal Answers in Questionnaires
According to the z-statistics of the Mann-Whitney tests, mothers of children with a diagnosis of excessive crying reported significantly more problematic crying behavior (prolonged crying in early infancy: P<0.001, crying and fussiness in the last 2 weeks: P=0.001, soothability: P<0.001, parental distress: P=0.001) compared with those without this diagnosis. Mothers of children with a diagnosis of loss of appetite or weight loss reported significantly more problematic feeding behavior (feeding as a challenge: P<0.001) compared with mothers of children who did not have this diagnosis. Based on the Likert-type items of the parental questionnaire, children with a diagnosis of sleep disorders had significantly more problematic sleep behavior (night wakings, sleep onset, parental distress) than those who did not have this diagnosis (P<0.001 for all differences). Detailed results are shown in .
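For readers unfamiliar with how the comorbidity figures above are read, the short sketch below shows how the conditional (comorbidity) percentage is derived from a diagnosis crosstab; the counts are invented for illustration and are not the study data. The accompanying χ² statistic simply tests whether the two diagnoses co-occur more often than expected by chance.

```python
# Invented counts, for illustration only: of 92 children with an excessive-crying
# diagnosis, 46 also had a sleep-disorder diagnosis.
n_crying = 92
n_crying_and_sleep = 46
comorbidity = n_crying_and_sleep / n_crying   # share of criers who also have the second diagnosis
print(f"comorbidity = {comorbidity:.0%}")     # 50%
```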
Crying is part of the normal development of an infant. It is a form of communication with parents and results from various stimuli, such as hunger, discomfort, or pain. Excessive crying in the early months is a frequent concern. Pediatricians have to understand and adequately manage the problem and offer support to exhausted parents. Excessive crying may interfere with the mother-infant, father-infant, and mother-father interactions and may increase the risk of child abuse . Over the last 15 years, other regulatory disorders of early childhood have attracted increasing attention from both researchers and pediatric practitioners .
Sleeping Disorders
In international pediatrician surveys and cross-cultural comparisons, sleeping disorders (night awakenings and sleep-onset difficulties) are one of the most frequent (10–76%) parental concerns . In our large sample (N=1133) questionnaire study, 10% of the children had sleeping disorders. In a subsample in which diagnoses were determined based on medical examinations and consultations, we found an incidence of 15.2% for sleep disorders. This rate was similar to reports from other countries .
Excessive Crying
The occurrence of excessive crying was 8–30% in previous large-sample studies . In our questionnaire study, 15% of mothers reported intensive crying in the infants. In a subsample in which diagnoses were determined based on medical examinations and consultations, we found an incidence of 15.0% for excessive crying and/or restlessness. Prevalence rates for excessive crying were lower in some European countries, with 1.5% in the Netherlands and 9.2% in Denmark , but comparable to the 16.3% reported in Germany .
Eating Disorders
Eating disorders are also common in infancy. Prevalence numbers range from 20% to 25% in the normal population and from 40% to 80% in infants with disabilities. Mild feeding difficulties occur in approximately 30% of children . The prevalence of clinical feeding disorders is 3–10% , and the incidence of more severe failure to thrive is about 3–4% . According to our questionnaire results, 16% of the children had feeding disorders. In a subsample in which diagnoses were determined based on medical examinations and consultations, we found an incidence of 10.3% for breastfeeding disorders and 14.8% for different feeding disorders. Sleep and feeding disorders are the leading concerns in clinical samples. In a large sample (N=701) of one of the most renowned European clinical programs, the Munich Program , the occurrence of problems among children was the following: sleep disorders (62.8%), feeding disorders (40.4%), chronic restlessness-motor activity-lack of interest in play (30.1%), excessive crying (29.4%), dysfunctional sleep-wake organization (25.8%), excessive defiance (20.3%), excessive clinging-separation anxiety-social withdrawal (12.3%), and aggression-oppositional behavior (6.8%).
Multiple Regulatory Disorders
Behavioral disorders in infancy can affect an infant’s development . Furthermore, infants with behavioral disorders are more likely to have impaired parent-infant relationships . Approximately 20% of all infants show symptoms of excessive crying, sleeping, or feeding disorders in the first year of life . The prevalence of colic in infants is about 20%, but it depends on parental perception of crying . In our study, the prevalence was 12.3%. In a systematic review and meta-analysis, colic prevalence at 5–6 weeks of age (25.1%) was significantly higher than colic prevalence at 8–9 weeks of age (10.8%) .
Most maternal and child health nurses were unaware of the evidence that crying is not associated with gastro-esophageal reflux, but most of them reported that reflux causes pain . Further, a small minority of children (1–2%) will manifest all 3 (crying, sleeping, and feeding disorders), leading to multiple regulatory disorders . Infants with multiple moderate-to-severe regulatory problems experience >10 times the odds of clinically significant mental health concerns during childhood, and these symptoms appear to worsen over time .
Comorbidity
Little is known about the association between excessive crying and sleeping or eating disorders in population samples . We found comorbidity among different regulation disorders. Where one type of behavior regulation disorder was present, another type of regulation disorder was more frequently diagnosed . Crosstab analyses (χ² tests) showed (P<0.05) that infants who were referred to the screening program because of medical considerations had more frequent behavior regulation disorders than other infants in our study. This, in turn, indicates that the differential diagnostic process was successful in our program. Wolke et al found that 32.7% of parents reported that their infant had a crying, sleeping, or feeding disorder, and a further 14.6% reported their infant as having more than one of these disorders. Specifically, comorbidity was most likely to occur between crying and sleep disorders. Multiple regulatory problems may identify infants with a high burden of comorbidity that extends into childhood . In a retrospective study by von Kries et al , a higher prevalence of sleep and eating disorders was found in children up to 4 years of age who were reported to have had excessive crying beyond the sixth month. In our study, the comorbidity between sleep disorders and excessive crying was 50%, but some studies have not found a relationship between excessive crying and sleep disorders or other indicators of multiple regulatory disorders , while other studies have found either sleeping or other regulatory disorders . The few children with excessive crying and either severe sleeping or eating disorders might constitute a group of infants with multiple regulatory disorders outside the continuous spectrum of normal behavior . Moreover, there is growing evidence of the negative implications for infants whose excessive, persistent crying is present with other regulatory disorders, such as feeding and sleeping disorders . A meta-analysis of 22 longitudinal studies showed evidence associating excessive crying and other regulatory disorders (sleeping and eating) in the first months of life with adaptive problems at school age, mainly related to attention deficit-hyperactivity disorder symptoms and associated behaviors . In our study, the comorbidity between excessive crying and concentration difficulties was 10.8%. There was no connection between regulation and breathing disorders. Breathing pauses and sleep disorders were successfully differentiated. Among those diagnosed with sleep disorders, there were fewer infants who had breathing pauses than among the others. Also, there was no relationship with symptoms of possible neurological conditions, uncertain sickness, or abnormal movements. Infants who had early activity or concentration disorders were significantly overrepresented among those who had behavior regulation disorders as well.
The occurrence of abdominal colic and constipation, symptoms that are often found to have a psychosomatic background, also correlated with the occurrence of other regulation disorders. The occurrence of diagnoses of excessive crying, sleep disorders, and breastfeeding and other feeding disorders was significantly lower in children in whom recurrent upper airway infections, recurrent wheezing, or laryngitis were diagnosed . Questions about both parental observations and subjective feelings were asked in the questionnaires. Some notable extreme values from the questionnaire items are presented for the whole sample, for the subsample in which medical examinations were also conducted, and for the subsample that was referred to the screening program. Mothers of infants who were referred to the screening program because of medical considerations reported more problems in the questionnaires than other mothers in our study . The mothers of the infants with diagnoses of excessive crying, sleep, or feeding disorders reported significantly more problematic behavior in the questionnaires as well . Our screening program included children who showed signs of regulation disorders and who were referred for a detailed diagnostic evaluation. The incidence of diagnoses in this program was the following: sleep disorders (49.7%), excessive crying (37.2%), and functional feeding disorders (no organic background; 21.3%). In order to offer appropriate medical support, the differential diagnostic process is important in separating the acute secondary symptoms of frequent childhood illnesses (e.g., respiratory diseases), such as crying and mild, transient sleep and feeding disturbances caused by general discomfort, nonspecific complaints, or pain, from comorbid chronic behavior regulation disorders. In our study, we investigated the relationship between regulation disorders and other health conditions using medical records, for which our screening model enabled careful and thorough differential diagnostics. Outpatient treatment is sufficient for crying or sleeping disorders in infancy. However, hospitalization may be required for feeding disorders because not all feeding disorders can be treated on an outpatient basis . In a study conducted by Schmid and Wolke , excessive infant crying (10.1%) was specifically associated with maternal anxiety disorders, especially in infants of younger and less educated first-time mothers. Feeding disorders (36.4%) were predicted by maternal anxiety (and comorbid depressive) disorders in primiparous mothers and infants with lower birth weight. Infant sleeping disorders (12.2%) were related to maternal depressive (and comorbid anxiety) disorders irrespective of maternal parity . In our experience, designating risk groups in pediatric care is a complex problem. On the one hand, many parents may report regulation disorders even though, based on a strict diagnostic system, a clinical disorder cannot be determined. On the other hand, we can assume that there are many hidden cases in which infants could have a clinically relevant regulation disorder but their parents do not interpret these behaviors as being problematic and do not report them to pediatricians or health care nurses. Thus, we can assume that some cases could remain hidden, while in others, evaluation is requested without a clinical basis. In the latter circumstance, attention from a therapeutic perspective is still necessary because of the parents’ concern.
Questions about the parents’ feelings about the specific problems can contribute to a better understanding of the actual situation.
Limitations
Most of the data in the current study were collected via maternal reports. These are often limited by social desirability and reporter bias. Another limitation of this study was our lack of access to father-child interactions. Thus, future research should include both mothers and fathers. A further limitation was the lack of ethnic diversity, as only white participants were included in the study. It is important to note that our study covered only a short period of data collection. Finally, our sample is not representative; the results can be regarded as estimations for recognizing early childhood regulation disorders in pediatric care.
The For Healthy Offspring Project was the first study to examine the prevalence and the complex (medical and psychosocial) background of the classic behavior regulation disorders (excessive crying, feeding, and sleep problems) in infancy and toddlerhood in Hungary. In this article, the relationship between regulation disorders and other health conditions was investigated. Crying, sleep, and feeding disorders are challenging for most parents, but only a small fraction of these cases can be categorized as clinical disorders. Although our study is not representative, our findings suggest that the incidence of early childhood regulation disorders in Hungary is likely similar to the 5–15% reported in international research. This study added more information about the associations between regulation problems and other health conditions. Our model for screening enabled the careful differential diagnostic process of separating acute secondary symptoms from comorbid chronic behavior regulation disorders. We highlighted that no other data are currently available on the frequency and types of early childhood regulation problems in Hungary. In order to effectively recognize early behavior regulation disorders in daily practice, diagnostic instruments widely used internationally should be adapted for general Hungarian pediatric care.
Traditional medicinal knowledge and practices among the tribal communities of Thakht-e-Sulaiman Hills, Pakistan

In rural areas, the local materia medica typically consists of 5–30% of the plant species of the available flora . Further, animals and minerals are also used as medicine . Local medicinal knowledge and traditional practices are heavily impacted by acculturation and modernization processes, and, if left undocumented, valuable knowledge may get lost for future generations . Traditional health care systems, complemented with western medicine, are widely considered viable solutions for improving human health in rural areas of developing countries worldwide . Traditional medicinal practices in Pakistan have a long history and are largely based on Unani Tibb, the Greco-Arab system of medicine. The Unani system relies on the concept of humours and aims for nature and mankind to coexist in a balanced manner. Unani Tibb traces its origins back to Hellenistic Greece. It was later adopted by the Arabs and extended to both Europe and Asia. Chinese and Indian medicine enriched it further. It proliferated in India under the Muslim rulers around 1350 AD . Unani is still significant in Pakistan, especially among tribal peoples, where it is considered a first-line treatment . Traditional medicine has been accepted and integrated into the national health system of Pakistan . Professional practitioners must be registered by their respective councils, i.e., the National Council for Unani Tibb and the National Council for Homoeopathy. There are approximately 50,000 registered Hakims/Tabibs (Unani medicine practitioners), 6000 Homoeopaths, 537 Vaids (Ayurveda medicine practitioners), about 28 recognized Tibbia colleges, and two universities in the country . About 457 Tibbi dispensaries and many private clinics provide medication publicly throughout the country, with 300–350 Unani and around 300 homoeopathic manufacturing companies producing drugs . Traditional medicine in Pakistan is popular due to its affordability, availability, and accessibility . Around 63% of health expenditure is paid out-of-pocket by the public, and costs tend to be a major barrier to pursuing suitable health care . The government spends about 2.6% of GDP on health , and its primary health care services play a negligible role in countryside areas. Thus, people in remote areas of Pakistan rely heavily on traditional medicines. A literature review shows that the majority of ethnomedicinal studies in Pakistan are centered on the Himalaya range , some studies are reported from the Karakoram , Hindu Kush, and Salt ranges , while remote tribal areas of the Sulaiman Mountains are neglected. The present study, therefore, aims to investigate and document the traditional medicinal knowledge and approach, and the materia medica, among a tribal community in the Thakht-e-Sulaiman hills. This may contribute to our understanding of medicinal pluralism and the traditional use of materia medica in the countryside of Pakistan.
Study area
Field research of about 24 months was conducted by the first author from 2010 to 2012 and in 2015 in the tribal areas of northwest Pakistan, the then Federally Administered Tribal Areas (FATA), which were recently (in 2018) merged with the province of Khyber Pakhtunkhwa. We focused on tribal communities in the very south of FATA, living on the eastern side of the Thakht-e-Sulaiman ( تخت سلیمان ) Mountain, the highest peak (3450 m) of the area (Fig. ). The foothill area ranges from arid to semi-arid, with 200–500 mm precipitation mostly in July–August and December–January . Summer starts in May and lasts until September with a mean daily maximum of 40 °C, and winter lasts from November to March with a daily maximum of 5.7–7.6 °C. The weather is generally warmer on the eastern side of the mountain. Vegetation changes from dry sub-tropical to dry temperate from east to west with increasing altitude. The top of the Thakht-e-Sulaiman is covered with coniferous forests . The research area belongs to the Frontier Region of Dera Ismail Khan (F.R. D.I. Khan), with a total area of 2008 km² and a population of approximately 68,556 . Two tribes live in this area, the Sherani (شیرانئ) in the west and the Ustranas (اؤسترانئ) in the east . The Sherani area is divided into plains and hills. This study focuses on the hills of the Sherani area, which are inhabited by three sub-tribes: the Oba Khel- اوباخیل (ca. 18% of the area), Hussan Khel- حسن خیل (20%) and Chual Khel- چؤل خیل (12%). Interviews were conducted among the Sulthan Zai (سلطان زئ) sub-tribe of Oba Khel, which lives in an area of approximately 145 km². Interviews were conducted in five foothill villages, i.e., (1) Payor Mela (پئوڑمیلا), (2) Landi Kutherzai (لنڈئ کؤتڑزئ), (3) Zindawar (زندہ وار), (4) Jaty Ghbazh (جٹئ غبژ) and (5) Kurachai (کوڑاچائ), between 1000 and 1200 masl, and in the associated ‘migratory villages’ in the mountains between 2300 and 2600 masl, where some people migrate in summer with their herds (Fig. ). Livelihood strategies include livestock raising, timber cutting, non-timber forest product collection and labor work on daily wages. The informants’ details, the demographic situation and a brief description of the foothill villages are shown in Table (for migration and other details of the study sites, see ).

Interviews
A total of 116 informants between 20 and 100 years, with an average age of 41 (SD ±13) years, were interviewed. In the first phase, unstructured and semi-structured, formal and informal in-depth interviews ( n = 58) were conducted with informants in the local language, Pashtu. Also, personal observations and group discussions were held to get an overview of general concepts of natural phenomena and to familiarize with local terms and their emic definitions . In the second phase, detailed interviews were conducted with local health care specialists for understanding the local health care system in the area ( n = 11). In the third phase, successive oral freelists were performed with individuals of each village to check for completeness of the collected knowledge as well as its variation at individual and community level ( n = 47). Transect walks were made along the villages with key informants, both at the foothills and in the mountainous areas, for specimen collection and triangulation of data. Most informants were male due to cultural restrictions on involving female informants. Key female informants were included indirectly, mostly through interviewing their sons. We built rapport with the local communities and were allowed to live with the people, accompany them during their daily life, and attend ritual ceremonies. The ethical guidelines of the International Society of Ethnobiology were strictly followed during the whole research process. Consent was obtained from every informant before interviewing, where the objectives, procedure and methodology of the project were also explained.

Plant specimens
Specimens were prepared of all documented plants (Table ). These were identified by taxonomists at Quaid-I-Azam University Islamabad, Pakistan (by Dr. Zahid Ullah and Dr. Mushtaq Ahmad) and reconfirmed by comparing with specimens in the Herbarium of Pakistan and the Flora of Pakistan . Families were assigned according to Chase et al. and species names were cross-checked with The Plant List . All voucher specimens with accession numbers were deposited at the Herbarium of Pakistan (ISL), Quaid-I-Azam University Islamabad, Pakistan.

Data analysis
Information on medicinal plants was analyzed using use reports (URs, as in Table , , ). One use report corresponds to a specific plant part administered in a specific way against a disease as mentioned by one informant. Freelist data were analyzed using descriptive statistics. To determine the most frequently used plant species for treating a particular ailment category by the informants of the study area, we calculated the fidelity level (FL, as in Table ) following Alexiades . The availability of species was categorized into frequent, occasional, and rare based on personal observations in the field and discussions with the informants, following the criteria of the DAFOR scale. Local terms for diseases were reconfirmed with regional medical doctors. To facilitate cross-cultural comparisons and to highlight uniqueness and similarities, we categorized all the diseases mentioned by the interviewees into 16 disease categories according to the symptoms they cause and the organs they affect (see Table ).
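To make the quantitative part of the analysis concrete, the sketch below illustrates how use reports (URs) and the fidelity level (FL) described above could be tallied from interview records. The example records, the field layout, and the formula FL = Np/N × 100 (with Np the number of informants citing a species for a given ailment category and N the number of informants citing that species for any ailment, a commonly used formulation) are illustrative assumptions, not the authors' actual dataset or code.

```python
from collections import Counter

# Hypothetical interview records: (informant_id, species, plant_part, preparation, ailment_category).
# One unique combination per informant counts as one use report (UR).
records = [
    ("inf01", "Teucrium stocksianum", "aerial parts", "infusion", "gastrointestinal"),
    ("inf02", "Teucrium stocksianum", "aerial parts", "infusion", "gastrointestinal"),
    ("inf02", "Teucrium stocksianum", "aerial parts", "infusion", "multisystem"),
    ("inf03", "Withania coagulans", "fruit", "infusion", "multisystem"),
    ("inf04", "Withania coagulans", "fruit", "infusion", "gastrointestinal"),
]

unique_reports = set(records)                      # deduplicate identical mentions
ur_per_species = Counter(r[1] for r in unique_reports)

def fidelity_level(species: str, category: str) -> float:
    """FL = Np / N * 100: Np = informants citing `species` for `category`,
    N = informants citing `species` for any ailment."""
    n_any = {r[0] for r in unique_reports if r[1] == species}
    n_cat = {r[0] for r in unique_reports if r[1] == species and r[4] == category}
    return 100.0 * len(n_cat) / len(n_any) if n_any else 0.0

for sp, ur in ur_per_species.items():
    print(f"{sp}: {ur} URs, FL(gastrointestinal) = {fidelity_level(sp, 'gastrointestinal'):.1f}%")
```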
Local healthcare system
The local health care system is pluralistic and consists of different types of specialists with different backgrounds. According to their needs, local people visit one or several of these specialists. The choice of medicine is based on the severity of the disease, the effectiveness of the medicine and the ease of availability. Wearing the skin of a goat/sheep is usually applied in severe cases, medicinal plants are used for moderate ailments, and minerals and other animal products are used for specific ailments or as alternatives if plants are unavailable. Two types of ritual specialists are found: a) Mullayan (ملایان), religious specialists with formal religious education from a religious school called a Madrasa (مدرسہ). The Mulla uses religious knowledge for healing, including scripture from the holy books and practices from the Quran, Hadith, and Sunnah, and is responsible for other religious duties like collective prayers and funeral processions. People usually visit the Mulla in cases of ‘masiyath’ (ماسیت)—a group of ailments including any unusual disease believed to be caused by spirits (Jinn), or other soul-related illnesses with symptoms like neurological or psychological trauma/disorders, hypervigilance and pervasive feelings of terror—which are treated with ‘ta’wiz’, dam and du’a’ (تعویذ، دم، دعا), i.e., amulets, blessings, and prayers. The amulets typically consist of a piece of paper with a holy inscription wrapped in cloth and/or leather and hung/fastened around the neck or arm. Most villages have a Mulla, and they tend to have average medicinal plant knowledge; all Mulla we met were men, but nowadays women can also attend religious schools. b) The Aamel (عامل) is a ritual specialist consulted when people contract unusual diseases related to spirits. They work with rituals which may last up to 40 days. Usually, they are male, but female Aamel also exist. Aamel apprentice with experienced Aamel and go through arduous training. They communicate with spirits, locally called Paerai (پیرائی) and widely known as Jinn. These are entities which can cause physical and mental harm to humans but can be tamed by an Aamel. These specialists are rare in the area and usually lack knowledge of medicinal plants. Consultants or pulse diagnosers identify the ailment through the pulse and provide or suggest appropriate treatment with medicinal plants or other drugs, wearing of animal skin, ritual treatment, or biomedical treatment. The consultant’s knowledge about the pulse is considered a gift of God, which is mostly transferred from generation to generation (usually male). Usually, each village has a consultant/pulse diagnoser. Locals visit consultants when unaware of what ails them. Consultants have good medicinal plant knowledge. Traditional bone setters adjust broken and disjointed bones. They are rare in the area, and their knowledge is inherited from elders. Rural bone setters tend to use medicinal plants, whereas urban bone setters use conventional pharmaceuticals. Traditional midwives or birth attendants, usually one per village, are wise and experienced women invited during childbirth. They are knowledgeable regarding medicinal plants. Biomedical drugs are administered by medically trained doctors in clinics in urbanized areas. Untrained drug sellers are found in every 3rd or 4th village. They also advise on the usage and administration of drugs. There are no specific local medicinal plant specialists, but elders have knowledge of medicinal plants and use them in their families. They are mostly old and experienced people who inherit this knowledge from ancestors and other elders.

Local therapeutic concepts and treatments
The local materia medica is composed of plants, animals, minerals, and other sources, while the local etiology revolves around the Unani concept of humours. This concept reached the area through practitioners trained in Unani medicine from united India who settled in remote tribal areas for community services. Their knowledge was incorporated into the knowledge system of the communities (personal communication with Unani medicine experts). The concept of “bitter” is locally associated with medicine, and an often-used proverb says: “everything bitter except poison is good for health, while everything sweet except honey is harmful”. ‘Voice infection’ (ژاغ) is another local concept related to health and disease. It assumes that some people’s voice naturally has an infectious effect on the patient or his/her wounded body parts. It can happen intentionally or unintentionally. Besides plants, gold or silver is preferentially used to cure or prevent the effect of ‘voice infection’ (Table ). Evil eye is a concept that describes the power of envy and jealousy. Humans obsessed by envy or jealousy can, with their eyesight, harm their fellow men intentionally or unintentionally. A person with the evil eye can harm even when they are pleased to see someone or somebody’s possessions. Children, mothers during pregnancy and before and after childbirth, ill/injured as well as healthy and wealthy people are vulnerable to it. For protection, a religious amulet is mostly used, or sometimes a small temporary black mark is made with coal on a visible part of the skin.

Wearing of goat/sheep skin is locally considered a paramount medicinal tool. It is used for a large number of ailments, especially in cases of emergency and complications (Table ). Generally, goat skin is considered cold and advised to be worn in summer, while sheep skin is warm and is advised for the winter. Use of each can also be advised regardless of the season. According to key informants, use of the skin needs special care; otherwise, the disease may worsen. Wearing the skin needs to be carefully adjusted to the progression of the disease. For example, skin wearing is advised only at the beginning of malaria; it is advised at the beginning or end of typhoid but not at the climax, when symptoms are the strongest. Correct and effective usage is mostly advised by the consultants of the area or by the elders of the family. It is believed that an imbalance of hot/cold usually causes common illnesses. Wearing of the skin influences this balance, smooths and supports the body according to its requirements, and detoxifies through suction. Other remedies made from animals include bear and porcupine fats used against musculoskeletal problems, ass milk against whooping cough, and gall bladder bile from the Sulaiman Markhor—a wild goat ( Capra falconeri )—against hepatitis B and C (Table ). Minerals like gold, mineral stone, silver or brass coins are often ritually used for protection against ‘voice infection’.

Diversity and use of medicinal plants
A total of 44 species of plants were documented, with 588 use reports (Table ). The medicinal species are herbs (21 spp.), trees (12 spp.), shrubs (9 spp.), and climbers (2 spp.), which are distributed between the foothills (28 spp. with 258 UR) and the mountainous areas (16 spp. with 330 UR; Table ). Among them, a single species ( Curcuma longa ) is cultivated, three are semi-cultivated, i.e., present both in wild and cultivated form ( Ficus palmata , Olea ferruginea , Punica granatum ), while the remaining are wild. The majority of the species are frequently available (23 spp.), while some are occasional (16 spp.) and others rare (5 spp.). Most of the plants (28 spp.) are used only in fresh form, while the remaining (16 spp.) are used in both fresh and dried forms. Additionally, 16 species among the documented medicinal plants are also used as food (Table ). All the diseases cured with medicinal plants are categorized into 16 disease categories, of which gastrointestinal diseases are the most commonly mentioned (with a high number of species and use reports), followed by the multisystem and ritual categories, respectively (Fig. ). Leaves are mostly used, while gums, resins, latex, and wood-oil also play an important role (Fig. ).

Preparation and application of medicinal plants
All the documented medicinal preparations are based on a single plant. Eight different ways of using/preparing different plant parts are employed. Most often, plant parts are used unprocessed (e.g., Acacia modesta and Calotropis procera ), followed by infusion and decoction (Fig. ). Medicine is taken orally (24 spp.) or used topically (9 spp.), and some species have both oral and topical applications (9 spp.). Only two species are used for their smell and smoke (Table , Fig. ).

Variation in knowledge among villages and informants
Medicinal plant knowledge is similar among villages (Fig. ). Three-fourths of all medicinal plant species were mentioned in more than one of the villages, and 15 spp. were common to informants of all villages. Also, sheep and goat skins are homogeneously used, in addition to the other animal- and mineral-based materia medica. A comparison of age groups shows that informants between 20 and 29 years reported fewer medicinal plants than older people. Medicinal plant knowledge is transmitted vertically, i.e., from generation to generation, and horizontally within the community, especially among elders.
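As a quick consistency check on the counts reported in the results above, the short snippet below simply transcribes the stated figures and verifies that they add up to the reported totals of 44 species and 588 use reports; it is an illustrative editorial check, not an analysis of the underlying field data.

```python
# Figures transcribed from the results text above (not an independent data source).
habit_counts = {"herbs": 21, "trees": 12, "shrubs": 9, "climbers": 2}
availability = {"frequent": 23, "occasional": 16, "rare": 5}
use_reports = {"foothills": 258, "mountains": 330}

assert sum(habit_counts.values()) == 44      # total documented species
assert sum(availability.values()) == 44      # availability classes cover all species
assert sum(use_reports.values()) == 588      # total use reports
print(sum(habit_counts.values()), "species,", sum(use_reports.values()), "use reports")
```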
Prevalent diseases and concepts
In the present study, gastrointestinal diseases have the highest number of species and use reports, followed by ritual uses and musculoskeletal ailments (number of use reports; Fig. ). This mirrors the prevalence of diseases and treatments in the area. Gastrointestinal disorders, usually due to contaminated water, are common and are preferably treated with medicinal plants. This pattern of treating gastrointestinal disorders with medicinal plants is found all over the world among rural communities and is usually explained by the antimicrobial properties of many plants used as medicine . Musculoskeletal problems are also common in the area due to the mountainous terrain and accident-prone livelihood activities such as carrying heavy commercial timber. The category of ritual mainly covers treatments for diseases caused by spirits (Jinn), evil eye or ‘voice infection’. The concept of ‘voice infection’ could not be found in the available scientific literature, although it is deeply rooted in the local understanding of illness and is common in tribal areas of Pakistan and Afghanistan. In contrast, diseases caused by Jinn as well as the concept of evil eye are widely known in Islamic regions and broadly discussed in the literature . Medical doctors in the nearby area explained that the local concepts of “unusual diseases” seem to be related to epilepsy, psychological problems, and allergies. Medicinal uses of plant species with a bitter taste, like Olea ferruginea fruits and Caralluma tuberculata aerial parts, are based on the concept that ‘bitter taste is good for health’. This concept is found in many different cultures of the world . Some diseases and medicinal plants are locally perceived as hot and cold, and treatment is based on opposites. For example, malaria is considered a hot disease and is treated with an infusion of Teucrium stocksianum, which is considered cold. Similarly, a Withania coagulans infusion is considered cold and is used against sunstroke, a locally perceived hot disease. Detailed knowledge about plant parts, preparations, and their degree of hotness and coldness is held and practiced by herbalists/Hakims/Unani medicine specialists, who, however, are not found in our research area. A hot and cold dichotomy and treatment with opposites is an integral part of the concept of humours and has been described for other regions of Pakistan and all over the globe .

Local materia medica
The number of medicinal plant species reported from the Sulaiman area is lower than that reported by other studies, which typically report between 50 and 150 medicinal plant species for comparable sites . This may have several reasons. Animal products, especially the use of goat and sheep skin, are of utmost importance for local treatments. Furthermore, a substantial part of the remedies is apotropaic and, in this case, often made from minerals or other products. In the Himalayan foothills of Southwest China, a similar situation was found with local healers—among the Shuhi people—who mainly work with ritual plants and whose medicinal plant knowledge is relatively scarce compared to other regions . In the Sulaiman area, health prevention through gathered wild food is also important and may be a reason for the relatively limited medicinal plant knowledge . The numbers of species reported as ethnoveterinary and edible plants were also lower compared with other areas.
Apart from the above-mentioned reasons, our research area has semi-arid climatic conditions which support comparatively little plant diversity; and, in contrast to most articles on medicinal plants from different parts of the country, we focused on a smaller area but with detailed documentation and evaluation. All of the documented medicinal plant species are reported from other areas of Pakistan with similar or different uses . Especially the ritual use of some of the species reported for problems like ‘voice infection’, masiyath , and evil eye (rarely reported as bad eye) seems to be unique to the Sulaiman area and its local culture. More than half of the species (52%) are new or unreported from the country for the presently mentioned human ailments (Table ). The same was the case for ethnoveterinary medicinal plants , whereas one-third of the wild edible species were also newly reported from the study area , which shows the uniqueness of the Sulaiman area and its culture. About half of all use reports were from only six plant families, i.e., Lamiaceae (82 UR), Pinaceae (64 UR), Apocynaceae (62 UR), Euphorbiaceae (43 UR), Solanaceae (36 UR) and Rosaceae (23 UR). The relative importance of Pinaceae is due to two Pinus species ( P. gerardiana and P. wallichiana ), which are broadly used in the area not only for medicinal but also for ethnoveterinary and food purposes . The extent of similarity of these results with the prevalence of the families in the local vegetation is unknown, as a checklist of the flora of the Sulaiman Mountains is unavailable.
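As a back-of-the-envelope check of the statement that about half of all use reports come from these six families, the snippet below simply sums the family-level use reports quoted above and expresses them as a share of the 588 total; the numbers are transcribed from the text and the snippet is illustrative only.

```python
# Use reports per family, transcribed from the text above.
family_ur = {"Lamiaceae": 82, "Pinaceae": 64, "Apocynaceae": 62,
             "Euphorbiaceae": 43, "Solanaceae": 36, "Rosaceae": 23}
total_ur = 588

top6 = sum(family_ur.values())        # 310 use reports
share = 100 * top6 / total_ur         # about 52.7%
print(f"{top6} of {total_ur} use reports ({share:.1f}%) come from these six families")
```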
All reported medicinal plants are used individually, without mixing different species or parts together during preparation, and the majority of medicinal plants are used directly without any prior processing or preparation (Fig. ). There are different medicinal and ritual specialists, but no local herbalists in the area. This, coupled with the predominant use of fresh plants, animals and minerals for medicinal purposes, indicates that the traditional healing system consists of a combination of knowledge from different systems, including biomedicine. A negative impact of syncretism between traditional medicine and biomedicine is that local people tend to use pharmaceuticals like painkillers carelessly, since they are unaware of possible side effects and proper dosage; these concepts are unknown to their traditional medicine. The local use of pharmaceuticals based on traditional concepts of plant medicine, and the related problems, have also been described and discussed for two Amazonian societies . Medicinal plant knowledge has some variations between villages (Fig. ), possibly due to socio-economic differences (Table ), weaker contacts (horizontal transmission of knowledge), and differences in exposure to diverse flora . Age-wise differences in knowledge (Fig. ) are a universal phenomenon in traditional medicine, although in our case knowledge was fairly uniform across all age groups except the young (age 20–29). The commonality of medicinal plant knowledge was relatively more pronounced than for the ethnoveterinary species and wild edibles . However, the literature supports greater commonality of edible plant knowledge and use—as compared to ethnomedicinal knowledge—due to the sensitivity of health-related issues and knowledge . Possible reasons include the strong culture of attending to and taking care of patients by neighboring villagers, who provide and share the best advice and knowledge, which is usually warmly welcomed by the patient’s family. Local medicinal plant use is still dynamic. Some medicinal plants are used less recently, while others are newly integrated into the materia medica. Plant medicine might be abandoned due to lack of efficacy, problems of availability, or cheap pharmaceutical alternatives. For example, there was a decrease in the use of Phlomoides spectabilis leaves against human skin allergy. Newly integrated species are Valeriana jatamansi against diarrhea, ca. 10 years ago, and Thymus linearis against stomach problems, ca. 15 years ago. Key informants claimed that the extensive use of a few medicinal plants like Teucrium stocksianum, Ephedra gerardiana and Withania coagulans (Table )—compared to available pharmaceuticals—is due to their efficacy. Plants with high fidelity levels (e.g., Valeriana jatamansi , Litsea monopetala , Berberis calliobotrys , Withania coagulans and Pinus species; Table ) are also used for similar ailments in livestock . The globally increasing tendency to use traditional medicine may lead to harmful, poisonous or unpredictable side effects; e.g., several species mentioned during the present study ( Euphorbia , Ephedra , Citrullus , etc.) are reported in the literature with adverse effects . Therefore, official pharmacopeias must be consulted before using such plants or their parts. Extensive use of goat and sheep skin for medicinal treatment (Table ) has, to our knowledge, not been reported in the ethnomedical literature of Pakistan yet. These uses are not restricted to the present research area but are typically found among Pashtun tribes in Pakistan and Afghanistan (personal discussion with residents of different areas, including people of Afghanistan and Pashtun tribes of Pakistan). Similarly, the use of gold, silver and mineral stones against ‘voice infection’ is also practiced in the adjoining tribal areas (personal observation). These uses of materia medica are transmitted as oral histories. The uses of the mineral stone (دہ ژاغ دانہ) against ‘voice infection’ are more important in the off season, when fresh plant material (especially Euphorbia prolifera , with a fidelity level of 100%; Table ) is scarce. One of the reasons why the Sulaiman Markhor ( Capra falconeri ) appears as a threatened species on the IUCN Red List 2016 is its high demand for medicinal purposes (e.g., gall bladder bile for hepatitis, skin for multiple medicinal purposes and meat for general health support). Its horns are used for decoration, and both skin and horns fetch high prices in the market. A sustainable conservation strategy in the form of ecotourism and other conservation tools, involving local communities—as they are familiar with the vegetation, habitat and associated wildlife—needs to be devised in the area . Interestingly, no medicinal plant trade is found in the research area. Some species, like the fruits of Withania coagulans and the seeds of Peganum harmala , are commonly marketed in Pakistan, even in the communities surrounding the research area, but their prices are unenticing. Other plants like Berberis calliobotrys, Ephedra gerardiana and Valeriana jatamansi have a high market demand and fetch good prices , but locals were unaware of these commercial values. Sustainable harvesting of such plants could help to improve local livelihoods . More than half of the present ethnomedicinal plant species were commonly available (Table ), and leaves were the most used parts (Fig. ), so they were less critical for the subsistence needs of the locals. Priority must be given to the rarely available species with higher URs and FL (Table - ), because frequent use decreases their availability. The ethnomedicinal knowledge in the area was also facing degradation, although not to a great extent, which negatively affects the lives and culture of these societies.
The present paper, based on interactions with local informants, investigates the traditional medicinal knowledge and materia medica of remote tribal communities in west Pakistan. A variety of medicinal substances from plants, animals and minerals are used to treat diseases, depending on the severity of the disease and the availability of the substance. Treatment often happens in the family context, but different types of medical and ritual specialists are consulted if necessary, especially in the case of unusual diseases and illnesses caused by spirits. The local medicinal system is dynamic, as it not only includes and integrates new medicinal plants but also pharmaceuticals. Most important, however, is the use of goat and sheep skin, which forms a central pillar of healing. The uses and practices mentioned in the present study need detailed pharmaceutical evaluation before they can be recommended for general use. Widely used materia medica of rare availability need conservation priority. Similarly, the local cultural norms through which materia medica practices are carried on must be preserved. While the area faces some acculturation processes, traditional practices remain quite intact. From a developmental perspective, reinforcement of local institutional contexts would be important to strengthen local knowledge and related sustainable practices.
Splenic infarction following torsion of wandering spleen involving pancreatic tail successfully managed in resource limited setting: a case report

Torsion of the spleen is a rare cause of acute abdominal pain. A “wandering spleen” is characterized by laxity or absence of the supporting splenic ligaments, where a long pedicle facilitates abnormal positioning of the spleen outside its native left subdiaphragmatic location. A wandering spleen predisposes the spleen to torsion, blood-flow impairment, and ischemia, and can cause a variety of symptoms from mild intermittent abdominal pain to an acute abdominal crisis . The first detailed description of this clinical entity was by Van Horne in 1667 as an incidental finding during autopsy . Wandering spleen is a rare clinical entity, with a reported incidence of less than 0.2% . It is a condition characterized by the absence or underdevelopment of one or all of the ligaments that hold the spleen in its normal position. The lienorenal ligament attaches the hilum of the spleen to the left kidney, and the gastrosplenic ligament attaches the hilum of the spleen to the greater curvature of the stomach. Inferiorly, the spleen is supported by the phrenicocolic ligament . It affects children, who make up one-third of all cases, with a female predominance after the age of 1 year. In pediatric cases, its presentation includes acute abdomen and nonspecific symptoms, such as nausea, vomiting, and fever. When wandering spleen is combined with splenic torsion, it may cause venous congestion and enlargement at the beginning. As the pathological course progresses, it may end in splenic ischemia and infarction; finally, splenic necrosis and rupture could occur . In adults, it most frequently affects women of reproductive age, in whom acquired laxity of the splenic ligaments is usually the cause. The hypothesized cause is hormonal changes during pregnancy leading to ligamentous laxity . Apart from its normal position, the spleen can be found anywhere in the abdomen, from the bottom of the left diaphragm to the pelvis . Another extremely rare presentation is the finding of splenic torsion in its normal anatomical position . The clinical presentation of wandering spleen is variable, but the main symptom is abdominal pain. It ranges from an asymptomatic abdominal mass to intestinal obstruction or acute abdomen, which requires urgent surgical intervention . Its major complication is acute torsion with subsequent infarction, which is a potentially fatal emergency . Recognition of this medical condition can help avoid any confusion with acute abdomen of other etiologies. The diagnosis can be confirmed by imaging techniques, such as computed tomography (CT) and magnetic resonance imaging (MRI) . In general, a multidisciplinary approach should be considered, using history and physical examination to guide appropriate consultation with radiology and general surgery. Aggressive treatment is usually needed, and for most patients surgical intervention is required as a definitive measure . Here we present the case of a 40-year-old female patient presenting with an acute abdomen following 1080° counterclockwise torsion of the splenic pedicle involving the pancreatic tail, with splenic infarction, which was successfully managed with open splenectomy in a setting where minimally invasive surgery is not available or well practiced. This work has been reported in line with CARE guidelines.
A 40-year-old Black, multiparous female patient of Amhara ethnicity, who is a housewife from a low socioeconomic background, referred from a rural area in Ethiopia, presented with an acute onset of dull, aching abdominal pain. She also experienced episodes of vomiting of ingested material and abdominal distension. The patient has no history of trauma, previous interventions, or significant medical, family, or social history. Physical examination revealed a palpable mass in the periumbilical and right lower quadrant areas with accompanying regional tenderness. Laboratory results disclosed a high normal leukocyte count whereas urine human chorionic gonadotropin (HCG) and organ function tests were nonrevealing. After initial physiological stabilization, radiologic examinations, including an abdominal ultrasound and contrast-enhanced computed tomography (CT) of the abdomen, were performed. The abdominal ultrasound with color Doppler revealed a 14 × 10 cm mass in the left lower quadrant, showing echodensity similar to splenic parenchyma, with no color Doppler flow and no visible spleen in its usual anatomical location. On the basis of the radiology department’s recommendation, a contrast-enhanced CT of the abdomen was carried out, which revealed a spleen measuring approximately 15 cm in size, located in the lower abdomen. The CT also showed twisting of the splenic vessels at the hilum, resembling a “whirlpool” appearance, with heterogeneous density and no post-contrast enhancement. Additionally, part of the pancreatic tail appeared twisted, though without any change in density. Free peritoneal fluid was also noted. Financial constraints and cultural beliefs posed significant challenges during the diagnostic evaluation and intervention of our patient. After obtaining informed written consent, the patient was taken to the operating room with a preoperative diagnosis of splenic infarction secondary to torsion of a wandering spleen, involving the tail of the pancreas. Laparotomy revealed the spleen located in the right lower quadrant, with an elongated vascular pedicle twisted 1080° counterclockwise, involving the tail of the pancreas. The omentum and small bowel were adherent to the spleen, and minimal peritoneal fluid was present, as shown in Fig. . Splenectomy was performed after gently dissecting the omentum and small bowel from the spleen, with the pancreatic tail preserved (Fig. ). The patient was discharged on postoperative day 4 following an uneventful hospital course, with instructions provided according to the postsplenectomy protocol. An appointment for immunization against encapsulated organisms was scheduled after 2 weeks postoperatively. During follow-up appointments, the patient was found to be in good health, with no complications from the procedure (Table ).
The anatomic position of the spleen is normally constant. The lienogastric, lienorenal, and phrenocolic ligaments provide support and attachment, and a wandering spleen occurs when there is a failure of development or laxity of these supporting structures. Several causes have been proposed. A congenital defect has been implicated in some cases: with incomplete fusion of the embryonic mesogastrium posteriorly, the lienorenal ligament fails to develop and the pancreas is not completely retroperitoneal, so that its distal portion is included in the intraperitoneal splenic hilum. Suggested acquired factors include splenomegaly, trauma, abdominal laxity, and the hormonal effects of pregnancy, which may account for the increased incidence of this condition in women of childbearing age . The first description of a wandering spleen is attributed to Van Horne, in 1667, as an autopsy finding in an adult . The most commonly described clinical picture is a palpable mass associated with abdominal discomfort, and torsion is the most common complication . Clinical symptomatology is variable: the patient may be completely asymptomatic, present with a mobile abdominal lump, or have intermittent abdominal pain because of partial torsion and spontaneous detorsion of the splenic pedicle. Patients may also present with an acute abdomen due to complete torsion and infarction. On examination, a firm, mobile abdominal mass with characteristic "notched borders" may be felt, but this is not always the case because splenic engorgement may obliterate the splenic notch, and a clinical diagnosis is therefore usually difficult . Sonography and CT permit visualization of the spleen, which will be seen in an unusual location and may be enlarged. Enlargement of the distal pancreas with a heterogeneous sonographic appearance or low attenuation on CT should suggest that the tail of the pancreas has also undergone torsion and ischemic change . In our patient, the abnormal position of the spleen was demonstrated on both modalities, while twisting of the splenic vascular pedicle and involvement of the pancreatic tail were revealed on contrast-enhanced CT of the abdomen. Angiography is the most definitive modality in the diagnostic evaluation of wandering spleen : the course and length of the splenic artery and the exact location of its torsion may be seen. The treatment of choice is splenopexy if the spleen is viable, despite the high rate of recurrence. When it is not possible to preserve the spleen, the standard treatment is splenectomy followed by prophylactic antibiotic therapy and vaccination against encapsulated bacteria . The best surgical approach is said to be minimally invasive surgery where it is available and well practiced, unlike in a resource-limited setting such as ours.
Torsion and infarction of a wandering spleen is a rare abdominal emergency. Characteristic imaging features on radiological modalities such as abdominal sonography with color Doppler and CT scan are crucial in making an accurate and timely diagnosis. A fair degree of accurate assessment about viability and thrombosis of the splenic vessels can be made on CT scan, which may help the surgeon decide the right mode of treatment. Despite its rare prevalence and atypical presentations, a high index of suspicion for torsion of wandering spleen is very important in patients with acute abdomen, especially in those with risk factors or initial abdominal sonographic evaluation revealing absence of spleen in its normal anatomic position. With rapid diagnosis, timely surgical intervention, and appropriate subsequent follow-up, patients can survive this clinical condition.
|
Panoramic evaluation of external root resorption in mandibular molars during orthodontic treatment: a comparison between root-filled and vital teeth treated with fixed appliances or clear aligners | 29b3ea50-cb9d-496e-a11c-a45426627e41 | 11439240 | Dentistry[mh] | Orthodontic treatment (OT) carries the potential risk of a complication known as external apical root resorption (EARR). EARR is characterized by the irreversible loss or shortening of the hard tissue at the root apex. This phenomenon can be a detrimental unintended consequence of the forces applied during the process of tooth movement . Although studies report a wide range of prevalence, estimating that EARR affects 20–100% of orthodontic patients, the extent of resorption can vary considerably . While EARR can potentially affect any tooth undergoing orthodontic movement, maxillary and mandibular incisors are generally considered the most susceptible to resorption . A potential consequence of EARR is a compromised tooth structure, which can lead to an imbalance between root and crown length, and in severe cases, even tooth loss, ultimately impacting both the aesthetics and functionality of the OT outcome . OT triggers a complex interplay of factors that can influence susceptibility to EARR. These factors include a patient’s genetics, age, nutritional habits, malocclusion severity, the type of appliance used, the chosen treatment approach (extraction vs. non-extraction), the characteristics of applied force (magnitude, direction, and duration), and the overall treatment length . Despite extensive research into the prevalence and severity of EARR in orthodontic patients, the exact biological mechanisms underlying this process remain unclear . For decades, fixed appliance (FA) treatments have been the mainstay of OT. However, a paradigm shift is underway with the increasing popularity of clear aligner (CA) treatments . This patient-centered approach prioritizes aesthetics and comfort. Unlike FAs, CAs are virtually invisible and removable, improving aesthetics, comfort, and oral hygiene maintenance . This treatment approach offers an overall reduction in treatment and chair time, providing a more aesthetic and comfortable alternative to FA treatments . The type of FA used for OT is known to cause EARR . The impact of CA treatment on EARR remains a topic of ongoing research. Despite the theoretical advantages of CA treatment in minimizing EARR through the application of gentler forces, current research presents a mixed picture . Existing studies highlight the possibility of EARR with CAs, with reports of both severe EARR and greater than a 20% reduction in root-crown ratio . Additionally, varying results were obtained in studies comparing CA treatment with FA treatment. While some studies suggest a potential reduction compared to FAs due to the gentler forces applied , others report similar levels of EARR with both methods . The lack of consistent findings highlights the need for further investigation into this aspect of CA treatment. A recent comprehensive study reveals the prevalence of root canal treatment (RCT) worldwide, showing that more than half of the population studied has received at least one RCT . Therefore, orthodontists are likely to encounter root-filled teeth (RFT) in a substantial portion of their patient population, often requiring a coordinated approach involving both orthodontic and endodontic interventions . Existing literature suggests a correlation exists between EARR and a tooth’s prior RCT and pulpal status. 
Consequently, the magnitude of EARR may differ between RFT and vital pulp teeth (VPT) . Existing research regarding the comparative susceptibility of RFT and VPT during OT presents conflicting results, with some finding no difference, some suggesting more, and others suggesting less resorption in RFT . A recent meta-analysis suggests that RFT might experience a lower degree of EARR compared to VPT. However, the certainty of this finding remains unverified, and this difference might not be clinically significant . Consequently, the comparative susceptibility of RFT and VPT to EARR during OT requires further investigation with a rigorous, evidence-based approach . Few studies in the literature examined EARR in teeth after CA treatments. To date, no studies have directly compared EARR in RFT and contralateral VPT in the same patients after CA treatment. Additionally, there is a lack of research comparing EARR between RFT treated with FAs and those treated with CAs. Therefore, this study aims to investigate the following objectives: To evaluate and compare the amount of EARR associated with OT in RFT and their contralateral VPT during CA treatment in the same patient. This comparison will be conducted using both linear and root surface measurements. To investigate and compare the differences in EARR patterns between CAs and FAs in patients undergoing OT. This comparison will specifically focus on EARR in RFT or VPT. This study investigates two primary hypotheses: There is no statistically significant difference in the amount of EARR between RFT and their contralateral VPT during OT, regardless of whether CAs or FAs are used. The type of OT employed (CAs versus FAs) does not have a statistically significant impact on the extent of EARR in either RFT or VPT.
This retrospective clinical study employed a split-mouth design and received ethical approval from the Clinical Research Ethics Committee of Kutahya Health Sciences University, Kutahya, Türkiye (reference number: 2024/07–37). Informed consent was obtained from all the participants including both patients and the legal guardians of children involved in the present study. A power analysis was conducted to determine the minimum sample size necessary to detect a clinically relevant difference in EARR between groups. The analysis employed an alpha error probability of 0.05 and a power of 90%, targeting a minimum detectable difference of 0.95 mm with a standard deviation of ± 1.15 mm . Based on this analysis, a minimum of 21 patients per group was required. To enhance the study’s statistical power, a larger sample size was ultimately recruited. For this retrospective analysis, radiographs obtained from patients undergoing OT in our clinic from January 2022 to May 2024 were evaluated. The radiographs were subsequently divided into two groups based on treatment modality: CA treatment and FA treatment. Inclusion and exclusion criteria were then applied to select the final sample for assessment. Inclusion Criteria . High-Quality Radiographs: Panoramic radiographs (PRs) that are standardized and exhibit high image quality for accurate landmark identification . Complete Permanent Dentition: Patients with full permanent dentition and no missing teeth, excluding third molars . OT Modality: Patients who received OT with either FAs or CAs . Specific Malocclusion: Treatment who have skeletal Class I anomalies, moderate crowding, and did not involve tooth extractions . Adequate Endodontically Treated Teeth with a VPT counterpart: Patients who had at least one mandibular molar that underwent RCT at least a year prior. The treated tooth must exhibit a healthy periodontal ligament without any signs of periapical pathology. Additionally, VPT counterpart must be present on the contralateral side of the jaw. Acceptable root canal fillings criteria included complete obturation of all root canals, termination of root canal fillings within 0–2 mm of the radiographic apex, absence of voids in any region of the fillings. Criteria for an adequate coronal restoration included a permanent, intact coronal restoration with well-adapted margins, no evidence of recurrent caries, and a radiographically intact appearance . Absence of Parafunctional Habits: Patients who reported no history of bruxism or clenching . Treatment Group Consistency: Groups which utilized the same FA or CA system and aimed for the same amount of tooth movement (anchorage amount) . Exclusion Criteria . RCT During Orthodontics: Patients who underwent RCT during OT were excluded . Dental Anomalies: Patients with teeth exhibiting size or position anomalies, a non-vital tooth on the contralateral side, or unerupted teeth were excluded . Treatment History: Patients with a history of trauma, previous OT, missing treatment records, or teeth extracted for OT or before treatment were excluded . Molar Tipping: Radiographs were evaluated for molar tipping. Patients with tipping in their mandibular molars were excluded . Open Root Apex: Patients with open root apices were excluded . Systemic and Congenital Conditions: Patients with systemic diseases or congenital anomalies or craniofacial syndromes were excluded . 
Oral Health Issues: Patients with any signs of oral problems such as caries, periodontal disease, or existing root resorption on the examined teeth were excluded . Temporomandibular Joint Disorders and Supernumerary Teeth: Patients diagnosed with temporomandibular joint disorders or with supernumerary teeth were excluded . To ensure comparability between the two groups and minimize the influence of case complexity, the American Board of Orthodontics (ABO) discrepancy index was employed to assess case difficulty. Consequently, patients with high complexity scores were excluded from the study . Following application of the inclusion and exclusion criteria, a total of 37 patients (21 females, 16 males) who received FA treatment were recruited from a pool of 146 patients who completed their OT. The mean age of the FA group was 17.45 years (SD ± 2.67 years). Similarly, 29 patients (18 females, 11 males) who underwent CA treatment were included in the CA group from a pool of 108 patients who completed treatment. The mean age in the CA group was 18.33 years (SD ± 1.96 years). The CA treatment group comprised participants who underwent treatment with CAs (ClearCorrect®, ClearCorrect LLC, Round Rock, TX, USA). Treatment in this group was supported by virtual treatment planning (ClearPilot™ - ClearCorrect's Treatment Planning Tool, version 5; ClearCorrect®). The treatment sequence, encompassing procedures such as interproximal reduction, attachment placement, and the application of intermaxillary elastics, adhered to the established virtual plan. Patients in this group were instructed to change their CAs at 15-day intervals, with a recommended daily wear time of 22 h. In the FA treatment group, a conventional bracket system (MBT prescription with a slot size of 0.022 inches; Razor SS brackets, International Orthodontic Service IOS™, California, USA) was used for treatment. Nickel-titanium archwires (0.014-, 0.016-, 0.018-, 0.016 × 0.022-, 0.017 × 0.025-, and 0.019 × 0.025-inch) were used for the aligning and leveling phases, and stainless steel (0.019 × 0.025 inch) was used for the working phase. The FA treatment group exhibited a mean treatment duration of 1.96 years (SD ± 0.78 years), while the CA treatment group demonstrated a mean duration of 1.28 years (SD ± 0.51 years). The amount of EARR in both RFT and their contralateral VPT was evaluated in mandibular molars in both groups. Measurements were obtained from digital PRs taken before OT and immediately after debonding. All radiographs were taken in a standardized head position using a Castellini X-ray unit (Castellini X Radius Compact, Imola, Italy) to ensure consistent patient positioning. Crown and root length measurements were obtained using the measurement program of the Castellini X-ray unit, following a methodology established in previous studies . The reference points and specific measurements employed for pre- and post-OT evaluations are detailed in Fig. . The crown and root length measurements involved several key steps. First, the cementoenamel junction (CEJ) was established as a straight line connecting the mesial and distal CEJ points. Subsequently, crown length was determined on both the initial and final radiographs for RFT and their contralateral VPT counterparts; this measurement comprised the longest distance from the occlusal edge to the CEJ. Root length measurements followed a similar approach for both RFT and VPT, with the distance measured from the CEJ to the root apices.
Because mandibular molars have two roots, root length was calculated by measuring the distance from the CEJ to the midpoint of the line connecting the two root apices . Details pertaining to the calculation of EARR and its proportions, are presented in Fig. . The EARR was determined in millimeters by subtracting the post-OT root length from the pre-OT root length. Additionally, the ratio of pre-OT crown length to post-OT crown length was calculated. The final EARR value was obtained by multiplying these two values. To assess the relative amount of EARR in RFT compared to VPT, a proportional calculation based on the EARR values in VPT was employed . To complement the linear measurements employed in this study, digital root surface measurements were also obtained from the digital PRs (Fig. ). Following a digital calibration process, the root surface measurements were performed using the SketchAndCalc software program (Axiom Welldone, https://www.sketchandcalc.com/ ). The root surface was measured by meticulously outlining all internal and external root surfaces, starting at the CEJ. To accurately measure the entire root surface, points were meticulously plotted along the root surface at one-pixel intervals. An average of forty (± five) points per tooth was employed to ensure comprehensive representation of the entire root surface . Statistical analysis The normality of the current study’s data was assessed using the Kolmogorov-Smirnov test. Consequently, parametric tests were employed due to the normally distributed data. The Pearson correlation test was utilized to analyze the relationship between gender and the types of teeth measured in the groups. A random sample of 20 patients was selected to evaluate the reliability and reproducibility of measurement protocol. Forty PRs from these patients were measured twice, with a two-week interval between measurements. The intraclass correlation coefficient for the first and second measurements was 0.946 (0.916–0.977), indicating excellent agreement between the two measurements by the same researcher. Additionally, the interclass correlation coefficient was 0.936 (0.908–0.964), demonstrating excellent agreement between the two researchers. These high correlation values indicate excellent agreement between the measurements, and no statistically significant difference was observed between the measurements. Treatment-induced changes within each group were assessed using paired t-tests. Independent t-tests were then utilized to compare the mean age and treatment duration of individuals between the groups, as well as the difference in the EARR amount between RFT and VPT. In order to evaluate the effect of different OT methods on the amount of EARR, analysis of covariance (ANCOVA) was also performed to compare the two groups, taking initial tooth length measurements and initial root surface measurements as covariates ( p < .05). Statistical analyses were conducted using the SPSS software package (version 20.0 for Windows; SPSS Inc., Chicago, IL).
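To make the measurement arithmetic concrete, the following is a minimal R sketch of the crown-ratio-corrected EARR calculation described above. It is illustrative only: the function name, arguments, and numeric values are hypothetical and do not come from the study data.

```r
# Crown-ratio-corrected EARR, as described in the Methods:
# (pre-OT root length - post-OT root length) multiplied by the ratio of
# pre-OT to post-OT crown length, which compensates for magnification
# differences between the two panoramic radiographs.
earr_mm <- function(root_pre, root_post, crown_pre, crown_post) {
  (root_pre - root_post) * (crown_pre / crown_post)
}

# Illustrative values in millimetres (not study data)
earr_rft <- earr_mm(root_pre = 15.8, root_post = 15.2, crown_pre = 7.4, crown_post = 7.5)
earr_vpt <- earr_mm(root_pre = 16.1, root_post = 15.0, crown_pre = 7.3, crown_post = 7.4)

# EARR of the root-filled tooth expressed as a percentage of the EARR
# observed in its vital contralateral counterpart
round(100 * earr_rft / earr_vpt, 1)
```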
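As a complementary illustration of the statistical strategy, the sketch below shows how the baseline-adjusted between-group comparison and the within-group change could be written in R; the study itself used SPSS, and the data frame `molars` with columns `earr`, `group`, `root_pre`, and `root_post` is hypothetical.

```r
# ANCOVA: effect of treatment modality (FA vs CA) on EARR, with the
# initial root length entered first so that the group effect is tested
# after adjustment for the covariate
fit <- lm(earr ~ root_pre + group, data = molars)
anova(fit)

# Paired t-test of pre- vs post-treatment root length within one group
with(subset(molars, group == "CA"),
     t.test(root_pre, root_post, paired = TRUE))

# Intraclass correlation for the repeated measurements (two sessions),
# using psych::ICC() on a subjects-by-sessions matrix, for example:
# library(psych); ICC(cbind(session1, session2))
```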
Analysis of demographic characteristics revealed no statistically significant differences between the FA treatment and CA treatment groups in terms of gender distribution and tooth type (Pearson's chi-squared test) or chronological age (Student's t-test) ( p > .05). However, as detailed in Table , the treatment duration was significantly longer in the FA group compared with the CA group ( p < .05). The ABO discrepancy index scores demonstrated similar baseline difficulty between the CA treatment group (mean 11.93 ± 1.57) and the FA treatment group (mean 13.41 ± 1.92) ( p > .05). Table presents a comparison of the mean EARR values and proportions between the FA and CA groups. Both the FA and CA groups exhibited statistically significant EARR in both RFT and their contralateral VPT ( p < .05). In addition, statistical analysis revealed a significant difference in EARR between RFT and their contralateral VPT within both the FA and CA groups ( p < .05). Compared with the CA group, the FA group showed significantly greater EARR in both RFT and their contralateral VPT ( p < .05). When the initial tooth length was entered as a covariate (initial tooth length [RFT] = 15.8 mm), the ANCOVA and the proportion evaluation showed that the difference in EARR of RFT between the FA treatment and CA treatment groups remained statistically significant ( p < .05). Table presents a comparison of the mean root surface measurements and treatment-induced changes between the RFT and VPT groups. Statistically significant reductions in root surface measurements were observed across all teeth in both groups ( p < .05). When the pretreatment root surface measurement was entered as a covariate (initial root surface measurement [RFT] = 103.41 mm²), the ANCOVA showed that the decrease in root surface measurements in RFT and VPT differed significantly between the two treatment groups ( p < .05). Additionally, within both treatment groups, a comparison of the decrease in root surface measurements between the two sides revealed significant differences between RFT and VPT ( p < .05).
OT frequently presents with EARR, a complication that can negatively impact both patient outcomes and treatment success. While CAs have gained significant popularity, their influence on EARR remains poorly understood. Additionally, current literature lacks sufficient scientific evidence comparing the effects of CAs on EARR in both RFT and VPT. Therefore, this study aimed to assess the changes in tooth lengths and root surface measurements of mandibular molars following OT with either FAs or CAs. The study evaluated both RFT and their contralateral VPT using digital PRs. Our findings revealed significantly greater EARR in VPT compared to RFT, as measured by both linear and root surface measurements. This contradicts our first hypothesis. Additionally, the treatment modality (FA treatment or CA treatment) significantly impacted the amount of EARR in both groups. Teeth treated with FAs exhibited greater EARR compared to those treated with CAs. Consequently, our second hypothesis was also rejected. CAs and FAs have different features, and both treatment methods are frequently used in OTs. Therefore, to ensure a more robust comparison, this study carefully evaluated inclusion criteria, including the ABO discrepancy index. This helped to guarantee that patients in both treatment groups had similar levels of treatment difficulty, required tooth movement, and anticipated treatment outcomes . In addition, Alqerban et al. demonstrated a link between the degree of EARR and the quality of RCT in RFT. Consequently, this study evaluated the quality of RCT, selecting only cases with appropriate RCT and coronal restoration according to previous studies . ClearCorrect aligners are a leading choice for CA treatment, known for their precision, comfort, and aesthetic properties. Decades of research and innovation in material science have culminated in these aligners, which incorporate the proprietary ClearQuartz™ tri-layer material, Performance Trimline design, and scientifically validated ClearControl™ clinical features. This combination aims to optimize treatment outcomes and achieve desired orthodontic results. The industry-leading triple-layer aligner material combines two layers of flexible polymer with an elastomeric core layer for greater orthodontic control, resulting in light and consistent force application . Furthermore, ClearCorrect aligners incorporate a scientifically validated high, flat trimline design. This design has been demonstrated to optimize force transmission, thereby facilitating more precise tooth movement and enhanced control over root positioning . Optimizing the force delivered to the teeth and delivering more controlled force application helps minimize EARR following OT. Previous research has explored the comparative effects of FAs and CAs on EARR. Li et al. observed significantly greater EARR in maxillary and mandibular incisors treated with FAs compared to CAs in a retrospective study of 70 patients. Similarly, Almagram et al. reported less EARR in the CA group compared to the FA group in their study of 40 patients focusing on maxillary incisors. Jyotirmay et al. further corroborated these findings with a larger sample size of 110 patients, demonstrating significantly less EARR in the CA group for both maxillary and mandibular incisors. Additionally, a meta-analysis suggests that while CAs may not entirely prevent EARR, the incidence and severity are likely lower compared to FAs . 
Our study’s results align with these previous findings, demonstrating a similar trend of reduced EARR with CA treatment. Orthodontic tooth movement, induced by mechanical stress, triggers an inflammatory response in the periodontal tissues. This response involves local changes, including altered blood flow and the release of various biological factors. The sterile inflammatory environment created by OT can lead to EARR . This process is influenced by various factors, including genetic predisposition , systemic conditions , patient demographics (age and gender) , and treatment parameters . The magnitude, duration, and continuity of orthodontic forces play a significant role . Studies have shown that heavier and continuous forces are associated with increased prevalence and severity of EARR . Conversely, intermittent forces allow for cementogenic repair, potentially mitigating resorption . CAs are hypothesized to deliver lighter, intermittent forces through a computer-aided design process, potentially contributing to less EARR compared to FAs . Additionally, the magnitude and direction of forces generated by aligners may vary from those generated by brackets and archwires. Moreover, force transmission in CA treatment is through the aligner and attachment, while in FA treatment, it’s through the bracket located on the tooth’s crown. These mechanical differences could potentially influence the rate of EARR . Furthermore, our study found that the duration of OTs with CAs was shorter than that of FAs. Our findings can be attributed to the combined effects of the aforementioned factors. Besides this, prior investigations have predominantly focused on incisors, whereas this study investigated EARR in molar teeth. Our findings demonstrate a similar pattern of reduced resorption with CAs compared to FAs, even in this posterior segment. This approach contributes to the existing body of literature by exploring the understudied relationship between OT and EARR in posterior teeth. While studies by Iglesias-Linares et al. and Toyokawa-Sperandio et al. reported no significant differences in EARR between FAs and CAs, these findings may be attributable to methodological variations. Notably, both studies solely evaluated incisors, whereas our investigation focused on molars. Additionally, Toyokawa-Sperandio et al. assessed EARR after only 6 months, a considerably shorter timeframe compared to our study’s treatment duration. These discrepancies in tooth type and treatment duration likely contributed to the observed differences in outcomes. Both physiological and pathological root resorption involve a regulated interplay between osteoclasts and osteoblasts, as well as odontoclasts and odontoblasts . Stimuli, such as localized increases in cytokines, activate T-cells, leading them to express RANKL. This, in turn, triggers the activation and differentiation of pre-odontoclasts. Fibroblasts and odontoblasts communicate through bioactive neuropeptides. Cytokines, prostaglandin E2, tumor necrosis factor-alpha, and hormones produced by the compromised periodontal ligament stimulate RANKL expression in fibroblasts. These factors exert their influence through vasoactive, chemotactic, and cellular effects. This cascade of events ultimately leads to the recruitment of active odontoclasts, initiating the process of root resorption . Despite a scarcity of scientific data on EARR in RFT, the debate on their response to orthodontic forces persists . 
Khan and Kumar, in a study involving 30 patients, suggested an increased risk of EARR in RFT . Conversely, Llamas-Carreras et al. found no significant difference in the degree of EARR between RFT and VPT . However, a recent meta-analysis yielded a contrarian conclusion, indicating significantly less EARR in RFT teeth compared to those with VPT . Yoshpe et al. even proposed the potential of endodontic procedures to manage or prevent EARR during OT. Yamamoto et al. reported that compromised and stressed pulp cells secrete inflammatory cytokines, receptor activator of NF-κB ligand (RANKL), and macrophage colony-stimulating factor (M-CSF), initiating odontoclastic activity. However, the absence of a pulp would negate the secretion of these factors, potentially explaining the increased EARR observed in VPT. Further investigation is warranted to elucidate the underlying causes of this disparity in EARR between RFT and VPT . A limitation of this study was the use of digital PRs to evaluate tooth length changes and root surface measurements. PRs have been employed in previous orthodontic EARR evaluations . While periapical radiographs and 3D imaging (cone-beam computed tomography, CBCT) offer greater accuracy, PRs are considered a less precise but more readily available option . Serial periapical radiographs and 3D imaging are not routinely used during OT. In addition, 3D imaging boasts higher accuracy and repeatability compared to 2D imaging in root resorption assessment. CBCT offers advantages in diagnosing and measuring root resorption but emits a higher radiation dose . Although PRs may overestimate or underestimate actual root loss due to potential distortion, they were chosen due to their routine use in orthodontic monitoring, allowing for retrospective analysis in this study . Here, we focused on comparing pre- and post-treatment radiographs to assess relative changes in root lengths and root surface measurements, rather than obtaining exact measurements. Stramotes et al. suggested that PRs taken at different times can provide sufficiently accurate linear measurements if the occlusal plane remains similar and tooth angulation doesn’t exceed 10 degrees. To minimize errors in this study, a radiology assistant used the same panoramic machine with a standardized head position ensured by the machine’s positioning light. Another limitation of this study is that the use of immediate post-treatment radiographs. Without a long-term evaluation, it is impossible to determine if the orthodontic effects of either treatment modality undergo spontaneous repair mechanisms over time . Future research designs should incorporate such a comprehensive assessment. Additionally, our investigation focused solely on mandibular molars. EARR patterns may differ in other tooth groups, highlighting the need for future studies to evaluate the full dentition. Moreover, the CA group received treatment solely with the Clear Correct system, whose aligners are made from a single plastic material. Material variations can influence the modulus of elasticity, potentially leading to disparate effects on the root apex from identical tooth movements. Therefore, different results may be obtained with treatments using different types of aligners. Additional studies, both retrospective and prospective, using larger sample sizes and advanced diagnostic imaging techniques such as CBCT are necessary to corroborate these findings. Several strengths differentiate our study from the existing literature. 
First, we employed a sufficiently large sample size, crucial for identifying both clinical and statistical significance. Furthermore, meticulous application of inclusion and exclusion criteria ensured standardized and comparable groups. Second, root surface measurements were examined along with EARR and proportions, and our linear measurements corroborated the root surface measurements, mitigating limitations associated with the absence of 3D imaging. Third, we assessed the impact of OT type and pulpal status on EARR within a single study. This approach is novel in the context of CA therapy, and we believe our findings offer a significant contribution to the field. Finally, whereas prior studies often focused on incisors, we opted for mandibular molars to promote standardization and address the scarcity of research dedicated solely to molars.
Within the limitations of this study, several conclusions can be drawn: All teeth exhibited varying degrees of EARR based on pre- and post-OT radiograph comparisons. While CAs may not eliminate EARR during OT, its prevalence and severity were lower with CAs than with FAs; nevertheless, treatment selection remains a clinician's judgment based on the individual case. A comparative analysis revealed that RFT, irrespective of the OT modality (CA or FA treatment), exhibited greater resistance to EARR than their VPT counterparts. Consequently, the potential for EARR in RFT may be a less critical factor in OT planning.
|
Functional Impairment in Individuals Exposed to Violence Based on Electronical Forensic Medical Record Mining and Their Profile Identification: Controlled Observational Study | 938c0316-6ec5-412c-a314-933d862d426d | 11470214 | Forensic Medicine[mh] | Many individuals experience both intended and unintended acts of violence, leading to physical and psychological trauma that impacts their everyday functioning . Intimate partner homicides constitute 1 in 7 homicide cases and account for 1 in 3 female homicides . In the United States, studies have reported that intimate partner violence (IPV) with health, legal, or work-related impacts affects 28.8% of women and 9.9% of men over their lifetime . Additionally, 76%-82% of young adults experience community violence during their lifetime . Evaluating functional impairment among individuals exposed to violence has significant implications for patient care and the judicial system. While the World Health Organization (WHO) and many countries worldwide recognized the functional approach to pathology and health through the adoption of the International Classification of Functioning, Disability, and Health (ICF) in 2001, little to no research has been conducted on functional impairment in individuals exposed to violence. In most European countries, forensic physicians examine individuals exposed to violence to assess the outcomes of violence at the request of magistrates, who can base their sentences on certified medical evidence . To evaluate the functional impairment following an assault, physicians rely on both objective, observable elements and subjective elements, such as patient-reported symptoms (eg, expressions of fear or pain). Legal authorities may underestimate the importance of subjective elements compared with objective ones . Indeed, such elements are not systematically reported in medical documents by physicians, or they may lack standardized, consensual measures of these elements. Additionally, psychological trauma is primarily assessed based on patients reporting subjective symptoms. As a result, sources of psychological trauma leading to functional impairment may be considered less than physical ones because they are perceived as less reliable in a judicial context compared with physical, observable symptoms . However, psychological trauma is a major source of functional impairment among individuals exposed to violence and can potentially lead to long-term health deterioration . A more systematic and comprehensive assessment of psychological trauma would enhance its early detection and facilitate the treatment of severe psychiatric conditions such as posttraumatic stress disorder . One solution to address these issues is to use scales that quantify the intensity of subjective elements relevant to assessing functional impairment. Such tools offer several advantages. First, they can simplify the complex analysis of multiple and variably reported symptoms into a cohesive index. Second, they allow for comparisons between different patients or between different measurements at various end points for the same patient. Finally, because it is possible to incorporate both physician-rated and self-rated assessments, these tools should provide information that more accurately reflects the response of individuals exposed to violence rather than solely the physician’s perspective. 
Such scales have been developed in other contexts to measure elements such as pain or daily instrumental activities , in both research settings and everyday clinical practice. However, these scales may not meet the expectations for examining individuals exposed to violence, as they can be too time-consuming or are not validated for this particular clinical setting. To be effective in everyday practice, scales should meet 4 criteria: (1) they should consist of brief, straightforward questions that are easily understood by patients during routine assessments; (2) they should contribute to the overall evaluation of functional impairment; (3) they should exhibit sufficient concordance, demonstrating intrarater reproducibility (consistency when the same physician asks the same questions) and interrater reproducibility (consistency between different physicians asking the same questions) in comparable situations involving violence; and (4) they should be easily comprehensible and allow for straightforward interpretation by legal authorities, such as judges or judicial police officers. This study aimed to investigate, under real-life conditions, the contribution of 6 scales to the global assessment of functional impairment and their concordance. These scales were based on a comprehensive, real-life typology of violence. Three self-rated scales measured subjective elements: perceptions of pain, fear, and life threat. These subjective elements were measured at 2 distinct end points: at the time of the assault and at the time of medical consultation. Two physician-rated scales assessed the intensity of functional impairment and the quality of interaction between the patient and the physician during the consultation. Finally, the psychosomatic index evaluated the relative importance of psychological trauma in functional impairment. The primary objective of this study was to characterize scales measuring subjective elements related to functional impairment in individuals exposed to violence within a judicial context. The evaluation of the scales’ reproducibility was based on a typology of situations involving violence, determined through the analysis of extensive multivariable observational data contained in electronic health records. We used this typology to control for potential confusion biases and variability in real-life practice settings, providing a comprehensive assessment of the relationships between practitioners and scale results. The secondary objective was to describe the degree of functional impairment and psychological trauma–related symptoms across this typology of situations involving violence. Overview In this retrospective study, data were extracted for all consecutive individuals exposed to violence examined by physicians in the Department of Forensic Medicine at Jean-Verdier Hospital in Bondy (Seine-Saint-Denis, France), in the Paris metropolitan area, between January 1, 2015, and December 31, 2015. These data are not publicly available due to privacy restrictions, as they contain information that could compromise the confidentiality of research participants. The data used in this study resulted from the combination of 2 types of sources. On the one hand, it included systematically collected, routine, and standardized characteristics such as age, sex, and circumstances of violence (eg, locations). 
On the other hand, it involved retrieving corresponding medical certificates from electronic health records and extracting additional characteristics not included in the standard collection through simple textual analysis techniques (eg, searching for terms, variants, and lexical fields while accounting for typing errors or spelling variations). These analyses are facilitated by the fact that the certificates are in digital form and standardized. The features extracted in this manner primarily concern psychological symptoms. Ethics Approval This research study was conducted retrospectively from data obtained for clinical purposes. An ethics approval of the project (reference number CERHUPO 2015-07-03) was given by the institutional review board (IRB 00001072) CPP Ile-de-France 2 on July 9, 2015. An information note for patients was displayed in each consultation room. This information was also available in the welcome booklet provided to each hospitalized patient. Patients were informed about the potential statistical use of their personal data, which would be anonymized and used solely for research purposes. They could opt out of this use by contacting the department manager. Inclusion and Exclusion Criteria Patients reporting deliberate assault and battery were assessed for eligibility. We excluded patients younger than 10 years, those reporting unintended violence or neglect, those examined more than 30 days after the reported incident, those examined by physicians with fewer than 300 patients per year, and those assessed for a second evaluation of their functional impairment. Only patients with complete data were included in the analysis. Description of the Scales Three self-evaluated Likert scales were used to measure pain, fear, and the perception of a life threat at 2 distinct end points: at the time of the assault and during the medical interview. Each Likert item had 7 levels (from 0=no pain to 6=maximum pain). The physician asked the questions in a general manner and could repeat them up to 2 times while providing a visual graduated scale for the patient to use in self-rating. Regarding the pain scale, the physician asked (translated from French) “On a scale of 0-6, where 0 means no pain and 6 represents the maximum amount of pain you can imagine, how would you rate the pain you experienced during the assault?” followed by, “Similarly, on a scale of 0-6, where 0 means no pain and 6 represents the maximum amount of pain you can imagine, how would you rate the pain you are experiencing now?” Questions about fear and the perception of a life threat were asked in a similar manner (see the “Questions Asked During Consultations” section in ). We introduced 2 physician-evaluated Likert scales, each with 7 levels from 0 to 6. One scale was used to measure the intensity of global functional impairment at the time of the consultation, with 0 indicating no functional impairment and 6 representing maximum functional impairment. The other scale was used to assess the quality of interaction between the physician and the patient. Physicians were asked to rate the quality of interaction from 0 to 6, with 0 indicating the worst quality and 6 representing the most satisfying interaction. Finally, we introduced the psychosomatic index, a 7-level scale from 0 to 6. Physicians rated this index from 0 if physical trauma was the only source of functional impairment to 6 if psychological trauma was the only source of functional impairment . 
Statistical Analysis Analysis Strategy We adopted a 2-stage strategy for analysis. First, we developed a typology of situations involving violence based on observational data. We identified homogeneous profiles by examining the most common characteristics among situations of violence, including the characteristics of the individuals exposed to violence, assailants, and the assaults themselves. We hypothesized that within a given type, the characteristics of all situations measured in real-life practice settings are sufficiently homogeneous to allow direct comparisons between practitioners in terms of scale results. Any potential intra- and interrater differences can therefore be attributed to the quality of the scales. Second, we paired patients based on their membership in the same homogeneous profile and evaluated the intrarater and interrater reproducibility of the scales. Details of a sensitivity analysis are provided in the “Sensitivity Analysis” section in , and information on the R package (R Foundation for Statistical Computing) used is provided in the “Software Used for Analyses” section in . Determining a Typology of Situations of Violence Situations of violence were characterized by the characteristics of individuals exposed to violence and assailants, as well as other circumstances of the assault. Among these, assault outcomes were defined by physical trauma and functional impairment. Functional impairment was quantified, as requested by French judicial authorities, in terms of days of total incapacity to work (TIW) . Details are provided in the “Defining Situations Involving Violence” section in . We used a clustering algorithm to identify homogeneous profiles of situations involving violence. The Partitioning Around Medoids algorithm was used to automatically detect groups or clusters of similar patients, provided the desired number of groups was specified. We applied the consensus clustering framework to determine the optimal number of clusters. Finally, we investigated the clinical characteristics underlying each violent situation profile. Scales Characterization First, we described the scales by median and IQR for the overall population and within each violent situation profile. Second, we compared scale scores with related subjective elements. We selected patient-reported symptoms for 5 psychological dimensions related to psychological trauma: sleep disorders (difficulty initiating sleep, frequent awakenings, or early-morning awakenings), loss of appetite, stress symptoms (recurrent memories, avoidance, hypervigilance), expression of pain, and expression of fear. We performed a comparative analysis of scale medians between physicians. We tested for global differences using the Kruskal-Wallis test and for pairwise differences between physicians using the Conover post hoc analysis . Adjustments were made using the Bonferroni method. Finally, we applied univariate linear regression models to determine whether each scale was associated with functional impairment as measured by TIW. Interrater Reproducibility For each pair of physicians, we randomly matched patients within the same violent situation profile. Thus, for each pair of physicians, we matched patients who experienced similar situations of violence. We used Kendall W , or the coefficient of concordance , to measure reproducibility between raters for the same violent situation profiles. Kendall W was corrected for ties (see formula in the “Kendall W Coefficient of Concordance” section of ). 
Intrarater Reproducibility For each physician, we randomly matched patients within the same violent situation profile and used Kendall W to measure the reproducibility of scales when a physician examined patients with the same violent situation profile. In this retrospective study, data were extracted for all consecutive individuals exposed to violence examined by physicians in the Department of Forensic Medicine at Jean-Verdier Hospital in Bondy (Seine-Saint-Denis, France), in the Paris metropolitan area, between January 1, 2015, and December 31, 2015. These data are not publicly available due to privacy restrictions, as they contain information that could compromise the confidentiality of research participants. The data used in this study resulted from the combination of 2 types of sources. On the one hand, it included systematically collected, routine, and standardized characteristics such as age, sex, and circumstances of violence (eg, locations). On the other hand, it involved retrieving corresponding medical certificates from electronic health records and extracting additional characteristics not included in the standard collection through simple textual analysis techniques (eg, searching for terms, variants, and lexical fields while accounting for typing errors or spelling variations). These analyses are facilitated by the fact that the certificates are in digital form and standardized. The features extracted in this manner primarily concern psychological symptoms. This research study was conducted retrospectively from data obtained for clinical purposes. An ethics approval of the project (reference number CERHUPO 2015-07-03) was given by the institutional review board (IRB 00001072) CPP Ile-de-France 2 on July 9, 2015. An information note for patients was displayed in each consultation room. This information was also available in the welcome booklet provided to each hospitalized patient. Patients were informed about the potential statistical use of their personal data, which would be anonymized and used solely for research purposes. They could opt out of this use by contacting the department manager. Patients reporting deliberate assault and battery were assessed for eligibility. We excluded patients younger than 10 years, those reporting unintended violence or neglect, those examined more than 30 days after the reported incident, those examined by physicians with fewer than 300 patients per year, and those assessed for a second evaluation of their functional impairment. Only patients with complete data were included in the analysis. Three self-evaluated Likert scales were used to measure pain, fear, and the perception of a life threat at 2 distinct end points: at the time of the assault and during the medical interview. Each Likert item had 7 levels (from 0=no pain to 6=maximum pain). The physician asked the questions in a general manner and could repeat them up to 2 times while providing a visual graduated scale for the patient to use in self-rating. 
Regarding the pain scale, the physician asked (translated from French) “On a scale of 0-6, where 0 means no pain and 6 represents the maximum amount of pain you can imagine, how would you rate the pain you experienced during the assault?” followed by, “Similarly, on a scale of 0-6, where 0 means no pain and 6 represents the maximum amount of pain you can imagine, how would you rate the pain you are experiencing now?” Questions about fear and the perception of a life threat were asked in a similar manner (see the “Questions Asked During Consultations” section in ). We introduced 2 physician-evaluated Likert scales, each with 7 levels from 0 to 6. One scale was used to measure the intensity of global functional impairment at the time of the consultation, with 0 indicating no functional impairment and 6 representing maximum functional impairment. The other scale was used to assess the quality of interaction between the physician and the patient. Physicians were asked to rate the quality of interaction from 0 to 6, with 0 indicating the worst quality and 6 representing the most satisfying interaction. Finally, we introduced the psychosomatic index, a 7-level scale from 0 to 6. Physicians rated this index from 0 if physical trauma was the only source of functional impairment to 6 if psychological trauma was the only source of functional impairment . Analysis Strategy We adopted a 2-stage strategy for analysis. First, we developed a typology of situations involving violence based on observational data. We identified homogeneous profiles by examining the most common characteristics among situations of violence, including the characteristics of the individuals exposed to violence, assailants, and the assaults themselves. We hypothesized that within a given type, the characteristics of all situations measured in real-life practice settings are sufficiently homogeneous to allow direct comparisons between practitioners in terms of scale results. Any potential intra- and interrater differences can therefore be attributed to the quality of the scales. Second, we paired patients based on their membership in the same homogeneous profile and evaluated the intrarater and interrater reproducibility of the scales. Details of a sensitivity analysis are provided in the “Sensitivity Analysis” section in , and information on the R package (R Foundation for Statistical Computing) used is provided in the “Software Used for Analyses” section in . Determining a Typology of Situations of Violence Situations of violence were characterized by the characteristics of individuals exposed to violence and assailants, as well as other circumstances of the assault. Among these, assault outcomes were defined by physical trauma and functional impairment. Functional impairment was quantified, as requested by French judicial authorities, in terms of days of total incapacity to work (TIW) . Details are provided in the “Defining Situations Involving Violence” section in . We used a clustering algorithm to identify homogeneous profiles of situations involving violence. The Partitioning Around Medoids algorithm was used to automatically detect groups or clusters of similar patients, provided the desired number of groups was specified. We applied the consensus clustering framework to determine the optimal number of clusters. Finally, we investigated the clinical characteristics underlying each violent situation profile. 
Scales Characterization First, we described the scales by median and IQR for the overall population and within each violent situation profile. Second, we compared scale scores with related subjective elements. We selected patient-reported symptoms for 5 psychological dimensions related to psychological trauma: sleep disorders (difficulty initiating sleep, frequent awakenings, or early-morning awakenings), loss of appetite, stress symptoms (recurrent memories, avoidance, hypervigilance), expression of pain, and expression of fear. We performed a comparative analysis of scale medians between physicians. We tested for global differences using the Kruskal-Wallis test and for pairwise differences between physicians using the Conover post hoc analysis . Adjustments were made using the Bonferroni method. Finally, we applied univariate linear regression models to determine whether each scale was associated with functional impairment as measured by TIW. Interrater Reproducibility For each pair of physicians, we randomly matched patients within the same violent situation profile. Thus, for each pair of physicians, we matched patients who experienced similar situations of violence. We used Kendall W , or the coefficient of concordance , to measure reproducibility between raters for the same violent situation profiles. Kendall W was corrected for ties (see formula in the "Kendall W Coefficient of Concordance" section of ). Intrarater Reproducibility For each physician, we randomly matched patients within the same violent situation profile and used Kendall W to measure the reproducibility of scales when a physician examined patients with the same violent situation profile.
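For readers who want the concordance measure in concrete form, the snippet below gives a minimal implementation of Kendall W with the standard tie correction. It is a generic sketch rather than the study's own code, and the assumed input layout (one row per rater or occasion, one column per matched patient) is a convention chosen for the example.

```python
# Minimal NumPy/SciPy implementation of Kendall's W corrected for ties.
# `ratings` has one row per rater (or per examination occasion) and one column
# per matched patient; this layout is an assumption made for the sketch.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape                                   # m raters, n matched patients
    ranks = np.vstack([rankdata(row) for row in ratings])  # average ranks for ties
    rank_sums = ranks.sum(axis=0)                          # R_i for each patient
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # Tie correction: for each rater, sum t^3 - t over groups of tied values.
    tie_term = 0.0
    for row in ratings:
        _, counts = np.unique(row, return_counts=True)
        tie_term += ((counts ** 3) - counts).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n) - m * tie_term)

# Example: two raters scoring six matched patients on a 0-6 Likert scale.
print(round(kendalls_w([[0, 1, 2, 3, 4, 6],
                        [1, 1, 3, 3, 5, 6]]), 2))
```

The same function serves both settings described above: for the interrater case the rows are two different physicians' scores, and for the intrarater case they are the same physician's scores on two matched patients from the same profile.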
The flowchart of inclusions is shown in . Complete and incomplete data were comparable in terms of age, sex, time to consultation, TIW, and all types of assaults (Table S1 in ). Determining a Typology of Situations of Violence The optimal data set partition identified 5 profiles (see the "Identification of Homogeneous Profiles" section in ; also see Figures S1 and S2 in ). presents the characteristics of each profile. Profile A included mostly individuals exposed to single assailants and low-impact assaults, resulting in lower levels of functional impairment (TIW median 3, IQR 1-4) and rarely causing physical injury (538/779, 69.1%, patients without any somatic traumatic lesions), with a majority being women (474/779, 60.9%). Profile B mainly included males exposed to violence (564/749, 75.3%) involving multiple (585/749, 78.1%) and often unknown (464/749, 61.9%) assailants. Profile C consisted primarily of situations involving police violence (567/719, 78.9%), involving young (median age 22, IQR 18-28 years) and predominantly male (687/719, 95.5%) patients held in police custody (678/719, 94.3%). Profile D comprised mainly individuals exposed to single assailants and high-impact assaults, resulting in higher levels of functional impairment (TIW median 4 days, IQR 3-6 days) and systematic physical injury (1092/1092, 100%, patients with somatic lesions), with a majority being men (852/1092, 78.0%). Profile E included mostly women (703/841, 83.6%) who were exposed to repeated assaults (679/841, 80.7%) perpetrated by intimate partners (678/841, 80.6%). Scales for each profile and patient-reported symptoms related to psychological trauma are described in .
Self-rated scales had higher scores at the time of the assault compared with the consultation time . Women who experienced repeated intimate partner violence (IPV; profile E) showed high scores for fear during both the assault (median 6, IQR 4-6) and consultation (median 5, IQR 2-6), as well as for the perception of a life threat during the assault (median 4, IQR 0-6) and consultation (median 2, IQR 0-6). Individuals exposed to violence involving multiple assailants (profile B) had high scores for the perception of a life threat during the assault (median 3, IQR 0-6). Young males exposed to police violence (profile C) reported low scores for pain during the assault (median 3, IQR 2-5) and low scores for fear both during the assault (median 3, IQR 0-5) and during the consultation (median 0, IQR 0-3). Women experiencing low-impact assaults (profile A) reported low scores for pain during the consultation (median 2, IQR 0-4). We found significant differences ( P< .001) in median ratings between physicians for each scale (Table S6 in ). Results for pain during the assault, comparing pairs of physicians in the overall population and within profiles of situations involving violence, are shown in (also see Table S3 in ). All physicians had examined more than 30 patients in each profile of situations involving violence (Table S2 in ). For pain during the assault, we observed significant differences between 16 out of 45 pairs of physicians in the overall population, while fewer than 4 pairs of physicians showed significant differences when assessed within profiles of situations involving violence . Results were similar for other self-rated scales, but most pairwise comparisons remained significant for physician-rated scales (Table S3 in ). All pain, fear, and life threat scales were significantly associated with functional impairment as measured by TIW . Reproducibility of Scales in the Same Violent Situation Profile Interrater Reproducibility Interrater reproducibility of scales was mild to good within the same violent situation profiles. Results for all scales are presented in Tables S4 and S5 in . Kendall W ranged from 0.43 (pain during assaults, paired physicians 7 and 9) to 0.66 (psychosomatic index, paired physicians 6 and 8). Intrarater Reproducibility Intrarater reproducibility of scales within the same violent situation profiles was found to be mild to good (Table S7 in ). The Kendall coefficient of concordance ranged from 0.46 (perception of a life threat during the assault, physicians 3 and 7) to 0.66 (psychosomatic index, physician 8).
Principal Findings In this study, we investigated tools for quantifying subjective elements related to functional impairment in individuals exposed to violence, based on a typology of situations involving violence derived from massively multivariate data. A clustering algorithm identified 5 remarkably stable profiles of violent situations, each with comparable sizes and characteristics consistent with clinical practice. This typology allowed us to evaluate the subjective scales with the assumption that within each type, the characteristics of violence and exposed individuals were homogeneous. Consequently, any remaining variability in scale results could reasonably be attributed to inter- and intrarater performance. All pain, fear, and life threat scales were significantly associated with functional impairment, indicating that they effectively measure elements contributing to the overall assessment of functional impairment. These scales highlighted different patterns of psychological trauma associated with the various profiles of situations involving violence. There were a few significant differences in ratings between physicians for self-rated scales within the same violent situation profiles. Finally, we found mild to good inter- and intrarater reproducibility for all scales within the same violent situation profiles. Patients were clustered consistently within intuitive clinical categories . Differences in scale scores between violent situation profiles should be examined in light of these clinical aspects . First, individuals exposed to violence involving multiple assailants (profile B) more frequently reported a perception of a life threat during the assault, which aligns with the high intensity of these assaults, often involving blows that caused falls (541/749, 72.2%) or being knocked to the ground (135/749, 18%). Interestingly, while the perception of a life threat was higher during the assault, it did not seem to persist over time and was lower at the time of consultation. Women exposed to IPV (profile E) displayed a different pattern of fear and perception of a life threat, with higher scores recorded both during the assault and at the consultation. This persistence of symptoms is consistent with these patients experiencing mostly repeated assaults (679/841, 80.7%) by known, intimate assailants, which increases the credibility and fear of future attacks. The scales were consistent with patients' reported symptoms such as fear, sleeping disorders, and loss of appetite, while providing richer and more comprehensible information. Second, young men held in police custody and reporting police violence (profile C) exhibited lower scores for fear both during the assault and at the consultation, as well as a lower score for pain during the assault . These lower scores may reflect underlying psychosocial behaviors of pain or fear denial in this population. These findings are consistent with their lower functional impairment measurements (median 2, IQR 1-3 days of TIW). However, it has been previously shown that their functional impairment was systematically evaluated at lower values and that they were examined at shorter intervals after the assault (median 5, IQR 3-9 hours) and under specific circumstances (in police custody).
There were differences in ratings between physicians in the overall population (Table S6 in ) for all scales and for most pairs of physicians (Table S3 in ). However, because all physicians were instructed to ask the questions in the same manner (see the “Questions Asked During Consultations” section in ), these observed differences were likely attributable to variations in the situations involving violence. For instance, some physicians more frequently consulted patients from cluster A than cluster D, possibly due to planning reasons or personal preferences. Supporting this interpretation, we found that within the same profiles of situations of violence, only a few significant differences for self-rated scales remained (Table S3 in and ). Self-rated scales were associated with functional impairment as measured by TIW; specifically, TIW increased with higher scale scores. This association underscores the clinical relevance of these scales, as subjective elements reported by patients themselves were directly linked to TIW, which is otherwise solely determined by physician assessment. These scales, now shown to be associated with functional impairment, could provide standardized access to a patient’s perspective and enhance clinical examination. The reproducibility of the scales within the same profiles of situations of violence was found to be mild to good, ranging from 0.43 to 0.66 for interrater and 0.46 to 0.66 for intrarater reproducibility across the 5 situations of violence profiles. Higher reproducibility (>70%) can be achieved with similar scales in a classic design, such as when patients are evaluated at a 2-week interval using the same scales. These differences could arise from evaluating patients in real-life conditions rather than in the controlled environment of a research study. Additionally, our study design made it challenging to determine whether strong specificities of individual situations persisted among matched patients. Such remaining specificities could lead to systematic differences in their scale scores, thereby reducing reproducibility. Increasing the number of situations of violence profiles helped test this hypothesis. As the number of profiles grew, the remaining specificities among patients within the same profiles decreased. A sensitivity analysis revealed slightly higher reproducibility values when differentiating 800 profiles (Tables S5 and S7 in ). This suggests that the reproducibility results were not underestimated and that the situations of violence profiles were sufficiently detailed. Reproducibility was assessed across similar situations involving violence rather than individual patients seen at different time intervals to stay close to real-life conditions, as a classic design was not feasible in this context. First, a classic design could not assess the reproducibility of subjective elements at consultations because a first consultation for violence cannot be replicated. Second, if 2 consultations occurred close together, patients might remember their responses for subjective elements during the assault, whereas if consultations were distant, memories might fade, affecting the accuracy of their responses. This study has several shortcomings. First, as a monocentric study, it may not fully represent forensic practice across the country. 
However, the population served by the study is among the most diverse in France in terms of geographical origins, socioeconomic status, and culture, according to the National Institute for Statistics and Economic Studies (Insee). Second, many patients (1000/5180, 19.31%) were excluded due to missing data. However, missing data were comparable for baseline characteristics. Third, combining physical and psychological impairments poses a challenge, and we may have lacked the power to detect small effect sizes. Finally, our definition of situations involving violence could have been enhanced by including information related to psychological trauma. However, psychological trauma symptoms were not systematically reported by physicians and could neither be reliably used nor provide informative data for determining the typology. To our knowledge, this study is the first to investigate the quantification of subjective elements related to functional impairment in individuals exposed to violence within the context of daily medical examinations in a judicial setting. Although the study is based on data collected 8 years ago, it remains relevant and current, as the activity of the forensic medicine department has not significantly changed since 2015, neither in volume nor in the reasons for the examination. The missions remain the same, and the practices have not significantly evolved in either direction. No modifications to the French penal code that could affect the relevance of this study have been noted either. Quantification of subjective elements provided direct access to patients' opinions in a synthetic and reliable manner and revealed significant differences between situations involving violence, such as variations in fear and perception of a life threat among those exposed to IPV. Further studies are planned to explore whether these tools can lead to straightforward interpretations by magistrates and judicial police officers and to assess their impact on judicial decisions. Conclusions The pain, fear, and life threat scales were correlated with higher functional impairment, aligning with expert knowledge, and demonstrated fair reproducibility in real-life conditions for similar situations of violence. Subjective elements related to functional impairment in individuals exposed to violence can be quantified using Likert scales during medical interviews. High scores in fear and perception of a life threat, both during the assault and at the medical consultation, suggest greater impairment related to psychological trauma, particularly following IPV.
Telepathology in Nigeria for Global Health Collaboration

Telepathology services in West Africa are grossly inadequate despite scarce pathology facilities and pathologists in this region . It is documented that in Africa, pathology-core services are hampered by lack of equipment, inefficient processes and inadequate personnel . An earlier study reported that the pathologist-to-population ratio is in excess of 1 to 2.5 million people in most regions in Africa . In Nigeria there are approximately 105 pathologists for an estimated population size of 200 million people, supporting the pathologist-to-population ratio reported previously . However, this ratio is insufficient to meet the needs of quality and timely pathology services . Consequently, a small percentage of cancers are confirmed by pathologists, resulting in clinicians making medical decisions without pathology reports, which often leads to poor treatment outcomes as a result of misdiagnosis . Several attempts have been made to institutionalize telepathology in Sub-Saharan Africa, focusing primarily on teaching and research purposes . In 2013, a telepathology service was institutionalized at the Pathology Laboratory of Kamuzu Central Hospital (KCH) in Lilongwe, Malawi, to support local pathologists for both research and clinical care . This has led to improved pathology service for the region . Functional telepathology services are, however, currently unavailable in West African countries, including Nigeria, the most populous country in Africa . Telepathology involves the use of telecommunications to send image-rich data between remote locations for clinical diagnosis, education, and research . Generally, there are four basic platforms for telepathology: (1) static images, (2) whole-slide scanning, (3) dynamic non-robotic tele-microscopy, and (4) dynamic robotic tele-microscopy . The static image platform requires appropriate selection of relevant diagnostic fields and only limited technical infrastructure (an internet connection, a microscope, and a digital camera), which is a great benefit. This method, however, requires technical knowledge of appropriate fields for capture. It differs from whole-slide imaging systems, which allow the pathologist to see the entire specimen at a range of magnifications. This obvious advantage comes with considerable costs, which include the purchase of slide scanning equipment, increased information technology (IT) support, and server space to allow data storage. A system that provides the benefit of whole-slide review with simpler technology at a much lower cost than whole-slide imaging systems is dynamic non-robotic tele-microscopy. It involves the transmission of video images across any of several internet-based teleconference systems; however, it requires a skilled local pathologist to maneuver the microscope and depends on image resolution, camera quality, and internet speed. Finally, a system that allows the consulting pathologist to control the objective and stage of microscopes remotely is known as robotic tele-microscopy, which is often technically challenging and costly . Currently, the cost of pathology services in Nigerian government facilities ranges from US $10 to US $20 (unpublished estimate) due to repeat sections and occasional immunohistochemistry (IHC) requests, a service that is beyond the reach of the majority of the population.
These facilities are available mainly in tertiary health centers and a few private laboratories located in the large cities, distant from the general population in villages and local communities. Delays in obtaining reports, with occasional misdiagnosis or incomplete diagnosis by local pathologists, have resulted in poor treatment outcomes. Therefore, there is a dire need for a sustainable system or model that can provide second opinions through remote consultations. The purpose of telepathology was to generate high-quality virtual images, improve the turnaround time for accessing images remotely, and promote team review of target diseases, correct interpretation of pathology, and consensus for clinical and research use. This will be achieved through robust international consultation and clinical support, in addition to training and research, to improve local health outcomes. Establishment of basic telepathology at two Nigerian sites Leveraging an existing international research collaboration between Northwestern University (NU) and two Nigerian institutions, the Jos University Teaching Hospital (JUTH) and the Lagos University Teaching Hospital (LUTH), we obtained an internal grant from the Northwestern Harvey Institute for Global Health for the purchase of telepathology facilities for the two participating institutions. Novel to this study was the use of a portable digital slide scanner (Grundium Ocus MGU-00003), a software image viewer (Aperio Leica RUO version 12.4.3.5008), and a password-secured cloud server for uploading, transferring, and storing images. To achieve a seamless process, doctors, technicians, and computer operators from the two Nigerian institutions attended training sessions on installation and operation of the system and on how to resolve challenges. Institutional support was obtained from the leadership of the institutions to develop telepathology operational policies for patient care so as to ensure its sustainability locally. Establishment of training platform and standard procedure for Nigeria-Northwestern telepathology network Before the hands-on telepathology procedure, Northwestern University (NU) Lurie Cancer Center pathologists first trained pathologists, residents, and laboratory technicians at the two large Nigerian collaborating hospitals, the Jos (JUTH) and Lagos (LUTH) University Teaching Hospitals, using a total of five virtual training sessions with question-and-answer components. The standard procedures that were established include proper handling of instruments, image scanning and review, patient deidentification, secure data storage, data sharing, and image reading and interpretation. Our immediate goal, as a collaborating pathology team (LUTH, JUTH and NU), was to evaluate all collected research cases from the Nigerian pathology facilities to meet our current NIH-funded research grant objectives as well as improve diagnostic accuracy at the Nigerian hospitals. Application of a digital whole-slide scan system With a digital slide scanner, digital images of a whole slide at multiple magnifications were made. These files were stored on a computer or external storage device before uploading to a secure site, thus creating a local copy that can be viewed offline after file synchronization between the three collaborating sites.
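As an illustration of the scan, store, and upload workflow just described, the sketch below stages deidentified scan files into a dated folder and records a checksum for each file before transfer to the cloud server. The folder names, the .svs extension, and the manifest format are assumptions made for the example; they are not the collaboration's actual configuration, and the real transfers used a commercial cloud service.

```python
# Illustrative "store, then upload" staging step for deidentified whole-slide
# scans: copy each file into a dated staging folder and record a SHA-256
# checksum so the receiving sites can verify the transfer after synchronization.
# Paths, extensions, and the manifest layout are assumptions for this sketch.
import hashlib
import shutil
from datetime import date
from pathlib import Path

SCANNER_OUTPUT = Path("scanner_output")              # where the scanner writes slide files
STAGING = Path("cloud_staging") / date.today().isoformat()

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash large slide files in chunks to avoid loading them fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def stage_slides(src_dir: Path, staging_dir: Path) -> None:
    if not src_dir.is_dir():
        raise SystemExit(f"No scanner output folder found at {src_dir}")
    staging_dir.mkdir(parents=True, exist_ok=True)
    with (staging_dir / "manifest.txt").open("a") as manifest:
        for slide in sorted(src_dir.glob("*.svs")):
            shutil.copy2(slide, staging_dir / slide.name)
            manifest.write(f"{slide.name}\t{sha256_of(slide)}\n")

if __name__ == "__main__":
    stage_slides(SCANNER_OUTPUT, STAGING)
```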
The workings of this system were used in achieving the path-core goals for the ongoing National Institutes of Health (NIH) HIV and malignancies grant (NIH U54CA221205 and R21TW12092 grants), where cervical cancer lesions were processed at the two collaborating Nigerian institutions (JUTH and LUTH), scanned, stored, and uploaded for viewing by the international collaborators at NU. Telepathology at JUTH and LUTH Digital imaging services are now becoming an important integral component of our workflow. It has been found to be useful for patient enrollment, digital archiving, and pathology review purposes in the U.S. and other western countries . Digital central pathology review and quality assurance activities of our Nigerian slides are now routinely performed at Northwestern University . Digital pathology has been used extensively for pathology review of cervical cancer cases for our U54 research project enrollment .
We have also begun the process of scanning sets of Hematoxylin and Eosin (H&E) slides guided by a set of protocols, so as to have high-quality image sets that will be linked to REDCap for future image analysis (IA) and validation projects which are currently being planned. The use of this technology provides many advantages: image quality superior to static images, elimination of the distribution of glass slides, resolution of logistical issues with rapid turnaround times, reduced costs from tracking and lost or broken slides, and accessibility anytime via the Internet and offline after download. A digital pathology review application facilitates a complete digital pathology review process and provides a digital mechanism for expert pathologists to review cases consisting of digital images, pathology reports, and custom-built case review forms. Our established imaging infrastructure has the capacity to generate up to 50 whole slide images at 40x magnification per day. All electronic histological image data generated by local sites were stored and served from a cloud-based server. The telepathology system is an invaluable medical resource for us. With the inadequate pathologist-to-population ratio in most African countries, worsened by a lack of pathologist specialization, telepathology has become a tool that will revolutionize pathology services, thus improving clinical care in most African countries . In the past two years, over 200 cases of cervical cancer slides collected from JUTH and LUTH were scanned and analyzed through this process. Digital slides were reviewed by pathologists from the three collaborating institutes (JUTH, LUTH and NU). The advantages of this system were that we were able to: 1) arrive at a consensus on histopathology analysis and evaluation, including tumor type, grade, and differentiation; 2) accurately measure tumor size, dimension, extension, necrosis, and other pathologic features; 3) capture high-quality screenshot images for publication, education, and illustration; 4) reach a fast turnaround time of 3 to 5 days, as opposed to a few months, for inter-institute slide evaluation; and 5) build a set of scanned virtual slides that have become valuable educational materials for local pathologists, researchers, and trainees, who can easily access these deidentified virtual slides. It is hoped that this current telepathology program developed and deployed in Nigeria can provide a model for other West African countries, as well as other countries worldwide. The initial high cost of purchasing and installing the equipment, and of training, is small compared with its eventual positive impact on health outcomes and the knowledge transfer gained. The usually unquantifiable large costs of specialist training abroad, of slide and tissue transportation, of repeat cut sections, of delayed or inaccurate pathology diagnosis, and of the attendant adverse health outcomes are now reduced. Impact of telepathology on diagnosis, medical research, and global health training and education The internet telecommunication personnel were on the ground to train and guide us through the installation and deployment of the scanner and viewer. Subsequently, we obtained second opinions from international colleagues, thus improving the quality of clinical diagnosis. We have noticed a reduction in the turnaround time of pathology reports, which has improved quality assurance due to multiple consultations. This is vital for diagnostically difficult cases.
It has afforded our local staff (pathologists, resident doctors, and laboratory technicians) the opportunity for much-needed improvement in their knowledge and skills through our international collaborators. Our government and institutions are gaining high-quality knowledge without the expense of overseas travel and lodging. With improved quality and timeliness of diagnosis, clinicians now make patient management decisions based on timely and reliable pathology reports. This will invariably improve the clinical outcomes of our patients. With our current international collaboration and the improved quality of our reports, we expect better health outcomes, and our local institutions will improve their reputation by producing quality pathology reports comparable to those obtained in developed countries. This will improve patronage and clientele. International collaborative research is now easier, less expensive, and seamless. The transportation of tissues for further analysis is no longer needed, as digital slides of comparable quality and representation can be viewed remotely in a synchronous or asynchronous manner. Hopefully, with the proliferation of such telepathology services in many countries in Africa and continued collaboration from developed nations, an overall improvement in global health will be achieved through reduced disparities in efficient and quality pathology services. Challenges Although the installation of the scanner and viewing software was uncomplicated, several challenges can be faced in building a telepathology system in developing countries. Most image-viewing software can be accessed freely online, but selecting the right software to match different scanners can be problematic. This requires consulting with vendors and experienced pathologists to find one that fits your needs. For example, we initially considered the NDP2 viewing system used by the Northwestern team. It cost approximately US $1,000 for one month of access, which required monthly renewal for the two sites. With the NDP2 software also came the challenge of image formats (SVS, NDP and TIF). The NDP2 software could only view images saved in TIF format, but these images were not stable, and the sharpness was lost over time. We later resorted to the use of the Aperio (Leica RUO version 12.4.3.5008) viewer, which came with a free online download. The key instrument for telepathology is the scanner, which cost US $14,500 each for the two Nigerian sites. Other costs include external storage and a web cloud server. Another challenge faced was the time for scanning and uploading images into Dropbox or other commercial cloud storage. It took 5 to 10 minutes to scan a slide depending on the tissue size, but it might take 10 to 30 minutes to upload each image to Dropbox depending on image size and internet quality. For a seamless process, we obtained a router with internet data worth approximately US $15 for a session lasting 15 to 20 hours. This may add some unbearable costs for undeveloped regions and hospitals. A constant power source was mandatory for a seamless process during scanning and uploading onto the cloud server. This is a major challenge in our setting, as we had to augment the local power source with external diesel-powered generators.
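The per-slide timings above translate directly into a daily throughput budget; the short calculation below works through that arithmetic. The 8-hour session length and the strictly sequential scan-then-upload assumption are simplifications for illustration, since in practice an upload can run while the next slide is being scanned.

```python
# Back-of-envelope throughput estimate from the per-slide timings reported
# above (scan 5-10 min, upload 10-30 min). The 8-hour session and the strictly
# sequential workflow are simplifying assumptions for this sketch.
SESSION_MIN = 8 * 60
scan_min = (5, 10)       # minutes to scan one slide (best, worst)
upload_min = (10, 30)    # minutes to upload one slide (best, worst)

sequential_best = SESSION_MIN // (scan_min[0] + upload_min[0])
sequential_worst = SESSION_MIN // (scan_min[1] + upload_min[1])
print(f"Sequential scan-then-upload: about {sequential_worst}-{sequential_best} slides per 8-hour day")

# If uploading overlaps with scanning, throughput is limited by the slower step (the upload).
overlapped_best = SESSION_MIN // upload_min[0]
overlapped_worst = SESSION_MIN // upload_min[1]
print(f"Overlapped scanning and uploading: about {overlapped_worst}-{overlapped_best} slides per 8-hour day")
```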
The transition from a conventional pathology platform to a telepathology platform was time consuming at the beginning, and it requires training of all key personnel involved in the process (laboratory technicians, resident doctors, pathologists, and IT personnel) for efficiency and effectiveness. Continuous education of all personnel involved helped address these issues. Lastly, most countries have yet to provide clear-cut regulations regarding the liability of physicians delivering care across different institutes or internationally recognized borders. However, encrypting patients' data on the Internet will limit the possibility of legal issues. Diagnostic support rather than consultation is currently being upheld. In the future, the possibility of integrating smartphones, mobile devices, and other multimedia into this system could make telepathology an easy, economically viable tool for promoting pathology practice, medical education, and research. There remains a need to be vigilant for government regulations regarding the protection of human subjects' information in trans-border digital pathology. Furthermore, research into cost-benefit analysis, effects on the development of local expertise, and improvement in service utilization over time should be carried out and published to assess the progress achieved.
We report and provide our experience, challenges, and gains with the installation and use of telepathology equipment for research, training, and clinical care in Nigeria through networking with Lurie Cancer Center pathologists at Northwestern University in the United States. Though associated with an initial high cost, the equipment and processes involved are cost-saving in the long run and revolutionary in positively impacting health outcomes and enhancing research in resource-constrained settings where pathology services are scarce.
Policy makers in resource-constrained settings should consider telepathology a necessity in patient care and invest in this critical technology to improve their health indices. |
Impact of the COVID-19 pandemic on breast cancer surgeries in a Canadian population | 50d7c8e3-0d62-4f35-99fe-e862d671bed3 | 11787185 | Surgical Procedures, Operative[mh] | The COVID-19 pandemic has had a significant impact on cancer surgery worldwide. During the first months of the pandemic, elective operations, including cancer surgeries, were postponed to reallocate healthcare resources. As the pandemic continued, there were periods of increased hospitalizations. This was coupled with the exacerbation of labour shortages in health human resources due to attrition and burnout. As such, cancer surgery volumes were impacted during these years. Breast cancer (BC) surgery is unique among cancer operations as many surgeries can be performed in an outpatient setting. Furthermore, immediate breast reconstruction at the same time as oncologic surgery is increasingly common. For this reason, the impact of pandemic restrictions on surgical care may differ from what is observed in other oncologic specialties, where patients may require prolonged hospitalization or even intensive care unit use. A systematic review of 74 studies investigating the impact of the COVID-19 pandemic on BC screening and diagnosis showed a reduction of more than 25% in the number of BC diagnoses. There was a higher proportion of patients with more advanced BC stage at diagnosis during the pandemic, which may be due to fewer early-stage BCs being diagnosed while screening was completely suspended. While several studies reported changes in BC surgery volumes, these have mostly been single-institution studies, and few have described volume changes by type of BC surgery. There has been little information about the impact on BC surgical care based on more contemporary and longitudinal data. The purpose of this study was to assess the impact of the COVID-19 pandemic on the volumes and types of BC surgery using population-based data.
Study design We conducted a retrospective population-based cohort study using data linked from prospectively maintained administrative databases stored at ICES in Ontario, Canada. Under the Canada Health Act, Ontario’s 14 million residents receive universally accessible and publicly funded health care through the Ontario Health Insurance Plan (OHIP). ICES is an independent, non-profit research institute whose legal status under Ontario’s health information privacy law allows it to collect and analyze health care and demographic data, without consent, for health system evaluation and improvement. This study was reported following the Reporting of Studies Conducted using Observational Routinely Collected Health Data (RECORD) statement. The use of data in this project is authorized under Section 45 of Ontario’s Personal Health Information Protection Act (PHIPA) and does not require review by a Research Ethics Board. In accordance with institutional policies, a Research Ethics Board application at the University Health Network, Toronto, Canada was submitted and approved prior to study commencement (22–5809). Data sources The Ontario Cancer Registry (OCR) is a provincial database including all patients with a cancer diagnosis (excluding non-melanoma skin cancer) in Ontario since 1964 and was used to identify incident BC cases, including biomarker status and stage data. The Registered Persons Database is a population-based registry maintained by the Ontario Ministry of Health and Long-Term Care which collects information on all individuals covered under OHIP and was used to obtain demographic data and vital status. The following databases were used to collect information on health services received: the Canadian Institute for Health Information Discharge Abstract Database (CIHI-DAD) for acute inpatient hospitalizations; the National Ambulatory Care Reporting System (NACRS) for same-day surgery admissions, emergency room visits, and oncology clinic visits; and the OHIP Claims Database for billing from health care providers, including physicians, groups, laboratories, and out-of-province providers. The Cancer Activity Level Reporting (ALR) database was used to determine which chemotherapeutics and other medications were administered to patients. Dataset details are available in Supplementary Table 1. Datasets were linked using unique encoded identifiers and analyzed at ICES. The analyst (QL) had complete access to all anonymized datasets used in this study to create the study cohorts, perform data linkage, and conduct analyses. Study population & cohort We identified patients with a new diagnosis of BC in the OCR using International Classification of Diseases for Oncology (ICD-O.3) codes between 2017 and 2022. Patients were included if they had an ICD-O behaviour code of 3 (breast cancer). We excluded patients with stage 0 BC (ductal carcinoma in situ/DCIS) as this was not reliably captured in these datasets. Stage was determined from a separate variable (best stage group), utilizing a computer algorithm that combines clinical and pathologic data on tumour size and nodal status to generate one stage. For example, patients who underwent neoadjuvant chemotherapy and had a complete pathologic response would be staged according to their final pathologic stage rather than their initial clinical stage. As stage data were not available for the later part of our study period, we did not set limits with respect to stage (i.e. patients with stage 4 BC were included).
Patients whose date of death preceded the date of diagnosis were excluded. Exposure The date of BC surgery was the primary exposure of interest. Three time periods were defined: pre-pandemic/baseline (January 1, 2018, to March 14, 2020), immediate pandemic (March 15, 2020, to June 13, 2020), and peri-pandemic (June 14, 2020, to June 25, 2022). Outcome measures The primary outcome was the volume of BC surgeries. We determined BC surgeries from billing codes, which were classified into three procedure types: lumpectomy, mastectomy without reconstruction, and mastectomy with immediate breast reconstruction. Reconstruction was considered alloplastic if it used a tissue expander or a direct-to-implant approach, and autologous if it was tissue flap-based. Delayed reconstruction was not captured in this study. Covariates Age and sex were obtained from the Registered Persons Database. Rural residency was determined using the Rurality Index of Ontario based on the postal code of the patients’ primary residence. Socioeconomic status was captured using the Material Deprivation Index, a composite index of the ability of individuals or households to afford consumption goods and activities that are typical in a society at a given point in time, categorized into quintiles. Immigration status was identified using the Ontario portion of Immigration, Refugees and Citizenship Canada’s (IRCC) permanent resident database, which identifies all legal immigrants who arrived in Ontario from 1985 onwards. Disease stage data were obtained from Cancer Care Ontario through the collaborative staging data collection system, which collects data on tumour size, number of positive lymph nodes, and involvement of specific tissues or adjacent structures, and generates a “best stage” through a computer algorithm combining clinical and pathologic data. Biomarker status was subdivided into three categories based on estrogen receptor (ER), progesterone receptor (PR), and HER2 status: (1) hormone receptor (either/both of ER and PR) positive and HER2 negative, (2) HER2 positive, or (3) triple negative; a fourth category captured missing data. An “unknown” category was created for each variable. Statistical analysis Descriptive statistics were used to define the baseline characteristics of patients who underwent surgery in each period within 2 months before and 9 months after BC diagnosis. Categorical variables are reported as absolute numbers (n) and proportions, and continuous variables as medians with interquartile ranges (IQRs). Chi-square tests were used to compare categorical variables, and ANOVA (for means) or the Kruskal–Wallis test (for medians) was used to compare continuous variables, as appropriate. The volume of BC surgery performed per week and the types of BC surgery procedures (lumpectomy, mastectomy, mastectomy with immediate reconstruction) were determined and compared between the three time periods. Segmented negative binomial regression models were used to quantify the weekly surgical volume trend within each period and the change in mean volume between time periods. All analyses were performed using SAS Enterprise Guide, version 7.1 (SAS Institute, Cary, NC). Results were considered statistically significant if p < 0.05. No adjustment was applied for multiple significance testing.
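The segmented regression described above can be illustrated with a brief sketch. This is not the authors' analysis code (the analyses were performed in SAS Enterprise Guide); it is a minimal Python/statsmodels example of a segmented (interrupted time-series) negative binomial model for weekly surgery counts, with a hypothetical input file and column names.

```python
# Illustrative sketch only: a segmented negative binomial model for weekly counts.
# The data frame, file name, and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per week with columns
#   weekly_count - number of BC surgeries performed that week
#   week         - consecutive week index over the whole study
#   immediate    - 1 if the week falls in the immediate pandemic period, else 0
#   peri         - 1 if the week falls in the peri-pandemic period, else 0
#   week_imm     - weeks elapsed since the start of the immediate period (0 before)
#   week_peri    - weeks elapsed since the start of the peri-pandemic period (0 before)
df = pd.read_csv("weekly_bc_surgery_counts.csv")

# Level terms (immediate, peri) capture shifts in mean weekly volume between
# periods; slope terms (week, week_imm, week_peri) capture the trend within each period.
model = smf.glm(
    "weekly_count ~ week + immediate + week_imm + peri + week_peri",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()

print(model.summary())
```

Exponentiated period coefficients can be read as rate ratios for the change in mean weekly volume relative to baseline, which is the type of between-period comparison reported in the results.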
The study cohort consisted of 50 440 surgeries performed among 44 226 patients. The characteristics of patients who had breast surgery in each period are presented in Table . During the immediate pandemic, the median patient age was 62 years. The proportion of patients who had surgery and were identified as immigrants was slightly higher during the peri-pandemic period (17.5%) vs the immediate (16.6%) and pre-pandemic periods (16.5%) (p < 0.01). Data for receptor status were missing for 19,364 (38.4%) surgeries and stage data were missing for 8,317 surgeries (16.5%). These data were missing mostly from the most recent peri-pandemic period, due to the lag in updating each of these databases; thus, comparisons were made only between the pre-pandemic and immediate pandemic periods. A higher proportion of patients who had surgery during the immediate pandemic period had stage 2, 3, or 4 BC compared with the pre-pandemic period (p < 0.01). During the immediate pandemic period, a higher proportion of patients with triple-negative and HER2-positive BC and a lower proportion of patients with hormone receptor-positive BC had surgery compared to the pre-pandemic period (p < 0.01). There was a significantly higher proportion of patients who received neoadjuvant chemotherapy (15.2% vs 10.6%, p < 0.01) and neoadjuvant endocrine therapy (10.6% vs 2.2%, p < 0.01) in the immediate pandemic period compared to baseline. Figure a shows the weekly BC surgery volumes in Ontario over the pre-, immediate, and peri-pandemic periods with estimated COVID-19 hospitalizations. During the immediate pandemic, there was a 16.9% relative reduction in the mean total number of BC surgeries performed weekly compared to baseline (immediate pandemic: 180.5 ± 32.5 vs pre-pandemic: 217.1 ± 43.7; p = 0.028). Based on the regression model, the weekly volume of BC surgeries returned to pre-pandemic levels in June 2021. During the peri-pandemic period, despite increases in COVID-19 hospitalizations, there was no drop in BC surgery volumes like that observed during the immediate pandemic period, and the number of weekly BC surgeries increased. Surgical volumes were further analyzed by type (Fig. b), showing that during the immediate pandemic, there was a larger drop in the number of lumpectomies than in mastectomies. The increase in breast surgery volume was driven mostly by the increase in the volume of lumpectomies over time. Type of breast surgery The proportion and volumes of lumpectomies and mastectomies performed during each of the three study time periods are depicted in Fig. . There were 281 surgeries (0.56%) that could not be classified as lumpectomy or mastectomy. Mastectomies represented a significantly higher proportion of BC surgeries in the immediate (36.3%; p < 0.001) and peri-pandemic periods (32.4%) compared to the pre-pandemic period (31.1%; p < 0.01). The proportion and volumes of mastectomies with and without immediate breast reconstruction performed during each of the three study time periods are depicted in Fig. . Of the patients undergoing mastectomy, there was a decrease in the proportion of mastectomies with immediate reconstruction during the immediate pandemic period (14.7%) compared to the pre-pandemic period (17.0%). However, this decrease was not statistically significant (p = 0.099). The proportion of mastectomies with immediate reconstruction was significantly higher in the peri-pandemic period compared to the pre-pandemic period (20.1% vs. 17.0%; p < 0.01).
Figure highlights the volumes of alloplastic and autologous immediate breast reconstruction. During the immediate pandemic, there was a relative decrease in the proportion of autologous reconstruction surgeries (16.8%; p = 0.08) with a subsequent increase during the peri-pandemic period (25.3%; p = 0.34) compared to baseline (23.7%). However, these changes were not statistically significant compared to pre-pandemic volumes.
In this large population-based study, we described volumes and types of BC surgery across Ontario, Canada during the immediate and peri-pandemic periods. As expected, in the first three months of the pandemic, there was a significant reduction in the volume of BC surgeries performed, with an increased proportion of mastectomies compared to baseline. Despite this, the rates of immediate breast reconstruction did not significantly decrease. It took over one year for BC surgery rates to recover as the volume did not return to pre-pandemic levels until June 2021. However, this recovery persisted, and volumes continued to rise despite increases in COVID-19 hospitalization rates due to subvariants. The finding of a sustained increase in BC surgeries during the peri-pandemic period is important, particularly when comparing it to the patterns of other oncologic surgeries. In a study of all cancer directed surgeries in Ontario, as the number of COVID-19 hospitalizations increased, there was a drop in surgical volumes leading to a backlog. For example, during the “second COVID-19 wave” in January 2021, there was a 22% decrease in the mean cancer surgical volume with hospitals performing 568 fewer cancer operations per week. This led to an additional backlog of 18 737 surgeries . We hypothesize that the increase in BC surgery volumes after the initial pandemic period may be due to a variety of factors. The nature of breast surgery as predominantly an outpatient procedure is protective from a resource utilization standpoint as it does not rely on inpatient admission capacity or extensive peri-operative care. Other studies have shown that the pandemic accelerated the adoption of an outpatient approach for many general surgery operations, including breast surgery procedures like mastectomy. Other innovative care approaches such as increasing the use of regional anaesthesia may also have contributed to this transition. The increase in volumes may also be driven by the resumption of screening, leading to a relative increase in BC diagnoses, but this remains to be seen as further research with longer term outcomes is needed . Furthermore, data shows that the pandemic exacerbated barriers to accessing mammographic screening for immigrants and people with lower income . The intersection of COVID-19 restrictions and sociodemographic variables is another area that warrants further study. These data also show that the characteristics of patients undergoing surgery during the immediate and peri-pandemic periods were different than baseline with respect to neoadjuvant therapy. This finding is consistent with other studies that reported more patients received neoadjuvant endocrine therapy at the start of the pandemic . The aim was to safely delay surgery during the period when surgical resources were scarce, and this approach was endorsed by many surgical society guidelines and expert consensus panels . Further, more breast surgery patients in the immediate pandemic period underwent neoadjuvant chemotherapy compared to baseline. This is likely due to the prioritization of these patients to adhere to the recommended surgical intervention timeline of 4 to 8 weeks post-chemotherapy . Mastectomies accounted for a higher proportion of BC surgeries performed during the immediate pandemic period. This is likely confounded by the prioritization of patients with high-risk disease (e.g. those post-neoadjuvant). 
This is confirmed by the finding that although there was a decrease in the number of patients receiving radiation therapy, the proportion of patients who received regional nodal radiation was higher than in the pre-pandemic time period. It is also possible that during the immediate pandemic period, patients who otherwise would have been eligible for lumpectomy elected to undergo mastectomy, despite higher rates of complications, in order to minimize additional hospital visits for further surgery or to avoid radiation therapy. However, this was not reflected in qualitative studies on the patient or surgeon experience during the pandemic. The fact that patients were still able to access immediate breast reconstruction during the immediate pandemic and that the proportion of patients who underwent mastectomy with immediate reconstruction continued to increase is encouraging from a clinical standpoint. Previous studies show that patients undergoing immediate breast reconstruction experience less suffering and pain and have better psychosocial well-being than patients who undergo delayed reconstruction. Recently, same-day mastectomy with immediate alloplastic reconstruction has been increasingly used, as more studies show safety and improved patient satisfaction. The pandemic may have increased the adoption of same-day mastectomy with reconstruction protocols to better utilize healthcare resources; however, this is another area that warrants further investigation. There was, however, a change in the type of reconstruction being offered, with a switch from autologous approaches, which do require hospital admission and intensive monitoring, to alloplastic approaches. A single-institution retrospective study in New York covering January 1, 2019, to June 30, 2021, reported a decrease in breast reconstruction volumes in 2020 with a subsequent increase in volumes in 2021. There was also a change in the type of reconstruction performed during the pandemic in 2020, with a 43% decrease in autologous breast reconstruction and a 27% decrease in two-stage reconstruction with tissue expander placement. Our data showed a similar trend with a decrease in autologous reconstruction, but this was not significant, likely due to the low volume of autologous cases being performed. Even among patients undergoing alloplastic reconstruction, there may have been a shift from using tissue expanders to direct-to-implant reconstruction. Unfortunately, as physician billing codes were used to determine reconstruction, it would not be possible to assess this within our dataset since the same code is used for both procedures. This study has some limitations. We used a retrospective study design and administrative healthcare datasets. Some patient and disease details, such as stage and biomarker status in the more recent era, were not available due to a lag in administrative data capture and reporting. We were also unable to evaluate the impact on surgical care for patients who had DCIS or high-risk lesions such as atypia, as ICES databases do not routinely collect this information. However, this study encompasses data from Ontario, Canada's largest province, accounting for almost 40% of the country's population. As such, this is one of the largest studies describing the impact of the COVID-19 pandemic on BC surgical treatment. These hypothesis-generating data highlight a variety of areas that need to be addressed.
Future work may assess how different hospital systems across the province managed the pandemic restrictions and whether regional variation was influenced by the local burden of COVID-19 patient volumes, resource availability, and other socio-economic factors that may have affected surgery delivery. The impact of suspending BC screening during the immediate pandemic period remains to be determined through long-term follow-up studies. Further, the return to pre-pandemic surgery volumes does not address the backlog or provide an understanding of surgical wait times. Future research exploring strategies to reduce the effect of the backlog on population-level wait times and to improve access to timely surgical treatment, such as providing consultations through virtual care, can provide actionable insights to strengthen healthcare systems.
In this large population-based study, there was a significant decrease in all BC surgeries during the first three months of the COVID-19 pandemic. However, despite rising COVID-19 hospitalizations during subsequent waves, the volume of BC surgeries in Ontario increased. This suggests that health systems adaptations occurred, enabling the continuation of breast surgical care. This finding is important given the increasing demand for healthcare resources and optimization of care. Further work is needed to identify specific approaches that facilitated care, as well as the long-term impact of COVID-19 disruptions on patients with breast cancer.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 20 KB)
|
Data annotators: The unacclaimed heroes of artificial intelligence revolution in ophthalmology | 0067b132-ec72-4d6e-99ed-e754740a8507 | 9333041 | Ophthalmology[mh] | The essential contribution of a data annotator is the quality of the annotated data. Only once the quality check is passed can the data be merged to produce a meaningful AI algorithm. Their other important role is to manage the failures associated with data annotation when machine learning pipelines fail to process the data. Hence, they are also called data annotation specialists. In a nutshell, AI is still dependent on data annotators, and we must not forget that they are the unacclaimed heroes of the AI revolution in ophthalmology.
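As a concrete (and purely illustrative) example of the quality check described above, the following sketch gates the merging of two annotators' labels on their inter-annotator agreement; the function, threshold, and toy gradings are hypothetical and not taken from the source.

```python
# Hypothetical quality gate: merge two annotators' labels only if their
# agreement (Cohen's kappa) clears a pre-specified threshold.
from sklearn.metrics import cohen_kappa_score

def merge_annotations(labels_a, labels_b, kappa_threshold=0.8):
    """Return consensus labels if agreement is acceptable, else flag for review.

    labels_a, labels_b: categorical labels assigned to the same images,
    e.g. fundus photographs graded for referable diabetic retinopathy.
    """
    kappa = cohen_kappa_score(labels_a, labels_b)
    if kappa < kappa_threshold:
        raise ValueError(
            f"Inter-annotator agreement too low (kappa={kappa:.2f}); "
            "send the batch back for adjudication before merging."
        )
    # Keep concordant labels; discordant items (None) would go to a senior grader.
    return [a if a == b else None for a, b in zip(labels_a, labels_b)]

# Example with toy gradings (1 = referable, 0 = non-referable)
grader_1 = [0, 1, 1, 0, 1, 0, 0, 1]
grader_2 = [0, 1, 1, 0, 1, 0, 1, 1]
print(merge_annotations(grader_1, grader_2, kappa_threshold=0.5))
```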
Nil.
There are no conflicts of interest.
|
What Is Ailing Oncology Clinical Trials? Can We Fix Them? | dd014960-e51b-4adc-ade2-8d63d32d8597 | 11276279 | Internal Medicine[mh] | Case examples: A patient with early triple-negative breast cancer on a clinical trial for circulating tumor DNA (ctDNA)-based screening after curative therapy tests positive and has a computed tomography (CT) scan showing metastatic disease. The physician wants to enroll her on a first-line trial for metastatic disease; however, she is not eligible as the scan happened 350 days after finishing her curative treatment, rather than after >365 days. Does 15 days make a clinical difference? A patient with metastatic castrate-resistant prostate cancer on a study drug has clinical progression (a significant increase in pain, requiring the initiation of opioid analgesia), along with consecutive rises in prostate-specific antigen (PSA). CT and bone scan imaging show new progressive bone lesions corresponding to the site of pain; however, confirmed radiographic progression requires repeat imaging in eight weeks, as per the Prostate Cancer Working Group Criteria. The trial allows cross-over; however, only in cases where radiographic progression is confirmed centrally. In this clinical scenario, is it warranted to wait for confirmatory imaging and central review? Is unequivocal clinical progression not sufficient to allow cross-over? A 58-year-old male has a non-contrast CT chest ordered by his family doctor because of a persistent cough. It shows a lung mass, and after comprehensive workup including PET scan and mediastinoscopy, he has a successful surgical resection for a stage 2a non-small-cell lung cancer. He is approached about an adjuvant systemic therapy trial, but because the initial CT chest had been performed without contrast, he is deemed ineligible. How do these criteria help either trial data, safety, or enrolment, and would more thoughtful protocol development have allowed many 'ineligible' patients to actually be eligible? Clinical trials are the cornerstone of cancer research and of high-quality practice recommendations. Early-phase clinical trials are usually hypothesis-generating, whereas phase III trials attempt to confirm the efficacy of a new treatment compared to an existing standard of care (SOC). The success of a clinical trial depends on recruiting and retaining an adequate number of representative patients to answer the question at hand, adherence to protocol during trial conduct, following good clinical practice (GCP) guidelines, and minimizing barriers to facilitate successful trial completion. Previous studies have shown that only <5% of adult patients with cancer participate in cancer clinical trials. Overall, >80% of clinical trials in the United States fail to finish on time, with >20% being delayed for >6 months. Among National Cancer Institute (NCI)-sponsored trials, around 40% never reach completion and 20% are closed prematurely, with <50% meeting enrollment targets. One study estimated a loss of approximately USD 1 million from low-enrolling trials at a single academic institution in a single financial year. Furthermore, certain subgroups, including older adults with cancer, patients from ethnic minority backgrounds, and patients with certain comorbidities (e.g., HIV, renal disease, and brain metastasis), are underrepresented in trials. Enrollment barriers can occur at the level of patients, physicians, and trial operating systems.
Investigators may have limited awareness of ongoing clinical trials, face time constraints in clinics, and refer few patients for trials. Potential patients may lack the health literacy to ask about clinical trials and may have limited capacity to meet the travel and time requirements of study participation. The European Clinical Research Infrastructure Network (ECRIN) identified eight major barriers to RCTs. These included inadequate identification of a relevant research question; inadequate knowledge and understanding of clinical research; inadequate knowledge and understanding of clinical trials; inadequate funding; inadequate infrastructure; overly complex regulation; excessive, non-focused monitoring; and restrictive privacy rules with a lack of transparency. For clinicians and patients engaged in clinical research who are motivated to bring new and effective treatments to the clinic, many barriers seem nonsensical, as they do not obviously impact patient safety or the ability of the study to answer the primary questions. Often, issues have been handed down from trial design to trial design without thoughtful modification and evolution. In this review, we highlight key barriers to the successful conduct of clinical trials across the phases of trial design, activation, and conduct. We discuss potential strategies to help overcome these issues and factors to consider in their implementation. Trial Eligibility and Design Recent work has suggested that <50% of trials report a screen fail rate; when reported, ineligibility was the most common reason for screen failure. The ASCO Friends for Cancer group has given several recommendations, but very few have been implemented. Most have only been converted to draft guidance by the FDA, rather than mandatory requirements for sponsors. Some of these are discussed below: Patients with brain metastasis or leptomeningeal disease (LMD) have historically been excluded from clinical trials. This issue is of particular importance in malignancies with a high propensity for CNS spread. A systematic review of ongoing trials in non-small-cell lung cancer (NSCLC) showed that only 40% of trials allowed patients with treated brain metastases, while 26% of trials allowed untreated brain metastases, 14% excluded CNS metastases altogether, and 19% excluded LMD. The exclusion of brain metastases from clinical trials is often unjustified. While the blood–brain barrier (BBB) limits the penetration of some systemic agents into the normal central nervous system, it is largely disrupted by brain metastases. Many systemic agents can achieve high drug concentrations in brain tumors, leading to similar response rates for treatment-naïve brain metastases compared to extracerebral tumors. Effective therapies with established intracranial activity include ALK-targeting tyrosine kinase inhibitors such as brigatinib, alectinib, and lorlatinib, and antibody–drug conjugates (ADCs) such as trastuzumab deruxtecan (T-DXd). The American Society of Clinical Oncology (ASCO) Friends for Cancer Brain Metastasis Research group put forward recommendations in 2016 for including most patients with brain metastases in clinical trials if they are stable (whether symptomatic or asymptomatic). A reasonable time frame of 4 weeks of stability was discussed.
However, there is no such period of stability requirement for other disease sites such as the liver, bone, or adrenal metastases, so why is this requirement necessary for brain metastases? Requiring time periods of CNS stability prior to inclusion on a trial may end up being a back-door way to obtain over-estimated efficacy outcomes. There have been recent studies of novel compounds exclusively in patients with brain metastases, many of those with symptomatic disease and vasogenic edema, outlining the potential for undertaking these studies in prospective clinical trials . A pragmatic approach with regard to the use of anti-convulsant drugs was suggested, by increasing the use of non-enzyme-inducing anti-epileptic drugs such as levetiracetam, and only a minority of patients with brain metastases ever needed anticonvulsants . The US Food and Drug Administration (FDA) has published guidance for industry advocating for the inclusion of patients with stable brain metastases in clinical trials, and requires a strong justification for their exclusion. Similar recommendations apply to patients with leptomeningeal disease (LMD). However, the practical implementation of these recommendations has lagged. In a study analyzing NCI-sponsored clinical trials between 2018 and 2020, only 15% explicitly implemented inclusion criteria for active brain metastasis . Similarly, only 3% of trials (8/244) (all lung cancer studies) allowed the inclusion of patients with either asymptomatic or treated LMD, despite the growing number and efficacy of CNS-active drugs across cancer types . Restrictive eligibility criteria around previous malignancies often exclude patients; however, many of these may not have bearing on the overall prognosis, safety, or efficacy of the investigational drug in question and can heighten age disparities in trials (e.g., stage I breast cancer on surveillance). Similarly, not all exclusion criteria based on prior cancer treatments are justified. For instance, some trials for metastatic disease require patients who need palliative radiation to be taken off the study due to concerns of overlapping toxicity, even though there is no adequate supporting evidence. The ASCO Friends for Cancer group has also advocated for a broader eligibility for patients with well-controlled HIV in oncology trials if they are otherwise stable , with the support of the FDA . With the current treatment landscape of HIV, and the almost normal life expectancy for treatment-compliant patients with normal CD4 counts , there is thought to be no additional risk with anti-cancer drugs that should hinder trial participation . Most trials also exclude patients with serological evidence of Hepatitis B and Hepatitis C, given the risk of reactivation; however, with the availability of effective anti-viral drugs, the risks have decreased substantially and exclusion based on serological evidence alone should no longer be considered standard . Patients with mild renal/hepatic/cardiac dysfunction may also be excluded because of arbitrary “cutoffs” rather than clinical relevance (usually >60 mL/min for GFR, >50% ejection fraction or <450 milliseconds for corrected QT interval for cardiac status, <2–3 times upper limit of normal (ULN) of liver enzymes, and bilirubin of <1.5 mg/dL). 5.1. Renal The Kaiser Permanente Northern California group analyzed >12,000 patients with four common cancers—breast, lung, bladder, and colon. 
They found that, based on a traditional GFR exclusion criterion of <60 mL/min, between 20 and 45% of patients would be excluded from clinical trials, with the highest impact observed in bladder cancer. Harvey et al. showed that relaxing eligibility criteria with respect to renal function would allow >20% additional patients to be recruited in lung cancer trials. Other studies in different cancers have shown similar findings, with no increase in toxicity when appropriate dose adjustments were used for GFR. In addition, novel agents such as immunotherapy and ADCs used in bladder cancer can be safely administered in patients with renal impairment, and arbitrary GFR cutoffs for eligibility for clinical trials in this setting are becoming less evidence-based. 5.2. Hepatic Liver enzymes alone may not encompass the entire synthetic state of the liver and may not be sufficient to assess hepatic metabolism and drug tolerability. A classic example is a patient with Gilbert syndrome who may be wrongfully denied a clinical trial based on an asymptomatic bilirubin elevation. 5.3. Cardiac Although ejection fraction (EF) has traditionally been used as an indicator of myocardial function, the predictive value of baseline EF for the cardiotoxicity of cancer drugs is unclear. We must consider whether a patient with an EF of 44% is truly different from one with an EF of 46% if both are asymptomatic. Further, trials often mandate a pre-trial ECG to have a QTc < 450 ms. However, there is poor concordance between the different criteria used to measure corrected QTc (illustrated in the sketch at the end of this section), and asymptomatic ECG abnormalities do not clearly associate with cardiac events in phase 1 trials. 5.4. Other Laboratory Parameters A similar example is the exclusion of a patient based on a trivial laboratory abnormality, such as a platelet count of 99 × 10⁹/L when the study protocol defines eligibility as a platelet count of ≥100 × 10⁹/L. In general, exclusion criteria with arbitrary and fixed cut-points are problematic. Until approximately 25 years ago, a study principal investigator was permitted to use clinical judgment regarding the appropriateness of including a patient whose value was close to an eligibility criterion cut-point. The majority of such cases would be classified as minor protocol deviations and would neither need to be reported to the IRB nor compromise patient safety. Regrettably, that latitude generally no longer exists. Other parameters, such as persistently elevated amylase in the face of normal lipase levels and normal pancreatic imaging, or elevated beta-HCG levels with normal pelvic ultrasound results, may reflect a paraneoplastic syndrome rather than pancreatitis or pregnancy; yet, registration trials often refuse to make pragmatic, common-sense eligibility exceptions. Laeeq et al. showed that even among phase 1 trials, where drug toxicity is of particular concern, the use of restrictive criteria did not lead to lower rates of dose-limiting toxicities, serious adverse events, or death, with similar response rates irrespective of eligibility criteria. Harvey et al. have demonstrated a near doubling of the number of eligible participants if criteria pertaining to brain metastasis, renal function, and prior malignancy are relaxed.
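The discordance between QT-correction methods mentioned in the cardiac example can be shown with a short, purely hypothetical sketch (not from the source) that applies four published correction formulas to the same ECG measurement and compares them against a fixed 450 ms eligibility cutoff.

```python
# Illustrative sketch: common QT-correction formulas applied to one hypothetical
# ECG measurement, showing that a single fixed cutoff (e.g. 450 ms) can classify
# the same patient differently depending on the formula used.
def qtc_ms(qt_ms: float, rr_s: float) -> dict:
    """Return corrected QT (ms) under several published formulas.

    qt_ms: measured QT interval in milliseconds
    rr_s:  RR interval in seconds (60 / heart rate in bpm)
    """
    hr = 60.0 / rr_s
    return {
        "Bazett": qt_ms / (rr_s ** 0.5),
        "Fridericia": qt_ms / (rr_s ** (1.0 / 3.0)),
        "Framingham": qt_ms + 154.0 * (1.0 - rr_s),
        "Hodges": qt_ms + 1.75 * (hr - 60.0),
    }

# Hypothetical tracing: QT 410 ms at a heart rate of 80 bpm (RR = 0.75 s)
for name, value in qtc_ms(410.0, 60.0 / 80.0).items():
    flag = "exceeds" if value > 450 else "within"
    print(f"{name}: {value:.0f} ms ({flag} a 450 ms cutoff)")
```

For this hypothetical tracing, the Bazett and Fridericia corrections exceed 450 ms while the Framingham and Hodges corrections do not, illustrating how a rigid cutoff applied without specifying the formula, or without room for investigator judgment, can arbitrarily decide eligibility.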
Older patients with an ECOG performance status (PS) of ≥ 2 are often excluded or underrepresented in clinical trials.
Although previous recommendations from various working groups have advocated for the inclusion of patients with an ECOG PS of 2, the pharmaceutical industry has pushed back, citing inter-physician variability in the assessment of ECOG PS and the potential for the inclusion of sicker patients, thereby jeopardizing the safety of study treatment. Recent trials have shown the feasibility of administering drugs such as immunotherapy to older adults and patients with an ECOG PS of 2–3, with durable clinical benefit and without compromising safety. While the absolute OS gain with an effective new therapy may be reduced in poor-PS vs. good-PS patients, the relative gain may be very similar. Increasing the use of geriatric oncology specialists and assessments can play a substantial role in better assessing fitness and frailty for cancer treatment. Taken together, current oncology trial eligibility criteria are overly restrictive, lack flexibility, and limit physician judgment. These criteria limit trial enrolment, the generalizability of study results, and adoption/access in real-world settings. Although some restrictions are necessary to define a study population and protect patient safety, there is a need to justify them with a science-based rationale. Ill-considered eligibility criteria unethically restrict patients' access to novel and potentially beneficial therapies. If patient safety is not compromised in the judgment of the investigator, patients should be offered the option of study participation. Disclosure by study sponsors of the reasons for screen failures, together with the relevant supporting evidence, can also increase scientific rigor and transparency. Some recommendations to make trial eligibility criteria more inclusive are listed in . Trial Activation Activating a clinical trial can be a long and arduous process, taking lengthy periods from when an institution receives a protocol until its first patient is recruited. Previous studies have even found delays of up to two years (with the longest being 5.2 years) for a trial to be activated. A delay in trial activation has been associated with early trial closure due to poor recruitment, thus leading to a waste of money and resources. Delays in the activation and completion of trials may lead to obsolete results if the SOC has changed. Additionally, most trials are carried out in patients with advanced disease, where such delays can translate into negative patient outcomes, reduced satisfaction, life-years lost, and delayed access to new effective treatments. Several institutional-, regulatory-, and sponsor-related hurdles need to be overcome before a trial can be activated. Dilts et al. found that a minimum of 296 distinct steps were involved in activating an NCI CTEP trial and receiving Institutional Review Board (IRB) approval. These involved 21 loops (i.e., points where the protocol would be returned to a previous point for changes) and 11 major stopping points. Recent work by Williams et al. at a tertiary center in the US found a median activation time of 182 days and identified 21 potential bottlenecks causing delays in trial activation. Most of these steps are conducted sequentially, compounding delays. Notably, more stringent regulations do not correlate with improved patient safety, even in early-phase trials, where the toxicity profile is largely uncertain. The healthcare system works with constrained budgets, resources, and workforce capacity.
Ever-increasing research costs and more complex trial designs can lead to lengthy inter-departmental approvals and budget negotiations. Notwithstanding these challenges, institutions and trial sponsors need to evaluate innovative processes to ensure efficiency, capacity, and sustainability across clinical research portfolios. We discuss some possible solutions in . Trial Conduct Roadblocks during the conduct of a clinical trial create unnecessary clinic visits, tests, paperwork, and overall burden for patients and staff. This contributes to time toxicity and costs, and can be so cumbersome that a research center is unable to activate or continue trial activity. For example, some trials mandate scans within a certain time interval prior to enrollment. A lung cancer patient with appropriate staging investigations may have to undergo repeat scans to fit mandated trial timelines. This can delay the start of treatment and may not provide any additional clinically relevant information. Some trials mandate follow-up schedules that are vastly more intensive than routine clinical practice (bloodwork, scans, and appointments), posing enormous unnecessary burdens on the patient and the trial team (physicians [oncologists, radiologists, etc.], nurses, and support staff). Occasionally, non-trial patients are impacted so that mandated additional trial patient assessments can be accommodated. Inconsequential (to patient or data safety) protocol deviations ensue, causing additional work. Moreover, these practices perpetuate inequity, as patients living far from large centers have impeded access to trials due to practicality and transportation issues. Even post-pandemic, most trials do not allow the use of telehealth for follow-up assessments, even though it is routinely used in clinical care. Available evidence suggests that telehealth-based follow-up is safe, accepted by patients and oncologists, and cost-effective. Several trials insist on in-person assessments for the collection of safety, follow-up, and patient-reported outcome (PRO) data, even after patients have progressed on the trial and moved on to another treatment, and even though they might live far from the cancer center. There is little evidence that strict regulations such as these improve patient outcomes, and intuitively they seem unnecessary and burdensome. Trials also mandate numerous procedures, and these are likely far too many. A review of phase 1 trials showed that a mean of 3.16 physician exams, 5.6 vital sign measurements, 4.36 ECGs, 18 non-pharmacological blood draws, and 15.1 pharmacological blood draws were carried out in just the first 28 days of a phase 1 trial. Although a higher degree of scrutiny for phase 1 trials can be justified given the experimental nature of the treatments, even this may be excessive. In 2010, excessively stringent clinical research regulation cost an estimated 2.7 million dollars per life-year saved. This drives up research costs, which only the large pharmaceutical companies can afford to pay, and this in turn drives the focus away from investigator-initiated and cooperative group studies, which often answer more patient-centric questions. Many studies mandate the central confirmation of immunohistochemical and genomic markers, even though validated and standardized assays for such biomarkers exist.
Examples include required central confirmation of PD-L1 testing in NSCLC, of genomic biomarkers for targeted therapy trials in oncogene-addicted NSCLC, and of HER2 IHC in breast cancer. When accredited local laboratories perform these tests, mandating central confirmation can only delay and restrict access, without a clear benefit. After a trial concludes, hospitals will not start sending specimens for central confirmation; they will continue to rely on validated local testing. Cross-over is allowed in some trials, where patients on control-arm therapy can access the experimental treatment at progression. However, when radiologic PFS is a primary endpoint, central confirmation of disease progression may be required prior to cross-over, even if clear clinical progression is evident as judged by the treating investigator. This may be contrary to Good Clinical Practice (GCP) by compromising patient outcomes when treatment is delayed or denied due to mandated protocol procedures. If patients withdraw from such a study prior to the completion of the central review, important data are lost to censoring. Trials can also impose restrictions on the timing of drug delivery (e.g., within 1–2 days of the due date) and on the time allowed between preparation in the pharmacy and administration to the patient. This poses unnecessary pressure on often already overburdened day care units and pharmacies and can impact the care of non-trial patients. While there may be good scientific reasons to recommend accurate timing of drug dosing, pragmatic considerations could be adopted. We have summarized some suggestions to make trial conduct more patient-centric in . A number of the examples discussed above go against the principles of common-sense oncology, for which there has been considerable recent advocacy. The examples discussed in this manuscript reflect situations that the authors have encountered and a desire to improve patient care in the context of clinical trials. Some of the recommendations have been highlighted before and, if implemented, would certainly help in 'cutting out the stupid' that plagues oncology clinical trials. The development of Choosing Wisely recommendations for clinical research might be a step worth considering, given the success of similar recommendations for clinical practice. Similar issues were noted during the early days of AIDS research, and a change in mindset and approach from the community was key to eventual rapid progress. While these recommendations may seem relatively straightforward to implement, a more balanced and interactive collaboration between stakeholders (sponsors, patients, investigators, institutions, and regulators) is required. Patients should be part of clinical trial design and steering committees to advocate for patient interests and priorities, rather than satisfying only commercial interests. Ultimately, pragmatic and inclusive clinical trial reform may lead to commercial benefits through faster and cheaper clinical trials. If we achieve that balance, many of the procedures that are currently ingrained in clinical trial design, enrollment, activation, and conduct would eventually be addressed, thereby 'cutting out the stupid'. A recent example of a pragmatic clinical trial is the SWOG 2302 (PRAGMATICA LUNG) trial, testing ramucirumab plus pembrolizumab vs. standard of care (as determined by investigators) in the second-line treatment of NSCLC.
The trial’s aims were to remove traditional barriers to eligibility, as discussed in this paper. The trial included patients with an ECOG performance status of 2, did not mandate strict timing for lab tests and scans, permitted investigator-assessed PFS as opposed to central assessment, used OS as the primary end point, and required mandatory reporting of only serious adverse events, thus minimizing paperwork . The success of this trial might pave the way for more patient-centered studies and help in establishing the feasibility of such studies. George Bernard Shaw wrote ‘Progress is impossible without change, and those who cannot change their minds cannot change anything’. We hope that this review will highlight the need for change in clinical trial conduct to ensure progress for people with cancer. |
Assessing the understandability and actionability of online resources for patients undergoing hemodialysis | 9ff72a29-c2e7-4a39-baf8-d6ad57b109e5 | 11879477 | Patient Education as Topic[mh] | INTRODUCTION End‐stage kidney disease is a global public health threat affecting an estimated 843.6 million individuals worldwide in 2017 . When renal function significantly declines, patients require dialysis or kidney transplantation. According to an annual survey conducted by the Japan Society for Dialysis Therapy at the end of 2022, the number of patients undergoing hemodialysis (HD) in Japan has reached 347 474 or 2781 patients per million population . Patients undergoing HD are prone to disease‐ and treatment‐specific health problems. In comparison to healthy individuals, such patients are at higher risk of cardiovascular diseases, including myocardial infarction and stroke. Cardiovascular deaths account for ~30% of deaths among the patients . The prevalence of hypertension remains high, making continuous fluid volume and blood pressure control essential. As dialysis therapy is usually lifelong, blood access‐related problems have become critical issues . Other risk factors include malnutrition, infectious diseases, and anemia . Long‐term HD can lead to decreased motivation for treatment and poor self‐management, both of which require psychosocial support . Patient education is an effective intervention to enhance the health and overall well‐being of individuals undergoing HD. The education typically focuses on dietary considerations; physical activity engagement; modification of risk factors, including smoking; strategies for symptom management; and promotion of psychosocial well‐being. However, more than half of these patients do not adhere to their prescribed regimen . Compliance with dietary guidelines, fluid restrictions, and medication regimens poses challenges for patients, with non‐adherence resulting in significant health risks . Patient education materials suggest desirable health behaviors for patients and can help address these challenges. Moreover, they can facilitate communication between healthcare providers and patients . In 2006–2007, 1804 patients undergoing dialysis in the United States used the internet , while approximately 60% of 149 patients undergoing HD in Canada reported seeking online information about their health conditions . However, nearly half of the patients with end‐stage kidney disease have limited health literacy . To facilitate patients’ appropriate access, comprehension, and utilization of health information, caregivers must ensure organizational health literacy and validate the medical information they disseminate . Nevertheless, whether the online materials developed by these organizations can support health behaviors in patients remains questionable. Among materials on chronic kidney disease (CKD), the messages conveyed in materials on HD may differ from those in the predialysis stage. Materials at the pre‐dialysis stage focus on preventing progression to end‐stage kidney disease. Conversely, patient‐focused materials primarily address the management of conditions and complications specific to individuals with end‐stage kidney disease. We previously analyzed the content of Japanese language materials for patients with pre‐dialysis‐stage CKD . Our previous study revealed the need to improve the actionability of messages and visual aids presented in the materials. 
However, whether Japanese‐language materials for patients undergoing HD present the same challenges as those identified in our previous analysis remains unclear. In this study, we quantitatively evaluated whether existing online materials written in Japanese are easy for patients to read, understand, and act upon. The following research questions were posed: RQ 1 Are online HD materials for Japanese patients easy to understand and actionable? RQ 2 What issues do materials face regarding their understandability and actionability? RQ 3 What characteristics of the materials impact their understandability and actionability? MATERIALS AND METHODS 2.1 Study design This is a cross‐sectional study that comprehensively collected existing patient education materials on HD and evaluated them using evidence‐based indicators. This study was exempted from approval by the Research Ethics Committee as the materials were available to the public and did not include patient records or personal information. 2.2 Material selection We conducted an extensive online search for HD materials on June 16, 2022. The search involved obtaining materials disseminated by healthcare providers in clinical practice. We included Japanese‐language online resources for non‐medical professionals that healthcare professionals might recommend their patients view or that patients might search on their own, based on past studies . As such, Google Japan and Yahoo Japan, which account for more than 90% of the search engine market share in Japan, were included in the search. The five most frequently searched words related to end‐stage kidney disease or HD were obtained from Google Trends . Each of the search terms, “dialysis,” “renal failure,” “end‐stage kidney disease,” “hemodialysis,” and “artificial dialysis,” all of which were in Japanese, was individually entered into the search window. Based on prior search engine analysis , ~10%–17% of Internet users explore results beyond the initial three pages . Consequently, we examined 50 webpages per search engine for each term. When a screened webpage contained only hyperlinks or thumbnails for patient education materials, we incorporated the linked materials directly. To enhance comprehensiveness, we also incorporated online resources that are widely used in Japanese clinical settings. The online search was conducted without an institutional login to avoid retrieving webpages inaccessible to individuals without institutional credentials. In addition, personal Google and Yahoo accounts were signed out, and all search histories were cleared to minimize bias in the search results. Websites that allowed free entry without requiring a password, presented information in Japanese, and provided educational content on HD were included in the analysis. We excluded the following materials: (1) webpages aimed at medical professionals, (2) webpages with misinformation, (3) patient experiences, (4) webpages lacking relevance, (5) news or press releases, and (6) webpages without any specific educational information. A board‐certified nephrologist (EF) read the full text of each webpage thoroughly and determined eligibility against the inclusion and exclusion criteria.
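Because the retrieval protocol is essentially mechanical (five search terms, two engines, the first 50 results each, de-duplication, then manual screening), it can be sketched briefly in code. The snippet below is illustrative only: the variable and function names are assumptions, `fetch_results` is a placeholder rather than a real search-engine API, and the study's searches were performed manually with Japanese-language terms.

```python
# Illustrative sketch of the retrieval step: 5 search terms x 2 engines,
# first 50 results each, de-duplicated by URL before manual screening.
# `fetch_results` is a hypothetical placeholder, not a real search API.
from typing import List

SEARCH_TERMS = ["dialysis", "renal failure", "end-stage kidney disease",
                "hemodialysis", "artificial dialysis"]  # entered in Japanese in the study
ENGINES = ["Google Japan", "Yahoo Japan"]
RESULTS_PER_QUERY = 50

def fetch_results(engine: str, term: str, limit: int) -> List[str]:
    """Placeholder: return up to `limit` result URLs for one engine/term query."""
    raise NotImplementedError("Performed manually (or via an engine API you have access to).")

def collect_candidate_urls() -> List[str]:
    seen, candidates = set(), []
    for engine in ENGINES:
        for term in SEARCH_TERMS:
            for url in fetch_results(engine, term, RESULTS_PER_QUERY):
                if url not in seen:          # drop duplicates across the 10 queries
                    seen.add(url)
                    candidates.append(url)   # passed on to manual eligibility screening
    return candidates
```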
2.3 Variables extracted The board‐certified nephrologist (EF) categorized webpage topics as follows: (1) overview of HD, (2) treatment options (e.g., modality change (daily or long intermittent HD), switching to PD/transplant), (3) complications (e.g., cardiovascular disease, vascular access trouble), (4) self‐management (e.g., healthy diet, exercise, mental health), and (5) social support (e.g., public subsidies for medical expenses). We also classified the source of material uploads into one of the following categories: (1) medical institutions, (2) academic institutions, (3) for‐profit companies, (4) governmental organizations (e.g., Ministry of Health, Labour, and Welfare), and (5) non‐profit organizations. We divided the target audience into two groups: (1) patients and their families and (2) the general public. These classifications were based on previous analyses of online patient education materials . 2.4 Evaluation criteria 2.4.1 Understandability and actionability of the materials The understandability and actionability of the materials were assessed using the Japanese version of the Patient Education Material Evaluation Tool for Printed Materials (PEMAT‐P). Similar to the original PEMAT, the Japanese version of PEMAT‐P was tested for reliability and validity . This tool has two subdomains: (1) understandability, which assesses how well patients with varying levels of health literacy understand the printable material, and (2) actionability, which evaluates how well patients can identify what they need to do based on the information presented. This instrument includes 23 items (16 items for understandability and seven items for actionability) on a binary scale (agree = 1 or disagree = 0). The PEMAT understandability and actionability scores were calculated by summing the points, dividing by the total possible points (excluding unapplicable items), and multiplying by 100 to yield a percentage. Scores below 70% indicated poor understandability or actionability, whereas scores of 70% or higher were considered understandable or actionable, consistent with the original PEMAT criteria . 2.4.2 Quality (natural flow and comprehensiveness) The quality of each webpage was rated using a Global Quality Score (GQS). The GQS allows users to evaluate the overall quality (natural flow and comprehensiveness) of an online resource on a five‐point Likert scale . A score of one point represents poor quality, whereas a score of five points signifies excellent quality. 2.4.3 Readability The text from each webpage was copied into Microsoft Word (Microsoft Corp.), and any formatting elements that might affect the readability assessment were eliminated (such as headings, symbols, author information, and references). The plain text from each webpage was then assessed using an online readability calculator, jReadability . This validated measure calculated readability based on the average length of sentences, word complexity, and types of characters per sentence. Scores ranged from 0.5 to 6.4, with a high score indicating that the text was easy to read . 2.5 Statistical analysis Descriptive statistics (means, standard deviations, and proportions) are used to summarize the characteristics of the retrieved webpages. For the PEMAT, GQS, and jReadability scores between groups, the initial data analysis involved applying the Shapiro–Wilk test, a crucial procedure to evaluate the normality of the data distribution. We conducted a one‐way analysis of variance (ANOVA) for normally distributed data. 
When the data did not follow a normal distribution, the Kruskal–Wallis test was employed. If a significant difference was detected, Kruskal–Wallis multiple comparisons were performed, and p‐ values were adjusted using the Benjamini–Hochberg method. Additionally, for the PEMAT‐P and GQS assessments, inter‐rater reliability (averaged Gwet's AC1) was examined by two raters (EF and YF, both board‐certified internal medicine specialists) for one‐quarter of the total webpages. Statistical significance was set at p < 0.05. All statistical analyses were performed using R version 4.3.3 (2024‐02‐29; R Foundation for Statistical Computing, Vienna, Austria).
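The scoring arithmetic and test-selection logic described in Sections 2.4 and 2.5 are simple enough to sketch in code. The authors report using R 4.3.3; the Python snippet below is only an illustration, with invented item ratings and group scores (not the study's data), of the PEMAT percentage calculation with not-applicable items excluded, the 70% threshold, the Shapiro–Wilk-based choice between ANOVA and the Kruskal–Wallis test, and Benjamini–Hochberg adjustment of post-hoc p-values.

```python
# Sketch of the PEMAT-P scoring arithmetic and normality-based test selection.
# Example data are invented for illustration only.
from scipy import stats
from statsmodels.stats.multitest import multipletests

NOT_APPLICABLE = None  # items rated N/A are excluded from the denominator

def pemat_score(item_ratings):
    """Sum the agree(1)/disagree(0) ratings, divide by the number of applicable
    items, and express the result as a percentage (0-100)."""
    applicable = [r for r in item_ratings if r is not NOT_APPLICABLE]
    return 100.0 * sum(applicable) / len(applicable)

def classify(score, threshold=70.0):
    """Scores of 70% or higher are treated as understandable/actionable."""
    return "adequate" if score >= threshold else "poor"

def compare_groups(scores_by_group, alpha=0.05):
    """Use one-way ANOVA when every group looks normally distributed
    (Shapiro-Wilk); otherwise fall back to the Kruskal-Wallis test."""
    groups = list(scores_by_group.values())
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if normal:
        stat, p = stats.f_oneway(*groups)
        return "ANOVA", stat, p
    stat, p = stats.kruskal(*groups)
    return "Kruskal-Wallis", stat, p

def bh_adjust(pvalues):
    """Benjamini-Hochberg adjustment for post-hoc pairwise comparisons."""
    return multipletests(pvalues, method="fdr_bh")[1]

# Example: 16 understandability ratings with one N/A item, and three topic groups.
ratings = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, NOT_APPLICABLE, 1, 0, 1, 1]
score = pemat_score(ratings)
print(score, classify(score))

scores_by_topic = {
    "self-management": [75, 70, 66.7, 80, 60],
    "overview":        [50, 45.5, 33.3, 60, 40],
    "complications":   [54.5, 66.7, 50, 45.5, 60],
}
print(compare_groups(scores_by_topic))
print(bh_adjust([0.001, 0.007, 0.02]))
```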
RESULTS 3.1 Demographic characteristics of the materials Of the 676 retrieved articles, 194 were included in the final analysis (Figure ). The characteristics of the materials are listed in Table . Half of the materials were produced by for‐profit companies, primarily pharmaceutical and medical equipment manufacturing companies. Medical institutions accounted for approximately one‐quarter of the material sources. Approximately 40% of the materials provided an overview of HD, and nearly 30% of the materials described complications. A moderate to substantial inter‐rater agreement was observed for PEMAT‐P and GQS (average Gwet's AC1:0.75 for PEMAT‐P, 0.61 for GQS).
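Because inter-rater reliability is summarized with Gwet's AC1, a worked example of the two-rater, dichotomous form of the coefficient may be useful. The ratings below are invented for illustration and are not the study's data, and the formula shown is the standard dichotomous-case AC1 rather than necessarily the exact averaging procedure used across items in the study.

```python
# Gwet's AC1 for two raters and binary (agree=1 / disagree=0) ratings,
# as used for the PEMAT-P and GQS reliability check. Toy data for illustration.

def gwet_ac1(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: proportion of items where the two raters match.
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Average proportion of "1" ratings across the two raters.
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)
    # Chance agreement under Gwet's model for the dichotomous case.
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical ratings on one PEMAT-P item for 12 webpages.
rater_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(gwet_ac1(rater_a, rater_b), 2))  # ~0.73 for this toy example
```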
3.2 Overall understandability and actionability of the HD materials The median PEMAT‐P score was 66.7 (interquartile range [IQR] 54.5–75.0) for understandability and 33.3 (IQR 3.6–59.3) for actionability. Out of the total, 75 (38.7%) materials were considered satisfactory in terms of understandability, whereas only 32 (16.5%) materials met the criteria for actionability. Twenty‐eight (14.4%) educational materials met the acceptable thresholds for both understandability and actionability. The median GQS was 4 (IQR 3–5), and 121 (62.3%) materials scored four or more points, which is considered good quality. The mean jReadability score was 2.1 (standard deviation 0.7), and the text level was “a little difficult.” This implies that fully understanding the text requires Japanese language skills covering both everyday and technical terms. 3.3 Issues related to understandability and actionability The results for each PEMAT‐P item are listed in Table . For understandability, most materials satisfied Item 2 “The material does not include information or content that distracts from its purpose” and Item 9 “The material presents information in a logical sequence.” In addition, ~90% of the materials did not require users to perform calculations. However, only a few materials met the standards of understandability for the use of visual aids (e.g., figures, tables, illustrations, and diagrams). Less than 40% of the materials met Item 16 “The material's visual aids have clear titles or captions.” Approximately 10% of the materials met Item 10 “The material provides a summary.” For actionability, more than half of the materials satisfied Item 19 “The material clearly identifies at least one action the user can take” and Item 20 “The material addresses the user directly when describing actions.” However, <30% of the materials met Item 21 “The material breaks down any action into explicit steps,” Item 22 “The material provides a tangible tool whenever it could help the user take action,” Item 24 “The material explains how to use the charts, graphs, tables, or diagrams to take actions,” and Item 25 “The material uses visual aids whenever they could to make it easier to act on the instructions” (Table ). 3.4 Factors that affect the quality of the materials 3.4.1 Understandability and actionability For comparison among topics, no significant differences in understandability were observed (χ2(4) = 5.88, p = 0.21). However, significant differences were identified between groups in terms of actionability (χ2(4) = 28.32, p < 0.001). The materials for self‐management were significantly more actionable than those for an overview of HD, treatment options, and complications. The Kruskal–Wallis multiple comparisons test revealed median actionability of 50 vs. 20 for self‐management vs. overview of HD (p < 0.001), 50 vs. 20 for self‐management vs. treatment options (p = 0.007), and 50 vs. 33.3 for self‐management vs. complications (p = 0.02) (Table ). Nearly all self‐management materials directly urged readers to take action. In addition, materials on self‐management tended to offer more detailed instructions and utilized visual aids to facilitate readers in taking action, distinguishing them from materials on other topics. A significant difference was noted in understandability by source (χ2(4) = 10.72, p = 0.03), with the materials by non‐profit organizations being significantly less understandable than the materials by medical institutions or for‐profit companies (median: 66.7 vs. 54.5 for medical institutions vs.
non‐profit organizations, p = 0.03; 66.7 vs. 54.5 for for‐profit companies vs. non‐profit organizations, p = 0.04). Although not statistically significant, materials produced by academic organizations also tended to be less comprehensible than those produced by medical institutions and for‐profit companies (median: 66.7 vs. 50 for medical institutions vs. academic organizations, p = 0.29; 66.7 vs. 50 for for‐profit companies vs. academic organizations, p = 0.25). No differences were identified by source in actionability (χ2(4) = 6.09, p = 0.19) (Table ). However, for‐profit companies tended to employ visual aids whenever these could make it easier to follow instructions. 3.4.2 Natural flow and comprehensiveness For comparison among topics, a significant difference was observed in GQS (χ2(4) = 12.29, p = 0.02). Although the Kruskal–Wallis multiple comparisons demonstrated no significant differences between topics, the materials on social support tended to be less comprehensive and to flow less naturally than materials on other topics. There was no significant difference in GQS by source (χ2(4) = 2.92, p = 0.57). 3.4.3 Readability For comparisons among topics, ANOVA revealed a significant difference (F(4, 188), p < 0.001). Materials on self‐management had higher readability than those on treatment options and social support (mean: 2.46 vs. 1.73 for self‐management vs. treatment options, p = 0.003; 2.46 vs. 1.63 for self‐management vs. social support, p < 0.001). Materials on complications were more readable than those on treatment options and social support (mean: 2.33 vs. 1.73, complications vs. treatment options, p = 0.009; 2.33 vs. 1.63, complications vs. social support, p < 0.001). For comparisons among sources, the ANOVA revealed a significant difference (F(4, 188), p = 0.003). Materials produced by for‐profit companies had significantly higher readability scores than those produced by governmental organizations (mean: 2.25 vs. 1.46, for‐profit companies vs. governmental organizations, p = 0.003).
DISCUSSION This study revealed that ~60% and 85% of the materials on HD are difficult to understand and act upon, respectively. More than half of the materials met the quality requirements for comprehensibility and fluency, with little dispersion among the materials. Issues of understandability included a lack of materials with attached summaries and a limited number of materials with titles and captions accompanying visual aids. Nielsen et al. stated that web users typically scan online content rather than reading it thoroughly. In general, users avoid reading lengthy pages, favoring concise texts that directly address their needs . This implies that readers require concise summaries, titles, and legends as hooks to help them understand complicated information. Regarding actionability, the materials did not utilize tools to support health behaviors. Visual aids such as charts, illustrations, and diagrams facilitate action by helping readers visualize or break down behaviors . However, it is notable that existing materials only provide recommendations in writing. Professionals preparing healthcare information should keep in mind that many materials on HD carry disease‐specific messages, such as weight management during dialysis therapy, dietary management , and public healthcare coverage, which makes it challenging to explain actions in text alone. When compared by topic, materials on self‐management were notably easier to act on, followed by those on social support. Both also specified the actions to be taken and directly urged readers to follow through with them. The actionability of the self‐management materials may be attributed to the fact that half of them detailed the actions to take and utilized visual aids. In addition, materials on self‐management had higher readability than those on treatment options and social support. Furthermore, plain language can also increase the actionability of materials.
A study that evaluated English‐language materials on arteriovenous fistulas in patients with HD identified issues similar to those observed in this study, such as low readability of materials (13–15 grade level) and inadequate quality . Although Bresler et al. did not identify detailed quality issues, the existing materials in both languages share common challenges, particularly in terms of plain language, suggesting room for improvement. Regarding comparisons among sources, materials produced by non‐profit organizations were difficult to understand and act upon, whereas those produced by commercial companies were understandable and actionable. Materials produced by for‐profit companies also had significantly higher readability scores. No significant differences were identified between the sources for PEMAT Items 3 and 4, “use of everyday language” and “definition of medical terms.” However, the complexity of the syntax and the length of each sentence may have influenced the differences in readability. Non‐commercial institutions can improve the clarity of their material by referring to existing resources from commercial companies that employ plain language and effective visual aids. Materials by governmental organizations were relatively easy to understand and act upon as well. In addition to medical knowledge about HD, governmental organizations provided patients with information about public healthcare coverage and disaster support. These materials presented in small steps how patients should act, which contributed to the relatively high scores for understandability and actionability. Our previous analysis of Japanese‐language materials on non‐dialysis‐dependent CKD similarly found that materials produced by commercial companies were easier to understand and act upon than those produced by public institutions . A common feature of all materials from the for‐profit companies was the excellent use of visual aids to promote understanding and action. However, previous studies overseas have reported that materials produced by professional organizations are of higher quality than those produced by non‐professional organizations . The results of these previous studies suggest that the understandability and actionability of materials do not generally depend on the budget or workforce (i.e., professional illustrators and editors) available to produce them. Instead, they indicate the importance of guidelines that provide incentives for professional organizations to produce high‐quality materials. To the best of our knowledge, this is the first study to quantitatively analyze patient education materials on HD in a Japanese population. We collected not only top‐ranked materials from search engines but also a comprehensive collection of materials from academic organizations and commercial companies (pharmaceutical companies and medical device manufacturers) that are frequently used in clinical practice. Therefore, this study covers HD‐related materials comprehensively. While dialysis treatment in Japan boasts better patient outcomes than in other countries , it also demands a deeper understanding of the disease and a higher standard of self‐management over a longer period of time. For patients living with HD, decades of constant self‐management are often challenging. In a previous study, we found that materials that are easy to understand and act on improved the public's self‐efficacy for health behaviors, that is, the psychological state of feeling capable of taking action .
Development and dissemination of quality materials in patient education can minimize the gap between patient education and health behavior practices. The findings of this study provide suggestions for improving these materials to make them understandable and actionable for medical professionals, corporate personnel, and government officials who produce health and medical information. 4.1 Limitations This study had several limitations. First, we cannot guarantee that these materials will be used in clinical settings. However, we collected extensive materials from medical institutions and companies that could be used in clinical settings. Additionally, a board‐certified nephrologist screened the materials for accurate information that healthcare providers would recommend to their patients. Secondly, the majority of the materials did not credit the creation or update dates. Therefore, some of the materials that ranked high in our Internet search may not fully reflect the latest guidelines or medical policy. Another significant limitation is that the analyzed resources were written in Japanese only and did not target patients from non‐Japanese‐speaking backgrounds. According to the United States Renal Data System, Taiwan (3593) and Japan (2682) stood out in terms of the number of patients with end‐stage kidney disease receiving dialysis per million population at the end of 2020 , indicating an exceptionally high demand for HD materials in Japan. However, HD is a major treatment option for patients with end‐stage kidney disease worldwide , and the findings of this study may be helpful to professionals who disseminate healthcare information to patients undergoing HD in other countries. 4.2 Implications for future research In developing and disseminating quality patient education materials, a thorough understanding of the current gaps in patient education content and health behavior practices in Japanese dialysis care is essential. Future research should investigate the gaps and patient education needs as perceived by patients and providers. As the patient education system and guidelines are revised, it is expected that HD‐related materials will also see improvements. In fact, a previous study in Australia reported improvements in the understandability and actionability of materials on diabetes published by public agencies as diabetes health policy has changed . However, many materials incorporated in this study did not provide the publishing date, making longitudinal analysis difficult. Future studies should consider follow‐up of changes in the understandability and actionability of materials with changes in guidelines and healthcare policy in CKD. Again, this study uncovered deficiencies in patient education materials regarding HD in Japan and suggested solutions to enhance organizational health literacy. The Certified Kidney Disease Educator (CKDE) system was launched in Japan in 2017, and it is expected to improve the prognosis of CKD through multidisciplinary intervention for patients. However, shared decision‐making in the area of CKD is still in the process of becoming widespread. Patients are often hesitant to discuss their questions and concerns with their healthcare providers. Therefore, healthcare providers are required to improve materials to help bridge the gap between health education and health behavior practice. 
To encourage healthcare providers to distribute quality patient education materials, PEMAT can help identify the most understandable and actionable from a wide range of available resources. The auto‐scoring form and user's guide support healthcare professionals in refining and selecting materials. Moreover, they could potentially aid in the development of guidelines for better materials.
CONCLUSION This study quantitatively evaluated the understandability, actionability, readability, and quality of the educational materials provided to patients undergoing HD. The materials face challenges related to the use of plain language and concise summaries to improve understandability, as well as the effective use of visual aids to enhance actionability. This study offers guidance for enhancing the quality of existing webpages and creating new materials. This work was supported by JSPS KAKENHI Grant Number JP 23K09595. The authors declare that they have no conflicts of interest. Ethical approval is not applicable because the study was not conducted on human subjects. Data S1: Supporting Information |
The psychological effects of protective isolation on haematological stem cell transplant patients: an integrative, descriptive review | b6d6334f-639b-4183-9527-e5b1b577e2df | 11785653 | Surgical Procedures, Operative[mh] | Haematopoietic stem cell transplantation (HSCT) is a complex and specialised medical procedure. It is used to treat haematological malignancies, which are now the 5th most common cancer in the UK. Approximately 37,320 cases are diagnosed every year and 1 in 19 people will be diagnosed with a haematological malignancy at some point during their life . Between 2006 and 2019, the average annual number of HSCTs increased by 5% per year . HSCT involves suppressing the immune system through ‘conditioning therapy’ (chemotherapy and possibly total body irradiation) to minimise the risk of healthy new stem cells being rejected by the body’s immune system. However, this does increase the risk of infection and can lead to neutropenic sepsis, a prominent cause of death for patients receiving HSCT. Consequently, protective isolation is used for infection control during HSCT. Protective isolation typically involves single-occupancy rooms for strict isolation and the restriction of visitors, and when discharged, standard practice requires patients to continue rigorous self-isolation for at least 3 months . The intensity of this isolation, in addition to treatment effects, makes HSCT immensely challenging for patients. It has been suggested that 20% of patients who experience ongoing health challenges, such as a haematological disorder or malignancy, have an increased risk of developing depression . This probability may be increased during HSCT due to the strict conditions of protective isolation and inadequate and controlled interaction with friends, family, and staff. There is also a concern that mental health issues may affect a patient’s health and recovery. This may potentially lead to longer stays in hospital and further costs to healthcare . After HSCT, patients were found to be at greater risk of negative psychological effects due to low satisfaction with visiting hours and not receiving adequate emotional support . When asked about visiting hours, most patients wanted fewer restrictions and the ability to spend more time with their loved ones. In contrast, other patients chose not to socialise with visitors as they perceived family and friends as a cause of further distress and so preferred to be alone. Importantly, for most patients, emotional well-being depended on contact with friends and family . Nevertheless, visitors are a prospective source of infection, as pathogens have been identified on the hands of visitors, which has emphasised the need for immunocompromised patients to have restrictions on visitors .
Search strategy This integrative, descriptive review followed the five stages proposed by Whittemore and Knafl : namely (1) problem identification, (2) literature search, (3) data evaluation, (4) data analysis, and (5) presentation. An integrative review is a specific review method that allows for the inclusion of a range of methodologies and combines empirical or theoretical literature to provide a comprehensive understanding of a particular phenomenon of concern. This review aimed to be as comprehensive as possible. The review focused on qualitative studies—those that use open-ended techniques, such as interviews and non-statistical techniques for analysis as qualitative studies allow concepts to be evaluated in context (e.g. how the experiences of patients in protective isolation impact psychological health). Searches were completed between November 2022 and March 2023. After an initial scoping search, inclusion and exclusion criteria (see Table in the Appendix) were developed to assist with refining the search. Using these criteria, a search of academic databases including CINAHL, Proquest, Medline, and ASSIA was performed using the keywords HSCT, protective isolation, psychological effects, and synonyms haematological stem cell transplant, bone marrow transplant, BMT, psychological impact, and emotional impact (both independently and combined using Boolean operators, such as ‘and’). Parameters of the search strategy were a date restriction from 2016 to 2023, research written in English, and included published peer-reviewed papers. Reference lists provided further sources of information through cited articles and books authored prior to 2016. These were included only if the title focused on psychological challenges that HSCT patients may face or added benefit or clarity to the review. Study selection Screening of research gathered from the search was completed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) , which also enabled the different phases of the search, application of inclusion/exclusion criteria, omission of unsuitable papers, and final included studies to be clearly documented (see Fig. ). Quality appraisal and data extraction The final included papers were reviewed for quality using the CASP (Critical Appraisal Skills Programme) qualitative checklist . The CASP tool consists of 10 questions: 9 addressing quality and 1 addressing ‘value’ (contribution to existing literature). All studies used a qualitative approach due to the focus of the research being on patient experiences and were considered to be of mixed quality by both authors (see Table in the Appendix). Study risk of bias was also assessed using CASP and each author assessed each study independently and in duplicate with disagreements resolved through consensus.
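As an illustration of how the keywords and synonyms described above can be combined with Boolean operators, the sketch below builds a generic query string; the actual field syntax of CINAHL, Medline, ProQuest, and ASSIA differs, so the output is an approximation rather than the review's exact search lines, and the concept groupings shown are assumptions for illustration.

```python
# Illustrative construction of a Boolean search string from the review's
# keywords and synonyms. Real databases each have their own field syntax,
# so this string is only an approximation of the strategy described above.
CONCEPTS = {
    "population": ["HSCT", "haematological stem cell transplant",
                   "bone marrow transplant", "BMT"],
    "exposure":   ["protective isolation"],
    "outcome":    ["psychological effects", "psychological impact",
                   "emotional impact"],
}

def or_block(terms):
    # Synonyms within one concept are combined with OR.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(concepts):
    # Concepts are combined with AND.
    return " AND ".join(or_block(terms) for terms in concepts.values())

print(build_query(CONCEPTS))
# e.g. ("HSCT" OR "haematological stem cell transplant" OR ...) AND ("protective isolation") AND (...)
```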
Screening of research gathered from the search was completed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) , which also enabled the different phases of the search, application of inclusion/exclusion criteria, omission of unsuitable papers, and final included studies to be clearly documented (see Fig. ).
The final included papers were reviewed for quality using the CASP (Critical Appraisal Skills Programme) qualitative checklist . The CASP tool consists of 10 questions: 9 addressing quality and 1 addressing ‘value’ (contribution to existing literature). All studies used a qualitative approach due to the focus of the research being on patient experiences and were considered to be of mixed quality by both authors (see Table in the Appendix). Study risk of bias was also assessed using CASP and each author assessed each study independently and in duplicate with disagreements resolved through consensus.
The initial search strategy identified 154 records. The PRISMA flowchart (see Fig. ) demonstrates the process used to exclude studies. Five studies remained and were used in this review. The findings of the primary papers are summarised in tabular form with descriptions of the key characteristics of each (see Table in the Appendix). All five included studies used qualitative methods . The studies were carried out in Italy , Australia , Iran , the USA , and China . No studies completed in the UK were identified. Four of the studies involved semi-structured interviews with 60 patients who had received HSCT treatment . One study used a survey completed by 441 patients . Two of the interview studies were carried out by nurse researchers , one by a trained psychiatrist , and one by an author with 10 years' experience within the subject area . One study focused on patients throughout treatment . The other four studies were carried out post-HSCT . Two studies explored the impact of protective isolation during HSCT . The other three studies explored the overall impact of HSCT treatment . Inductive thematic analysis was performed following the three stages described by Thomas and Harden . The first author applied the process outlined by Thomas and Harden and identified themes based on the psychological effects of protective isolation in HSCT. The second author independently reviewed each paper, assessing for common themes, and after discussion, the final four themes were agreed upon: (1) feeling disconnected, (2) contemplation, (3) loss of control, and (4) negative emotional states. Theme 1: feeling disconnected All five studies demonstrated how being confined to a room with limited contact resulted in a feeling of disconnection from others and society . This resulted in patients experiencing immense loneliness during their time in the hospital and afterwards in the community: The isolation is heavy, because you are actually alone and you lack support . Loneliness was often felt after just a few days of isolation, and the mental pain of being lonely and homesick was identified as being even less bearable than the physical pain: I felt that the physical pain was bearable. However, the mental pain and stress broke me down. I'm so lonely and so homesick . Patients also expressed feeling disconnected from society. They reflected on how their social world had diminished due to the physical restrictions of isolation . In addition, it was identified that protective isolation created a feeling of alienation from other human beings and suppressed emotions which would normally be shared with others: After the transplantation I felt that I have become another human being and I should suppress these emotions . Protective isolation gave some patients a sense of safety; they claimed that being disconnected from the outside world also protected them from its demands, making them feel safe, and some even described being alone as a salvation: Being in isolation is a salvation, because you could not cope being with others . However, in contrast, other patients felt unsafe and frightened. They perceived the isolation as a dangerous trap or as being imprisoned, which contributed to the negative emotions they experienced . Theme 2: contemplation Spending long periods alone in protective isolation provided many patients with time to spend in deep, reflective thought .
During protective isolation, HSCT patients felt discouraged and uncertain about their treatment and whether it would be successful, as well as fear about cancer recurrence: Tremendous fear that the cancer would come back and the inability to return to their pre-cancer level of functioning . Thinking about, and fear of, death was also common: I thought that I would die. I thought that it would end like that, as I was so bad! . Overall, it was acknowledged that fear and uncertainty are two main emotions experienced by patients during protective isolation . It needs to be acknowledged, though, that contemplation during protective isolation produced positive psychological effects for some patients. Patients stated they felt lucky to be given a second life and were grateful for the care they received . Other patients developed strong positive emotions due to recovery and sensing that new life had begun . Theme 3: loss of control Four papers revealed that patients experienced a loss of control during protective isolation. This may be a loss of identity due to the implications of being in isolation, a loss of their old selves before treatment, or a feeling of losing one's mind . During HSCT treatment and recovery, patients lost the capacity to work, became reliant on others, and lost their independence. This led to a loss of self-worth. Patients stated that they noticed large changes in their personalities. They also felt like a non-entity , felt they had lost their old selves, and needed to rediscover who they were. This feeling of powerlessness and loss of control was heightened due to the added restrictions and isolation: I did not expect it to be so difficult to feel so powerless . Patients also experienced cognitive symptoms during isolation, such as confusion and aprosexia. The confusion was driven by intense treatment and anxiety, and this often prevented planned isolation activities from being completed. One patient expressed that they were unable to use the Wi-Fi, stating, it was impossible because I was completely out of my mind . Other patients reported symptoms of aprosexia and memory loss. The physical, emotional, and cognitive symptoms experienced by patients during isolation were unpredictable and changed throughout the treatment process, suggesting a loss of control . Theme 4: negative emotional states All studies revealed that patients experienced a range of negative emotional states, including guilt, anxiety, and depression . Patients stated how family and friends played a large part in providing emotional support throughout treatment. However, this could have a terrible impact on loved ones, causing huge disruption to their lives. Patients often felt guilty and a 'burden' due to how their illness and treatment had impacted their family. These feelings were amplified for patients who had children : I couldn't see my daughter and be close to her. That hurt me most . Stress and anxiety were commonly experienced, and their severity increased with the time spent in isolation . In addition to feeling anxious about treatment and future outcomes, patients felt stress and worry about loved ones . Protective isolation was seen to increase the level of this stress. Anxiety was also experienced about the transition from being in isolation in the hospital to being at home and having reduced restrictions: Well, I've just been anxious to getting back to being allowed to do things .
Depression was experienced and increased with time spent in isolation: Now I feel like time is getting longer and longer, and I feel very depressed and lonely . Patients felt that depression had an impact on their interaction with their family or carers. Some patients chose to have no interaction with their loved ones or the outside world: I couldn’t even Facetime with them [family] because I was so depressed . Depression had a varying impact on recovery, function, and quality of life throughout HSCT .
All five of the studies identified patients feeling disconnected from others and society . This feeling of disconnection resulted in extreme loneliness after only a few days in isolation and mirrors findings in other studies . To maintain social effectiveness, stay connected to society, and receive necessary social support, it is important for patients to maintain relationships with others . These may be loved ones, members of staff, peer support, or volunteers . However, it has been acknowledged that some patients are more at risk of having problems in maintaining relationships during isolation. These include male patients, patients with lower education levels, those with higher levels of pain post-transplant, those having had fewer chemotherapy cycles before HSCT, and those with low satisfaction with visiting hours . This suggests that it is necessary to ensure that all HSCT patients have access to ongoing social support and to enhanced psychological services during protective isolation to mitigate and minimise these recognised risks. Safety was also identified as an issue for patients in isolation, experienced both negatively as imprisonment and positively as an enhanced feeling of protection due to the sterile conditions and the private environment. Earlier literature found that being cared for in a single bedroom enabled patients to feel secure and able to focus on themselves and their recovery . It may also help to make improvements to the isolation room itself to make it more comfortable and homely for the patient. Art interventions, such as the study Open Window , have been shown to have a positive influence on the patient experience and QoL through enhancing the environment and providing stimulation . Protective isolation provided patients with time to contemplate their present circumstances and future lives . Patients were found either to spend time feeling fearful and uncertain or to find ways of feeling positive and hopeful. Previous research reported patients' experiences of fear during HSCT and the significant influence of the fear of recurrence on patients' psychological well-being and quality of life . This was magnified in female patients and those with pre-existing subclinical depressive symptoms . Various activities have been suggested to help occupy and refocus patients' thoughts to more positive areas, including therapeutic music videos and creative arts . Nonetheless, contemplation is important and required to enable some patients to regulate their own thoughts and feelings and to enable effective adaptation to the isolation environment . It is suggested that, for effective coping, patients are required to find meaning in the experience; this meaning-making is considered essential in HSCT to minimise the psychological impact of HSCT isolation and assist patients in finding positive personal growth . A loss of control over both identity and mind was often reported . After HSCT, patients return home to live very different lives to those before treatment. They are unable to socialise and return to work for some time. It is evident that this leads to a loss of independence, identity, and self-worth . Other studies have discussed the challenges patients face post-HSCT in the career and financial domains and how these affect their ability to provide family support, leading to a sense of uselessness .
A review that examined the effects of isolation on memory found that social isolation combined with loneliness had an adverse effect on cognitive functioning . It suggested that cognitive symptoms reduce confidence in the ability to manage other symptoms experienced during HSCT, thus affecting the physical and psychological well-being of patients . Negative emotional states, such as guilt, anxiety, and depression, are commonly reported as part of the HSCT protective isolation patient's experience and have been described as 'isolation as a source of suffering' , linking psychological distress to both the emotional and physical demands of isolation. Other studies provide evidence that protective isolation can lead to depression and anxiety . In fact, it has been posited that 20% of patients develop a psychological disorder, with female patients being more at risk . Pre-transplant psychosocial issues also predict patients experiencing depression and poor quality of life during isolation , highlighting a need to be aware of the patient's mental health before transplant to predict those at increased risk of psychological issues, especially given that pre-transplant depression and anxiety may decrease a patient's overall survival time after transplantation . Evidence from the wider literature, combined with the findings from this review, indicates that there are considerable psychological effects of being in protective isolation for patients during HSCT. These findings have enabled several suggestions to improve the experience of patients in protective isolation post-HSCT. These include pre-HSCT mental health assessments to identify patients at risk, increasing staff knowledge of the potential psychological impacts, clear referral pathways if required, and increased support and education for the family and loved ones of HSCT patients.
This review explored research on the psychological impact of protective isolation for HSCT patients. It addressed the need for an increased focus on this area of care after HSCT, particularly in the UK where research is lacking, as outlined by the Clinical Commissioning Policy on HSCT . It compared findings to an earlier review and made recommendations for improving the management of patients post-HSCT experiencing protective isolation, to contribute to an improved quality of life. This review had several limitations. There was a paucity of available research that focused on the psychological impact of protective isolation in HSCT. This is not surprising due to the limited patient population and their vulnerability, which may have affected participation in research. This resulted in only five studies being included in the thematic analysis. Sample sizes were also small in the reviewed studies, although this is not unusual for qualitative methods, and only three databases were searched, meaning quality studies could have been missed. Importantly, no appropriate studies were found based on patient experiences in the UK. However, the recommendations may still be useful and provide insight into enhanced practice in the NHS and UK healthcare delivery around improving the patient experience. Nevertheless, the findings from all studies were complementary and, when taken together, have provided useful insight into the psychological experience of HSCT patients. The review demonstrated that there are considerable psychological effects of being in protective isolation during HSCT and made recommendations for future management to minimise the impact on patients. Due to the increasing number of HSCTs being carried out in the UK, it is essential that more research is conducted that focuses on this impact of treatment and patient management. Further research on protective isolation in the hospital and post-discharge for patients in the UK is urgently required, as currently there is little evidence available to inform effective patient support or best-practice care within the context of the NHS. Additionally, more research focus is required on the psychological impact of protective isolation. This should cover all aspects of mental health conditions, including the impact on cognitive function. A previous review, conducted 8 years ago, identified similar psychological effects impacting HSCT patients, suggesting that little progress has been made to improve the management of these symptoms or the patient experience. HSCT is undoubtedly a necessary treatment approach for haematological disorders, but when combined with the treatment process, the symptoms produced, and protective isolation, it is an extremely difficult and intense experience.
Single-Cell and Spatial Multi-Omics Analysis Reveal That Targeting JAG1 in Epithelial Cells Reduces Periodontal Inflammation and Alveolar Bone Loss | ac858255-c96a-4f6a-8698-a2f723a0355b | 11675447 | Biochemistry[mh] | Periodontitis is a prevalent and severe chronic inflammatory disease that impacts the supporting structures of the teeth, leading to tooth loss if left untreated . Despite advancements in clinical treatment options such as scaling, root planing, and surgical interventions, effective treatments targeting the mechanisms underlying periodontitis progression remain elusive . A critical aspect of the pathophysiology of periodontitis is the disruption of the gingival mucosal barrier, a primary interface directly exposed to food and external microbes. This barrier is susceptible to damage by mechanical, chemical, thermal, and biological stimuli leading to dysregulated immune responses and persistent inflammation; this damage not only results in alveolar bone destruction but also significantly diminishes patient quality of life and poses risks to their overall health . The resident cells within the gingival mucosa play pivotal roles in regulating immune responses, repairing periodontal tissues, and maintaining barrier homeostasis . For example, the epithelial cells contribute to periodontal homeostasis through antigen presentation , whereas the immune cells are important in pathogen clearance and cytokine release . Our previous studies revealed the interactions among these resident cells that compromise gingival barrier stability and induce periodontitis, providing a theoretical basis for developing targeted therapeutic interventions. However, research related to periodontal treatment has focused predominantly on animal models, particularly mice . Recent studies have highlighted significant functional and immunological differences between the human and the murine mucosal barriers, which limits the translational potential of findings from murine models to precise human therapies . Understanding the differences between human and murine gingival mucosal cells is essential for developing targeted treatment strategies. While single-cell RNA sequencing (scRNA-seq) offers a powerful approach to explore interspecies differences with high resolution , integrating techniques such as spatial transcriptomics and spatial proteomics enhances our understanding of tissue-specific cellular behaviors . The mucosal immune system is pivotal in the development and progression of inflammatory diseases like periodontitis . Previous applications of scRNA-seq have successfully characterized the heterogeneity of epithelial and immune cells in conditions such as inflammatory bowel disease , revealing critical insights into cell-specific functions and interactions. Furthermore, these advanced methodologies enable detailed investigations of communication networks between immune and epithelial cells, which are vital for regulating immune responses and preserving mucosal barrier integrity during inflammation . This study aimed to utilize scRNA-seq, spatial transcriptomics, and spatial proteomics to compare epithelial and immune cell populations in the human and murine gingival mucosa. Our findings highlight the close communication between epithelial cells and macrophages in periodontitis. Additionally, spatial transcriptomics and proteomics identified key genes and proteins linked to basal layer inflammation under these conditions. 
Targeting the JAG1–NOTCH2 axis effectively reduced periodontal inflammation and alveolar bone loss. Overall, this research emphasizes the importance of interspecies comparisons for enhancing the precision and efficacy of therapeutic strategies in inflammatory diseases.
2.1. Single-Cell and Spatial Transcriptomic Analysis Reveals Close Communication Between Epithelial Cells and Macrophages in Periodontitis To investigate the cellular landscape of mucosal defense responses in periodontitis, we conducted a cross-species single-cell RNA sequencing (scRNA-seq) analysis on mucosal samples from healthy and periodontitis patients, as well as healthy and periodontitis model mice ( A). After applying quality control metrics and utilizing the Seurat V4 R package, we obtained a total of 69,670 cells from 21 human scRNA-seq datasets, with a median of 24,376 genes per cell, and 24,205 cells from six mouse scRNA-seq datasets, with a median of 21,861 genes per cell . Unsupervised clustering of the scRNA-seq data from both species revealed ten major shared cell types. Uniform manifold approximation and projection (UMAP) clustering highlighted various cell lineages, including endothelial cells, fibroblasts, epithelial cells, T cells, NK cells, plasma cells, pericytes, myeloid cells, B cells, mast cells, and neurons ( B). Notably, further comparisons of marker gene expression within these cell populations revealed a high degree of similarity between human and mouse gingival mucosal cells, particularly in epithelial cells, myeloid immune cells, and neurons, which presented the highest levels of conserved gene expression across both species ( C). Epithelial cells are crucial components of both physical and biological barriers in mucosal defense . We extracted and integrated gingival mucosal epithelial cells from both species for clustering analysis. The epithelial cell subpopulation classification was consistent between humans and mice, categorizing the cells into three main subpopulations: basal, spinous, and outer layers ( D). We observed consistent proportions of these subpopulations, with basal cells being the most abundant, and outer cells the least abundant. However, the proportion of basal cells in mouse gingival epithelial cells was greater than that in human gingival epithelial cells. The distribution patterns of the epithelial cell subpopulations revealed significant differences, particularly in the basal layer, where human cells predominantly clustered in a single basal subgroup (Basal Layer 1), whereas mouse cells clustered in a different subgroup (Basal Layer 2) . Further comparison of marker gene expression between these epithelial cell subpopulations revealed similar expression patterns. In humans, Basal Layer 1 cells predominantly expressed COL7A1 , KRT5 , and KRT14 , whereas Basal Layer 2 cells in mice presented elevated expression of Txn and Krt17 . Interestingly, Basal Layer 2 cells in mice also expressed markers of both Basal Layer 1 and spinous cells, suggesting that Basal Layer 2 cells may represent a transitional state between basal and spinous cells . To further explore epithelial cell differentiation trajectories, we performed pseudotime trajectory analysis using Monocle 3. The differentiation trajectory of epithelial cells is largely consistent between species, following a pathway from Basal Layer 1 to Basal Layer 2, then to spinous, and finally to outer layers. However, dynamic gene expression changes during epithelial cell differentiation exhibited species-specific variations ( E). For example, KRT14 remained highly expressed in both the basal and the spinous layers of humans and mice, whereas its expression was downregulated in the outer layers. 
Conversely, LY6D expression was increased in human basal cells, maintaining high levels in the spinous and outer layers, but was downregulated in the outer layers of mice ( F,G). Spatial transcriptomics revealed the spatial distribution of gene expression and cellular interactions within periodontitis-affected gingiva. Focusing on the epithelial cell subpopulations on the basis of spatial location and cell marker genes, we identified three major subpopulations, namely, the basal, spinous, and outer layers, which was consistent with the scRNA-seq data ( H). Additionally, we conducted marker analysis for the predominant immune cell types in periodontitis, revealing a significant increase in the number of myeloid immune cells, which were primarily localized in the epithelial basal layer, while no significant changes were observed for B cells or T cells. These findings suggest that myeloid immune cells may interact with basal epithelial cells during periodontitis. 2.2. Regulation of Macrophages by Basal Epithelial Cells To further explore the role of myeloid immune cells in periodontitis and their interactions with basal epithelial cells, we reclassified the myeloid immune cells using scRNA-seq analysis. We identified six major cell types common to both humans and mice. UMAP clustering highlighted distinct immune lineages, including M1 and M2 macrophages, dendritic cells (DCs), myeloid dendritic cells (MDCs), and neutrophils ( A). The proportions of M2 cells, DCs, and MDCs in the mouse gingival mucosa were similar to those in the human gingival mucosa, while the M1 cell proportions were significantly greater in the human samples than in the mouse samples, where neutrophils were more abundant ( B). Further comparison of cell-specific marker gene expression revealed similarities in the expression profiles between the two species. In humans, M1 macrophages primarily expressed genes such as CLEC10A , NDRG2 , PKIB , FCERIA , and CD86 , which were not significantly expressed in the mouse samples, likely due to the lower abundance of M1 cells in the latter ( C,D). Heatmap analysis revealed that proinflammatory macrophages in mice presented upregulated expression of genes such as Tomm6 , Znf705a , Mia2 , Nrd1 , Tff2 , Gda , Tf , Krt24 , and Arl5c , whereas human proinflammatory macrophages presented increased expression of PTMA , TMSB4X , MALAT1 , TMSB10 , H3F3B , EIF1 , S100A11 , SRGN , NEAT1 , and IGKC ( E). Differential gene expression analysis among these subpopulations provided additional insights. In M2 macrophages, genes such as Atp6v0c , Tomm6 , Nrd1 , Znf705a , F10 , Tff2 , Gda , Calml5 , Tf , and Cyp4f2 were upregulated in mice, while PTMA , EIF1 , SRGN , NEAT1 , IGKC , CXCL8 , and CCL3 were upregulated in humans ( F). These findings indicate distinct immunoregulatory mechanisms employed by macrophage subtypes during periodontitis, emphasizing both shared and species-specific responses. By utilizing high-resolution single-cell annotation, we predicted cell communication pairs between epithelial cells and macrophages during periodontitis through the coexpression of ligand–receptor pairs on the basis of interaction structures. Strong interactions were observed between basal epithelial cells and macrophages. Notably, the interaction between human basal epithelial cells and M1 macrophages was stronger than that between mouse basal epithelial cells and M1 macrophages, suggesting interspecies differences in the regulation of macrophages by basal epithelial cells during periodontitis ( G). 
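The cell–cell communication analysis described above infers interactions from the coexpression of ligand–receptor pairs between a sender and a receiver population. The specific tool and scoring scheme used by the authors are not stated here, so the sketch below is only a minimal, generic illustration in Python of one common heuristic: scoring a pair (JAG1–NOTCH2, taken from the results) as the product of the mean ligand expression in basal epithelial cells and the mean receptor expression in M1 macrophages. The toy expression values and cell labels are invented for illustration.

```python
import pandas as pd

# Toy log-normalised expression matrix (cells x genes); values are invented.
expr = pd.DataFrame(
    {"JAG1": [2.1, 1.8, 0.2, 0.1], "NOTCH2": [0.3, 0.2, 2.5, 2.2]},
    index=["basal_1", "basal_2", "m1_1", "m1_2"],
)
cell_type = pd.Series(
    ["Basal", "Basal", "M1_macrophage", "M1_macrophage"], index=expr.index
)

def lr_score(ligand: str, receptor: str, sender: str, receiver: str) -> float:
    """Simplified ligand-receptor score: mean ligand expression in the sender
    population multiplied by mean receptor expression in the receiver population."""
    lig = expr.loc[cell_type == sender, ligand].mean()
    rec = expr.loc[cell_type == receiver, receptor].mean()
    return float(lig * rec)

score = lr_score("JAG1", "NOTCH2", sender="Basal", receiver="M1_macrophage")
print(f"JAG1-NOTCH2 score (Basal -> M1 macrophage): {score:.2f}")
```

Dedicated tools such as CellPhoneDB or CellChat build on this idea but additionally assess statistical significance, for example by permuting cell-type labels.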
To explore the ligand–receptor interactions between basal epithelial cells and M1 macrophages, we conducted a comparative analysis. The key ligand–receptor pairs in humans included CD99-PILRA , TGFB2-TGFβ receptor 1, JAG1-NOTCH2 , MIF-TNFRSF14 , CSF3-CSF3R , and ANXA1-FPR1 . In mice, the principal ligand–receptor pairs were GRN-CLEC4M , ANXA1-FPR2 , ICAM1-ITGAL , COPA-P2RY6 , and JAG1-NOTCH2 ( H). Interestingly, both species exhibited JAG1–NOTCH2 ligand–receptor interactions, highlighting a common mechanism for epithelial cell–M1 macrophage communication during periodontitis ( H). These findings suggest that although signaling pathways exhibit species-specific differences, the regulatory interactions between epithelial cells and M1 macrophages are conserved, with JAG1–NOTCH2 potentially playing a key role. 2.3. Analysis of Basal Epithelial Cell Activation Pathways and Downstream Genes During Periodontitis To further investigate the roles of epithelial cells during periodontitis, we analyzed gene expression patterns and functional pathways within these cells. Our analysis revealed distinct gene expression patterns and functional roles among the different epithelial cell subpopulations during periodontitis. Specifically, the genes expressed in basal cells were primarily enriched in inflammation-related pathways, including “Positive Regulation of MAPK Cascade”, “Neutrophil Degranulation”, and “Response to Cytokine Stimulus” ( A). In contrast, the genes expressed in the spinous and outer layers were primarily associated with epithelial proliferation and repair processes ( B,C). We focused on comparing the basal epithelial cells of periodontitis-affected humans and mice and analyzed the differentially enriched gene pathways. In both species, these cells were enriched in inflammation-related pathways, indicating a conserved inflammatory response ( D,E). However, compared with their mouse counterparts, the human gingival epithelial cells demonstrated a unique enrichment in immune cell factor-related pathways ( D). In addition to shared inflammatory pathways, species-specific differences in gene expression during periodontitis were also observed ( F). For example, genes such as Actb , Dmkn , and Gsto1 were significantly upregulated in mice, whereas ACTG1 , LAMC2 , and CTSZ were significantly upregulated in humans ( G). These findings highlight the crucial role of epithelial cells in immune regulation during periodontitis, particularly concerning pathways related to interactions with immune cells. While both species display similar core inflammatory responses, species-specific mechanisms contribute to the immunoregulatory functions of epithelial cells during periodontitis. 2.4. Spatial Proteomics and Spatial Transcriptomics Verify Genes and Proteins Associated with Basal Layer Inflammation in Periodontitis Given that proteins are direct executors of biological functions, revealing their spatial expression is crucial for determining protein localization and function within tissues. We conducted a combined spatial proteomics analysis of basal epithelial cells from humans and mice with periodontitis ( A). By integrating data from previous scRNA-seq analyses that identified commonly upregulated genes in the basal epithelial cells of both species, we identified eight proteins with high coexpression: ACTB, CTSZ, DMKN, GSTO1, HBB, KRT17, LAMC2, and RAB1B ( B). 
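The integration step that yields the eight shared hits amounts to intersecting the genes commonly upregulated in basal epithelial cells in the scRNA-seq data with the proteins detected by spatial proteomics. The short Python sketch below illustrates this with truncated, partly invented input lists built around the names reported above; the authors' full differential-expression and proteomics outputs would, of course, be much longer.

```python
# Hypothetical, truncated input lists; the real lists would contain many more entries.
scrna_upregulated = {"ACTB", "CTSZ", "DMKN", "GSTO1", "HBB", "KRT17",
                     "LAMC2", "RAB1B", "ACTG1", "TXN"}
spatial_proteins = {"ACTB", "CTSZ", "DMKN", "GSTO1", "HBB", "KRT17",
                    "LAMC2", "RAB1B", "KRT5", "COL7A1"}

# Genes upregulated at the RNA level whose protein products are also detected
# in the basal layer by spatial proteomics.
shared_hits = sorted(scrna_upregulated & spatial_proteins)
print(f"{len(shared_hits)} co-expressed candidates: {', '.join(shared_hits)}")
```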
Spatial transcriptomics revealed significant upregulation of ACTB, GSTO1, RAB1B, HBB, DMKN, and CTSZ in basal epithelial cells during periodontitis ( C,D). Among these genes, CTSZ and GSTO1 presented the highest expression levels associated with inflammation and immune modulation, prompting further investigation into their potential regulation of epithelial JAG1. siRNA-mediated gene silencing experiments demonstrated that CTSZ inhibition significantly reduced JAG1 expression in epithelial cells upon LPS stimulation, whereas no notable changes were observed for GSTO1 ( E,F). These results suggest that CTSZ may play a key role in the upregulation of JAG1 in epithelial cells during periodontitis, potentially driving a proinflammatory response in macrophages ( G). 2.5. Targeting the JAG1–NOTCH2 Axis Reduces Periodontal Inflammation and Alveolar Bone Loss in Periodontitis Considering the critical role of the JAG1–NOTCH2 axis in macrophage-mediated inflammation and our prior findings of a significant upregulation of the JAG1–NOTCH2 ligand–receptor pair in both human and mouse gingival epithelial cells and macrophages during periodontitis, we investigated the specific role of this axis in epithelial–macrophage interactions. We first measured the expression levels of JAG1 and NOTCH2 across various cell types. JAG1 was primarily expressed in stromal cells, particularly in epithelial cells, whereas NOTCH2 was predominantly expressed in myeloid cells ( A). We compared the expression of JAG1 and NOTCH2 under inflammatory and normal conditions and found that JAG1 expression in basal cells and NOTCH2 expression in myeloid cells were significantly elevated under inflammatory conditions ( B). Spatial transcriptomics further confirmed that JAG1 expression in basal epithelial cells was markedly increased in the context of periodontitis, with concurrent upregulation of NOTCH2 in macrophages ( C). To validate the role of the JAG1–NOTCH2 axis in periodontitis, we employed a ligature-induced periodontitis mouse model and administered localized siRNA to suppress JAG1 expression . We then examined the effect of siJAG1 on JAG1 expression at the mRNA level using RT-qPCR. JAG1 expression was reduced after siJAG1 treatment ( D). Immunofluorescence (IF) staining showed that JAG1 expression was significantly reduced in periodontal tissue after siJAG1 treatment . Micro-CT imaging indicated that the siRNA treatment significantly reduced alveolar bone resorption in the periodontitis model mice ( E,F). Additionally, hematoxylin and eosin (HE) staining revealed that siJAG1 treatment decreased alveolar bone loss in the affected mice ( G). Finally, we compared the expression levels of periodontitis-related genes, such as Il1b and Tnf, before and after treatment and found that their expression was notably downregulated ( H). IF staining showed that TNF-α expression was significantly reduced, while ARG1 expression was significantly upregulated, in periodontal tissue after siJAG1 treatment . These findings indicate that the JAG1–NOTCH2 axis serves as a key mediator of proinflammatory crosstalk between epithelial cells and macrophages during periodontitis, highlighting the potential therapeutic implications of targeting this pathway to reduce inflammation and alveolar bone loss in patients with periodontal disease.
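The RT-qPCR comparisons of Il1b and Tnf before and after siJAG1 treatment are most commonly quantified with the 2^-ddCt method, although the quantification approach and reference gene are not stated in the text. The sketch below is therefore only an illustrative calculation under that assumption, using Gapdh as a hypothetical housekeeping gene and invented Ct values.

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of a target gene versus a control group using 2^-ddCt.

    dCt  = Ct(target) - Ct(reference) within each group
    ddCt = dCt(treated) - dCt(control)
    """
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented Ct values: Il1b in siJAG1-treated vs untreated periodontitis tissue,
# normalised to a hypothetical Gapdh reference.
fold_change = relative_expression(ct_target=26.0, ct_ref=18.0,
                                  ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(f"Il1b fold change after siJAG1 treatment: {fold_change:.2f}")  # 0.25, i.e. downregulated
```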
In this study, we integrated scRNA-seq data, spatial transcriptomics, and spatial proteomics from both human and murine models to explore the cellular and molecular mechanisms underlying differences and similarities in the gingival mucosal barrier during periodontitis, supported by in vitro and in vivo experimental validation. We discovered a regulatory mechanism involving the JAG1–Notch signaling axis between basal epithelial cells and M1-type macrophages, which plays a critical role in the progression of periodontitis. A comprehensive analysis of these datasets provided key insights into the conserved and divergent cell populations, gene expression patterns, and cell–cell communication pathways involved in the inflammatory processes of periodontitis. These findings not only deepen our understanding of the pathogenesis of periodontal disease but also offer potential targets for therapeutic interventions aimed at controlling inflammation and promoting tissue repair. The integration of scRNA-seq data, spatial transcriptomics, and spatial proteomics has facilitated a detailed characterization of cellular populations across species . Our study highlights the similarities and critical differences in the cellular and molecular actions within the gingival mucosal barrier in humans and mice during periodontitis. Traditionally, many studies rely solely on mouse models to understand periodontal disease ; however, our research emphasizes interspecies similarities and crucial discrepancies. For example, we observed conservation in the cell types of the gingival mucosal barrier and molecular functions across the examined species. The composition of barrier cell populations and their expressed markers showed a high degree of similarity between humans and mice, with a similar analysis of epithelial and immune cell subpopulations revealing high congruence. However, we also identified species-specific differences in the gingival mucosal barrier, particularly in the distribution and activation states of epithelial and immune cell subgroups. Compared with the human samples, the murine model samples exhibited differences in the functions of basal epithelial cells. The analysis of immune cells revealed that the murine model had a more pronounced inflammatory response and greater neutrophil infiltration than the human model did, suggesting that mouse models might overestimate certain inflammatory aspects of the disease. These differences highlight the limitations of solely relying on mouse models and indicate that the results from animal studies may not fully translate to human disease without careful consideration of these discrepancies. By comparing these species, we can refine our preclinical research approaches, ensuring that therapeutic strategies are better tailored to human biology, potentially leading to more precise and effective treatments for periodontal disease. We also identified a potential mechanism by which epithelial cells regulate macrophages to exacerbate periodontitis. We determined that JAG1 is one of the key molecules by which epithelial cells upregulate the inflammatory state of macrophages. Studies have shown that JAG1 (Jagged1), a ligand in the Notch signaling pathway, can influence cell function and activity through the activation of this pathway . 
Specifically, activation of Notch enhances M1 gene expression and the proinflammatory response in macrophages ; macrophage Notch can be activated by LPS-mediated TLR4 stimulation, resulting in the production of proinflammatory cytokines (including TNF, IL-6, IL-10, and IL-12) . However, its role and potential mechanisms of action in periodontitis remain unclear. Therefore, we first clarified through in vitro experiments that epithelial cells can activate the Notch signaling pathway in macrophages via JAG1, promoting their polarization to M1-type macrophages and producing proinflammatory cytokines; the suppression of epithelial cell JAG1 expression through siRNA could eliminate its proinflammatory regulatory effect on macrophages. Furthermore, the use of siJAG1 in a murine periodontitis model significantly reduced the local inflammatory cytokine levels and alveolar bone resorption. These findings indicate that epithelial cells regulate macrophages through the JAG1–Notch axis in both human and murine periodontitis and serve as potential targets for the treatment of periodontitis. Our approach is novel in that it bridges the gap between purely murine-based research and the need for models more relevant to humans. While previous studies based on animal models have laid the groundwork for understanding the inflammatory processes of periodontitis, the differences we identified suggest that integrating human data early in the treatment development process is necessary. By identifying these interspecies differences, we can move toward more informed strategies to develop treatments that are more likely to succeed in human clinical trials. Notably, periodontitis is not only a prevalent oral disease but also closely linked to systemic conditions such as metabolic syndrome. Evidence suggests that the chronic inflammatory state associated with periodontitis may exacerbate manifestations of metabolic syndrome, including insulin resistance and dysregulated lipid metabolism . Therefore, we propose that targeting the JAG1-Notch pathway could not only reduce the local inflammation associated with periodontitis but also have a positive clinical impact on systemic diseases such as metabolic syndrome. Our study highlights the role of the JAG1/Notch axis in epithelial cells and M1 macrophages at the gingival mucosa barrier. While the gingival mucosa shares similarities with other mucosal barriers, including immune cell composition and antimicrobial peptide production, it has unique characteristics, such as a propensity for strong inflammatory responses against localized pathogens . These differences may limit the generalizability of our findings to other mucosal barriers. However, our cross-species and multi-omics strategy can be extended to other mucosal immunity studies to identify potential therapeutic targets. While our study reveals the potential role of JAG1 in periodontitis and provides initial validation through a mouse model, we acknowledge several limitations in our current research. First, the ligature-induced periodontitis model used in this study is a classical experimental model that effectively induces localized inflammation and periodontal tissue destruction. But human periodontitis generally follows a chronic course . Therefore, the acute nature of this model may not fully replicate the complex pathophysiological processes of human periodontitis. 
Additionally, the downstream signaling cascade through which epithelial cells regulate macrophage polarization via the JAG1/Notch axis received relatively little attention in our study. Further studies are needed to elucidate how the JAG1/Notch axis regulates macrophage function and contributes to the inflammatory response. In summary, this study demonstrated the ability to integrate scRNA-seq data from human and murine models to study gingival inflammation. Our results underscore critical immune–epithelial interactions and emphasize the importance of considering species-specific differences. Insights gained from comparing human and mouse data provide a more detailed understanding of periodontal disease and pave the way for developing more precise and effective treatment strategies that account for these interspecies differences. By advancing our understanding of the pathogenesis of periodontal disease, this research lays the foundation for future studies aimed at developing more targeted and human-specific therapeutic approaches for inflammatory periodontal diseases.
4.1. Data Acquisition The scRNA-seq data used in this study were obtained from publicly available datasets deposited in the Gene Expression Omnibus (GEO) database. Specifically, human gingival tissue scRNA-seq data were retrieved from the GSE164241 dataset. Mouse gingival tissue scRNA-seq data were obtained from the GSE228635 and GSE254766 datasets. These datasets were selected on the basis of their comprehensive profiling of immune and epithelial cell populations in gingival tissues, as well as their relevance to the study objectives. The spatial transcriptomics data used in this study were obtained from the Genome Sequence Archive (GSA) database ( https://bigd.big.ac.cn/gsa-human ; accessed on 28 September 2024) under the accession code HRA003217. 4.2. Patient Recruitment and Tissue Collection We collected human gingival samples from patients undergoing tooth extraction procedures at the Hospital of Stomatology, Sun Yat-sen University, Guangzhou. Each sample complied with the Chinese GCP, ICH GCP, and relevant regulations and was approved by the Medical Ethics Committee of the Hospital of Stomatology, Sun Yat-sen University (No. KQEC-2022-14-01). Informed consent was obtained from the participants before they were included in our study. Patients with a history of any medical condition other than periodontitis were excluded. The periodontitis samples presented periodontal pockets with depths >4 mm and bleeding on probing. The healthy samples showed no signs of periodontal disease and presented no deep periodontal pockets (depth <3 mm) or bleeding on probing. 4.3. Spatial Proteomics of the Human Gingival Samples The spatial proteomics workflow includes several key steps: (1) tissue sectioning and staining; (2) microdissection of regions of interest; (3) microsample preparation; (4) mass spectrometry; (5) database search; (6) data analysis. The gingival samples were embedded in paraffin and sliced into 5–10 µm sections, which were attached to MMI (MMI GmbH, Eching, Germany) membranes on slides. The slides were subsequently stained with HE to assess tissue morphology and inflammation via laser capture microdissection (LCM, Eching, Germany). For LCM, the slide to be tested was placed on the microscope slide frame, and then the cut sample was placed on the fixed slide. Next, the MMI software (v5.1 # 262; Eching, Germany) was opened, the entire sample was scanned with a 4× mirror, the area to be cut was dragged and selected, and the focal length was adjusted. Subsequently, the system was switched to the 10× magnification area, and the focal length was adjusted again. Then, a mouse or electronic pen was used to circle and select the area to be cut. Finally, the “CUT CELL” button was clicked to cut. After the cutting process was complete, the centrifuge tube covered with an adhesive coating was placed upside down on a glass slide to adhere the cut sample to it. Five microliters of lysis buffer was added to the lid of the EP tube containing the tissue sample, which was subsequently transferred to a new EP tube. Next, a 95 °C heat denaturation treatment was performed in the PCR machine for 10 min. After 10 min of noncontact ultrasonic treatment, the samples were centrifuged at 10,000× g for 1 min. The supernatant was diluted with 5 mL of 50 mM TEAB, and 2 µL of 0.5 µg/mL trypsin was added; the reaction was performed at 37 °C overnight in a PCR machine. Finally, an SDB-RPS column was used for the desalination treatment, followed by vacuum drying and mass spectrometry detection. 
The mass spectrometry data were collected via computer software to obtain the identification of and quantitative information on the peptides and proteins. The databases used in this study were Mus musculus UP000000589 (protein count: 54,822, database: UniProt, download time: 7 March 2024) and Homo sapiens SP (protein count: 20,434, database: UniProt, download time: 7 March 2024). 4.4. Animal Experimentation Ethical considerations: The use of publicly available datasets in this study complied with the data access and usage policies of the GEO database. All animal experiments were conducted in accordance with protocols approved by the Institutional Animal Care and Use Committee (IACUC) of Sun Yat-sen University (SYSU-IACUC-2024-002851). Eight-week-old male C57BL/6J mice, purchased from the Sun Yat-sen University Animal Supply Center, were used in this study. The mice were housed under specific pathogen-free (SPF) conditions with a 12 h light/dark cycle and provided access to food and water ad libitum. To induce gingival inflammation, the mice were anesthetized with 4% isoflurane, and a 5–0 silk ligature was tied around the maxillary second molars, with the ligature situated in the gingival sulcus, to induce periodontitis. For the in vivo si-JAG1 treatment, an siJAG1 solution was locally injected into the gingival mucosa after ligature placement. si-JAG1 (sense: 5′-GUGCCAGUUAGAUGCAAAUTT-3′, antisense: 5′-AUUUGCAUCUAACUGGCACTT-3′) was purchased from Kidan Bioscience (Guangzhou, China) and was modified with cholesterol to improve in vivo delivery. The injection solution was prepared with enzyme-free physiological saline, and the injection dose for each animal was 3 nmol. An equal amount of vehicle (enzyme-free physiological saline) was locally injected into the gingival mucosa after ligature placement in the control group. The mice were monitored daily, and after 10 days, they were sacrificed via CO 2 euthanasia followed by cervical dislocation. 4.5. Tissue Collection and Processing for the Animal Study After euthanasia, the periodontal tissues were harvested. The tissues were immediately placed in ice-cold PBS and processed for histological analysis and gene expression studies. For the histological examination, the tissues were fixed in 4% paraformaldehyde (PFA) for 24 h, embedded in paraffin, and sectioned at a thickness of 5 μm. The sections were stained with HE to assess tissue morphology and inflammation. For the gene expression analysis, the fresh gingival tissues were snap-frozen in liquid nitrogen and stored at −80 °C until RNA extraction. RNA was extracted using the TRIzol reagent (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer’s protocol. RNA purity was measured using a Nanodrop spectrophotometer, ensuring A260/A280 ratios between 1.8 and 2.0 and A260/A230 ratios above 2.0 for all samples. Reverse transcription was performed using a reverse transcription kit (Takara, Tokyo, Japan). Quantitative real-time PCR (qRT–PCR) was conducted using SYBR Green Master Mix (Thermo Fisher Scientific), and the expression levels of the target genes, including inflammatory cytokines, were normalized to that of the housekeeping gene GAPDH. 4.6. Cell Culture The epithelial cells were cultured in 24-well plates. When the cells reached 50% confluence, they were treated with siRNA (at a concentration of 10 nM). After 48 h, the cells were collected, and total RNA was extracted using TRIzol reagent for quantitative RT–PCR to validate the expression level of JAG1.
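For clarity, the relative quantification behind the qRT–PCR readouts in Sections 4.5 and 4.6 (normalization of target-gene Ct values to GAPDH, then comparison with the control group) follows the standard 2^−ΔΔCt scheme. The short R sketch below illustrates that arithmetic only; the sample names, group labels, and Ct values are hypothetical placeholders rather than data from this study.

```r
# Illustrative 2^-delta-delta-Ct calculation (hypothetical Ct values)
qpcr <- data.frame(
  sample   = c("ctrl_1", "ctrl_2", "ctrl_3", "siJAG1_1", "siJAG1_2", "siJAG1_3"),
  group    = rep(c("control", "siJAG1"), each = 3),
  ct_jag1  = c(24.1, 24.3, 24.0, 27.2, 27.5, 27.1),  # target gene (JAG1)
  ct_gapdh = c(17.8, 18.0, 17.9, 17.9, 18.1, 17.8)   # housekeeping gene
)

qpcr$d_ct  <- qpcr$ct_jag1 - qpcr$ct_gapdh              # delta Ct: normalize to GAPDH
ref        <- mean(qpcr$d_ct[qpcr$group == "control"])  # mean delta Ct of the control group
qpcr$dd_ct <- qpcr$d_ct - ref                           # delta-delta Ct
qpcr$fold  <- 2^(-qpcr$dd_ct)                           # relative expression (fold change)

aggregate(fold ~ group, data = qpcr, FUN = mean)        # e.g., summary of knockdown efficiency
```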
4.7. µCT Analysis The maxillae harvested from the mice in the different groups were fixed in 4% PFA overnight. The maxillae were subsequently washed with PBS, dehydrated with 75% ethanol, placed in standardized cylindrical sample holders, and subjected to high-resolution µCT (Scanco Medical AG, Bassersdorf, Switzerland). The imaging parameters were as follows: 70 kV, 114 µA, 20 μm increments, and a 3000-millisecond integration time. The three-dimensional image analysis software Materialise MIMICS (version 20.0; Leuven, Belgium) was used to reconstruct the images and analyze the imaging data. 4.8. Data Preprocessing and Quality Control The scRNA-seq data downloaded from GEO were processed using the Seurat (v5.0) package for alignment and quantification of gene expression. Feature–barcode matrices were generated for both the human and the mouse datasets. Initial quality control (QC) was performed to filter out low-quality cells, which were defined as those with fewer than 500 genes detected or with greater than 15% mitochondrial gene content. 4.9. Data Integration and Comparative Analysis To integrate the human and mouse datasets, we employed the Seurat (v5.0) package. The datasets were normalized using the SCTransform method, and highly variable genes were identified for each dataset. We then used the biomaRt (v2.60) and harmony (v1.2) packages to align homologous genes between the two species. After alignment, dimensionality reduction was performed using principal component analysis (PCA), followed by uniform manifold approximation and projection (UMAP) for visualization. Clustering was conducted using the Louvain algorithm, and clusters were annotated on the basis of canonical marker genes for epithelial cells, fibroblasts, and immune cell populations. To assess the conservation of cell populations between human and mouse gingival tissues, we performed cross-species comparisons via orthologous gene mapping. Differentially expressed genes (DEGs) between inflamed and noninflamed tissues were identified using the Wilcoxon rank-sum test, with adjusted p values calculated via the Benjamini–Hochberg correction method. 4.10. Cell-Cell Communication Analysis To infer intercellular communication between immune and epithelial cells in both the human and the mouse datasets, we used the CellChat (v1.6) package. This analysis identified potential ligand–receptor interactions that regulate immune responses and epithelial barrier integrity. The communication networks were visualized using chord diagrams and heatmaps, and pathway enrichment analysis was performed to identify key signaling pathways involved in inflammation and tissue repair. 4.11. Immunofluorescence Staining Gingival sections were deparaffinized and hydrated. After fixation and permeabilization, the sections were incubated separately with primary antibodies against JAG1, TNF-α, and ARG1 at 4 °C overnight, followed by fluorescently labeled secondary antibodies at room temperature for 1 h. DAPI was used to stain the nuclei. Images were captured using a fluorescence microscope (ZEISS Microscopy, Jena, Germany) under identical exposure conditions for all samples. The quantification of mean fluorescence intensity (MFI) was performed using ImageJ software (version 1.52a; NIH, USA). 4.12. Statistical Analysis Statistical analyses were conducted using R (v4.4.1). Data are presented as the means ± standard deviations (SDs) for normally distributed variables or as medians with interquartile ranges (IQRs) for nonnormally distributed variables.
The significance of differences between groups was determined using Student’s t test or the Mann–Whitney U test, as appropriate. For comparisons involving more than two groups, one-way analysis of variance (ANOVA) followed by Tukey’s post hoc test was used. A p value of less than 0.05 was considered statistically significant.
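For orientation, the preprocessing, integration, and clustering steps summarized in Sections 4.8 and 4.9 can be sketched in R as follows. This is a minimal illustration rather than the authors’ code: the input objects human_counts and mouse_counts are assumed to be feature–barcode matrices already restricted to one-to-one human–mouse orthologs (e.g., mapped with biomaRt), and the number of principal components and the clustering resolution are illustrative choices; only the quality-control cutoffs (fewer than 500 detected genes, more than 15% mitochondrial reads) come from the text.

```r
library(Seurat)
library(harmony)

# Assumed inputs: ortholog-mapped feature-barcode matrices for each species
human <- CreateSeuratObject(human_counts, project = "human_gingiva")
mouse <- CreateSeuratObject(mouse_counts, project = "mouse_gingiva")

qc_filter <- function(obj, mito_pattern) {
  obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = mito_pattern)
  # Section 4.8: drop cells with <500 detected genes or >15% mitochondrial reads
  subset(obj, subset = nFeature_RNA >= 500 & percent.mt <= 15)
}
human <- qc_filter(human, "^MT-")
mouse <- qc_filter(mouse, "^mt-")

merged <- merge(human, mouse, add.cell.ids = c("hs", "mm"))
merged$species <- ifelse(grepl("^hs_", colnames(merged)), "human", "mouse")

merged <- SCTransform(merged)                             # normalization, variable genes
merged <- RunPCA(merged, npcs = 30)
merged <- RunHarmony(merged, group.by.vars = "species")   # align species/batch effects
merged <- RunUMAP(merged, reduction = "harmony", dims = 1:30)
merged <- FindNeighbors(merged, reduction = "harmony", dims = 1:30)
merged <- FindClusters(merged, resolution = 0.6)          # Louvain clustering (Seurat default)
```

Clusters would then be annotated with canonical epithelial, fibroblast, and immune markers, and differential expression between inflamed and noninflamed tissue tested per cluster (e.g., with Seurat’s default Wilcoxon rank-sum test and Benjamini–Hochberg adjustment), as described in Section 4.9.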
|
Structural Determinants of Health Literacy Among Formerly Incarcerated Individuals: Insights From the Survey of Racism and Public Health | 5f0f053d-709d-46ba-be72-6b79f9dc1ecc | 11729762 | Health Literacy[mh] | Participant Recruitment and Data Collection A multidisciplinary team of statisticians, applied psychologists, epidemiologists, and community health scientists at the Center for Anti-racism, Social Justice & Public Health (CASJPH) hired Qualtrics Research Services (QRS) to field the Survey of Racism and Public Health (SRPH). QRS is an online research panel service that compensates (e.g., gift cards) its panel members for survey participation. QRS recruits panel members through various mechanisms, such as targeted marketing materials, referrals, and conferences. The multidisciplinary scientists at the CASJPH provided QRS with institutional access to a Qualtrics account to recruit study participants, manage survey data collection, and provide incentives to study participants. QRS is a third-party service that had no relationship with the CASJPH researchers. The CASJPH scientists also did not interact with or recruit any study participants. The SRPH is an online cross-sectional survey that took participants about 15 minutes to complete. The SRPH aimed to learn more about participants' experiences with discrimination, interactions with the police, financial and food security, voting practices, and health. Readability was assumed for all survey measures. The study participants were not provided with assistance to complete any of the survey components. The primary analyses intended for the SRPH did not focus on FIIs as a vulnerable subpopulation. We chose the SRPH for this secondary data analysis because of the availability of incarceration history, zip code information, and health literacy data. QRS recruited potential participants who were at least age 18 years, English proficient, and resided in states/territories within U.S. Department of Health & Human Services Regions 1, 2, and 3. These regions encompassed Connecticut, Delaware, the District of Columbia, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Puerto Rico, Rhode Island, Vermont, and Virginia (Region 1). The U.S. Virgin Islands (Region 2) and West Virginia (Region 3) also comprise regions of the U.S. Department of Health & Human Services but were not included as recruitment areas because these areas were not within scope of the primary analyses for the SRPH. QRS oversampled minoritized racial and ethnic groups (50% White, 20% Black, 20% Latino/a/e, and 10% American Indian/Native American, Arab/Middle Eastern/North African, Asian American/Pacific Islander, and Multiracial) and obtained specific age group distributions (30% aged 18–34, 32% aged 35–54, and 38% aged 55 or older). Study recruitment began in March 2023 and ended in April 2023 once target distributions were achieved for race and ethnicity and age. Additional details on the study protocol have been reported previously . Informed consent was obtained prior to survey participation. The New York University Institutional Review Board approved the study protocol (IRBFY2023-7408). The present analysis adheres to the Declaration of Helsinki and the Strengthening the Reporting of Observational Studies in Epidemiology guidelines . Analytic Sample Between March and April 2023, 9,096 potential participants were invited to complete the SRPH. 
Among the initial 9,096 potential participants ( Figure ), 1,106 (12%) declined to participate, 542 (6%) did not finish the survey, and 2,389 (26%) were excluded through Qualtrics data cleaning services. Reasons for exclusion included invalid IP addresses, nonsensical responses, implausible height or weight values, duplications, contradictory answers, and bot activity. This resulted in 5,059 participants who both consented and completed the survey. Participants were further excluded based on their binary responses (yes or no) to the question: “Have you ever been incarcerated?” This process identified 595 participants who self-reported having an incarceration history. We then excluded 17 (3%) participants due to missing information regarding health literacy, zip code, or covariates, yielding a final analytic sample of 578 FIIs. Health Literacy Assessment Health literacy was assessed using the Brief Health Literacy Screen (BHLS), a validated subjective measure that has been applied among population-based samples, primary care patients, and a nationwide cohort of young women with breast cancer . The BHLS comprises three Single Item Literacy Screeners assessing a participant's difficulty reading hospital materials, difficulty learning about their medical condition, and confidence in filling out medical forms . Response choices for the difficulty questions ranged from (0) none of the time to (4) all of the time , and the confidence question had response options from (0) not at all to (4) extremely . To create a composite BHLS index (range: 0–11), responses to the confidence question were reverse-coded and summed with the responses to the difficulty questions. Index scores of three or higher were coded as having limited health literacy, and scores lower than three were coded as having adequate health literacy. This dichotomization is consistent with previous health literacy studies . We computed the alpha reliability coefficient (Cronbach's α = 0.64) for the BHLS. Neighborhood Factors We merged self-reported zip codes with census tract-level measures of neighborhood deprivation, racial and economic polarization, and residential segregation. We created weighted measures by multiplying census tract weights within each zip code to each neighborhood characteristic. We obtained neighborhood measures using the ndi R package and census tract weights using US Department of Housing and Urban Development US Postal Service Zip Code CrossWalk files . We standardized each weighted neighborhood measure to have a mean of 0 and a standard deviation of 1. We assessed neighborhood deprivation using the Powell-Wiley Neighborhood Deprivation Index (NDI) , which is based on a factor analysis of 13 socioeconomic factors (e.g., education, occupation) . Theoretical NDI scores range from −3.6 to 2.8, with higher scores indicating greater neighborhood deprivation . We used the Index of Concentration at the Extremes (ICE) to measure racial and economic polarization . ICE scores are derived from racial and ethnic stratified household income estimates, theoretically ranging from −1 to 1. A score of −1 denotes a complete concentration of residents racialized as Black with low income . By contrast, a score of 1 represents a complete concentration of residents racialized as White with high income . We assessed residential segregation using the Dissimilarity Index (DI) . 
Theoretical DI scores range from 0 to 1, representing the proportion of individuals racialized as Black needed to relocate to achieve an equal Black-to-White distribution . Public Assistance Program Enrollment The survey collected information on enrollment in public assistance programs. Participants were asked: “Have you received public assistance in the last 12 months? Please select all that apply.” Response options included: Disability; Medicaid; Supplemental Nutrition Assistance Program; Temporary Assistance for Needy Families; Unemployment; Special Supplemental Nutrition Program for Women, Infants, and Children; other; and none. Responses were summed for each participant and categorized as zero versus one or more programs. Sociodemographic Characteristics We controlled for several key sociodemographic characteristics based on prior literature . The following characteristics were recategorized for analysis: age (measured continuously), race and ethnicity (White, Black, Latino/a/e, American Indian/Native American, Arab/Middle Eastern/North African, Asian American/Pacific Islander, and Multiracial), gender identity (man, woman), educational attainment (high school or less, some college, college degree or higher), marital status (married/living with a partner, divorced/separated/widowed, never married), employment status (full-time, part-time, independent contractor/business owner, looking for work/unemployed), number of children (0, 1, 2+), number of chronic health conditions (0, 1+), and incarceration length (less than 1 year, 1 year or more). One item was used to assess participants’ number of chronic health conditions: “Have you ever been diagnosed with any of the following conditions? Check all that apply.” Response options included: arthritis, asthma, cancer, chronic lung disease, congestive heart failure, diabetes mellitus, periodontal disease, hypertension, mental health condition/psychiatric problems, obesity, stroke, substance use disorder, and none. Responses were summed and categorized as zero versus one or more chronic health conditions. Statistical Analysis Data were analyzed using R , with statistical significance set at p < .05. We used the gtsummary R package to examine descriptive statistics for all variables . Bivariate relationships with health literacy were evaluated using the Wilcoxon rank sum and Pearson’s Chi-squared tests. Logistic regression models estimated unadjusted and adjusted associations with health literacy. We present odds ratios with 95% confidence intervals.
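As an illustration of the measurement and modeling steps described above, the R sketch below scores the BHLS and fits the adjusted logistic regression. The data frame srph and its variable names are hypothetical; the reverse-coding of the confidence item, the cutoff of three or higher for limited health literacy, and the covariate set follow the text.

```r
library(gtsummary)

# BHLS composite: two difficulty items (0-4) plus the reverse-coded confidence item (0-4)
srph$bhls_index <- srph$difficulty_reading +
                   srph$difficulty_learning +
                   (4 - srph$confidence_forms)

srph$limited_hl <- as.integer(srph$bhls_index >= 3)   # 1 = limited, 0 = adequate health literacy

# Adjusted model: public assistance enrollment and standardized neighborhood measures,
# controlling for the sociodemographic covariates listed above
fit <- glm(limited_hl ~ public_assistance + ndi_z + ice_z + di_z +
             age + race_ethnicity + gender + education + marital_status +
             employment + n_children + chronic_conditions + incarceration_length,
           data = srph, family = binomial())

tbl_regression(fit, exponentiate = TRUE)   # odds ratios with 95% confidence intervals
```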
Description of Sample The 578 FIIs had an average age of 46 (standard deviation [ SD ] = 14), with 42% having limited health literacy based on the BHLS ( Table ). The majority identified as White (44%) or Black (28%) and as a man (69%). Sixty-one percent had at least some college education, which includes those who earned some college credits, an associate degree, a bachelor’s degree, or a graduate degree. Almost half were married or living with a partner (47%), working full-time (47%), and had no children (46%). Seventy percent were previously incarcerated for less than a year, 80% had at least one chronic health condition, and 68% were enrolled in at least one public assistance program. The average NDI score was 0.3 ( SD = 0.8), indicating that participants tended to reside in areas with moderate neighborhood deprivation. The mean ICE score was 0.1 ( SD = 0.2), suggesting that, on average, study participants did not live in areas of high racial and economic polarization. The average DI score was 0.3 ( SD = 0.1), indicating that participants tended to live in areas that require 30% of individuals racialized as Black to move to achieve an equal Black-to-White distribution. Table also shows statistically significant differences in characteristics by health literacy status. Compared to those with adequate health literacy (AHL), FIIs with limited health literacy (LHL) were more likely to be enrolled in at least one public assistance program (LHL: 80%; AHL: 59%; p < .001), have two or more children (LHL: 39%; AHL: 23%; p < .001), and have an incarceration history of one year or more (LHL: 37%; AHL: 24%; p = .001). Associations with Limited Health Literacy Table shows the unadjusted and adjusted relationships with limited health literacy among FIIs in this sample. We observed a statistically significant association between public assistance program enrollment and limited health literacy (unadjusted OR = 2.72, 95% CI: 1.87, 4.01; adjusted OR = 2.50, 95% CI: 1.62, 3.88). The adjusted model controlled for age, race/ethnicity, gender identity, educational attainment, marital status, employment status, number of children, chronic health conditions, and incarceration length. We found no statistically significant associations of limited health literacy with neighborhood deprivation, racial and economic polarization, and residential segregation.
This study explored the associations of neighborhood factors and public assistance program enrollment with limited health literacy among FIIs. The data showed that FIIs enrolled in at least one public assistance program were more likely than those not enrolled to have limited health literacy, controlling for sociodemographic characteristics. We did not observe any statistically significant associations of limited health literacy with neighborhood deprivation, racial and economic polarization, and residential segregation. To our knowledge, this is the first study to examine the relationships between neighborhood factors, public assistance program enrollment, and limited health literacy among FIIs. Hadden et al. studied 751 FIIs in the Transitions Clinic Network, encompassing six states and Puerto Rico. They found that FIIs with limited health literacy tended to visit the emergency department more frequently and have lower confidence in taking medications after release than FIIs with adequate health literacy . The findings from the present study add to prior literature, showing that FIIs enrolled in at least one public assistance program tended to have limited health literacy. Following Schillinger’s socio-ecological framework , the observations from the present study suggest that public assistance programs may be an opportunity to develop targeted interventions for increasing health literacy among FIIs.
Higher health literacy may decrease this vulnerable population's high chronic disease incidence . This is of paramount importance because FIIs must simultaneously navigate other challenges after incarceration (e.g., unstable housing) . Program enrollment timing may be a plausible explanation for observing a significant association between public assistance program enrollment and limited health literacy. It may be that most of the study population categorized as enrolled in a public assistance program may have enrolled during post-release. Future investigations should evaluate whether associations of public assistance program enrollment with limited health literacy differ between those who enrolled before and after release. Collectively, findings from these studies can inform policies aimed at enrolling incarcerated individuals in public assistance programs before their release, which, in turn, may positively affect their health. An example is the Medicaid expansion, which grants eligibility to FIIs for health care coverage . In a cohort study of 16,000 FIIs with substance use history, Burns et al. observed increased utilization of outpatient care services after Wisconsin established a pre-release Medicaid enrollment assistance program. Relatedly, other researchers have detected significant increases in Medicaid enrollment following the implementation of a pre-release Medicaid enrollment assistance program . The nonsignificant association between neighborhood characteristics and limited health literacy aligns with prior research, showing that past neighborhood characteristics have a greater impact on health literacy, genetic knowledge, and upward mobility than current neighborhood characteristics . The present study assessed neighborhood factors using self-reported zip codes. Longitudinal analyses are needed to ascertain whether past neighborhood characteristics impact health literacy among FII more than current neighborhood characteristics. This study has several limitations. First, we observed a low reliability coefficient (0.64) for the BHLS, likely due to differing samples between this study and the BHLS validation sample . The initial analyses planned for the Survey of Racism and Public Health did not specifically target FIIs as a vulnerable subgroup. However, we selected the SRPH for this secondary analysis due to the availability of information on participants' incarceration history, residential zip codes, and BHLS measure. We used a self-report subjective measure of health literacy. Future analyses should assess health literacy among FIIs using a test-based health literacy measure. Second, the nonprobability cross-sectional survey design limits generalizability because all the states considered for recruitment are in the eastern region of the United States. Furthermore, the analytic sample was not representative of the formerly incarcerated population. Longitudinal studies using generalizable designs are needed to draw causal inferences and account for trends over time. Third, self-reporting incarceration history may have been sensitive for some participants, possibly leading to underreporting of incarceration history, thus excluding some FIIs from the analysis. Our study has several strengths despite these limitations. We used Schillinger's socio-ecological framework to examine structural determinants of limited health literacy among FIIs . We used a subsample of over 500 FIIs from the Survey of Racism and Public Health, a unique data source spanning U.S. 
Department of Health & Human Services Regions 1, 2, and 3. We used validated measures of neighborhood deprivation, racial and economic polarization, residential segregation, and health literacy, which can be expanded and developed for use in international settings. The current investigation used census weights to account for the different census tracts covered within each zip code. The overall prevalence of limited health literacy among FIIs is high. Still, practical resources and strategies to reduce limited health literacy among this vulnerable population remain unclear. Using a socioecological framework, we examined modifiable pathways (neighborhood factors and public assistance program enrollment) in relation to limited health literacy. We found that FIIs enrolled in at least one public assistance program were more likely to have limited health literacy than those not enrolled. Implications for public health and clinical practice include developing tailored interventions for increasing health literacy through public assistance programs and disseminating resources among FIIs to help with the self-management of chronic diseases.
Comparing field and lab quantitative stable isotope probing for nitrogen assimilation in soil microbes | d57bf0f7-7308-4536-9487-6d1a7da285fe | 11837507 | Microbiology[mh] | Microbes play a vital role in soil ecosystem functions, especially nitrogen (N) cycling, as they immobilize, mineralize, oxidize, and reduce N, determining its accessibility to plants . These processes—and the microbes that drive them—are likely sensitive to environmental change, sensitivities we need to understand . Historically, our ability to tie microbial community composition to ecosystem function has been limited. Recent methodological advances help decipher the overwhelmingly complex influence of microbial life on biogeochemical processes . Quantitative stable isotope probing (qSIP) is a powerful technique that can help identify which microorganisms are responsible for specific ecosystem processes, such as carbon (C) and N assimilation, by tracing the incorporation of isotopically labeled, or “heavy,” substrates into DNA . Often qSIP is conducted on microbes living under varying experimental conditions, for example, temperature, moisture, or fertilization, to better predict how microbes might respond to anthropogenic influences in the wild . This technique is most often performed using 18 O water to quantify taxon-specific growth rates or using 13 C substrates to quantify C assimilation rates. With the development of methods for 15 N qSIP , and the importance of microbes for soil N cycling, qSIP is starting to be used to assess microbial N assimilation. Nitrogen is the most common limiting nutrient to crop growth in temperate soil and must be supplemented in many agricultural systems to maintain productivity. Soil microbes transform nitrogenous compounds, and N is readily lost from soil through processes performed or influenced by microbes . Yet, microbes can also help soils retain N by assimilating it into their biomass, promoting soil N retention through immobilization . This is of particular importance in agricultural soil, where expensive fertilizer-N is regularly applied for production. It is estimated that only 40%–53% of applied N globally is taken up by crops as intended . Leached N ends up in waterways, resulting in pollution and eutrophication . When soil microbes assimilate N into their biomass, it is more likely to be retained in the soil and gradually released to crops over time as those microbes live and die , estimating that the contribution of bacterial and fungal residue N to soil organic N is between 27% and 100% . Implementing agricultural management practices that result in sustained abundance and growth of microbes that immobilize fertilizer N—whether due to their traits, cell or population size, or some combination thereof—could reduce soil N loss . It is possible to quantify N assimilation rates of individual microbes using qSIP. Microbial community response to N fertilization is often assessed by DNA sequencing, which cannot distinguish active microbes from those that are dormant or recently dead . In the context of N assimilation, active populations are most relevant and responsive to fertilization . Nevertheless, to the best of our knowledge, only a few studies have used 15 N thus far to measure microbial N assimilation , and even fewer have focused on row-crop agricultural soils . One concern regarding the use of qSIP to explore microbial responses to environmental change is that the rates observed in the lab may not be field-relevant. 
This concern is shared for most biogeochemical analyses, and often there is disagreement between lab and field studies, particularly for N-cycling rates . Lab measurements are often used because they are more precise, more practical, and less expensive. However, they are more likely to introduce unintended artifacts from plant removal and soil processing which may include sieving, homogenization, and drying or rewetting. It is important to capture field-relevant processes in agricultural systems that include dynamic plant-soil interactions. Plant-soil-microbial feedback loops drive nutrient cycling in agricultural settings and shape microbial community composition and activity in the soil surrounding their roots. The rhizosphere, defined as the soil immediately surrounding plant roots and their fungal symbionts, is a hot spot of microbial activity, nutrient cycling, and networking . Plant roots and root secretions support a complex economy with fluctuating proportions of plant mutualists and pathogens, as well as microbes that capitalize on the resource-rich environment without directly impacting the plant. Accordingly, the removal of plant roots prior to lab qSIP measurements may disrupt the composition and functioning of rhizosphere microbes. Understanding the impact of laboratory artifacts on qSIP measurements is necessary to determine if microbial functions measured in the lab reflect their activities in the wild. Although several qSIP experiments have been conducted in the field , no past works have directly compared field and lab measures of microbial activity via qSIP. As such, the extent to which artifacts (from sieving, root removal, etc.) influence lab measurements remains unknown. In addition, to our knowledge, no prior field qSIP studies have examined nitrogen assimilation with 15 N or focused on agricultural soil. This work aimed to (i) determine whether and how genus-specific N assimilation as assessed via qSIP differs between lab and field measurements and (ii) identify soil prokaryotes important for N assimilation in the rhizosphere of maize ( Zea mays ). We hypothesized that lab conditions could inflate rates of N assimilation relative to field conditions due to more optimal growth conditions and lowered competition for N from plants. We also predicted the function and abundance of root-associated microbial groups would differ between the field and lab measurements, given the root removal and soil disturbance that occurs during the lab procedure. To address the aims and hypotheses, we performed qSIP with 15 N in the field and the lab with soil from two maize-cropped agricultural fields. Microbial access to nutrients, including N, is at least partially determined by soil physical characteristics like bulk density . We leveraged two topographically distinct sites with differing soil properties, to assess how the comparability of field and lab measurements might vary in relation to soil characteristics. Field and lab procedures were kept as consistent as possible to determine the field relevance of lab qSIP measurements. Site description and experimental setup We examined N cycling in two maize-cropped agricultural fields in Morgantown, West Virginia, USA, in August 2020. These fields were located at the West Virginia University Organic Research and Outreach Center (39.650910, –79.937906) and the West Virginia University Animal Science and Husbandry Research and Outreach Center (39.661446, –79.926243). The sites are in the same watershed, but topographically distinct. 
Hereafter, the WVU Organic Farm field will be referred to as the “Ridge” site and the WVU Animal Science farm field as the “Valley” site due to their distinguishing topographical characteristics. At each site, the experimental work was conducted in fields planted with maize. The Valley site is a footslope that has been under regular row crop production (mostly maize) for over a century. The Valley soil is classified as fine-loamy, mixed, superactive, mesic Oxyaquic Fragiudalfs in the Clarksburg series . Prior to maize planting, the field was cultivated with a triticale winter cover crop, which was mowed and baled in mid-May 2020, followed by the application of a broad-spectrum glyphosate-based herbicide. The Ridge site is a topographical shoulder that was previously naturalized grassland and was not in annual production until maize planting in June 2020. The Ridge soil is classified as fine-loamy, mixed, superactive to active, mesic Ultic Hapludalfs in the Westmoreland (WeC), and Dormont/Guernsey (DgC) soil series . At both sites, dairy manure and straw compost were applied and tilled into the soil a few days before maize planting in early June 2020. These sites were chosen for their distinct geomorphic positions, soil types, and land-usage histories to enable us to assess whether the field relevance of lab measurements is context-dependent. In late July 2020, seven 1 m × 1 m plots were randomly established within a 10 m × 10 m area at each site ( n = 7). Plots were centered around maize rows and gently hand-weeded as needed. For each plot, two 7.5 cm diameter PVC collars were placed approximately 4 cm deep within 15 cm of a maize stalk to prepare for the field qSIP measurements . Identical collars were also placed to measure N immobilization and mineralization. In addition, a 25 cm diameter PVC collar was installed within 15 cm of a maize stalk in each plot to measure field soil respiration. Field and lab qSIP measurements were collected in mid-August 2020. At this time, the maize had reached the grain fill stage (V12–V15 stages), a stage commonly used for 15 N pulse labeling experiments to determine nutrient availability for yield prediction . Before starting the incubations, soil cores were collected outside the collars to determine gravimetric soil moisture , water holding capacity (WHC) , and bulk density via the excavation method . Soil sampling and characterization To assess genus-specific 15 N assimilation in the lab and field, simultaneous stable isotope incubations were performed . For the lab incubation, soil cores were collected within 15 cm of a maize stalk using a soil hammer corer (AMS 2 × 6 inch [5 × 15 cm] Soil Core Sampler).
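For illustration, the soil physical characterization above reduces to simple gravimetric arithmetic; the short R sketch below shows the moisture, bulk density, and loss-on-ignition calculations. All masses and the core volume are hypothetical example values, not measurements from this study.

```r
# Minimal sketch of the gravimetric soil calculations referenced above.
# All inputs are hypothetical example values, not data from this study.
wet_mass_g   <- 52.4   # field-moist soil mass
dry_mass_g   <- 41.8   # mass after oven-drying
ash_mass_g   <- 38.9   # mass remaining after ignition (loss-on-ignition)
core_vol_cm3 <- 100    # excavated volume for the bulk density estimate

grav_moisture <- (wet_mass_g - dry_mass_g) / dry_mass_g   # g water per g dry soil
bulk_density  <- dry_mass_g / core_vol_cm3                # g dry soil per cm3
som_loi       <- (dry_mass_g - ash_mass_g) / dry_mass_g   # fraction of dry mass lost

round(c(moisture = grav_moisture, bulk_density = bulk_density, SOM = som_loi), 3)
```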
Experimental incubations Within 24 hours from the initial collection, rhizosphere soil was collected and sieved for initial community characterization and the lab qSIP incubation. For initial community characterization, 3 g of soil was subsampled and frozen at −80°C to represent the “pre-incubation” microbial community at natural abundance 15 N (i.e., unlabeled) for both field and lab measurements. For the lab qSIP incubation, 3 g of the rhizosphere soil was incubated with 98 atom % ( 15 NH 4 ) 2 SO 4 at a concentration of 3 µmole N g −1 soil at 60% WHC in Falcon tubes. All lab incubations were carried out in the dark at room temperature (~21°C). The amount of N added (3 µmole N g −1 ) is roughly equivalent to 83.3 kg N ha −1 , assuming a soil depth of 6 inches (15.24 cm) and a bulk density of 1.3 g/cm³, and is comparable to a late-season fertilization event . The field qSIP incubation was initiated on the same day as the lab incubation, by adding 98 atom % ( 15 NH 4 ) 2 SO 4 to achieve 3 µmole N g −1 soil (equivalent to ~42 µg natural abundance N g −1 soil). Before injection, pilot holes were made using 15.24 cm long, 3.2 mm diameter drill bits (Dewalt) to prevent soil compaction and clogging in the injection needle. The solution was injected into the soil within the PVC collar to a depth of 15 cm using a 50 mL syringe and 15.24 cm steel needle with 12 side ports (Hold Your Horses BBQ, Professional Meat Injection Kit). The solution was injected gradually throughout several pilot holes (similar to references , ). The needle was slowly withdrawn and turned to achieve an even distribution of the solution throughout the soil column . The soil was fully saturated (100% water holding capacity) for an even distribution of ( 15 NH 4 ) 2 SO 4 . The PVC collars were loosely covered with a 10.2 cm PVC cap supported by rubber props to minimize leaching during rain events while allowing sufficient airflow. In addition, a parallel lab and field incubation were set up to measure soil biogeochemistry (“Chem.” in ) including CO 2 flux and N transformation (see details below). Apart from unlabeled ( 14 NH 4 ) 2 SO 4 addition (3 µmole N g Soil −1 ), the biogeochemical incubations were identical in conditions to qSIP field (100% WHC) and lab (60% WHC) incubation. The field soil was saturated initially to account for the loss of solution through evaporation and leaching over the incubation and to ensure adequate microbial isotope uptake, whereas the lab soil was saturated to 60% WHC but in an enclosed environment. Though field and lab soils initially had different moisture contents, we found that at the point of sample collection in the field the soil moisture had decreased from 100% to ~40% WHC (Table 2), suggesting that the field soil moisture probably experienced a similar WHC (60%) to the lab incubations over much of the incubation period. For the lab incubation, approximately 25 g of rhizosphere soil was incubated in a 960 mL mason jar fitted with a rubber septum for gas sampling. Since inorganic N assimilation mainly occurs within the first week of fertilizer N addition, often peaking by the 3rd day , both lab and field incubations were halted after 5 days. No significant rainfall events occurred during this time and the surface soil temperature averaged around 22°C. Field qSIP cores were collected to a depth of 15 cm and transported back to the lab on ice. For both methods, soil samples were stored at −80°C. 
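The stated equivalence between the tracer addition and a late-season fertilization rate of roughly 83 kg N ha−1 can be reproduced with the unit conversion below, using the same assumptions given above (15.24 cm soil depth, 1.3 g cm−3 bulk density).

```r
# Convert the tracer addition (3 umol N per g soil) to an area basis using the
# assumptions stated above: 15.24 cm (6 in) soil depth and 1.3 g/cm3 bulk density.
n_added_umol_per_g <- 3
molar_mass_N       <- 14.007   # g per mol
depth_cm           <- 15.24
bulk_density_g_cm3 <- 1.3

ug_N_per_g_soil <- n_added_umol_per_g * molar_mass_N        # ~42 ug N per g soil
soil_g_per_ha   <- 1e8 * depth_cm * bulk_density_g_cm3      # 1 ha = 1e8 cm2
kg_N_per_ha     <- ug_N_per_g_soil * soil_g_per_ha * 1e-9   # ug -> kg

kg_N_per_ha   # ~83 kg N per ha
```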
N immobilization and nitrification Net nitrogen immobilization and nitrification were measured in the field and the lab under conditions identical to those used for qSIP. To measure initial N concentrations, an intact soil core (15 cm length × 5 cm diameter plastic sleeve) was transported to the lab in a sleeve and injected with a ( 14 NH 4 ) 2 SO 4 solution while still in the sleeve. After injection, rhizosphere soil was immediately collected, homogenized, sieved, and extracted for inorganic N analysis. Inorganic N was analyzed using soil subsamples collected upon initiation and completion of the 5-day incubation. Subsamples (8 g) were extracted with 40 mL of 0.5 M K 2 SO 4 for 1 hour and stored at −20°C until analysis using an AQ300 Discrete Chemical Analyzer (SEAL Analytical, Mequon, WI, USA), following U.S. Environmental Protection Agency (EPA) methods 353.2 for nitrate and nitrite and 350.1 for ammonium . Net nitrification and immobilization rates were calculated as described by Hart et al. . CO 2 flux and microbial biomass carbon Soil CO 2 flux was measured in the field at 0, 24, and 72 hours over bare soil after initiation of the qSIP incubation using a LI-7810 Smart Chamber (LI-COR Biosciences, Lincoln, Nebraska). In the lab, cumulative soil respiration was measured using the closed chamber technique with a LI-6400XT as described in Walkup et al. . These measurements were used to calculate the average rate of C respiration in μg C kg soil −1 hour −1 during the 5-day incubation period. At the end of the incubation period, microbial biomass C was determined from 4 g subsamples of both lab and field soils, as described in Kane et al. . Plant biomass and 15 N uptake Upon termination of the 5-day incubation, the three maize plants closest to the field incubation cores were measured for height, and aboveground biomass was collected. The roots in the collected qSIP cores were rinsed, dried, and weighed before being stored at −80°C. Plant material was dried and then partitioned, as described in de Oliveira Silva , before being weighed and ground using a Wiley mill. The ground material was homogenized and subsequently analyzed for δ 15 N and %N using a Carlo-Erba NC 2500 Elemental Analyzer at the University of Maryland Central Appalachian Stable Isotope Facility. The rate of plant 15 N uptake was calculated as (atom% excess of sample × total N)/(atom% excess of atmosphere × incubation time), following the method outlined in Smercina et al. . Quantitative stable isotope probing Total DNA extraction was conducted using the PowerLyzer PowerSoil DNA extraction kit, following the manufacturer’s instructions (MoBio Laboratories, Carlsbad, CA, USA). DNA quantification was performed using the Qubit dsDNA high-sensitivity assay kit and a Qubit 2.0 fluorometer (Invitrogen, Eugene, OR, USA). Further quantification of total DNA was performed via PicoGreen (Molecular Probe Inc., Eugene, OR, USA). To assess 15 N uptake by microbial taxa, qSIP was employed based on the methodology described by Morrissey et al. , with slight modifications described in Purcell et al. . For density centrifugation, approximately 2 µg of DNA was added to 3.5 mL of saturated CsCl solution (density of 1.89 g/mL). The 4.7 mL OptiSeal ultracentrifuge tube (Beckman Coulter, Fullerton, CA, USA) was then filled completely with a gradient buffer (200 mM Tris, 200 mM KCl, 2 mM EDTA).
The samples were centrifuged in an Optima Max benchtop ultracentrifuge (Beckman Coulter) using a Beckman TLN-100 rotor at 55,000 rpm for 72 hours at 18°C. Subsequently, the density gradient was fractionated, and approximately 140 µL fractions were collected using a density gradient fractionation system (Brandel, Gaithersburg, MD, USA), resulting in approximately 25 fractions per sample. The density of each fraction was determined using a Reichert AR200 digital refractometer (Reichert Analytical Instruments, Depew, NY, USA). Contamination was avoided by thorough cleaning of the entire fractionation system and refractometer between tubes. Following density fractionation, DNA was purified from each fraction using isopropanol precipitation. Fractions near the ends of the density gradient in each tube had very low levels of DNA suggesting minimal contamination across fractions. The quantification of the 16S rRNA gene in each fraction, from all samples, was determined using quantitative PCR (qPCR). All qPCRs were performed in triplicate and consisted of 2 µL of DNA template, 0.2 µM each of the 515F and 806R primer sets, 7.5 µL SYBR Green Master Mix (BIO-RAD, Hercules, CA, USA), and molecular grade water to a final volume of 15 µL per reaction. Thermal cycling conditions were as follows: 95°C for 2 min, followed by 40 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 60 s. qPCR efficiencies were between 90% and 110%, the standard curve slope values were around −3.2 and R 2 values were around 0.99. Based on the number of 16S rRNA gene copies in each fraction, we selected those fractions that had sufficient microbial DNA present in them for amplicon sequencing; these fractions had a density ranging from 1.67 to 1.74 g mL −1 . Amplicon sequencing of the V4 hyper-variable region of the 16S rRNA gene was conducted using Illumina-compatible, dual-indexed 515f/806 primers, following the protocol by Kozich et al. . The Illumina MiSeq platform was used for sequencing, employing a 2 × 250 bp paired-end format at the Michigan State University Research Technology Support Facility Genomics Core. Data analysis Sequence data were analyzed using MacQIIME2 , following the QIIME2 pipeline for sequence analysis, as previously described by Purcell et al. . In brief, the sequences were demultiplexed, joined (using vsearch), quality-filtered (with the removal of one sample), and denoised (depurated) using the Deblur algorithm with a trim length of 250 . To improve data quality, amplicon sequence variants (ASVs) occurring less than 200 times and in less than 20 fractions, across all samples, were removed. Taxonomy assignment to ASVs was performed using the q2-feature-classifier plugin with a pre-trained naive Bayes classifier for the 515F/806R 16S region of the SILVA database version 138, 99% identity ASVs . Downstream analyses were then performed using genus-level operational taxonomic units (OTUs), wherein each “genus” contained one or more ASVs with identical genus-level taxonomy (“ Level 6 ” in SILVA ). OTUs that did not belong to a described genus within the database and could not be classified at the genus level are referred to using the finest taxonomic level that could be assigned followed by “gen.” in the figures and throughout this text as an “unclassified” or “uncultured” genus in their respective taxonomic group (i.e., Family or Order). Relative abundance tables were exported for qSIP calculations, following the methodology of Finley et al. . 
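The reported standard-curve slopes and amplification efficiencies are linked by the usual relationship, efficiency = 10^(−1/slope) − 1; the small example below shows that a slope near −3.2 corresponds to roughly 105% efficiency, consistent with the 90%–110% range reported above.

```r
# Amplification efficiency implied by a qPCR standard-curve slope.
slope_to_efficiency <- function(slope) 10^(-1 / slope) - 1

slope_to_efficiency(-3.32)   # ~1.00, i.e., 100% efficient
slope_to_efficiency(-3.20)   # ~1.05, within the 90-110% range reported above
```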
An indicator species analysis, using the R package “indicspecies,” was conducted to determine which groups were different between the sites . The “pre-incubation” samples were used to determine the natural abundance-weighted average density (WAD) and GC content of DNA as in Koch et al. (2018) and others . For each pair of 15 N samples (unlabeled/pre-incubation and labeled/post-incubation) in each treatment (site and method type), the change in WAD of each genus was calculated. The WAD for each genus was calculated using the 16S rRNA gene copies (qPCR) for each genus in each fraction and weighted by the proportional abundance of total 16S rRNA gene copies measured in that fraction for each sample. Low-abundance taxa were filtered by setting a relative abundance threshold of less than 0.0001 (proportion), resulting in the removal of 11 genera. In addition, 179 taxa that were present in fewer than 5 samples per site (out of 7 total) were excluded. These filtering steps retained 98.6% of genera, all of which (511 total) were present in both the lab and the field. A tube correction was applied to account for tube-by-tube variation in WAD caused by variability in CsCl density, as described in Morrissey et al. , using the 50 genera with the smallest WAD shifts. Following the approach described by Hungate et al. with adjustments for 15 N as outlined in Morrissey et al. , the excess atom fraction (EAF) of 15 N incorporated into a genus’ DNA was determined based on the post-incubation shift in WAD. Negative values (estimated EAF 15 N < 0) were replaced with zero for subsequent analyses. The 15 N EAF reflects the amount of assimilated 15 N in DNA, relative to the total N in DNA, over the incubation period (5 days) and is hereafter referred to as the “relative N assimilation rate” (similar to reference ). The proportion of N assimilated by each genus “% N assimilated” was calculated as the product of each taxon’s relative abundance (expressed as a proportion) and 15 N EAF divided by the sum of the product of relative abundance and 15 N EAF for all genera as in . To evaluate differences in 15 N enrichment between methodologies, Equivariant Passing-Bablok Regression was performed using the “mcr” package , following the methodology of Bablok and Dufey . Passing-Bablok regression is a non-parametric and robust method commonly employed in method comparison studies . It assumes a high correlation and linear relationship between the two variables being compared. Interpretation of Passing-Bablok regressions involves examining the confidence intervals for both the slope and intercept. If the confidence interval for the slope does not include the value of 1, it indicates statistically significant evidence of a proportional difference between the two methods. Similarly, if the confidence interval for the intercept does not include the value of 0, it suggests statistically significant evidence of a constant difference between the methods. For visualization, we identified the 20 genera with the highest relative rates and total N assimilation values for each method (lab or field), averaged across both sites. Because top genera were not always shared between the two methods, more than 20 genera were reported. Ridgeline plots were created using the R package “ggridges” and show the variation in taxon-specific measurements across the replicates. Differences in taxon-specific relative N assimilation rates or N assimilated between the lab and the field methods were determined using one-way ANOVA ( n = 7 per method). 
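As a rough sketch of the qSIP bookkeeping described above, the R code below computes a genus-level weighted average density (WAD) from per-fraction relative abundances and total 16S rRNA gene copies, applies a simplified version of the tube correction, and derives the abundance-weighted % N assimilated from EAF values. The data frame, column names, and number of reference genera are illustrative assumptions; conversion of the WAD shift to a 15 N EAF follows the cited equations (Hungate et al.; Morrissey et al.) and is not reproduced here.

```r
# Weighted average density (WAD) of one genus in one sample (tube):
# per-fraction genus copies = (genus relative abundance) x (total 16S copies by qPCR).
wad_for_genus <- function(fractions, genus) {
  copies <- fractions[[genus]] * fractions$total_copies
  sum(fractions$density * copies) / sum(copies)
}

# Hypothetical five-fraction example for two genera (densities in g mL-1)
fractions <- data.frame(
  density      = c(1.740, 1.725, 1.710, 1.695, 1.680),
  total_copies = c(2e6, 8e6, 1.2e7, 7e6, 1e6),
  genusA       = c(0.10, 0.20, 0.30, 0.25, 0.15),
  genusB       = c(0.30, 0.25, 0.20, 0.15, 0.10)
)
wad_for_genus(fractions, "genusA")

# Simplified tube correction: subtract the median WAD shift of the genera with the
# smallest absolute shifts (50 in the study) from every genus in that tube pair.
tube_correct <- function(delta_wad, n_ref = 50) {
  idx <- order(abs(delta_wad))[seq_len(min(n_ref, length(delta_wad)))]
  delta_wad - median(delta_wad[idx])
}

# % N assimilated: relative abundance x EAF, normalized across all genera.
pct_n_assimilated <- function(rel_abund, eaf) {
  100 * (rel_abund * eaf) / sum(rel_abund * eaf)
}
pct_n_assimilated(rel_abund = c(0.02, 0.10, 0.005), eaf = c(0.20, 0.05, 0.30))
```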
Differences in soil characteristics and biogeochemical processes among sites, and where applicable, method, were assessed using one- and two-way ANOVA. Differences in prokaryotic community composition were assessed with two-way PerMANOVA using the R package "vegan" . Calculations, filtering, subsequent analyses, and figure creation were performed using R version 4.3.2 in RStudio version 2024.4.0.735 .
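A minimal sketch of the statistical comparisons described above is given below, using simulated stand-ins for the genus-level EAF values and the community table; the mcreg()/getCoefficients() interface of the mcr package and adonis2() from vegan are assumed here and should be checked against the package documentation.

```r
library(mcr)     # Passing-Bablok regression (assumed interface: mcreg / getCoefficients)
library(vegan)   # two-way PerMANOVA via adonis2

set.seed(1)
# Simulated per-genus EAF values for the two methods (stand-ins for the real data)
lab_eaf   <- runif(50, 0, 0.3)
field_eaf <- 0.8 * lab_eaf + rnorm(50, 0, 0.02)

pb_fit <- mcreg(x = lab_eaf, y = field_eaf, method.reg = "PaBa")
getCoefficients(pb_fit)   # slope CI excluding 1 -> proportional difference;
                          # intercept CI excluding 0 -> constant difference

# Simulated genus-by-sample community table and the site x method design
comm <- matrix(rpois(28 * 20, lambda = 50), nrow = 28)
meta <- expand.grid(rep = 1:7, method = c("field", "lab"), site = c("Ridge", "Valley"))
adonis2(comm ~ site * method, data = meta, permutations = 999, method = "bray")
```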
The Valley and Ridge soils differed in several physical and chemical characteristics despite their geographical proximity. The Valley site soil had significantly higher SOM (Welch t-test, P < 0.05), soil C and N contents, and a higher pH. The Valley site had higher bulk density and higher soil moisture in the field qSIP core after the incubation ( P < 0.01) and tended to have higher soil WHC ( P < 0.10). The Valley site also had shorter plants ( P = 0.023) and tended to have lower plant biomass ( P = 0.096) and plant uptake of the 15 N-label ( P = 0.097). Biogeochemistry and microbial community Carbon and N cycling process rates differed between field and lab methodologies . Rates of CO 2 flux were consistent at the Ridge site in field and lab measurements while at the Valley site, field-measured CO 2 flux was 39% higher than lab-measured. Rates of net nitrification and immobilization were higher when measured in the lab relative to the field and tended to be higher overall at the Ridge site than at the Valley site, although net immobilization rates did not significantly differ between sites. There was no significant effect of method or site on microbial biomass C. Whole prokaryotic community composition was distinct between sites (PerMANOVA; F = 9.49, P = 0.001) and did not differ by method (F = 1.07, P = 0.281) (NMDS). An indicator species analysis of total communities prior to the experiment (pre-incubation) showed the Valley site had significantly higher ( P < 0.05) proportions of Bacillaceae , Gaiellaceae , Nitrososphaeraceae , Blrii41 , and Gemmatimonadaceae , whereas the Ridge site had more Xanthobacteraceae , Chtoniobacteraceae , Xiphinematobacteraceae , Vicinamibacteraceae , and Haliangiaceae . An indicator species analysis of the total communities from both sites by incubation method (field or lab) showed the field soils had more Firmicute families (including but not limited to Thermoactinomycetaceae , Peptostreptococcaceae , and Bacillaceae ), Chloroflexi families, Beijerinckiaceae (free-living aerobic nitrogen-fixing bacteria in Pseudomonadota), Nitrososphaeraceae , and Micromonosporaceae . Lab soils had more Acidobacteriota, Chitinophagaceae , Methylomirabilota, Planctomycetota families (most notably Isosphaeraceae ), Rhizobiaceae , and Xanthobacteraceae .
Genus-specific nitrogen assimilation Relative N assimilation rates were calculated for each genus and reflected the uptake of 15 N into that organism’s DNA during the 5-day incubation period in the lab or field. The distribution of relative N assimilation rates for the Valley site was similar in the lab and the field . At the Ridge site, some taxa exhibited higher relative assimilation rates in the lab. When relative assimilation rates were weighted by relative abundance to estimate the proportion of N assimilated by each genus (% N assimilated), the distributions were highly similar across sites and methods . The relationships between lab and field measurements of relative N assimilation rate and N assimilated were assessed using an equivariant Passing-Bablok regression for each site independently, and both sites together . Similar regression results were obtained for the Valley and Ridge sites, so only combined values are discussed. For the relative N assimilation rate, field measurements covaried with those from the lab (Pearson’s r = 0.493); the Passing-Bablok regression slope was 0.81 (95% CI = 0.76 and 0.87) and the intercept was 0.01 (95% CI = 0.001 and 0.019). Since the confidence interval for the slope did not include 1 and the confidence interval for the intercept did not include 0, this provides evidence of proportional and constant differences between methods, respectively. In this case, field measurements of the relative N assimilation rate were, in general, lower than those measured in the lab. The difference between field and lab methods largely disappeared when examining the proportion of N assimilated by each genus . Field and lab measurements were highly correlated across both sites (Pearson’s r = 0.97). In addition, the Passing-Bablok regression slope was 0.99 (95% CI = 0.95 and 1.02) and the intercept was 0.0000 (95% CI = −0.0001 and 0.000), indicating no proportional (slope) or constant (intercept) difference between the methods. Genera with relatively high median N assimilation rates across both sites occasionally differed between the lab and field ( ; Supp. Data). The genera responsible for the largest N assimilation rates were moderately consistent between methods; 17 genera of those reported were in the top 20 only in one method ( ; Supp Data). In the field, the highest relative N assimilation rates were observed for Chitinophaga , an unassigned genus within Solirubrobacterales , an unassigned genus within Nitrososphaeraceae , and Sumerlaea ( ; Supp Data). For lab measurements, the highest rates were observed for Aeromicrobium , an unknown genus within Oxalobacteraceae , Luteolibacter , Bdellovibrio , and Chitinophaga . Field measurements of relative N assimilation rates differed significantly from lab measurements at one or both sites for 7 of 37 genera with the highest relative N assimilation rates (~19% of top genera; one-way ANOVA P < 0.05). When considering all genera across both sites, the lab N assimilation rates were on average 16% higher than field rates. By site, lab assimilation rates were 23% higher for all genera at the Ridge site but only 7% higher at the Valley site. Among the top genera, only the genus Nitrosospira had significantly higher lab rates at both sites (Ridge + 53%, P = 0.005; Valley + 39%, P = 0.02). At the Valley site, Aeromicrobium (Valley + 69%, P = 0.002) was clearly higher in the lab, but an unassigned genus within Nocardioidaceae (Valley + 22%, P = 0.05) and Bdellovibrio (Valley + 61%, P = 0.07) also tended to have higher lab rates.
The Ridge site had more taxa with higher lab rates including Luteolibacter (Ridge + 44%, P = 0.007), Lineage Ila (Ridge + 42%, P = 0.03), Nakamurella (Ridge + 43%, P = 0.009), Marmoricola (Ridge + 56%, P = 0.006), and Polycyclovorans (Ridge + 60%, P = 0.01). Edaphobaculum also tended to have higher lab rates (Ridge + 48%, P = 0.10). Genera with relatively high % N assimilation across both sites differed between the lab and field methods mostly at the Valley site ( ; Supp. Data). The genera responsible for the largest proportions of N assimilated were consistent across both methods; only 2 of the reported genera were in the top 20 only in one method ( ; Supp Data). The top five were an uncultured Vicinamibacterales genus, Bacillus , Gaiella , an uncultured Gaiellales genus, and KD4.96 . Field measurements of the proportion of the N assimilated differed from lab measurements for 7 of the top 22 genera (~32%) at one or both sites (one-way ANOVA; P < 0.05). Of the top assimilators, only Sphingomonas ’ % N assimilation was significantly different at both sites, being higher in the lab (Valley + 33%, P = 0.02; Ridge + 38%, P = 0.005). Uniquely at the Ridge site, only SC.I.84 differed between methods and was higher in the lab (+18%, P = 0.03). At the Valley site, several genera were higher in the lab including Bradyrhizobium (+35%, P = 0.003), Steroidobacter (+17%, P = 0.01), and Paenibacillus (+45%, P = 0.09), while others were higher in the field including Gaiella (+11%, P = 0.02), KD4.96 (+23%, P = 0.001), an uncultured Gaiellales genus (+17%, P = 0.006), 67.14 (+3%, P = 0.09), MB.A2.108 (+40%, P = 0.10), and an uncultured Xanthobacteraceae genus (+26%, P = 0.10). At the phylum level, the relative N assimilation rates and proportions of N assimilated were generally consistent between the field and the lab . Similar to the genus-level results, the mean relative N assimilation rate at the phylum level across both sites was ~15% higher in the lab compared to the field. By site, the mean rates were only 4% higher at the Valley site but 24% higher at the Ridge site.
The development of qSIP was a methodological advance that permits the measurement of genus-specific rates of element assimilation , which can help open the “black box” of microbial ecology to advance our mechanistic understanding of below-ground ecosystem function. We aimed to build upon this to assess the comparability of lab-based qSIP measurements from agricultural soil with tandem field measurements . Overall, we found that relative N assimilation rates were generally lower in the field for the total community, but the magnitude of this difference varied by site and genera . The field and lab methods became much more comparable when the relative assimilation rates were weighted by relative abundance to estimate the proportion of N assimilated by each genus (% N assimilated), though this was also site- and genus-dependent .
We identified several taxa that showed both high relative rates and % N assimilation across sites and methods, suggesting or confirming their importance for N retention and cycling in maize-cropped rhizosphere soils. The novel methodological approach for field qSIP with 15 N described here enabled the measurement of genus-specific relative N assimilation rates in relatively undisturbed rhizosphere soil. There were detectable levels of DNA enrichment with 15 N in the field , alleviating concerns that plant competition for N, soil heterogeneity limiting access to the isotope solution, and leaching would prevent microorganisms from sufficiently assimilating the isotope. Overall, the field enrichment levels were broadly comparable to those measured in the lab for most taxa , suggesting that lab qSIP methods provide a reasonable estimate of microbial activity in the field. Notably, dominant assimilators were phylogenetically diverse regardless of method . Compared to previous field qSIP experiments , our study differed in several important ways. First, we implemented a concurrent lab incubation for direct methods comparison. Second, the field qSIP incubation was conducted in the presence of live plants. Third, this experiment was conducted with agricultural soils. Fourth, we used ( 15 NH 4 ) 2 SO 4 to investigate N assimilation specifically, instead of 18 O water which is used to track growth rates more broadly. Due to the lower cost of 15 N, we were able to use a much larger volume of soil, including more roots and capturing a larger spatial scale. Lab and field measurements of 15 N assimilation were correlated , indicating that these lab measurements can help us generally understand microbial function in the wild. Although the correlation between the lab and field relative N assimilation rates was only of intermediate strength (Pearson’s r = 0.49), the rates did not differ between methods for 81% of high N assimilating taxa . Most bacteria experience only a tiny local environment , as such laboratory conditions may have been sufficiently similar to field conditions to produce moderately similar N assimilation rates despite the disturbance associated with soil collection, sieving, and the absence of roots. Many genera also had similar relative N assimilation rates across both sites, suggesting some consistency in genus-specific microbial traits across environmental variation . The relative N assimilation rate enables the evaluation of a genus’ N assimilation ability, regardless of its abundance in the soil. This is valuable as it allows the detection of rarer taxa that influence soil nutrient acquisition processes . However, abundance in soil must still be considered as a genus’ contribution to community-level processes scales in proportion to its relative abundance . In our study, 31%–36% of added N was assimilated by the most abundant 5% of taxa. The % N assimilation is a relative abundance-weighted assessment of a genus’ contribution to N assimilation. When considering the % 15 N assimilated by each genus, there was no significant difference between lab and field measurements overall, as indicated by the high correlation between these measures (Pearson’s r = 0.97, ). The similarity in community composition between the field and the lab likely contributed to the agreement between lab and field measures of genus-specific % 15 N assimilation. One limitation of comparing genus-specific % 15 N assimilation values is that a genus’ relative abundance does not necessarily equate to biomass. 
Across all environments, prokaryotic cells can vary dramatically in size, and therefore biomass, typically between 0.2 and 5 µm in diameter, with the largest reaching 750 µm . Therefore, this metric will likely overestimate the % 15 N assimilation of smaller, more quickly replicating microbes, and underestimate it for larger, slow-growing microbes. Cross-referencing both relative N assimilation rates and the abundance-weighted % N assimilation can lead to useful inferences about the relevance of a particular genus’s N immobilization capabilities. Despite some similarity between the lab and the field, relative N assimilation rates were generally lower in the field . Interestingly, rates of net N immobilization and nitrification were also lower in the field . Our results suggest this is primarily due to in-field plant uptake of 15 N, reducing 15 N availability to microbes. We found that ~50%–70% of the added 15 N was recovered in plant biomass . Field N assimilation may also have been lower due to loss of 15 N through leaching or denitrification, though this is expected to be minimal for ammonium-based fertilizers . Soil physical heterogeneity could also have limited prokaryotic mobility and access to nutrients. However, this is unlikely as total microbial activity, measured through soil respiration, was either the same, or slightly higher when measured in the field . Higher field respiration could be due to live-root respiration, fresh root C inputs, or bioturbation . Given these respiration results, it seems likely that the reduced field N assimilation rates and N cycling rates were due to N removal from the soil either by plants or leaching. Soil characteristics and plant activity may impact the disparity between field and lab measurements. The Ridge soil had lower bulk density, water-holding capacity, and end-point soil moisture , likely from lower clay and silt content compared to the Valley site’s bottomland soil . We can reasonably infer that the Ridge soil was less able to “hold onto” the isotope solution during the field incubation. This inference is also supported by our measurements of plant 15 N uptake; ~70% of the added 15 N at the Ridge site was utilized by the maize, compared to only ~51% at the Valley site . This property of the Ridge soil likely led to the various site-specific differences in our analysis. The difference between field and lab method values was often larger at the Ridge site than at the Valley site . Microbial community composition was less homogeneous at the Ridge site . A few taxa, most notably Bacillus, were highly abundant in several field qSIP samples at the Ridge site, reaching up to 19% of the total % N assimilated despite below-average relative assimilation rates . Bacillus species can rapidly form robust biofilms on plant roots, including maize . Wu et al. found that fertilization induced soil biofilm formation and those biofilms sustained 40 times more active microbes than free-living cells . Planctomycetota can also be biofilm formers and were found to have higher % N assimilation in the field . Through the inclusion of live plant roots and maintenance of soil structure, it is possible that the field qSIP method captured this key process in some replicate samples. The most effective microbial allies for N retention within agricultural soils are likely to be microbes that are both abundant and effective N assimilators. 
Within the top relative N assimilators, we identified three taxa in maize-rhizosphere soils with the highest % N assimilation and relative N assimilation: an unassigned genus in Nitrososphaeraceae (Crenarchaeota; 28%, 0.22), Luteolibacter (Verrucomicrobiota; 30%, 0.23), and Terrimonas (Bacteroidota; 22%, 0.15) (Supp. Data). Of the taxa with the highest % N assimilation values (Supp. Data), the only genus with a median relative assimilation rate above 0.14 was Candidatus Udaeobacter (Verrucomicrobiota; 1.43%, 0.149). A genus from the order Vicinamibacterales (Acidobacteriota) also had a very high % N assimilation (3.9%) and a slightly above average relative N assimilation rate (0.13). Despite some taxa being more active in either the field or lab, many active taxa could be identified using either method. Notably, Chitinophaga had some of the highest relative assimilation rates across both methods. For relative N assimilation rates, ~19% of the top assimilating genera had significantly different activities between methods, usually at only one site. Only Nitrospira clearly had higher lab rates at both sites. Nitrospira are well known for their nitrifying capabilities, yet our results suggest they are also effective at N assimilation, indicating a potential dual role where they contribute both to soil N retention and loss. Many Nitrospira are considered opportunists. They likely benefited from the enhanced access to the N solution in sieved and mixed soil samples. For % N assimilation values, only 10% had significantly different activities between methods, all at only one site, with the exception of Sphingomonas, which had higher lab % N assimilation at both sites. Some Sphingomonas are opportunists, reliably taking advantage of soil disturbance and N fertilization. The lab method had higher relative N assimilation rates for most taxa, but a few had higher N assimilation values in the field method: Gaiellales, Solirubrobacterales, Rhizobiales, Crenarchaeota, Chloroflexi, Planctomycetota, Armatimonadota, and Firmicutes (Supp. Data). Most of these taxa have been observed living in close association with, or within, plant roots. Members of Rhizobiales and Xanthobacteraceae (also slightly higher % N assimilation at the Valley site) include diazotrophic taxa that live in, on, and around plant roots. Actinobacteria (Gaiellales, 67.14, and MB.A2.108, Solirubrobacterales) are filamentous, and some have mycelial structures spread throughout the soil. These lifestyle strategies are intimately tied to roots (e.g., Rhizobiales, Xanthobacteraceae) or soil structure (Gaiellales, Solirubrobacterales), and the removal of plant roots and disruption of soil structure may have led to their reduced uptake and/or abundance in the lab method. Based on these results, the choice of qSIP methodology (field or lab) depends on the research objective. Field measurements may be most appropriate for researchers aiming to collect field-relevant data for downstream applications (climate models, precision fertilization, etc.). Capturing the realized, in situ rate of N assimilation and growth for soil microbes may be especially important for field experiments that manipulate environmental factors (e.g., fertilization, warming). Field measurements are likely to best capture the influence of plant-microbe interactions on the activities of rhizosphere symbionts.
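The paragraph above reports which genera had "significantly different" activities between methods but does not spell out the test behind that call. The sketch below shows one plausible way such a per-genus comparison could be made, using a simple bootstrap confidence interval on the lab-minus-field difference in relative N assimilation rate; the replicate values are invented, and this is not necessarily the authors' statistical procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_rate_difference(lab, field, n_boot=10_000, alpha=0.05):
    """Bootstrap CI for the mean lab-minus-field difference in a genus'
    relative N assimilation rate. Illustrative only; the study's actual
    significance procedure may differ."""
    lab, field = np.asarray(lab, float), np.asarray(field, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # resample replicates with replacement within each method
        diffs[i] = (rng.choice(lab, lab.size).mean()
                    - rng.choice(field, field.size).mean())
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi, not (lo <= 0.0 <= hi)  # CI excluding zero -> "different"

# Hypothetical replicate-level rates for one genus at one site.
rates_lab = [0.16, 0.18, 0.15, 0.17]
rates_field = [0.09, 0.11, 0.08, 0.10]
lo, hi, different = bootstrap_rate_difference(rates_lab, rates_field)
print(f"lab - field 95% CI: [{lo:.3f}, {hi:.3f}] -> different: {different}")
```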
Taxa with greater N assimilation in the field may form close associations, and N recycling networks, with maize roots and/or root-associated fungi. Future field qSIP experiments should include fungal and other eukaryotic microbes, given the importance of these groups for N and C cycling in soil. Lab measurements, without root presence, may significantly undervalue the activity of these taxa in rhizosphere nutrient cycling in the field. However, the ability to carefully control experimental conditions is a clear advantage of lab measurements, and this approach may be preferable for measuring how microbial traits respond to environmental drivers. In addition, as lab measurements yielded higher genus-specific activity rates, this approach may be preferable for answering questions regarding trait variation across microbial phylogeny, where the influence of evolutionary history is most clear when growth or assimilation rates are comparatively high. Even so, qSIP is predicted to have relatively high sensitivity and balanced accuracy, even at the lower levels of isotope incorporation seen in the field. Well-replicated field qSIP captures the complex ecology of soil microbes, their environment, and their interactions where those microbes live and grow. Our results show that field qSIP with 15 N can be used to measure N assimilation of microbial taxa in intact soil with living roots. While microbial relative N assimilation rates and their contribution to N cycling (% N assimilated) were broadly comparable in the lab and field, we did observe constant and proportional differences by method, the magnitude of which varied by site. The lower relative rates of nitrogen assimilation observed in the field, mainly due to plant competition, underscore the critical role of field conditions in shaping microbial activity. Field measurements may be preferred to accurately assess the activity of microbes that are heavily influenced by soil disturbance or the presence of living roots.
Polyoxygenated Klysimplexane- and Eunicellin-Based Diterpenoids from the Gorgonian Briareum violaceum

Gorgonian corals belonging to the genus Briareum (Cnidaria, Octocorallia, Briareidae) inhabiting the western Pacific Ocean and Caribbean waters have been found to be a rich source of diterpenoids possessing fused bicarbocyclic structures of briarane, eunicellin, and asbestinane types, in addition to cembranoids. Many of these metabolites exhibit a wide range of bioactivities, including anti-inflammatory, cytotoxic, antiviral, antimalarial, antimicrobial, and analgesic activities. Our previous study on the chemical constituents of Briareum violaceum afforded the isolation of briarellins (2,9:3,16-diepoxyeunicellins), which were shown to possess interesting structures generated from intramolecular cyclization of the corresponding cembranoids. In our efforts to discover new natural products from marine organisms, a continued chemical investigation of B. violaceum was carried out. The present study led to the discovery of four new diterpenoids. Three of them, briarols A‒C (1‒3), were identified as compounds of a rare (4-isopropyl-1,5,8a-trimethylperhydrophenanthrane) skeleton, which has been reported only once, as klysimplexin T in 2011, and is herein denominated the klysimplexane skeleton. The structure elucidation of the new metabolites was performed by extensive spectroscopic analyses, including two-dimensional (2D) NMR correlation and high-resolution electrospray ionization mass spectrometry (HRESIMS) analyses. A plausible biosynthetic pathway was suggested, and the cytotoxicity of the new compounds was evaluated.
The lyophilized organism was extracted with ethyl acetate (EtOAc), followed by chromatographic fractionation of the solvent-free extract on silica (Si) gel. Fractions showing 1 H NMR signals characteristic of polyoxygenated terpenoids were separated mainly by reverse-phase (RP) column chromatography and RP high-performance liquid chromatography (RP-HPLC), yielding diterpenoids 1‒4. The spectra of these compounds are given in the Supplementary Materials. The IR absorption bands at ν max 3413–3464 cm −1 and the four 13 C NMR signals resonating in the region of δ C 70.7 to 81.9 ppm disclosed the multi-hydroxylated pattern of the isolated compounds. Briarol A (1) was obtained as a white powder with an optical rotation of [ α ] D 25 = −101.9 ( c 0.24, CHCl 3 ). The sodiated ion peak at m/z 377.2300 [M + Na] + in the HRESIMS established a molecular formula of C 20 H 34 O 5 for 1, requiring four degrees of unsaturation. The IR absorption at ν max 3430 cm −1 revealed the presence of hydroxy functionality. As the 1 H NMR spectrum, measured in CDCl 3 , showed nine overlapped proton signals at δ H 1.50–1.80 ppm, we remeasured 1 in C 6 D 6 and acetone- d 6 to allow better signal resolution and to facilitate integrated 2D NMR correlation analyses. The 13 C NMR spectrum of 1, combined with distortionless enhancement by polarization transfer (DEPT) and heteronuclear single quantum correlation (HSQC) spectra, displayed 20 sp 3 -hybridized carbon signals (δ C 10.9–81.9 ppm) assignable to 5 methyl, 4 methylene, 7 methine, and 4 quaternary carbons. Therefore, the four degrees of unsaturation identified metabolite 1 as a tetracyclic diterpenoid. Analyzing proton homonuclear correlation spectroscopy ( 1 H- 1 H COSY) correlations revealed the presence of three partial structures of consecutive proton systems extending from H-1 and H 3 -18 to H-6 through to H-3, from H-8 to H-9, and from H 3 -20 to H 2 -13 through H-11. The 2 J CH and 3 J CH correlations, as determined by the heteronuclear multiple bond correlation (HMBC) experiments, established the connectivities of the partial structures, and hence the 6-6-6 tricarbocyclic framework of 1. The four most downfield-shifted carbon signals in the 13 C NMR spectrum (δ C 78.9–81.9) were attributable to four hydroxy-bearing carbons. Thus, the remaining oxygen atom in the molecular formula of 1, together with the two upfield-shifted oxycarbons (δ C 70.9, C and 64.2, C), suggested the presence of a tetrasubstituted epoxy ring. Four 1H singlets (δ H 4.54, 4.12, 2.94, and 2.32), lacking HSQC correlations, were assigned to the protons of four hydroxy groups. Two of these protons (δ H 4.12 and 4.54), exhibiting HMBC correlations with C-5 (δ C 25.4, CH 2 ) and C-9 (δ C 79.8, CH)/C-1 (δ C 43.5, CH), were recognized as 6-OH and 10-OH, respectively. The HMBC correlations from both H-1 and CH 3 -20 to C-10 (δ C 78.9, C) confirmed the presence of a hydroxy group at C-10. Moreover, the HMBC correlations observed for H 3 -19 (δ H 1.09, 3H, s) with the oxymethine carbons C-6 and C-8, together with the 1 H- 1 H COSY correlation H-8/H-9, are indicative of the hydroxy groups at C-8 and C-9, respectively. Furthermore, the long-range connectivities from the protons of the tertiary methyls H 3 -16 and H 3 -17 (δ H 1.08 and 0.97, each 3H, s) and from the angular methine proton H-1 (δ H 1.64, d, J = 12.0 Hz) to the oxycarbons C-14 (δ C 70.9) and C-15 (δ C 64.2) placed the epoxy group at C-14/C-15.
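As a quick check on the arithmetic behind these assignments, the sketch below recomputes the calculated [M + Na]+ values and the degree-of-unsaturation count from the molecular formulas. The monoisotopic masses are standard reference values, and the snippet is an illustration rather than the software used by the authors.

```python
# Standard monoisotopic masses (u) and the electron mass.
MASS = {"C": 12.0, "H": 1.0078250, "O": 15.9949146, "Na": 22.9897693, "e": 0.0005486}

def sodiated_mz(c, h, o):
    """m/z of the [M + Na]+ cation of a CcHhOo molecule (one electron removed)."""
    m = c * MASS["C"] + h * MASS["H"] + o * MASS["O"] + MASS["Na"]
    return m - MASS["e"]

def degrees_of_unsaturation(c, h):
    """Rings plus pi bonds for a CcHhOo formula; oxygen does not affect the count."""
    return c - h / 2 + 1

print(f"C20H34O5Na+: calcd m/z {sodiated_mz(20, 34, 5):.4f}")  # ~377.2298, found 377.2300
print(f"C20H34O4Na+: calcd m/z {sodiated_mz(20, 34, 4):.4f}")  # ~361.2349, found 361.2347-361.2349
print(f"C20H34O5 degrees of unsaturation: {degrees_of_unsaturation(20, 34):.0f}")  # 4
```

Four degrees of unsaturation combined with the absence of sp2 carbons in the 13 C spectrum is exactly the combination that forces a tetracyclic, ring-only skeleton for 1, as argued above.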
The above findings and other detailed 2D NMR correlation analyses unambiguously established the planar structure of 1. The relative configurations of the 10 chiral carbons in 1 were mostly deduced by examining nuclear Overhauser effect (NOE) correlations. The large 3 J H–H value of the ring juncture protons H-1 and H-2 (12.0 Hz) must be due to the anti orientation of the two axial protons, which were assumed to be on the β- and α-faces of the molecule, respectively. Therefore, the key NOE interactions of H-1 with H 3 -18, H 3 -19, H-9, and 10-OH (red-colored arrows) revealed these protons to be cofacial, indicating the α-oriented hydroxy group at C-9 and hence the S *, R *, S *, R *, R *-configurations at C-1, C-3, C-7, C-9, and C-10, respectively. Consequently, the NOE correlations found for H-3 with H-2, H-2 with H-6 and H-8, and H-8 with H-11 (blue-colored arrows) designated the S *, S *, S *, R *-configurations at C-2, C-6, C-8, and C-11, respectively. However, the NOE interaction of H-1 with H 3 -16 could not be used for effective elucidation of the relative configuration at C-14. Fortunately, the NOE correlations for H 3 -20/H-12β and H-12β/H 3 -17 were observed in the nuclear Overhauser effect spectroscopy (NOESY) spectra of 1, measured in both CDCl 3 and acetone- d 6 , and established the α-orientation of the 14,15-epoxy group. Therefore, briarol A (1) could be defined as (1 S* ,2 S* ,3 R* ,6 S* ,7 S* ,8 S* ,9 R* ,10 R* ,11 R* ,14 R* )-14:15-epoxy-klysimplexan-6,8,9,10-tetrol. Briarol B (2) was obtained as a white powder. It possessed the molecular formula of C 20 H 34 O 4 , as indicated by the adduct ion peak at m/z 361.2348 [M + Na] + in its HRESIMS, 16 mass units fewer than that of 1. A comparison of the 13 C and 1 H NMR data of 2 with those of 1 revealed the presence of another klysimplexane-based metabolite. However, the NMR spectroscopic data of 2 showed the appearance of an olefinic double bond (δ C 144.1, C and 111.8, CH 2 ; δ H 5.02 and 4.81, each 1H, s) and the absence of the epoxy group. Thus, the two oxycarbons (δ C 70.9 and 64.2, each C) and the methylated methine carbons (δ C 17.2, CH 3 and 28.5, CH) in 1 were replaced by the carbons of two methines (δ C 43.9 and 28.5, each CH) and a 1,1-disubstituted double bond in 2, respectively. These carbons were then assigned as C-14, C-15, C-3, and C-18, respectively, from the 2D NMR correlation analyses of 2. Therefore, the gross structure of compound 2 was recognized as klysimplexan-3(18)-en-6,8,9,10-tetrol. The investigation of NOE correlations of 2 resulted in the same relative configurations at C-1, C-6, C-7, C-8, C-9, C-10, and C-11 as those of 1. Furthermore, the NOE interactions found for the β-oriented H-1 (δ H 2.08, m) with H 3 -16 (δ H 0.92, 3H, d, J = 6.5 Hz), H 3 -16 with one of the exo-methylene protons (δ H 4.81, s, H-18b), and H-18b with H-1 favored the β-orientation for the 14-isopropyl group and thus the R * configuration at C-14. These findings, together with detailed 2D NMR correlations, unambiguously established compound 2 as (1 S* ,2 R* ,6 S* ,7 S* ,8 S* ,9 R* ,10 R* ,11 R* ,14 R* )-klysimplexan-3(18)-en-6,8,9,10-tetrol. Briarol C (3) was also isolated as a white powder that gave a pseudomolecular ion peak at m/z 361.2347 [M + Na] + in the HRESIMS, consistent with the molecular formula C 20 H 34 O 4 and four degrees of unsaturation, as with 2.
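The trans-diaxial (anti) assignment drawn above from the 12.0 Hz H-1/H-2 coupling rests on the Karplus relationship between vicinal 3 J H-H couplings and the H-C-C-H dihedral angle. The sketch below evaluates a generic Karplus parameterization; the coefficients are common textbook values, not ones calibrated for these compounds.

```python
import math

def karplus_3j(theta_deg, A=7.76, B=-1.10, C=1.40):
    """Vicinal 3J(H,H) in Hz from the H-C-C-H dihedral angle via the Karplus
    equation 3J = A*cos^2(theta) + B*cos(theta) + C. The coefficients are a
    generic parameterization; substituent-corrected (Haasnoot-type) values
    are often used in practice."""
    t = math.radians(theta_deg)
    return A * math.cos(t) ** 2 + B * math.cos(t) + C

for theta in (60, 90, 180):
    print(f"theta = {theta:3d} deg -> 3J ~ {karplus_3j(theta):4.1f} Hz")
# Gauche protons (~60 deg) give small couplings (a few Hz), whereas anti,
# trans-diaxial protons (~180 deg) give large couplings (~10 Hz or more,
# depending on the parameterization), consistent with the 12.0 Hz observed
# for H-1/H-2 of compound 1.
```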
The 13 C NMR spectroscopic data of 3 were found to be in accordance with those of 1 from C-1 to C-13 and C-16 to C-20, except for the presence of a tetrasubstituted double bond (δ C 128.3 and 128.0, each C) instead of the tetrasubstituted epoxy group in 1. Thus, the 2 J CH and 3 J CH correlations displayed by the two olefinic methyl protons (δ H 1.81 and 1.74, each 3H, s) with the olefinic carbons (δ C 128.3 and 128.0, each C), which in turn were correlated with H-1 (δ H 2.81, d, J = 7.5 Hz), confirmed the presence of a 14,15-double bond. The relative configuration of 3 was deduced from NOESY correlations. Furthermore, it was found that the 13 C NMR chemical shifts of C-1 to C-11 in 1 and the 1 H NMR data of H-6, H-8, and H-9 in 1 and 2 were analogous to those of 3, reflecting the same β-orientation for H-1, 6-OH, 8-OH, 10-OH, H 3 -18, H 3 -19, and H 3 -20, and the α-orientation for H-2 and 9-OH. Therefore, compound 3 was clearly identified as (1 R* ,2 S* ,3 R* ,6 S* ,7 S* ,8 S* ,9 R* ,10 R* ,11 R* )-klysimplexan-14(15)-en-6,8,9,10-tetrol. Briarol D (4) was obtained as a white powder and gave a sodiated ion peak at m/z 361.2349 [M + Na] + by HRESIMS, consistent with a molecular formula of C 20 H 34 O 4 and four degrees of unsaturation. The 13 C NMR and DEPT spectra indicated the presence of 20 carbon signals corresponding to 4 methyls, 5 methylenes (including 1 exomethylene), 8 methines (including 1 olefinic and 3 oxymethines), and 3 quaternary carbons (2 olefinic and 1 oxycarbon) of a diterpenoid. The NMR spectroscopic data revealed the presence of a trisubstituted [δ C 125.5, CH, 137.7, C; δ H 5.35 (d, J = 10.0 Hz)] and a 1,1-disubstituted [δ C 162.8, C, 113.8, CH 2 and δ H 5.43, 5.37 (each d, J = 1.2 Hz)] double bond. The remaining two degrees of unsaturation were thus attributed to a bicyclic structure for 4. This was further substantiated by the NMR data comparison of 4 with those of 1–3, which showed the substitution of one ring-juncture methine (2-CH) with an olefinic methine [δ H /δ C 5.35 (d, J = 10.0 Hz)/125.5, CH] in 4. However, the 1 H and 13 C NMR data indicated the presence of three hydroxy-bearing methines [δ H /δ C 4.21 (d, J = 4.5 Hz, H-8)/70.7; 4.08 (d, J = 7.5 Hz, H-6)/76.2; and 3.46 (d, J = 5.5 Hz, H-9)/80.9], as in the case of compounds 1‒3. Moreover, two protons resonating at δ H 4.26 (d, J = 4.5 Hz) and 4.00 (d, J = 5.5 Hz) exhibited COSY correlations with H-8 and H-9 and were therefore attributed to 8-OH and 9-OH, respectively. The gross structure of 4 as a eunicellin-derived diterpenoid, including the positions of the two olefinic bonds and the four hydroxy groups, was further resolved by the study of the long-range proton–carbon correlations. In particular, the HMBC correlations found from the only available ring-juncture proton (δ H 2.52, dd, J = 10.0, 3.0 Hz, H-1) to C-2 (δ C 125.5, CH) and C-10 (δ C 78.3, C), from the olefinic methyl protons (δ H 1.57, s, H 3 -18) to C-2, C-3 (δ C 137.7, C), and C-4 (δ C 38.1, CH 2 ), and from the exomethylene protons (δ H 5.43, 5.37, each d, J = 1.2 Hz, H 2 -19) to C-6 (δ C 76.2, CH), C-7 (δ C 162.8, C), and C-8 (δ C 70.7, CH), positioned the trisubstituted and 1,1-disubstituted double bonds at C-2/C-3 and C-7, respectively. Based on the above findings and detailed 2D NMR correlations, the molecular framework of 4 was established. An inspection of NOESY correlations enabled us to assign the relative configurations of the seven chiral carbons C-1, C-6, C-8, C-9, C-10, C-11, and C-14 in 4.
The NOE correlations observed for the β-oriented ring-juncture proton H-1 with the protons of the 9-oxymethine and one of the 14-isopropyl methyls reflected the α-orientation of H-14 and 9-OH. Furthermore, the NOE observed for H-1/H 3 -18 and H-2/H-4, combined with the upfield chemical shift (δ C < 20 ppm) observed for C-18 (δ C 18.3 ppm), determined the E -geometry of the olefinic bond at C-2/C-3. This finding placed the olefinic H-2 on the α-face of the molecule. Consequently, the NOE interactions found for H-2 with H-8 and H-8 with both H-6 and H-11 revealed the β-orientation for H-6, H-8, H-10, and H 3 -20. Compound 4 was thus unambiguously identified as (1 S* ,2 E ,6 S* ,8 S* ,9 R* ,10 R* ,11 R* )-eunicellin-2,7(19)-dien-6,8,9,10-tetrol. Based on the above discoveries, it is proposed that compounds 1–4 can be derived from the common eunicellin intermediate (b) after the 2,11-cyclization and 1,3-hydride shift of a cembranoid cation (a). Oxidation of CH 2 -8, CH 2 -9, CH-10, and CH 3 -18, followed by acid-catalyzed hydroxylation at the olefinic C-6 with subsequent formation of an exomethylene at C-7 in the intermediate 5, yields 4. Furthermore, 6,7-epoxidation of the intermediate b gives 6 as the intermediate of metabolite 4 and the tricarbocyclic carbonium ion 7. Both 4 and 7 could be further converted into carbonium ion 8. Deprotonation at C-18 in 8 can produce 2, while reduction at C-3 and dehydrogenation at C-14/C-15 in 8 gives 3. Subsequently, the epoxidation of the olefinic double bond in metabolite 3 affords 1. To the best of our knowledge, the biosynthesis of the klysimplexane- and eunicellin-type diterpenoids is limited to marine invertebrates, and there are no analogous structures in terrestrial natural products. The in vitro cytotoxicity of the new diterpenoid metabolites (1‒4) was assessed against the cancer cell lines of human cholangiocellular carcinoma (HuCC-T1), human colon carcinoma (HT-29), and human colon adenocarcinoma (DLD-1). The results showed that all compounds exhibited only very weak cytotoxicity against the tested cancer cells, with IC 50 values ranging from 220.75 to 238.88 μM, as compared to doxorubicin hydrochloride (IC 50 1.38 to 2.24 μM). Because of the low yield (< 2.5 mg) and the consumption of the isolated metabolites in measurements of spectroscopic data and cytotoxicity, we suggest that further investigation of other biological activities should be carried out once these tetrahydroxylated diterpenoid molecules, in particular those with the rare klysimplexane skeleton, can be obtained in sufficient quantities.
3.1. General Experimental Procedures

IR spectra and optical rotations were measured on a JASCO FT/IR-4100 spectrophotometer and a JASCO P-1020 polarimeter (JASCO Corporation, Tokyo, Japan), respectively. LRESIMS and HRESIMS spectra were measured on a Bruker APEX II mass spectrometer (Bruker, Bremen, Germany). 1 H and 13 C NMR spectra were measured on Varian Unity INOVA 600 FT-NMR (or 500 or 400 FT-NMR) instruments (Varian Inc., Palo Alto, CA, USA) at 600 MHz (or 500 or 400 MHz) for 1 H and 150 MHz (or 125 or 100 MHz) for 13 C in CDCl 3 , CD 3 OD, or acetone- d 6 . Silica (Si) gel (230–400 mesh) (Merck, Darmstadt, Germany) and C18 reverse-phase Si gel (RP-18; 40–63 µm) (Parc-Technologique Blvd, Quebec, Canada) were used for column chromatography. Thin-layer chromatography (TLC) analyses were performed using precoated Si gel (Kieselgel 60 F-254, 0.2 mm) plates (Merck, Darmstadt, Germany). Further purification and separation of compounds were performed by reverse-phase high-performance liquid chromatography (RP-HPLC) on a Hitachi L-2455 HPLC instrument with a Supelco C18 column (250 × 21.2 mm, 5 μm) (Supelco Inc., Bellefonte, PA, USA).

3.2. Animal Material

The soft coral B. violaceum was collected from Jihui Fish Port, Taitung, Taiwan, identified, and extracted as described before. A voucher specimen was taken and deposited at the Department of Marine Biotechnology and Resources, National Sun Yat-sen University (NSYSU), Kaohsiung.

3.3. Extraction and Isolation

The lyophilized bodies of the soft coral (500 g, wet weight) were crushed and extracted with EtOAc. The EtOAc extract (3.9 g) was fractionated with Si gel column chromatography (CC) using EtOAc-hexane (0:100 to 100:0, gradient). Polar fractions eluted with EtOAc-hexane (10:1), which showed the diagnostic 1 H NMR (methyl and oxymethine) signals of polyoxygenated terpenoids, were combined and subfractionated on Si gel CC using acetone-hexane (1:2.5), affording the subfractions F1‒F5. Subfraction F4 was separated on RP-18 Si gel CC using acetonitrile (CH 3 CN)-H 2 O (1.5:1 then 1.2:1) to give compounds 2 (1.5 mg), 3 (2.0 mg), and 4 (2.2 mg), respectively. Compound 1 (2.4 mg) was obtained from subfraction F5 with a 3-step purification process with RP-18 Si gel CC using MeOH-H 2 O (1.5:1 then 5:1), RP-HPLC using CH 3 CN-H 2 O (1:2), and then on Si gel CC using acetone-hexane (1:5). Briarol A (1). White powder; [ α ] D 25 −101.9 ( c 0.24, CHCl 3 ); IR (neat) ν max 3430, 2927, 2853, and 1382 cm −1 ; for the 13 C NMR (100 MHz, C 6 D 6 ) and 1 H NMR (400 MHz, C 6 D 6 ) data, see the corresponding tables.
13 C NMR (100 MHz, CDCl 3 ) δ C 81.3 (CH, C-6), 79.3 (CH, C-8), 78.7 (CH, C-9), 78.3 (C, C-10), 70.4 (C, C-14), 64.1 (C, C-15), 43.7 (C, C-7), 43.7 (CH, C-2), 42.6 (CH, C-1), 32.0 (CH, C-11), 31.5 (CH 2 , C-4), 29.6 (CH 2 , C-12), 29.6 (CH 3 , C-17), 27.6 (CH, C-3), 25.1 (CH 2 , C-13), 24.2 (CH 2 , C-5), 23.2 (CH 3 , C-16), 17.0 (CH 3 , C-20), 16.6 (CH 3 , C-18), 9.9 (CH 3 , C-19); 1 H NMR (400 MHz, CDCl 3 ) δ H 4.33, 3.94, 3.23, and 2.65 (each 1H, br s, 6-OH, 8-OH, 9-OH, and 10-OH), 3.64 (1H, d, J = 11.2 Hz, H-8), 3.61 (1H, dd, J = 10.4, 5.2 Hz, H-6), 3.56 (1H, d, J = 11.2 Hz, H-9), 2.16 (1H, m, H-11), 1.79 (1H, d, J = 6.0 Hz, H-1), 1.73 (1H, m, H-3), 1.72 (1H, m, H-5β), 1.69 (1H, m, H-12β), 1.68 (1H, m, H-5α), 1.67 (1H, m, H-13β), 1.56 (2H, m, H 2 -4), 1.55 (1H, m, H-2), 1.54 (1H, m, H-12α), 1.45 (3H, s, H 3 -16), 1.39 (1H, m, H-13α), 1.33 (3H, s, H 3 -17), 1.15 (3H, d, J = 6.4 Hz, H 3 -20), 1.06 (3H, s, H 3 -19), 0.97 (3H, d, J = 7.6 Hz, H 3 -18); 13 C NMR (100 MHz, acetone-d 6 ) δ C 81.9 (CH, C-6), 80.2 (CH, C-8), 79.0 (CH, C-9), 78.6 (C, C-10), 70.0 (C, C-14), 64.2 (C, C-15), 44.2 (C, C-7), 44.0 (CH, C-2), 43.5 (CH, C-1), 32.4 (CH, C-11), 31.7 (CH 2 , C-4), 30.5 (CH 2 , C-12), 28.2 (CH, C-3), 25.6 (CH 2 , C-13), 25.1 (CH 2 , C-5), 23.2 (CH 3 , C-16), 20.3 (CH 3 , C-17), 17.5 (CH 3 , C-20), 16.8 (CH 3 , C-18), 10.0 (CH 3 , C-19; 1 H NMR (400 MHz, acetone-d 6 ) δ H 4.30 and 3.88 (each 1H, br s, 8-OH and 9-OH), 4.19 (1H, br s, 6-OH), 4.11 (1H, br s, 10-OH), 3.66 (1H, m, H-6), 3.65 (1H, d, J = 11.2 Hz, H-8), 3.49 (1H, br d, J = 11.2 Hz, H-9), 2.25 (1H, m, H-11), 1.84 (1H, d, J = 6.0 Hz, H-1), 1.79 (1H, m, H-3), 1.72 (1H, d, J = 6.8 Hz, H-2), 1.67 (1H, m, H-4β), 1.57 (1H, m, H-5β), 1.53 (1H, m, H-12β), 1.52 (1H, m, H-5α), 1.50 (1H, m, H-4α), 1.49 (1H, m, H-12α), 1.48 (3H, s, H 3 -16), 1.41 (1H, m, H-13β), 1.37 ( 1 H, m, H-13α), 1.365 (3H, s, H 3 -17), 1.16 (3H, d, J = 6.8 Hz, H 3 -20), 1.06 (3H, s, H 3 -19), 1.03 (3H, d, J = 6.8 Hz, H 3 -18). ESIMS m/z 377 [M + Na] + ; HRESIMS m/z 377.2300 [M + Na] + (calcd for C 20 H 34 O 5 Na, m/z 377.2299). Briarol B (2). White powder; [ α ] D 25 −32.0 (c 0.15, CHCl 3 ); IR (neat) ν max 3464, 2923, 2854, and 1381 cm −1 ; 13 C NMR (125 MHz, CDCl 3 ) and 1 H NMR (500 MHz, CDCl 3 ). See and , respectively. ESIMS m/z 361 [M + Na] + , 339 [M + H] + HRESIMS m/z 361.2348 [M + Na] + (calcd for C 20 H 34 O 4 Na, m/z 361.2349). Briarol C (3). White powder; [ α ] D 25 −21.3 (c 0.22, CHCl 3 ); IR (neat) ν max 3413, 2925, 2858, and 1374 cm −1 ; 13 C NMR (100 MHz, CDCl 3 ) and 1 H NMR (500 MHz, CDCl 3 ). See and , respectively. ESIMS m/z 361 [M + Na] + ; HRESIMS m/z 361.2347 [M + Na] + (calcd for C 20 H 34 O 4 Na, m/z 361.2349). Briarol D (4). White powder; [ α ] D 25 −29.7 (c 0.22, CHCl 3 ); IR (neat) ν max 3418, 2923, 2853, and 1381 cm −1 ; 13 C NMR (150 MHz, acetone- d 6 ) and 1 H NMR (600 MHz, acetone- d 6 ), see and , respectively. ESIMS m/z 361 [M + Na] + ; HRESIMS m/z 361.2349 [M + Na] + (calcd for C 20 H 34 O 4 Na, m/z 361.2349). 3.4. Cytotoxicity Assay Cancer cell lines (HT-29, HuCC-T1, and DLD-1) were obtained from the American Type Culture Collection (ATCC). Compounds 1‒4 were evaluated for the cytotoxic activity using an Alamar blue assay as previously described . The intensity of the produced color was measured at 570 nm using an ELISA plate reader.
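To connect the 570 nm readout described above with the IC 50 values discussed earlier, the sketch below fits a four-parameter logistic (Hill) dose-response curve to hypothetical viability data. The concentrations, readings, and the use of SciPy's curve_fit are illustrative assumptions, not the cited assay protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic dose-response curve (viability vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical % viability derived from 570 nm Alamar blue readings normalized
# to untreated controls; concentrations in µM. Not data from this study.
conc = np.array([10, 30, 100, 200, 300, 600], dtype=float)
viability = np.array([98, 95, 83, 55, 38, 15], dtype=float)

popt, _ = curve_fit(hill, conc, viability, p0=[0.0, 100.0, 200.0, 1.5])
bottom, top, ic50, slope = popt
print(f"fitted IC50 ~ {ic50:.0f} µM (Hill slope {slope:.2f})")
# An IC50 in the low hundreds of µM, as reported for briarols A-D, indicates
# only very weak cytotoxicity compared with doxorubicin (IC50 ~1-2 µM).
```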
Three new polyoxygenated diterpenoids of the rare klysimplexane skeleton, along with a non-ether-bridged eunicellin diterpenoid, were discovered from the gorgonian coral Briareum violaceum and named briarols A‒D, respectively. A possible biosynthetic pathway for briarols A‒C from the coexisting eunicellin diterpenoid was postulated for the first time. Although the compounds did not show potent cytotoxic activity against the tested cancer cell lines, other possible bioactivities of these metabolites might be worthwhile for further screening. It is noteworthy that this is the first discovery of these rare klysimplexane-type metabolites from a gorgonian coral since the isolation of klysimplexin T from the cultured soft coral Klyxum simplex a decade ago.